Page 149 in Proceedings of the OAGM&ARW Joint Workshop - Vision, Automation and Robotics
Fig. 6. Illustration of the refined data: (a) linear transform augmentation, (b) Gaussian augmentation, (c) morphological augmentation, (d) distorted ground truth ridge pattern, (e) refinement network input, (f) refinement network output with minutiae regions.
resized image is used as input to the network. An
example input and output image can be seen in Fig.
6e and Fig. 6f.
B. Refinement Network
The Refinement Network used in this work is based on the GAN paradigm, where a dual optimization problem is solved.
A refiner and a discriminator network are trained simultaneously against each other: the refiner network tries to fool the discriminator by applying refinements to a synthetic fingerprint, while the discriminator learns to distinguish fake from real data. The purpose of this training is to find a Nash equilibrium [20] at which both networks are optimal.
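The adversarial setup above can be illustrated with a toy example, assuming deliberately simple stand-ins for both networks: the refiner is reduced to a single learnable bias applied to synthetic samples, and the discriminator to a logistic classifier. All names and distributions below are illustrative, not the paper's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Real data ~ N(2, 1); synthetic data ~ N(0, 1).  The "refiner" is a
# single bias b added to synthetic samples, the "discriminator" a
# logistic classifier sigmoid(w*x + c).
w, c, b = 0.1, 0.0, 0.0
lr_d, lr_r = 0.1, 0.02

for step in range(3000):
    xr = rng.normal(2.0, 1.0, 64)   # real batch
    xs = rng.normal(0.0, 1.0, 64)   # synthetic batch
    xf = xs + b                     # refined ("fake") batch

    # Discriminator step: minimize -log D(real) - log(1 - D(fake)).
    sr, sf = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w -= lr_d * (np.mean((sr - 1.0) * xr) + np.mean(sf * xf))
    c -= lr_d * (np.mean(sr - 1.0) + np.mean(sf))

    # Refiner step: fool the discriminator, minimize -log D(fake).
    sf = sigmoid(w * (xs + b) + c)
    b -= lr_r * np.mean((sf - 1.0) * w)

# Near the equilibrium the refined distribution matches the real one,
# so the learned bias drifts toward the real mean of 2.
```

Once the refined samples are indistinguishable from real ones, the discriminator's gradient vanishes and neither player can improve unilaterally, which is exactly the Nash equilibrium sought in the dual optimization.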
To our knowledge, the only prior application of a refinement network is in [21]. Our work extends that approach by adding noise to the input data, which improves the stability of training such a network [3].
One key observation is that using the input image itself for regularization limits the range of refinements possible for fingerprints. We therefore propose to use the Hessian of the image, rather than the image itself, for regularization. The Hessian represents the actual ridge pattern of the fingerprint independently of the pixel intensity values. Mean squared error is used to penalize deviation from the Hessian, while the refiner network still needs to fool the discriminator network.
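As a sketch of this regularization idea (with illustrative helper names, not the paper's implementation), the Hessian can be approximated by second-order finite differences. Note that an additive intensity offset leaves the Hessian unchanged, which is why the penalty constrains the ridge pattern rather than the pixel intensities:

```python
import numpy as np

def hessian(img):
    """Second-order finite differences of a 2-D image: Ixx, Ixy, Iyy."""
    gy, gx = np.gradient(img.astype(np.float64))
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    return np.stack([gxx, 0.5 * (gxy + gyx), gyy])

def hessian_mse(refined, synthetic):
    """MSE between Hessians: penalizes changes to the ridge pattern
    while leaving the pixel intensities free for the adversarial term."""
    return np.mean((hessian(refined) - hessian(synthetic)) ** 2)
```

Because the second derivatives of a constant are zero, a refiner that only shifts brightness pays no regularization cost, whereas one that moves or erases ridges does.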
The refiner network uses the same architecture as the minutiae extraction network (Fig. 3), only with fewer layer blocks and filters. Wide residual blocks [27] are used for every layer block, starting with 32 filters, doubled on the way down and halved on the way up.
This method produces fingerprints like the one in Fig. 6f. It addresses the noise-modeling problem observed in current synthetic fingerprint refiners [5].
V. EXPERIMENTS
This section showcases the results obtained with our method. All our models were implemented using the Python framework Keras [6] and trained on an Nvidia GeForce Titan X. For training, the Adam optimizer [13] is used with an initial learning rate of 0.001. The learning rate is cut in half if the validation error has not decreased for three consecutive epochs. The other minutiae extraction algorithms were run on an Intel Xeon W3550 CPU.
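The learning-rate rule above can be written as a few lines of plain Python; it behaves like Keras's ReduceLROnPlateau callback with factor=0.5 and patience=3. The class name is ours, not from the paper:

```python
class HalveOnPlateau:
    """Halve the learning rate when the validation error has not
    decreased for `patience` consecutive epochs."""

    def __init__(self, lr=0.001, patience=3, factor=0.5):
        self.lr, self.patience, self.factor = lr, patience, factor
        self.best = float("inf")
        self.wait = 0

    def step(self, val_error):
        """Call once per epoch; returns the learning rate to use next."""
        if val_error < self.best:
            self.best = val_error
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.lr *= self.factor
                self.wait = 0
        return self.lr
```

In a Keras training script the equivalent effect is obtained by passing `ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3)` in the callback list.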
A. Experimental Setup
For training, 28,000 fingerprints with five impressions per fingerprint were generated using Anguli. In total, 140,000 fingerprints were used for training, including a validation set of 10,000 fingerprints. The different impressions can be seen in Fig. 1: three of the impressions contain medium noise, while the other two use little and heavy noise, respectively.
Non-linear distortions are applied to 3,000 of those fingerprints and to all of their impressions. All other augmentations, as described in Section IV, are applied on the fly.
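An on-the-fly augmentation pipeline of this kind might look as follows; the transform ranges, function names, and noise levels are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def augment(batch, rng):
    """Random linear intensity transform a*x + b followed by additive
    Gaussian noise, applied independently per image in the batch."""
    a = rng.uniform(0.8, 1.2, size=(len(batch), 1, 1))
    b = rng.uniform(-0.1, 0.1, size=(len(batch), 1, 1))
    noise = rng.normal(0.0, 0.05, size=batch.shape)
    return np.clip(a * batch + b + noise, 0.0, 1.0)

def batches(images, batch_size, rng):
    """Endless batch generator; augmentation happens per batch, so no
    augmented copies ever need to be stored on disk."""
    while True:
        idx = rng.choice(len(images), size=batch_size, replace=False)
        yield augment(images[idx], rng)
```

Applying the transforms per batch rather than ahead of time means every epoch sees a fresh random variant of each fingerprint.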
An annotated real dataset of 300 fingerprints, constructed from 220 samples of the SD04 [26] dataset and the 80 images of the FVC2000 DB4 B [15] dataset, is additionally used to increase the effectiveness of the classifier. The real dataset used for the refinement network is the UareU [1] dataset.
B. Deep Learning Experiments
Fig. 9 shows the difference in performance between the various layer blocks defined in Section III. In contrast to the findings in [12], using densely connected blocks did not work as well for the minutiae detection problem. Bottleneck residual blocks performed similarly to wide residual blocks, which matches the findings in the original paper [27].
C. Experiments on FVC2000 databases
Here, the performance of our method is compared to other
minutiae extraction algorithms on the FVC2000 [15] dataset
consisting of real world fingerprints. To match the minutiae
against each other, the minutiae matcher BOZORTH [25]
was used. The results of this experiment can be seen in Fig. 7 and Fig. 8, where GAR and FAR denote the Genuine Acceptance Rate and the False Acceptance Rate, respectively. Using those metrics, the Equal Error Rate (EER) can be calculated by finding the rate where (2) holds.
GAR = 1 − FAR (2)
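A minimal way to extract the EER from genuine and impostor match scores is to sweep the decision threshold and locate the point where (2) holds, i.e. where the false-reject and false-accept rates cross. The function below is an illustrative sketch, not the evaluation code used here:

```python
import numpy as np

def eer(genuine_scores, impostor_scores):
    """Equal Error Rate: sweep the threshold and find the point
    where GAR = 1 - FAR, i.e. where FRR and FAR cross."""
    genuine_scores = np.asarray(genuine_scores, dtype=np.float64)
    impostor_scores = np.asarray(impostor_scores, dtype=np.float64)
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best = None
    for t in thresholds:
        gar = np.mean(genuine_scores >= t)   # genuine pairs accepted
        far = np.mean(impostor_scores >= t)  # impostor pairs accepted
        gap = abs(gar - (1.0 - far))
        if best is None or gap < best[0]:
            # Report the midpoint of FAR and FRR at the closest crossing.
            best = (gap, (far + (1.0 - gar)) / 2.0)
    return best[1]
```

For perfectly separated score distributions the crossing occurs at an error of zero; overlapping distributions push the EER upward.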
The extracted EER of the evaluated minutiae extractors
is shown in Table I. Our algorithm performs better than
- Title: Proceedings of the OAGM&ARW Joint Workshop
- Subtitle: Vision, Automation and Robotics
- Authors: Peter M. Roth, Markus Vincze, Wilfried Kubinger, Andreas Müller, Bernhard Blaschitz, Svorad Stolc
- Publisher: Verlag der Technischen Universität Graz
- Place: Wien
- Date: 2017
- Language: English
- License: CC BY 4.0
- ISBN: 978-3-85125-524-9
- Dimensions: 21.0 x 29.7 cm
- Pages: 188
- Keywords: Conference proceedings
- Categories: International, Conference proceedings