Using a U-Shaped Neural Network for minutiae extraction trained from
refined, synthetic fingerprints
Thomas Pinetz1, Daniel Soukup1, Reinhold Huber-Mörk1 and Robert Sablatnig2
Abstract—Minutiae extraction is an important step for
robust fingerprint identification. However, existing minutiae
extraction algorithms rely on time-consuming and fragile image
enhancement steps in order to work robustly. We propose a
new approach, combining enhancement and extraction into a
Convolutional Neural Network (CNN). This network is trained
from scratch using synthetic fingerprints. To bridge the gap
between synthetic and real fingerprints, refinements are used.
Here, an approach based on Generative Adversarial Networks
(GANs) is used to generate fingerprints suited for training
such a network and improving its matching score on real
fingerprints.
I. INTRODUCTION
Because of their uniqueness and their temporal stabil-
ity [10], fingerprint minutiae are a reliable way to determine
the identity of an individual. Minutiae points are irregularities
in ridge patterns, described using coordinates and orientation [17].
Over 150 different irregularities in fingerprints
have been identified [18]. While the number of minutiae
on a single fingerprint varies from finger to finger, a regular
fingerprint comprises approximately one hundred such points [17].
It has been reported that only 10 to 15 minutiae are required
to reliably identify an individual [17].
Currently, fingerprint matchers such as BOZORTH [25] operate
on minutiae landmarks. Extracting minutiae is a hard problem,
however, which heavily depends on good-quality fingerprint
images [10]. To combat this, image enhancement algorithms
are used [4]. Still, reliable minutiae extraction on arbitrary
fingerprint images remains an open problem, as existing feature
extractors largely rely on image quality (focus, resolution,
skin condition, etc.) [23].
With the rise of deep learning in similar fields [7], [14],
[19] and the availability of synthetic fingerprint generators
[2], [5], it is promising to apply such methods to minutiae
extraction. This paper contributes a new network for minutiae
extraction, following the idea of solving an equivalent
segmentation problem. In this work, the synthetic fingerprint
generator Anguli [2] is used because of its availability.
Anguli generates the required training data, as shown in
Fig. 1a. Because of the differences from real data, visualized
in Fig. 1(d-f), augmentations are applied (Fig. 1b) as described
in Section IV. Here we contribute a novel technique to refine
fingerprints based on the GAN [8] paradigm; an example
output is shown in Fig. 1c. Regularization is used to force
the refinement network to retain the annotation data while
outputting a refined representation of the simulated fingerprint.
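A minimal sketch of such a regularized refinement objective is given below. It is included for illustration only: the concrete terms (a standard adversarial loss and an L1 self-regularization tying the refined image to its synthetic input) are assumptions made here for concreteness, while the exact formulation is described in Section IV.

# Illustrative sketch of a regularized refinement objective (assumed terms,
# not the exact formulation of Section IV): a refiner network maps a synthetic
# fingerprint to a refined one, a discriminator distinguishes refined from real
# patches, and an L1 self-regularization term keeps the refined image close to
# its synthetic input so that the minutiae annotations stay valid.
import torch
import torch.nn.functional as F

def refiner_loss(refiner, discriminator, synthetic, lam=0.1):
    refined = refiner(synthetic)
    # Adversarial term: the refiner tries to make refined patches look "real" (label 1).
    pred = discriminator(refined)
    adv = F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))
    # Self-regularization (assumed L1 norm): stay close to the synthetic input,
    # which preserves the ridge structure and hence the annotated minutiae.
    reg = F.l1_loss(refined, synthetic)
    return adv + lam * reg

def discriminator_loss(discriminator, refined, real):
    # Standard binary cross-entropy: refined patches -> 0, real patches -> 1.
    pred_fake = discriminator(refined.detach())
    pred_real = discriminator(real)
    return (F.binary_cross_entropy_with_logits(pred_fake, torch.zeros_like(pred_fake))
            + F.binary_cross_entropy_with_logits(pred_real, torch.ones_like(pred_real)))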
1Austrian Institute of Technology, Donau-City-Straße 1, 1220 Wien, {thomas.pinetz.fl, daniel.soukup, reinhold.huber-moerk}@ait.ac.at
2TU Wien, Karlsplatz 1, 1040 Wien, sab@caa.tuwien.ac.at
Fig. 1. Illustration of the fingerprint data used in this work: (a) Anguli generated fingerprint, (b) augmented fingerprint, (c) refined fingerprint using our GAN based approach, (d) finger taken from FVC2000 DB1, (e) finger taken from FVC2000 DB3, (f) finger taken from the UareU dataset. (a-c are synthetic fingerprints, while d-f are real fingerprints.)
The rest of the paper is organized as follows. Section II
reviews related work. In Sections III and IV, the minutiae
extraction algorithm and the refinement method are described
in detail. In Section V, the results obtained with our method
are presented. Finally, in Section VI we draw our conclusions.
II. RELATED WORK
For a sufficiently enhanced image, minutiae detection is
performed by binarizing the grayscale image [10]. Current
fingerprint minutiae extractors use image enhancement
routines to achieve the desired quality [10], [4], [25], [24].
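To make this classical pipeline concrete, the following minimal sketch (our own example, not code taken from [10], [4], [25] or [24]) applies the widely used crossing-number rule to an already enhanced, binarized and thinned ridge image; practical extractors add further filtering steps.

# Minimal sketch of classical minutiae detection via the crossing-number rule
# on a skeletonized fingerprint (1 = ridge pixel, 0 = background).
import numpy as np

def crossing_number_minutiae(skeleton):
    # 8-neighbourhood offsets in circular order around the centre pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    endings, bifurcations = [], []
    rows, cols = skeleton.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if skeleton[r, c] != 1:
                continue
            nb = [int(skeleton[r + dr, c + dc]) for dr, dc in offsets]
            # Crossing number = half the number of 0/1 transitions around the pixel.
            cn = sum(abs(nb[i] - nb[(i + 1) % 8]) for i in range(8)) // 2
            if cn == 1:
                endings.append((r, c))        # ridge ending
            elif cn == 3:
                bifurcations.append((r, c))   # ridge bifurcation
    return endings, bifurcations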
Recently there has been a similar approach to the minutiae
extraction problem using a pre-trained Convolutional Neural
Network in a forensic setting [23]. However, the CNN
in [23] is used as a pre-processing step to find large regions
containing a minutia point. Then, logistic regression and
region pooling are used to extract the actual minutia position.
In our approach, the minutia extraction problem is redefined
as a binary segmentation task, which the CNN solves
directly. With our method, there is no need for any time-consuming
pre- or post-processing. Additionally, synthetic
fingerprint generators are used to train the network from
scratch and make it suitable for the minutiae extraction
problem.
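For concreteness, a minimal sketch of this segmentation formulation is given below. The tiny U-shaped architecture and the target encoding (a binary map with ones at minutiae locations) are illustrative assumptions of this sketch; the actual network is described in Section III.

# Illustrative sketch of minutiae extraction as binary segmentation with a
# small U-shaped network (not the exact architecture of Section III).
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in typical U-shaped encoder/decoder stages.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    # Maps a grayscale fingerprint patch to a one-channel minutiae probability map.
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.dec1 = conv_block(32 + 16, 16)   # skip connection from enc1
        self.head = nn.Conv2d(16, 1, 1)       # per-pixel logits

    def forward(self, x):
        e1 = self.enc1(x)                     # full-resolution features
        e2 = self.enc2(self.pool(e1))         # downsampled features
        up = F.interpolate(e2, scale_factor=2, mode="bilinear", align_corners=False)
        return self.head(self.dec1(torch.cat([up, e1], dim=1)))

# The training target is a binary map with ones at (dilated) minutiae positions.
model = TinyUNet()
patch = torch.randn(1, 1, 128, 128)           # grayscale fingerprint patch
target = torch.zeros(1, 1, 128, 128)          # dummy minutiae map for illustration
loss = F.binary_cross_entropy_with_logits(model(patch), target)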