Page 151, Proceedings of the OAGM&ARW Joint Workshop: Vision, Automation and Robotics
Fig. 11. Comparison of the output of the same network trained on (a) synthetic data only, (b) augmented data, and (c) augmented + GAN-refined data.
Fig. 12. A 0% match between two impressions of the same finger, taken from the FVC2002 database [16], with minutiae extracted using the FCNN algorithm.
VI. CONCLUSION
In this work we have shown that fingerprint minutiae
extraction can be reformulated as a binary segmentation
task and addressed with deep learning. Even when trained
on synthetic data as a substitute for annotated real data,
the algorithm detects reasonable minutiae and outperforms
MINDTCT on the FVC2000 dataset without any parameter
fine-tuning. Additionally, the performance gain from our
refinement approach was clearly illustrated, and advances in
training GANs are likely to improve this minutiae extraction
algorithm further. A first step in this direction is the use of
the Hessian of the image, rather than the image itself, for
regularization. At the same time, this performance gain
illustrates how strongly the results depend on good training
data.
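To make the binary segmentation formulation concrete, ground-truth minutiae can be rasterized into a per-pixel target mask for training. The following is a minimal sketch only: it assumes minutiae are given as (row, col) pixel coordinates and that a small disk around each point is labeled positive; the exact target encoding used in the paper may differ.

```python
import numpy as np

def minutiae_to_mask(minutiae, shape, radius=3):
    """Rasterize minutia coordinates into a binary segmentation target.

    minutiae : iterable of (row, col) pixel coordinates (assumed format)
    shape    : (H, W) of the fingerprint image
    radius   : radius of the positive disk painted around each minutia
    """
    mask = np.zeros(shape, dtype=np.uint8)
    rows, cols = np.ogrid[:shape[0], :shape[1]]
    for r, c in minutiae:
        # Mark all pixels within `radius` of the minutia as positive.
        mask[(rows - r) ** 2 + (cols - c) ** 2 <= radius ** 2] = 1
    return mask

# Example: two minutiae on a 64x64 image.
target = minutiae_to_mask([(10, 12), (40, 40)], shape=(64, 64))
print(target.sum())
```

A network trained against such masks predicts a per-pixel minutia probability, from which point locations can be recovered by thresholding and connected-component analysis.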
Currently, the angles of the minutiae points are calculated
using an orientation field. In a future network, we want to
learn the orientation of the minutiae directly, using the
orientation field of the ground-truth ridge pattern as
supervision. We believe that better-than-state-of-the-art
performance can be reached with deep learning given
sufficiently diverse training data.
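Orientation fields of the kind mentioned above are commonly estimated with the gradient-based least-squares method of Hong et al. [10]. The sketch below shows a blockwise version of that standard technique; the block size is an illustrative assumption, not necessarily the setting used in the paper.

```python
import numpy as np

def orientation_field(img, block=16):
    """Blockwise least-squares ridge orientation (standard gradient method).

    Returns one angle (radians) per block; the ridge orientation is
    perpendicular to the dominant gradient direction.
    """
    gy, gx = np.gradient(img.astype(float))  # gradients along rows, cols
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            sl = np.s_[i * block:(i + 1) * block, j * block:(j + 1) * block]
            gxx = np.sum(gx[sl] ** 2)
            gyy = np.sum(gy[sl] ** 2)
            gxy = np.sum(gx[sl] * gy[sl])
            # Average in the doubled-angle domain, then rotate by 90 degrees
            # to go from gradient direction to ridge direction.
            theta[i, j] = 0.5 * np.arctan2(2 * gxy, gxx - gyy) + np.pi / 2
    return theta
```

Given such a field, a minutia's angle can be read off from the block containing it, resolving the 180-degree ambiguity from the local ridge ending or bifurcation geometry.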
ACKNOWLEDGMENT
We thank Peter Wild and Thomas Pock for their helpful
insights.
REFERENCES
[1] “UareU Database,” http://www.neurotechnology.com/download.html,
2007, [Online; accessed 01-March-2017].
[2] A. H. Ansari, “Generation and storage of large synthetic fingerprint
database,” Ph.D. dissertation, Indian Institute of Science Bangalore,
2011.
[3] M. Arjovsky and L. Bottou, “Towards principled methods for training
generative adversarial networks,” in NIPS 2016 Workshop on Adver-
sarial Training, 2017 (in review for ICLR).
[4] K. Cao, E. Liu, and A. K. Jain, “Segmentation and enhancement of
latent fingerprints: A coarse to fine ridge structure dictionary,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 36,
no. 9, pp. 1847–1859, 2014.
[5] R. Cappelli, D. Maio, and D. Maltoni, “Sfinge: an approach to syn-
thetic fingerprint generation,” in International Workshop on Biometric
Technologies (BT2004), 2004, pp. 147–154.
[6] F. Chollet, “Keras,” https://github.com/fchollet/keras, 2015.
[7] M. Drozdzal, E. Vorontsov, G. Chartrand, S. Kadoury, and C. Pal, “The
importance of skip connections in biomedical image segmentation,”
in International Workshop on Large-Scale Annotation of Biomedical
Data and Expert Label Synthesis. Springer, 2016, pp. 179–187.
[8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley,
S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in
Advances in neural information processing systems, 2014, pp. 2672–
2680.
[9] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for
image recognition,” arXiv preprint arXiv:1512.03385, 2015.
[10] L. Hong, Y. Wan, and A. Jain, “Fingerprint image enhancement:
algorithm and performance evaluation,” IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 20, no. 8, pp. 777–789, 1998.
[11] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep
network training by reducing internal covariate shift,” arXiv preprint
arXiv:1502.03167, 2015.
[12] S. Jégou, M. Drozdzal, D. Vazquez, A. Romero, and Y. Bengio,
“The one hundred layers tiramisu: Fully convolutional densenets for
semantic segmentation,” arXiv preprint arXiv:1611.09326, 2016.
[13] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,”
arXiv preprint arXiv:1412.6980, 2014.
[14] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks
for semantic segmentation,” in Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, 2015, pp. 3431–3440.
[15] D. Maio, D. Maltoni, R. Cappelli, J. L. Wayman, and A. K. Jain,
“FVC2000: Fingerprint verification competition,” IEEE Transactions on
Pattern Analysis and Machine Intelligence, vol. 24, no. 3, pp. 402–412,
2002.
[16] ——, “FVC2002: Second fingerprint verification competition,” in 16th
International Conference on Pattern Recognition. Proceedings., vol. 3.
IEEE, 2002, pp. 811–814.
[17] D. Maltoni, D. Maio, A. Jain, and S. Prabhakar, Handbook of
fingerprint recognition. Springer Science & Business Media, 2009.
[18] A. A. Moenssens, Fingerprint techniques. Chilton Book Company
London, 1971.
[19] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional
networks for biomedical image segmentation,” in International Con-
ference on Medical Image Computing and Computer-Assisted Inter-
vention. Springer, 2015, pp. 234–241.
[20] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and
X. Chen, “Improved techniques for training gans,” in Advances in
Neural Information Processing Systems, 2016, pp. 2226–2234.
[21] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and
R. Webb, “Learning from simulated and unsupervised images through
adversarial training,” arXiv preprint arXiv:1612.07828, 2016.
[22] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethink-
ing the inception architecture for computer vision,” in Proceedings of
the IEEE Conference on Computer Vision and Pattern Recognition,
2016, pp. 2818–2826.
[23] Y. Tang, F. Gao, and J. Feng, “Latent fingerprint minutia extraction
using fully convolutional network,” arXiv preprint arXiv:1609.09850,
2016.
[24] Neurotechnology, “VeriFinger SDK,” 2010.
[25] C. I. Watson, M. D. Garris, E. Tabassi, C. L. Wilson, R. M. Mccabe,
S. Janet, and K. Ko, “User’s guide to NIST biometric image software
(NBIS),” 2007.
[26] C. I. Watson and C. Wilson, “NIST Special Database 4,” Fingerprint
Database, National Institute of Standards and Technology, vol. 17,
p. 77, 1992.
[27] S. Zagoruyko and N. Komodakis, “Wide residual networks,”
CoRR, vol. abs/1605.07146, 2016. [Online]. Available:
http://arxiv.org/abs/1605.07146