Figure 2. Wax and silicone artefacts (a) and image as captured by the PLUSVein finger vein scanner [7] using different enhancements for the vein pattern: no enhancement (b), tracing with black marker (c), local contrast enhancement (CLAHE) (d).
the real-life genuine samples and printed to paper.
Multiple printers and print configurations have been
tested to find an appropriate solution in regard to the
absorption of NIR light. In the end, using a ‘HP
LaserJet 500 colour M551’ laser printer in grey-scale
printing mode yielded satisfactory results. Some ex-
amples of the hand vein PA artefact generation and
recapturing are shown in Figure 1.
3.2. Finger Vein Spoofing Artefacts
For the light transmission based finger vein mod-
ality, the establishment of working PA artefacts is
less trivial than in the reflected light case seen for
hand veins. Following an idea exhibited in a recent Chaos Computer Club video (a sliced wax artefact) and a silicone model as proposed in [18], we finally came up with two different types of arte-
facts, as shown in Figure 2. These artefacts are de-
rived from samples contained in the publicly avail-
able PLUSVein-FV3 finger vein data set [6]. These
two materials exhibited the best properties in regard
to appropriate illumination in the light transmission
case among several other considered materials.
For both types of artefacts, wax and silicone, the
first step in creating the artefacts is to obtain a mould
with a finger-like shape. We use a 3D-printer to cre-
ate the moulds, consisting of two parts: base and
top. Afterwards the vein pattern is printed using a
‘HP LaserJet 500 colour M551’ laser printer in grey-scale printing mode (similar to the hand vein artefacts).
The paper sheet containing the vein pattern is placed
between the bottom and top finger artefact parts, as
shown in Figure 2. The same finger artefact could be used for all spoofs by simply substituting the piece of paper containing the vein pattern.
In order to improve the visibility of the vein pat-
tern, different techniques are employed: no enhance-
ment, enhancing the image (CLAHE and Gauss fil-
tering) as well as tracing the veins with a black per-
manent marker. Furthermore, various types of pa-
per are tested. Tracing the vein pattern yields the most visually pleasing results. In total, 42 finger
artefacts (2 materials, 7 types of paper, 3 vein pat-
tern enhancements) are generated for 3 fingers of an
exemplary user. Figure 2 illustrates the created arte-
facts and images recaptured with the sensor.
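As an illustration of the digital enhancement step, a minimal sketch (assuming OpenCV in Python; the clip limit, tile grid and kernel size are illustrative choices, not the values used in our experiments) could look as follows:

```python
import cv2

def enhance_vein_pattern(image_path):
    """Local contrast enhancement of a vein pattern prior to printing:
    CLAHE followed by a mild Gaussian blur. Parameter values are
    illustrative assumptions, not the ones used in the experiments."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(img)
    return cv2.GaussianBlur(enhanced, (5, 5), 0)
```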
4. Presentation Attack Detection
The PAD system applied in this work uses nat-
ural scene statistics as described in [13] and is based
on the framework presented in [2], which was ad-
apted to presentation attack detection in [22]. In
brief, the features used for detection are the para-
meters of (asymmetrical) generalised Gaussian dis-
tributions, (A)GGD, fit to statistics of characteristics
derived from samples & artefacts using a multi-scale
approach.
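As a rough illustration of the feature extraction idea, the following moment-matching estimator for a zero-mean generalised Gaussian distribution (the symmetric special case; function name and grid resolution are our own assumptions, not taken from [13]) sketches how shape and scale parameters can be obtained from a set of coefficients:

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def fit_ggd(coeffs):
    """Moment-matching fit of a zero-mean generalised Gaussian:
    returns the estimated shape and scale parameters."""
    coeffs = np.asarray(coeffs, dtype=np.float64).ravel()
    sigma_sq = np.mean(coeffs ** 2)
    e_abs = np.mean(np.abs(coeffs))
    rho = sigma_sq / (e_abs ** 2 + 1e-12)
    # invert r(g) = Gamma(1/g) * Gamma(3/g) / Gamma(2/g)^2 on a grid
    gam = np.arange(0.2, 10.0, 0.001)
    r_gam = gamma_fn(1.0 / gam) * gamma_fn(3.0 / gam) / gamma_fn(2.0 / gam) ** 2
    shape = gam[np.argmin(np.abs(r_gam - rho))]
    return shape, np.sqrt(sigma_sq)
```

The asymmetric (AGGD) case proceeds analogously, with separate scale estimates for the negative and positive coefficients.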
The features are fed into a support vector machine (SVM) for two-class classification, ‘genuine’ vs. ‘spoofed’, using a radial basis function kernel. First of
all, the available genuine and spoofed data is ran-
domly separated on a user basis into two equally
sized training and test sets.
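A minimal sketch of this stage, assuming scikit-learn and placeholder names (X holds the (A)GGD feature vectors, y the genuine/spoofed labels, users the subject identifiers), could be:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit
from sklearn.svm import SVC

def user_disjoint_split_and_train(X, y, users, seed=0):
    """Split the data 50/50 on a user basis (no subject appears in both
    halves) and train an RBF-kernel SVM on the training half. Names and
    the fixed seed are illustrative assumptions."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=seed)
    train_idx, test_idx = next(splitter.split(X, y, groups=users))
    clf = SVC(kernel='rbf')  # two-class: genuine vs. spoofed
    clf.fit(X[train_idx], y[train_idx])
    return clf, train_idx, test_idx
```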
For training, in order to cleanly separate training
and evaluation data, learning is done using a ‘leave
one label out’ cross-fold technique. All images of
a user’s hand are defined as having the same label,
i.e. the right and left hand have different labels for
each user. Furthermore, the perspective (dorsal or palmar) is also split into different labels. To evaluate on the whole training dataset, each label is left out in turn, the SVM is trained on the remaining training data, and the left-out label is evaluated. The
final training evaluation data is the union of the in-
dividually evaluated labels. The parameters are op-
timised for the overall training database, where the
search is done non-exhaustively on a grid with logarithmic drill-down, which corresponds to closed-set learning.
The spoofing detection accuracy serves as learning
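A sketch of the ‘leave one label out’ training loop with a logarithmic parameter grid, again assuming scikit-learn (the grid bounds are illustrative; the actual search refines the grid non-exhaustively rather than evaluating it in one pass), might look like this:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, LeaveOneGroupOut
from sklearn.svm import SVC

def train_leave_one_label_out(X_train, y_train, labels):
    """Each label groups all images of one hand/perspective of a user;
    LeaveOneGroupOut leaves each label out in turn. The C/gamma grid is
    an illustrative logarithmic grid, not the values used in the paper."""
    grid = {'C': np.logspace(-2, 3, 6), 'gamma': np.logspace(-4, 1, 6)}
    search = GridSearchCV(SVC(kernel='rbf'), grid, cv=LeaveOneGroupOut())
    search.fit(X_train, y_train, groups=labels)
    return search.best_estimator_, search.best_params_
```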