Joint Austrian Computer Vision and Robotics Workshop 2020
Page 68
function for the parameter optimisation. The trained SVM is then applied to the previously unseen test data and yields an output class and a confidence value, which represents the difference between the class probabilities.

5. Experimental Evaluation

This section describes the experimental set-up for the evaluation of the hand-vein (HV) and finger-vein (FV) spoofing artefacts as well as the spoofing artefacts' quality and the PAD performance.

5.1. Experimental Set-Up

The software used to process the finger- and hand-vein data is the OpenVein Toolkit [9]. The ROI extraction was done manually, and the visibility of the vein pattern is improved by applying different post-processing techniques from the toolkit. The vascular patterns are extracted using Maximum Curvature (MC) [14] and the comparison of the resulting binary feature vectors is performed using a correlation-based approach [14].

As defined in ISO/IEC 19795-1 [3], the EER, FMR1000 and ZeroFMR are used to quantify the verification performance, where all samples are compared against each other (full comparison). The experiments are performed separately for fingers/hands, orientations (dorsal/palmar) and illumination types where applicable.

The PAD approach is evaluated using the metrics defined in the ISO/IEC 30107-3 [5] standard: the detection equal error rate (D-EER), i.e. the operating point where APCER = BPCER; the attack presentation classification error rate (APCER, the equivalent of the FAR), which is the proportion of attack presentations using the same spoofing artefact species that are incorrectly classified as bona fide (true) presentations in a specific scenario; the bona fide presentation classification error rate (BPCER, the equivalent of the FRR), representing the proportion of bona fide presentations incorrectly classified as presentation attacks in a specific scenario; and a corresponding Detection Error Trade-off (DET) curve.
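To make these evaluation metrics concrete, the following is a minimal sketch of how they can be computed from raw comparison scores. The score arrays, the convention that higher scores mean "genuine" (or "bona fide" for PAD), and the function names are illustrative assumptions, not the paper's or the OpenVein Toolkit's code.

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    # FAR/FMR: fraction of impostor scores accepted at this threshold;
    # FRR/FNMR: fraction of genuine scores rejected.
    return np.mean(impostor >= threshold), np.mean(genuine < threshold)

def eer(genuine, impostor):
    # Equal error rate: operating point where FAR and FRR are closest.
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    rates = (far_frr(genuine, impostor, t) for t in thresholds)
    return min((abs(far - frr), (far + frr) / 2) for far, frr in rates)[1]

def frr_at_fmr(genuine, impostor, fmr_target=0.001):
    # FMR1000: lowest FNMR reachable while FMR <= 0.1%;
    # ZeroFMR: the same with fmr_target = 0.
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    return min(frr for far, frr in
               (far_frr(genuine, impostor, t) for t in thresholds)
               if far <= fmr_target)

def apcer_bpcer(attack, bona_fide, threshold):
    # ISO/IEC 30107-3 PAD metrics, with the SVM confidence as score:
    apcer = np.mean(attack >= threshold)     # attacks accepted as bona fide
    bpcer = np.mean(bona_fide < threshold)   # bona fide flagged as attacks
    return apcer, bpcer
```

Under this convention, the D-EER corresponds to `eer(bona_fide, attack)`, and sweeping the threshold over all observed scores while plotting APCER against BPCER yields the DET curve.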
5.2. Results: Quality of Spoofing Artefacts

In order to assess the PAD performance, it is essential to evaluate the quality of the spoofed artefacts first. This is done by comparing the recaptured images of the spoofed artefacts against bona fide images. The main goal in creating the spoofed artefacts is to have as little impact as possible on the matching performance. If that is the case, the quality of the spoofed artefacts can be considered satisfactory.

[Figure 3: HV verification results obtained when comparing bona fide samples only (baseline) and with presentation attacks (spoofed) for dorsal (left) and palmar (right) view.]

                 EER    FMR1000   ZeroFMR
  Baseline
    Dorsal 850   3.01     3.00      4.00
    Dorsal 950   4.99     6.00      6.00
    Palmar 850  16.99    30.00     32.00
    Palmar 950  18.16    32.00     33.00
  Spoofed
    Dorsal 850  10.80    94.80     98.00
    Dorsal 950  11.20    15.60     16.40
    Palmar 850  20.82   100.00    100.00
    Palmar 950  23.22    38.00     41.20

Table 1: Performance values (in %) obtained when verifying bona fide samples only (baseline) compared to verifying bona fide samples against PAs (spoofed) for reflected-light HV recognition; 850/950 denote the illumination wavelength in nm.

The results for the HV artefacts (reflected light) are shown in Figure 3 and the corresponding performance values are reported in Table 1. In general, we notice a matching performance degradation with spoofing artefacts; however, the resulting EER degradation is still acceptable. It can be observed that the quality of the 950 nm artefacts (dorsal and palmar) is consistent for all spoofed patterns, since the FMR1000 and ZeroFMR remain quite stable in this case. For the 850 nm spoofs, on the other hand, a large degradation in the FMR1000 and ZeroFMR can be observed, which indicates that some of the created artefacts did not have sufficient quality. Furthermore, the baseline performance is much lower for the palmar view than for the dorsal one (16.99% vs. 3.01% EER at 850 nm), while the relative EER degradation caused by the spoofed artefacts remains stable, ranging approximately between 4% and 7% for all modalities.

Table 2 illustrates the comparison scores (genuine and impostor) of the created FV spoofing artefacts compared to the baseline, where only bona fide
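As background for how such genuine and impostor comparison scores are produced, here is a hedged sketch of a correlation-based comparison of binary vein feature images in the spirit of the approach used with the MC features (Section 5.1) [14]; it is an illustrative reimplementation with an assumed shift range, not the OpenVein Toolkit code.

```python
import numpy as np

def correlation_match(enrolled, probe, cw=30, ch=10):
    # Compare two equally sized binary vein images (values 0/1) by sliding
    # a centre crop of the enrolled template over the probe, allowing
    # shifts of up to cw pixels horizontally and ch vertically
    # (cw, ch are assumed parameter values).
    h, w = enrolled.shape
    template = enrolled[ch:h - ch, cw:w - cw]
    best = 0.0
    for dy in range(2 * ch + 1):
        for dx in range(2 * cw + 1):
            window = probe[dy:dy + h - 2 * ch, dx:dx + w - 2 * cw]
            overlap = np.sum(window & template)        # shared vein pixels
            total = np.sum(window) + np.sum(template)  # all vein pixels
            best = max(best, overlap / total if total else 0.0)
    return best  # in [0, 0.5]; higher means more similar patterns
```

Scores computed this way for mated (genuine) and non-mated (impostor) pairs give the kind of score distributions that Tables 1 and 2 summarise.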
Title: Joint Austrian Computer Vision and Robotics Workshop 2020
Editor: Graz University of Technology
Location: Graz
Date: 2020
Language: English
License: CC BY 4.0
ISBN: 978-3-85125-752-6
Size: 21.0 x 29.7 cm
Pages: 188
Categories: Informatik (Computer Science), Technik (Engineering)