Joint Austrian Computer Vision and Robotics Workshop 2020
Page - 170 -
• False Positive (FP): A false positive test result for an image from the BUILDING PHOTO category is one that detects at least τ matching keypoint pairs.

Based on these definitions, we are able to compute precision, recall, and F1-score. Recall that the aim of the attacker is to disable the techniques of the forensic analyst. Thus, an attacker who counters SIFT keypoint based forensic techniques by removing keypoints aims for low TP (and low TN), as high FN makes the forensic analyst miss forged images, and high FP confuses the analyst, since many genuine images are then determined to be forgeries.

First, we computed SIFT keypoints; then, for each keypoint, we found its two nearest neighbours among all remaining keypoints using a k-d tree, with Euclidean distances d1 and d2 (where d1 corresponds to the closest neighbour) and a threshold T ∈ (0, 1). [13] and [11] suggested that there is a match only if d1/d2 < T holds. In these papers T = 0.6, but we looked into results for T = 0.4, T = 0.5, T = 0.6, and T = 0.7.

Figure 5: Forged Images. Figure 6: Original Images.

Fig. 7 shows confusion matrices (i.e. the numbers of TP, FN, TN, FP) for 50 keypoints and τ = 1, for four different values of T, comparing copy move forgery detection without manipulating the images and with the keypoint removal techniques LS, CA, and GS+LS applied. The patch size is set to 9x9 pixels in all patch-based techniques.

Overall, we observe that all three SIFT keypoint removal strategies work, i.e. they significantly reduce the number of TP. However, they also increase the number of TN; thus, the number of false positives is reduced as well (which is not desired).

Figure 7: Copy Move Forgery Detection. (a) SIFT Matching, (b) LS, (c) CA, (d) GS+LS.

When we compare the three removal strategies, GS+LS clearly has a higher number of TP and is thus the least efficient; it does not need to be considered further in this comparison. LS and CA are close, with slight advantages for LS, although this is difficult to confirm in this visual representation.
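The nearest-neighbour ratio test described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses brute-force distance computation instead of a k-d tree, and the function name and toy descriptors are ours.

```python
import numpy as np

def ratio_test_matches(desc, T=0.6):
    """For each descriptor, find the two nearest neighbours among the
    remaining descriptors (brute force here; the paper uses a k-d tree)
    and accept a match only if d1/d2 < T, where d1 <= d2."""
    matches = []
    for i, d in enumerate(desc):
        # Euclidean distance from descriptor i to every other descriptor
        dist = np.linalg.norm(desc - d, axis=1)
        dist[i] = np.inf                # exclude the keypoint itself
        j1, j2 = np.argsort(dist)[:2]   # indices of the two nearest neighbours
        d1, d2 = dist[j1], dist[j2]
        if d2 > 0 and d1 / d2 < T:      # the ratio test of [13] and [11]
            matches.append((i, j1))
    return matches

# Toy 2-D "descriptors": the first two are near-duplicates, as would
# occur for a copied-and-pasted region in a copy move forgery.
desc = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [9.0, 1.0]])
print(ratio_test_matches(desc, T=0.6))
```

With T = 0.6 only the near-duplicate pair passes the test in both directions; the isolated descriptors produce distance ratios close to 1 and are rejected, which is exactly why a lower T yields fewer, more distinctive matches.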
When looking into recall and precision values for τ = 1, 2, 3 and T = 0.4, 0.5, 0.6, 0.7 using 50, 100, and 200 keypoints (36 configurations overall), we find precision(LS) < precision(CA) in 33/36 cases, while recall(LS) < recall(CA) in 20/36 cases. Therefore, overall, LS is clearly more effective than CA in preventing the detection of a copy move forgery. In terms of F1-score, F1(LS) ≤ F1(CA) in 27/36 cases, which confirms the trend.

Table 3 shows precision, recall and F1-scores of the confusion matrices shown in Fig. 7. The cases in which LS delivers the best (lowest) results are underlined; we notice that these are also the clear majority within these result subsets.

              CA                 LS                GS+LS
τ   T    Prec. Rec.  F1    Prec. Rec.  F1    Prec. Rec.  F1
1   0.4  0.81  0.42  0.55  0.69  0.36  0.47  0.86  0.60  0.71
1   0.5  0.76  0.44  0.56  0.68  0.42  0.52  0.81  0.62  0.70
1   0.6  0.72  0.46  0.56  0.71  0.50  0.57  0.84  0.74  0.79
1   0.7  0.69  0.58  0.63  0.65  0.56  0.60  0.84  0.86  0.85

Table 3: Comparison of keypoint removal techniques in terms of precision, recall, and F1-score.

4. Conclusion

Local smoothing (LS), as proposed in this paper, turns out to be more effective in preventing a detection of a copy move attack as compared to the col-
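The precision, recall, and F1 values compared throughout this section follow directly from the confusion-matrix counts. A minimal sketch (the function name and the example counts are illustrative, not taken from the paper's experiments):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1-score from confusion-matrix counts.

    precision = TP / (TP + FP)
    recall    = TP / (TP + FN)
    F1        = harmonic mean of precision and recall
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative counts: a keypoint removal attack lowers TP (forgeries
# missed, FN grows), which drags down recall and hence the F1-score.
print(prf1(50, 50, 50))   # balanced case: all three metrics equal
```

This also makes the attacker's objective in the text concrete: driving TP down (and FN up) lowers recall, so a lower F1-score for a removal technique means the technique is better at hiding the forgery.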
Title: Joint Austrian Computer Vision and Robotics Workshop 2020
Editor: Graz University of Technology
Location: Graz
Date: 2020
Language: English
License: CC BY 4.0
ISBN: 978-3-85125-752-6
Size: 21.0 x 29.7 cm
Pages: 188
Categories: Informatik (Computer Science), Technik (Technology)