Page 36, Proceedings of the OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
tattooed skin regions that can be used in an advanced de-identification pipeline to obfuscate or remove
tattoos. We train a convolutional neural network that acts as a patch classifier, labeling each patch of
an input image as either belonging to a tattoo or not.
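The patch-based formulation can be sketched as follows. This is a hypothetical illustration only: the patch size, stride, and the stand-in classifier are assumptions, not the paper's actual configuration, and the dummy threshold classifier merely stands in for the trained CNN.

```python
import numpy as np

def extract_patches(image, patch=32, stride=32):
    """Slice an H x W image into non-overlapping patches (hypothetical sizes)."""
    h, w = image.shape[:2]
    patches, coords = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(image[y:y + patch, x:x + patch])
            coords.append((y, x))
    return np.stack(patches), coords

def label_patches(patches, classifier):
    """Apply a per-patch classifier; 1 = 'tattoo', 0 = background."""
    return np.array([classifier(p) for p in patches])

# Stand-in classifier: calls a patch 'tattoo' if its mean intensity is low.
# In the paper, this role is played by the trained CNN.
dummy_cnn = lambda p: int(p.mean() < 64)

img = np.zeros((64, 64), dtype=np.uint8)
img[:32, :32] = 255                      # one bright quadrant
patches, coords = extract_patches(img)
print(label_patches(patches, dummy_cnn))  # → [0 1 1 1]
```

The per-patch labels can then be reassembled via `coords` into a coarse tattoo-location map over the input image.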
2. Related work
Current research in de-identification is mainly concerned with de-identifying hard biometric features,
especially the face [9]. Considerably less research is devoted to soft and non-biometric features [20].
Tattoo detection is typically studied not in the context of de-identification, but in forensic
applications. There the goal is to build a content-based image retrieval system for tattoos that would
help law enforcement in finding suspects and other persons of interest, e.g. persons associated with a
particular gang etc. [6, 12, 10]. For instance, Jain et al. [12] propose a content-based image retrieval
system intended to be used by law enforcement agencies. The query image is a cropped tattoo, which
is then segmented, represented using color, shape and texture features, and matched to the database.
Han and Jain [10] take the concept further by proposing a content-based image retrieval system for
sketch-to-image matching, where a sketch of the tattoo is matched to real tattoo images. Their system
uses SIFT descriptors to model shape and appearance patterns in both the sketch and the image, and
matches the descriptors using a local feature-based sparse representation classification scheme. Kim
et al. [13] propose combining local shape context, SIFT descriptors and global tattoo shape for tattoo
image retrieval. Their descriptor is robust to partial shape distortions and invariant to translation, scale
and rotation.
The methods used in content-based image retrieval systems often assume that tattoo images are
cropped, which limits their potential use in other scenarios. Heflin et al. [11] consider detecting
scars, marks and tattoos “in the wild”, i.e. in uncropped images, where a tattoo can appear anywhere
in the image (or not appear at all) and be of arbitrary size. They propose a method for tattoo detection
where tattoo candidate regions are detected using graph-based visual saliency. Further processing of
the candidate regions utilizes the GrabCut algorithm [21], image filtering and the quasi-connected
components technique [4] to obtain the final estimate of the tattoo location.
Wilber et al. [25] propose a mid-level image representation called Exemplar Codes and apply it to
the problem of tattoo classification. Exemplar codes are feature vectors that consist of normalized
outputs of simple linear classifiers. Each linear classifier measures the similarity between the input
image and an exemplar, i.e. a training image that best captures some property of the tattoo. Decision
score outputs from individual linear classifiers are used to estimate probabilities using extreme value
theory [23], thus forming exemplar code feature vectors. A random forest classifier is trained on
exemplar codes, enabling multi-class tattoo recognition.
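Schematically, an exemplar code might be formed as below. This is a simplified sketch under stated assumptions: each "linear classifier" is reduced to a dot product with its exemplar, and the extreme-value-theory calibration of [23] is replaced by a plain logistic squashing purely for illustration.

```python
import numpy as np

def exemplar_code(x, exemplars):
    """Form an exemplar-code feature vector for input x.

    Each row of `exemplars` defines a simple linear classifier (here just a
    dot product with the exemplar). Raw decision scores are squashed into
    (0, 1); the original method instead calibrates scores to probabilities
    via extreme value theory.
    """
    scores = exemplars @ x                 # one decision score per classifier
    return 1.0 / (1.0 + np.exp(-scores))   # stand-in for EVT normalization

rng = np.random.default_rng(0)
exemplars = rng.standard_normal((5, 8))    # 5 exemplars over 8-dim features
x = rng.standard_normal(8)
code = exemplar_code(x, exemplars)
print(code.shape)  # → (5,)
```

The resulting vector `code` (one calibrated score per exemplar) would then serve as the feature representation fed to the random forest classifier.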
Because of the great variability of tattoo designs, individual skin colors and lighting conditions in
real-world tattoo images, as well as the fact that tattoos resemble many different real-world objects,
it is very difficult to devise good hand-crafted features suited for differentiating between tattoos and
background [19]. Recently, however, convolutional neural networks (CNNs) have been shown to
be able to automatically learn good features for many classification tasks [15]. We therefore propose
to apply a deep convolutional neural network to the difficult problem of tattoo detection. In seminal
work by Krizhevsky et al. [14], convolutional neural networks were proven to be extremely success-
ful on the ImageNet dataset. According to LeCun et al. [15], this success can be attributed to several
factors: efficient use of GPUs for network training, use of rectified linear units, use of dropout reg-
ularization and augmenting the training set with deformations of the existing images. Convolutional
Proceedings of the OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Editors: Peter M. Roth, Kurt Niel
Publisher: Verlag der Technischen Universität Graz, Wels, 2017
Language: English
License: CC BY 4.0
ISBN: 978-3-85125-527-0
Pages: 248