Proceedings of the OAGM&ARW Joint Workshop - Vision, Automation and Robotics
Page 163
Fig. 3. Visual comparison of inpainting results. The first row shows a snippet including a hole at the border of image Flower. The second and third rows show snippets including holes caused by depth discontinuities for images Crowd and Edge, respectively. Best viewed in color.

In particular, parts of the background area have been erroneously labeled as foreground and are thus not taken into account in the patch matching step according to the predefined depth constraints. Consequently, artifacts are present in the inpainted region; these could, however, be avoided by adjusting the depth-based outlier removal.

Another interesting finding is the approximately uniform distribution of PC scores among the investigated inpainting methods for the image Bird (an illustrative sketch of one possible PC scoring follows after the conclusion). The observers declared that they found it hard to detect any differences, which might be due to the fact that Bird exhibits the smallest number of disoccluded pixels (see Table I). Additionally, these disoccluded pixels are located in primarily low-textured areas outside the main focus of the observers' attention. Similarly, the better result of the relatively straightforward inpainting method HBR (54.31% on average) compared to PM (34.51% on average) and CAF (38.43% on average) may lie in the fact that, in our test images, the inconsistencies caused by HBR inpainting become noticeable mainly in highly textured background regions near the image margin, whereas observers tend to pay more attention to the central image area covered by the foreground object.

V. CONCLUSION

We have introduced a depth-guided inpainting approach that addresses the filling of disocclusions in novel views. Our method is based on efficient patch matching and produces visually very satisfying results for both disocclusions at image borders and disocclusions along the boundaries of foreground objects. It adapts its patch sizes to the disocclusion sizes. For disocclusions along objects, we additionally incorporate the depth information by focusing on the background scene content for patch selection. A subjective evaluation of the stereoscopically perceived quality of the synthesized novel views showed the effectiveness of the proposed approach. For future work, we plan to extend our technique to disocclusion inpainting of video sources.
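As a rough illustration of the depth constraint summarized in the conclusion, the following Python sketch selects, for one target patch, the best-matching source patch while discarding candidates that contain foreground pixels. This is a minimal sketch under assumed conventions: the function name best_background_patch, the bg_depth_threshold parameter, the SSD cost over known pixels, and the convention that larger depth values mean "closer to the camera" are all illustrative choices, and the brute-force candidate scan merely stands in for the efficient patch matching the authors describe.

import numpy as np

def best_background_patch(image, depth, known_mask, target_yx, candidates,
                          patch=9, bg_depth_threshold=None):
    """Pick the source patch that best matches the known pixels of the target.

    image              H x W x 3 float array (synthesized novel view)
    depth              H x W float array; larger value = closer (assumption)
    known_mask         H x W bool array, True for valid (non-disoccluded) pixels
    target_yx          (y, x) centre of the patch to be filled
    candidates         iterable of (y, x) centres of possible source patches
    bg_depth_threshold reject candidates containing pixels closer than this
    """
    h = patch // 2
    ty, tx = target_yx
    tgt = image[ty - h:ty + h + 1, tx - h:tx + h + 1]
    valid = known_mask[ty - h:ty + h + 1, tx - h:tx + h + 1]

    best, best_cost = None, np.inf
    for cy, cx in candidates:
        src_depth = depth[cy - h:cy + h + 1, cx - h:cx + h + 1]
        # Depth constraint: disocclusions expose background, so skip source
        # patches that contain foreground (too close) pixels.
        if bg_depth_threshold is not None and np.any(src_depth > bg_depth_threshold):
            continue
        src = image[cy - h:cy + h + 1, cx - h:cx + h + 1]
        cost = np.sum(((src - tgt) ** 2)[valid])  # SSD over known pixels only
        if cost < best_cost:
            best, best_cost = (cy, cx), cost
    return best

Adapting the patch size to the disocclusion size, as the conclusion mentions, would simply amount to choosing the patch argument per hole before calling such a routine.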
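The average preference percentages quoted in the evaluation above (e.g. 54.31% for HBR) come from a paired-comparison test whose exact scoring protocol is not spelled out on this page. The snippet below shows one plausible, purely hypothetical reading: a method's score is the fraction of its pairwise presentations in which observers preferred it; the function name preference_percentages and the vote format are assumptions made for illustration only.

from collections import defaultdict

def preference_percentages(votes):
    """votes: iterable of (winner, loser) method-name pairs, one per comparison."""
    shown = defaultdict(int)   # how often each method appeared in a comparison
    won = defaultdict(int)     # how often it was preferred
    for winner, loser in votes:
        shown[winner] += 1
        shown[loser] += 1
        won[winner] += 1
    return {m: 100.0 * won[m] / shown[m] for m in shown}

# Example with three comparisons: HBR is preferred twice, PM once, CAF never.
print(preference_percentages([("HBR", "PM"), ("HBR", "CAF"), ("PM", "CAF")]))
# -> {'HBR': 100.0, 'PM': 50.0, 'CAF': 0.0}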
Title: Proceedings of the OAGM&ARW Joint Workshop
Subtitle: Vision, Automation and Robotics
Authors: Peter M. Roth, Markus Vincze, Wilfried Kubinger, Andreas Müller, Bernhard Blaschitz, Svorad Stolc
Publisher: Verlag der Technischen Universität Graz
Place: Wien
Date: 2017
Language: English
License: CC BY 4.0
ISBN: 978-3-85125-524-9
Dimensions: 21.0 x 29.7 cm
Pages: 188
Keywords: Conference proceedings
Categories: International, Conference proceedings

Table of Contents

  1. Preface v
  2. Workshop Organization vi
  3. Program Committee OAGM vii
  4. Program Committee ARW viii
  5. Awards 2016 ix
  6. Index of Authors x
  7. Keynote Talks
  8. Austrian Robotics Workshop 4
  9. OAGM Workshop 86