Page 59 in Proceedings - OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Text of Page 59

In our last experiment, Fig. 3, we consider an image blurred by camera shake. The test image, Fig. 3 (a), is clipped from a test image used in [13] to demonstrate the non-blind RRRL method. In [13] it was used in conjunction with a PSF manually generated from an impulse response (the image of a street light); we reproduce this experiment from [13] in frame (d) for reference. In frames (b) and (c) we show blind deconvolution results: frame (b) again combines RRRL for image estimation with the linear PSF estimation from [5], whereas (c) also employs the robust PSF estimation from Section 3. It is evident that in (c) the estimated PSF has become sharper, and artifacts in the deblurred image have been reduced, although the reconstruction quality is still not quite on par with the non-blind result (d).

5. Summary and Outlook

In this paper we have shown how a recent blind image deconvolution approach by alternating minimisation of a joint energy functional [5] can be improved by introducing robust methods for PSF and image estimation. For image estimation we used RRRL [13], whereas for PSF estimation a modification of the method from [5] has been used that is, to the best of our knowledge, new. The viability of the approach has been demonstrated on synthetic and real-world blurred images.

A weakness of this combination of methods is that the robust data terms used in the image and PSF estimation differ, compare (6) and (7), and cannot be cast into a joint energy functional. This is a pragmatic decision justified by the efficiency of RRRL and the fact that, as demonstrated in [13], its results in non-blind deconvolution are largely comparable with those of a method in the sense of [1] whose data term is compatible with (7). Notwithstanding, it will be a goal of future work to reformulate the robust model such that PSF and image estimation can be expressed in a unified functional. It is expected that an exact match of data terms will also further reduce artifacts in the blind deconvolution results.

The present paper is restricted to grey-value images; an extension to multi-channel (colour) images will be detailed in a forthcoming publication. Future work might also address strategies for the choice of parameters as well as efficiency improvements of the algorithm. In order to further study the practical applicability of the method, experimental validation using larger sets of images will be important, including quantitative comparisons. Moreover, we have focussed in this work on the ability of robust data terms to cope with imprecise PSF estimation and model violations, but largely ignored their potential in treating strong noise. Experiments on noisy blurred images will deepen insight into this aspect.

References

[1] L. Bar, N. Sochen, and N. Kiryati. Image deblurring in the presence of salt-and-pepper noise. In R. Kimmel, N. Sochen, and J. Weickert, editors, Scale Space and PDE Methods in Computer Vision, volume 3459 of Lecture Notes in Computer Science, pages 107–118. Springer, Berlin, 2005.

[2] T. F. Chan and C. K. Wong. Total variation blind deconvolution. IEEE Transactions on Image Processing, 7:370–375, 1998.

[3] R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman. Removing camera shake from a single photograph. In Proc. SIGGRAPH 2006, pages 787–794, New York, NY, July 2006.
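To give a concrete picture of the alternating structure summarised in Section 5, the sketch below shows a minimal blind deconvolution loop that alternates Richardson-Lucy style image and PSF updates. It is a plain, non-robust illustration of the general alternating-minimisation idea, not the robust RRRL image step or the robust PSF estimation of the paper; all function names and parameters are illustrative, and NumPy/SciPy are assumed to be available.

```python
import numpy as np
from scipy.signal import fftconvolve


def _flip(a):
    """Flip an array in both dimensions (turns convolution into correlation)."""
    return a[::-1, ::-1]


def update_image(image, psf, blurred, eps=1e-8):
    """Richardson-Lucy image step with the PSF held fixed."""
    ratio = blurred / (fftconvolve(image, psf, mode="same") + eps)
    return image * fftconvolve(ratio, _flip(psf), mode="same")


def update_psf(psf, image, blurred, eps=1e-8):
    """Richardson-Lucy PSF step with the image held fixed."""
    ratio = blurred / (fftconvolve(image, psf, mode="same") + eps)
    # Correlate the residual ratio with the current image estimate ...
    full = fftconvolve(ratio, _flip(image), mode="same")
    # ... and crop the result back to the PSF support, centred on zero shift.
    cy, cx = full.shape[0] // 2, full.shape[1] // 2
    hy, hx = psf.shape[0] // 2, psf.shape[1] // 2
    crop = full[cy - hy:cy - hy + psf.shape[0], cx - hx:cx - hx + psf.shape[1]]
    psf = np.clip(psf * crop, 0.0, None)
    return psf / (psf.sum() + eps)  # keep the PSF non-negative and summing to one


def blind_deconvolve(blurred, psf_size=15, outer_iters=30):
    """Alternate image and PSF updates, starting from a flat PSF.

    `blurred` is assumed to be a non-negative grey-value image.
    """
    image = np.maximum(blurred.astype(float), 1e-8)
    psf = np.full((psf_size, psf_size), 1.0 / psf_size ** 2)
    for _ in range(outer_iters):
        image = update_image(image, psf, blurred)
        psf = update_psf(psf, image, blurred)
    return image, psf
```

A call such as `restored, kernel = blind_deconvolve(observed)` would return the deblurred image and the estimated PSF; in the approach discussed above, robust variants of the image and PSF updates take the place of these plain multiplicative steps.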
Proceedings OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Title
Proceedings
Subtitle
OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Authors
Peter M. Roth
Kurt Niel
Publisher
Verlag der Technischen Universität Graz
Location
Wels
Date
2017
Language
English
License
CC BY 4.0
ISBN
978-3-85125-527-0
Size
21.0 x 29.7 cm
Pages
248
Keywords
Conference proceedings (Tagungsband)
Categories
International
Conference proceedings (Tagungsbände)

Table of contents

  1. Learning / Recognition 24
  2. Signal & Image Processing / Filters 43
  3. Geometry / Sensor Fusion 45
  4. Tracking / Detection 85
  5. Vision for Robotics I 95
  6. Vision for Robotics II 127
  7. Poster OAGM & ARW 167
  8. Task Planning 191
  9. Robotic Arm 207