Proceedings - OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Page 74

Text of page 74

Figure 3: Light field (LF) and photometric stereo (PS) depth information. (a) dragon, (b) LF, (c) PS.

3.2. Photometric Stereo Surface Normals

The surface appearance depends on the shape, reflectance, and illumination of the object. In our virtual test setup we arranged 25 light sources on a sphere around the object in order to capture several images from the same viewpoint under different illumination angles. In contrast, in our real-world setup the different illumination angles are achieved by moving the object under the light source. We assume a Lambertian reflection model to describe the radiance. The pixel intensity vector $I_v = [i_1, \ldots, i_{25}]$ for each light source and pixel depends on the illumination vector $L = [L_1, \ldots, L_{25}]$, as well as on the estimated unit surface normal vector $N$ and the surface albedo $\rho$:

$I_v = \rho \cdot L \cdot N$

Thereby, we solve for the albedo and the normal vectors with $\rho \cdot N = L^{-1} \cdot I_v$ (a code sketch of this per-pixel solve is given after the page text below). The depth map is then integrated using the algorithm of Frankot and Chellappa [2].

As shown in Fig. 3c, this photometric stereo approach results in fine depth measurements and a strong relative depth accuracy, while the absolute depth accuracy suffers from an accumulative offset. We use the benefits of both the photometric stereo and the light field depth estimation to achieve an improved depth estimation result.

3.3. Combination

We refine the light field depth map, as shown in Fig. 3b, using high-frequency photometric stereo depth information, as shown in Fig. 3c. Depth from light field yields reliable absolute depth measures, but suffers both from inaccurately estimated details in the structure and from high-frequency noise. Low frequencies in the light field depth map $D_l$ are extracted using a bilateral smoothing filter $f_l$. High-frequency components are taken from the photometric stereo depth map $D_p$, using a high-pass image filter $f_h$. Depth refinement is obtained by replacing the high-frequency information of the light field depth map with the corresponding high frequencies of the photometric stereo depth map. Our final depth map $D$ is thereby constructed as the linear combination of the low-frequency components of the light field depth map and the high-frequency components of the photometric stereo depth map, weighted by the factors $\lambda_l$ and $\lambda_p$, respectively (a sketch of this fusion step also follows below):

$D = \lambda_l \cdot D_l \ast f_l(u, v) + \lambda_p \cdot D_p \ast f_h(u, v)$

Results are shown in Fig. 4, where 4a and 4d hold the depth data from light field images of both the head and the tail of the dragon object. The second column, see Figs. 4b and 4e, shows the photometrically refined depth map obtained with our combinational approach.
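The per-pixel solve in Section 3.2 is a small linear least-squares problem: stacking the 25 intensities of a pixel against a 25 x 3 matrix of illumination directions, $\rho \cdot N$ is recovered with the pseudoinverse of $L$, the albedo as its length and the normal as its direction. The following is a minimal sketch of that step, assuming NumPy and hypothetical inputs `images` (a stack of K grayscale images) and `L_dirs` (the K illumination directions); it is not the authors' implementation, and the subsequent Frankot-Chellappa integration is not shown.

```python
import numpy as np

def photometric_stereo_normals(images, L_dirs):
    """Estimate per-pixel albedo and unit surface normals under a
    Lambertian model, I_v = rho * L * N, via least squares.

    images : (K, H, W) array, one grayscale image per light source
    L_dirs : (K, 3) array of unit illumination direction vectors
    """
    K, H, W = images.shape
    I_v = images.reshape(K, -1)            # (K, H*W) intensity vectors

    # rho * N = pinv(L) @ I_v, solved for every pixel at once
    G = np.linalg.pinv(L_dirs) @ I_v       # (3, H*W)

    rho = np.linalg.norm(G, axis=0)        # albedo = length of rho * N
    N = G / np.maximum(rho, 1e-8)          # unit normals, guard against zero

    return rho.reshape(H, W), N.reshape(3, H, W)
```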
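The combination step of Section 3.3 can be sketched in the same spirit: a low-pass-filtered light field depth map plus a high-pass-filtered photometric stereo depth map, weighted by $\lambda_l$ and $\lambda_p$. The sketch below assumes SciPy and substitutes a Gaussian low-pass for the bilateral smoothing filter $f_l$ named in the text, with $f_h$ built as the complementary residual; the filter choice, sigma, and default weights are illustrative assumptions, not the authors' settings.

```python
from scipy.ndimage import gaussian_filter

def fuse_depth_maps(D_l, D_p, lam_l=1.0, lam_p=1.0, sigma=3.0):
    """Combine a light field depth map D_l (reliable absolute depth,
    noisy detail) with a photometric stereo depth map D_p (fine detail,
    offset absolute depth):

        D = lam_l * (D_l * f_l) + lam_p * (D_p * f_h)

    f_l is approximated here by a Gaussian low-pass, f_h by the
    complementary high-pass (original minus its low-pass).
    """
    low_lf = gaussian_filter(D_l, sigma=sigma)           # low frequencies of D_l
    high_ps = D_p - gaussian_filter(D_p, sigma=sigma)    # high frequencies of D_p
    return lam_l * low_lf + lam_p * high_ps
```

With lam_l = lam_p = 1 this reproduces the linear combination given in the text, up to the substituted filters.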
Proceedings OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Title: Proceedings
Subtitle: OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Authors: Peter M. Roth, Kurt Niel
Publisher: Verlag der Technischen Universität Graz
Place: Wels
Date: 2017
Language: English
License: CC BY 4.0
ISBN: 978-3-85125-527-0
Dimensions: 21.0 x 29.7 cm
Pages: 248
Keywords: Conference proceedings
Categories: International, Conference proceedings

Table of Contents

  1. Learning / Recognition 24
  2. Signal & Image Processing / Filters 43
  3. Geometry / Sensor Fusion 45
  4. Tracking / Detection 85
  5. Vision for Robotics I 95
  6. Vision for Robotics II 127
  7. Poster OAGM & ARW 167
  8. Task Planning 191
  9. Robotic Arm 207