Page 74 in: Proceedings of the OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Figure 3: Light field (LF) and photometric stereo (PS) depth information. (a) dragon, (b) LF, (c) PS.
3.2. Photometric Stereo Surface Normals
The surface appearance depends on the shape, reflectance, and illumination of the object. In our virtual test setup we arranged 25 light sources on a sphere around the object in order to capture several images from the same viewpoint under different illumination angles. In our real-world setup, by contrast, different illumination angles are achieved by moving the object under the light source. We assume a Lambertian reflection model to describe the radiance. The pixel intensity vector I_v = [i_1, ..., i_25], holding one measurement per light source for each pixel, depends on the illumination matrix L = [L_1, ..., L_25], as well as on the estimated unit surface normal N and the surface albedo ρ.
I_v = ρ · L · N
Thereby, we solve for the albedo and normal vectors via ρ · N = L⁺ · I_v, where L⁺ denotes the pseudo-inverse of the (non-square, 25×3) illumination matrix. The depth map is then integrated using the algorithm of Frankot and Chellappa [2].
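The per-pixel solve described above can be sketched in a few lines of NumPy. This is a minimal illustration on synthetic data: the light directions, albedo, and normal below are assumed test values, not values from the paper's setup.

```python
import numpy as np

# Minimal synthetic sketch of the per-pixel photometric stereo solve.
# Light directions, albedo, and normal are assumed test values.
rng = np.random.default_rng(0)
L = rng.normal(size=(25, 3))                 # 25 illumination direction vectors
L /= np.linalg.norm(L, axis=1, keepdims=True)

n_true = np.array([0.0, 0.6, 0.8])           # ground-truth unit surface normal
rho_true = 0.7                               # ground-truth albedo
I_v = rho_true * L @ n_true                  # Lambertian model: I_v = rho * L * N

g = np.linalg.pinv(L) @ I_v                  # least-squares solve: g = rho * N
rho = np.linalg.norm(g)                      # albedo is the magnitude of g
N = g / rho                                  # unit surface normal
```

With noise-free Lambertian intensities the least-squares solution recovers albedo and normal exactly; with real measurements the pseudo-inverse gives the best fit over all 25 observations.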
As shown in Fig. 3c, this photometric stereo approach yields fine depth detail and strong relative depth accuracy, while the absolute depth accuracy suffers from an accumulated offset. We combine the benefits of both the photometric stereo and the light field depth estimation to achieve an improved depth estimation result.
3.3. Combination
We refine the light field depth map, shown in Fig. 3b, using high-frequency photometric stereo depth information, shown in Fig. 3c. Depth from light field yields reliable absolute depth measures, but suffers both from inaccurately estimated structural details and from high-frequency noise. Low frequencies in the light field depth map D_l are extracted using a bilateral smoothing filter f_l. High-frequency components are taken from the photometric stereo depth map D_p, using a high-pass image filter f_h. Depth refinement is obtained by replacing the high-frequency information of the light field depth map with the corresponding high frequencies of the photometric stereo depth map. Our final depth map D is thereby constructed as the linear combination of the low-frequency components of the light field depth map and the high-frequency components of the photometric stereo depth map, weighted by the factors λ_l and λ_p, respectively.
D = λ_l · D_l ∗ f_l(u, v) + λ_p · D_p ∗ f_h(u, v)
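The combination step can be sketched as follows. This is a minimal illustration, not the paper's implementation: a separable box blur stands in for the bilateral filter f_l, the high-pass f_h is realized as identity minus low-pass, and the filter size, weights λ_l = λ_p = 1, and synthetic depth maps are all assumptions for demonstration.

```python
import numpy as np

def lowpass(img, size=9):
    """Separable box blur: a simple stand-in for the bilateral filter f_l."""
    k = np.ones(size) / size
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)

def combine_depth(D_l, D_p, lam_l=1.0, lam_p=1.0, size=9):
    """D = lam_l * (D_l * f_l) + lam_p * (D_p * f_h): low frequencies from the
    light-field depth map, high frequencies from the photometric-stereo depth map."""
    low = lowpass(D_l, size)             # f_l: low-pass on light-field depth
    high = D_p - lowpass(D_p, size)      # f_h: high-pass = identity - low-pass
    return lam_l * low + lam_p * high

# Synthetic check: a smooth ramp (reliable absolute depth) plus fine detail.
x = np.linspace(0, 1, 64)
ramp = np.tile(x, (64, 1))
detail = 0.01 * np.sin(40 * np.pi * ramp)
D_l = ramp + 0.01 * np.random.default_rng(1).normal(size=ramp.shape)  # noisy LF depth
D_p = ramp - 0.5 + detail             # PS depth: offset in absolute depth, but detailed
D = combine_depth(D_l, D_p)
```

The high-pass of D_p discards its constant absolute-depth offset, so the fused map inherits the light field's absolute scale while keeping the photometric stereo detail.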
Results are shown in Fig. 4, where 4a and 4d hold the depth data from light field images of both the head and the tail of the dragon object. The second column, Figs. 4b and 4e, shows the photometrically refined depth map obtained with our combination approach.
- Title: Proceedings of the OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
- Authors: Peter M. Roth, Kurt Niel
- Publisher: Verlag der Technischen Universität Graz
- Location: Wels
- Date: 2017
- Language: English
- License: CC BY 4.0
- ISBN: 978-3-85125-527-0
- Size: 21.0 x 29.7 cm
- Pages: 248
- Keywords: Tagungsband (conference proceedings)
- Categories: International, Tagungsbände (conference proceedings)