Figure 1: Multi-line-scan setup with directional lighting: a) multi-line sensor, views constructed over time, imaging the object area divided in uppercase letters; b) view 1; c) reference and top-down view; d) view n; e) EPI stack holding the views, denoting object lines by uppercase letters and the disparity slope α.
Depth reconstruction from light field data is usually estimated through the epipolar plane image (EPI) data structure. EPIs were originally introduced for the estimation of structure from motion [1], but they have also become a popular tool in light field processing [10], [4]. Kim et al. [4] use a simple criterion for ranking depth hypotheses: the best hypothesis is the one for which as many radiance values as possible along the hypothesized slope in an EPI are similar enough to the radiance in the reference view. Venkataraman et al. [8] use pattern matching between different views, i.e., for a discrete number of hypothesized depths the sum of absolute differences (SAD) of radiances between the different views is calculated. Wanner and Goldlücke [10] suggest a statistical approach to estimate the principal orientation of linear structures in EPIs via analysis of the structure tensor constructed locally in small EPI neighborhoods.
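The hypothesis-ranking idea can be made concrete with a small sketch. The following Python snippet is only an illustrative approximation, not the published implementations: for every pixel of the reference row of a grayscale EPI it scores each disparity hypothesis by how many views yield a radiance close to the reference radiance (a Kim-style consistency count); replacing the count by a sum of absolute differences would give an SAD-style score in the spirit of [8]. Array shapes, the function name, and the tolerance parameter are assumptions.

```python
import numpy as np

def rank_disparity_hypotheses(epi, ref_view, disparities, tol=0.02):
    """Pick, per reference pixel, the disparity slope with the largest number
    of consistent radiance samples along the hypothesized line in the EPI.

    epi         : (n_views, width) grayscale EPI, radiances e.g. in [0, 1]
    ref_view    : row index of the reference view inside the EPI
    disparities : iterable of hypothesized slopes (pixel shift per view step)
    tol         : maximum radiance difference still counted as consistent
    """
    n_views, width = epi.shape
    ref = epi[ref_view]
    best_score = np.full(width, -1)
    best_disp = np.zeros(width)

    for d in disparities:
        score = np.zeros(width, dtype=int)
        for s in range(n_views):
            # Pixel position of the hypothesized line in view s for every
            # reference pixel; samples falling outside the EPI are ignored.
            x = np.round(np.arange(width) + (s - ref_view) * d).astype(int)
            valid = (x >= 0) & (x < width)
            sample = epi[s, np.clip(x, 0, width - 1)]
            score += ((np.abs(sample - ref) < tol) & valid).astype(int)
        better = score > best_score
        best_disp[better] = d
        best_score[better] = score[better]
    return best_disp
```

Called, for instance, as `rank_disparity_hypotheses(epi, ref_view=epi.shape[0] // 2, disparities=np.linspace(-2, 2, 41))`, the sketch returns one disparity slope per pixel of the reference row.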
This paper is organized as follows. We describe the proposed setup in Sec. 2. In Sec. 3 we describe the fusion framework for light fields and photometric stereo. First results of this work in progress are given in Sec. 4. In Sec. 5 we draw first conclusions and discuss further work.
2. Multi-Line-Scan Setup
Light fields provide 4-D information, consisting of two spatial and two directional dimensions. They
can be captured e.g. by a multiple camera array [11], where each camera has a different viewing
perspective of the scene, or by plenoptic cameras [6], which usually make use of a microlens array
placed in front of the sensor plane to acquire angular reflectance information.
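As a purely illustrative note (array shapes and the (s, t, u, v) ordering are assumptions, not tied to either capture device), a 4-D light field can be held as a plain array indexed by two view and two pixel coordinates, from which an EPI is obtained by fixing one view and one pixel coordinate:

```python
import numpy as np

# L[s, t, u, v]: radiance seen by view (s, t) at pixel row u, column v.
n_s, n_t, height, width = 9, 9, 480, 640           # placeholder sizes
light_field = np.zeros((n_s, n_t, height, width), dtype=np.float32)

# A horizontal EPI: fix the vertical view index t and the pixel row u,
# keeping the horizontal view index s and the pixel column v.
epi = light_field[:, n_t // 2, height // 2, :]      # shape (n_s, width)
```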
Our multi-line-scan framework [9] is a light field acquisition setup, where we use an area-scan sensor to observe the object under varying angles while the object is transported in a defined direction over time. This setup works in real-time and in-line for industrial inspection setups. Fig. 1 illustrates how the light field data is obtained through multiple viewing angles on the moving object over time. Each sensor line observes the conveyor belt from a different viewing angle and captures a certain region. As the object moves under the observed sensor lines, see Fig. 1a, each sensor line captures every object region at distinct time instances, see Figs. 1b, c, and d. We represent the thereby captured light fields
as light field image stacks, see Fig. 1e, in which each image is acquired from a slightly different viewing angle.
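To make the stacking explicit, the following sketch (shapes and function names are hypothetical, not the actual acquisition code of [9]) rearranges raw multi-line-scan frames into one image per viewing angle and slices out an EPI; the real setup additionally has to account for the known line spacing and transport speed.

```python
import numpy as np

def frames_to_view_stack(frames):
    """frames[t, k, :] is sensor line k captured at transport step t.
    Collecting the same sensor line over all steps yields one image per
    viewing angle: a view stack of shape (n_lines, n_steps, width)."""
    return np.transpose(frames, (1, 0, 2))

def extract_epi(views, row):
    """An EPI is the slice of the view stack at a fixed spatial row; object
    points trace lines whose slope encodes their distance to the sensor."""
    return views[:, row, :]

# Example with placeholder sizes: 200 transport steps, 13 observed sensor lines.
frames = np.zeros((200, 13, 2048), dtype=np.float32)
views = frames_to_view_stack(frames)     # (13, 200, 2048)
epi = extract_epi(views, row=100)        # (13, 2048)
```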