Page 72 in Proceedings - OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Figure 1: Multi-line-scan setup with directional lighting: a) multi-line sensor, views constructed over time, imaging the object area divided in uppercase letters; b) view 1; c) reference and top-down view; d) view n; e) EPI stack holding the views, denoting object lines by uppercase letters and the disparity slope α.

The depth reconstruction from light field data is usually estimated through the epipolar plane image (EPI) data structure. EPIs were originally introduced for the estimation of structure from motion [1], but they have also become a popular tool in light field processing [10], [4]. Kim et al. [4] use a simple criterion for ranking depth hypotheses: the best hypothesis is the one for which as many radiance values as possible along the hypothesized slope in an EPI are sufficiently similar to the radiance in the reference view. Venkataraman et al. [8] use pattern matching between different views, i.e., for a discrete number of hypothesized depths the sum of absolute differences (SAD) of radiances between different views is calculated. Wanner and Goldlücke [10] suggest a statistical approach to estimate the principal orientation of linear structures in EPIs via analysis of the structure tensor constructed locally in small EPI neighborhoods.

This paper is organized as follows. We describe the proposed setup in Sec. 2. In Sec. 3 we describe the fusion framework for light fields and photometric stereo. First results describing the work in progress are given in Sec. 4. In Sec. 5 we draw first conclusions and discuss further work.

2. Multi-Line-Scan Setup

Light fields provide 4-D information, consisting of two spatial and two directional dimensions. They can be captured, e.g., by a multiple-camera array [11], where each camera has a different viewing perspective of the scene, or by plenoptic cameras [6], which usually make use of a microlens array placed in front of the sensor plane to acquire angular reflectance information.
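The two hypothesis-ranking strategies cited above, consensus counting of similar radiances along a hypothesized EPI slope (Kim et al.) and SAD matching between views (Venkataraman et al.), can be sketched as follows. The function name, the tolerance value, and the integer-shift sampling are illustrative assumptions, not the referenced papers' implementations.

```python
import numpy as np

def score_slope(epi, ref_row, slope, tol=0.05):
    """Score one disparity-slope hypothesis on an EPI of shape (views, pixels).

    Returns, per reference pixel, the fraction of views whose radiance along
    the sheared line agrees with the reference view within `tol` (a Kim-style
    consensus count), and the mean absolute radiance difference (a SAD-style
    matching cost). Integer column shifts are a simplification; the cited
    methods interpolate sub-pixel positions.
    """
    n_views, width = epi.shape
    agree = np.zeros(width)
    sad = np.zeros(width)
    seen = np.zeros(width)
    x = np.arange(width)
    for v in range(n_views):
        shift = int(round(slope * (v - ref_row)))    # column offset of view v
        ok = (x + shift >= 0) & (x + shift < width)  # stay inside the EPI
        diff = np.abs(epi[v, x[ok] + shift] - epi[ref_row, x[ok]])
        agree[x[ok]] += diff < tol
        sad[x[ok]] += diff
        seen[x[ok]] += 1
    return agree / seen, sad / seen
```

The best depth hypothesis per pixel is then the slope that maximizes the consensus count (or, equivalently in the SAD formulation, minimizes the matching cost).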
Our multi-line-scan framework [9] is a light field acquisition setup, where we use an area-scan sensor to observe the object under varying angles, while the object is transported in a defined direction over time. This setup works in real-time and in-line for industrial inspection setups. Fig. 1 illustrates how the light field data is obtained through multiple viewing angles on the moving object over time. Each sensor line observes the conveyor belt in a different viewing angle and captures a certain region. As the object moves under the observed sensor lines, see Fig. 1a, each sensor line captures every object region at distinct time instances, see Figs. 1b, c, and d. We represent the thereby captured light fields as light field image stacks, see Fig. 1e, in which each image is acquired from a slightly different viewing angle.
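A minimal sketch of this view construction, assuming a nominal transport of one sensor line per time step (the helper names and that transport model are illustrative assumptions, not the paper's calibrated setup): each sensor line's rows over time form one view, and picking the matching time index per view yields the EPI stack for one object line.

```python
import numpy as np

def build_views(frames):
    """Reorder a multi-line-scan sequence of shape (time, sensor_line, width)
    into per-line views of shape (sensor_line, time, width): view i is simply
    the time series captured by sensor line i."""
    return np.transpose(frames, (1, 0, 2))

def extract_epi(views, y, lines_per_step=1):
    """EPI of object line y: from view i, take the time index at which line y
    passes under sensor line i, assuming the object advances `lines_per_step`
    sensor lines per time step (a nominal transport model)."""
    n_views = views.shape[0]
    return np.stack([views[i, y + lines_per_step * i] for i in range(n_views)])
```

In the resulting EPI, a scene point on the conveyor plane appears at the same column in every view; points at other depths drift across columns with the disparity slope α shown in Fig. 1e.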
Proceedings OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Title
Proceedings
Subtitle
OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Authors
Peter M. Roth
Kurt Niel
Publisher
Verlag der Technischen Universität Graz
Place
Wels
Date
2017
Language
English
License
CC BY 4.0
ISBN
978-3-85125-527-0
Dimensions
21.0 x 29.7 cm
Pages
248
Keywords
Conference proceedings
Categories
International
Conference proceedings

Table of Contents

  1. Learning / Recognition 24
  2. Signal & Image Processing / Filters 43
  3. Geometry / Sensor Fusion 45
  4. Tracking / Detection 85
  5. Vision for Robotics I 95
  6. Vision for Robotics II 127
  7. Poster OAGM & ARW 167
  8. Task Planning 191
  9. Robotic Arm 207