Proceedings - OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Page 78

In this work, we present an idea to estimate metrically correct camera poses with just a small number of features (see Fig. 1). Our hardware setup consists of an RGB camera and a laser rangefinder (LRF) (see Fig. 2(a)). The LRF allows us to select highly distinctive features for pose estimation while at the same time obtaining their accurate distance. We focus on the reconstruction of facades, enabling us to utilize homographies instead of fundamental matrices for correspondence computation. For pose estimation, we use laser points with known distance from the camera and their respective matches in other views. Finally, we compare our approach to the freely available SfM framework OpenMVG [10] and show that we achieve reasonable results for the camera poses with just a fraction of the correspondences. This is of special interest for metric reconstruction on devices with constraints on computational power, e.g. mobile devices or UAVs.

2. Related Work

Most of the work related to the task of calibrating the extrinsic relationship of an LRF to a projective camera considers a setup with either a 2D [18, 7] or a 3D LRF [14, 2]. Further, these approaches rely on user input to establish correspondences between the laser measurements and the images taken by the camera. We, on the other hand, want to solve the task of extrinsic calibration of a 1D LRF to a camera without user interaction. We require that the 3D world position of the plane whose distance is measured can be inferred from the images and that the laser point produced by the LRF is visible within the image. In contrast to [13], where geometric camera and LRF calibration are performed jointly, we do not refine the intrinsic calibration of the camera using the LRF measurements but expect the intrinsic calibration to be done beforehand and to be of sufficient quality.

SfM algorithms for 3D reconstruction and camera pose estimation from unstructured data usually capture the scene only up to scale. In [1, 15], the authors perform large-scale 3D scene reconstruction from Internet photos. Their work examines 3D modeling from unstructured data, yet the reconstruction can only be performed up to scale due to the inherent lack of metric information. In [3], the authors first solve the relative motion on a local scale among just a few images and then use these local relations as initialization for the global solution.

Methods solving the metric reconstruction problem with the SfM paradigm often rely either on an underlying structure of the data (sequential image capturing, constant acquisition frame rate) in connection with registered motion estimations using GPS or inertial measurements, as in [16, 3], or on direct geometry measurements with 3D LRFs and subsequent registration of the resulting point clouds [6, 7].

The approach presented in [12] is the one most similar to ours. However, in addition to 1D laser measurements corresponding to images of the scene, they leverage motion estimations between images through IMU data as well as interactive gestures for semantic cues aiding the reconstruction.

We propose an approach for metric camera pose estimation from unstructured images. Instead of searching for dense point correspondences among all images, we restrict ourselves to a sparse wireframe model, with each image contributing just a single point (the location of the laser distance measurement). This allows us to ensure robust reconstruction by choosing distinct and easily matchable locations on the facade during data acquisition.
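The text above rests on two ideas: a planar facade lets a homography stand in for the fundamental matrix, and a single laser distance suffices to lift the resulting relative pose to metric scale. The following Python/OpenCV sketch illustrates that combination only; it is not the authors' pipeline. The function name metric_pose_from_facade, the choice of ORB features, the RANSAC threshold, the normal-selection heuristic, and the assumption that the laser spot lies on the dominant facade plane are all illustrative assumptions of this sketch.

```python
import cv2
import numpy as np

def metric_pose_from_facade(img1, img2, K, laser_px, laser_dist_m):
    """Sketch: relative pose between two facade views, scaled to metres.
    img1/img2: grayscale views of the same (planar) facade,
    K: 3x3 intrinsic matrix (calibrated beforehand),
    laser_px: (u, v) pixel of the laser spot in img1,
    laser_dist_m: LRF distance to that point in metres."""
    # 1. Sparse correspondences on the facade (illustrative feature choice).
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 2. Homography instead of a fundamental matrix, since the scene is a plane.
    H, inlier_mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)

    # 3. Decompose into rotation, plane-normalised translation and plane normal.
    num, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    # Pick the solution whose plane normal points most towards camera 1
    # (a simple heuristic; cheirality tests on the inliers would be more robust).
    best = max(range(num), key=lambda i: normals[i][2, 0])
    R, t_over_d, n = Rs[best], ts[best], normals[best]

    # 4. Metric scale: the decomposition of H = R + t n^T / d yields t / d,
    # where d is the distance of camera 1 to the facade plane. The laser point
    # lies laser_dist_m along its viewing ray; assuming it sits on that plane,
    # d = n^T X_laser, which converts t / d into a translation in metres.
    ray = np.linalg.inv(K) @ np.array([laser_px[0], laser_px[1], 1.0])
    X_laser = laser_dist_m * ray / np.linalg.norm(ray)   # 3D point in camera 1
    d_plane = float(n.ravel() @ X_laser)                 # camera-to-plane distance
    t_metric = t_over_d * d_plane                        # translation in metres

    return R, t_metric
```

The design hinge is step 4: homography decomposition only recovers translation divided by the plane distance, so one absolute distance measurement on the plane, exactly what the 1D LRF provides, is enough to fix the metric scale of the pose.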
Title: Proceedings
Subtitle: OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Authors: Peter M. Roth, Kurt Niel
Publisher: Verlag der Technischen Universität Graz
Place: Wels
Date: 2017
Language: English
License: CC BY 4.0
ISBN: 978-3-85125-527-0
Dimensions: 21.0 x 29.7 cm
Pages: 248
Keywords: Conference proceedings
Categories: International, Conference proceedings

Table of Contents

  1. Learning / Recognition 24
  2. Signal & Image Processing / Filters 43
  3. Geometry / Sensor Fusion 45
  4. Tracking / Detection 85
  5. Vision for Robotics I 95
  6. Vision for Robotics II 127
  7. Poster OAGM & ARW 167
  8. Task Planning 191
  9. Robotic Arm 207