Proceedings of the OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics", page 80
We obtain several noisy estimates for the position t_LRF of the LRF through the correspondence

t_LRF,i = l_3D,i − d_i · l_LRF, (2)

where l_LRF has been normalized to unit length. We obtain the final estimate for the position t_LRF of the LRF by taking the median of all noisy estimates. The rotation R_LRF is given by the angle between the viewing direction l_LRF of the laser rangefinder and the camera's optical axis, in the plane spanned by the optical axis of the camera and l_LRF.
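The estimation of t_LRF described above can be sketched as follows; this is a minimal illustration of Eq. (2) and the median step, not the authors' implementation. All names (l3d, d, l_lrf) are illustrative: l3d holds the 3D laser points, d the measured distances, and l_lrf the LRF viewing direction.

```python
import numpy as np

def estimate_lrf_position(l3d, d, l_lrf):
    """Recover the LRF position from noisy per-measurement estimates.

    l3d   : (N, 3) array of 3D laser points l_3D,i
    d     : (N,) array of measured distances d_i
    l_lrf : (3,) LRF viewing direction (normalized internally)
    """
    l_lrf = l_lrf / np.linalg.norm(l_lrf)    # enforce unit length
    t_estimates = l3d - d[:, None] * l_lrf   # Eq. (2), one row per measurement
    return np.median(t_estimates, axis=0)    # component-wise median as the final estimate
```

The component-wise median keeps single outlier measurements from corrupting the final position estimate, which a mean would not.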
4. Sparse Pose Estimation and 3D Scene Reconstruction
The proposed approach is structured in steps typical to SfM pipelines: image recording, preprocess-
ing, relative pose and motion estimation between views and ultimately 3D reconstruction. Since it is
aimed at the reconstruction of building facades, which can to a large extent be modeled as a set of
flat surfaces, it is sufficient to reconstruct the building as a wire-frame model using surface vertices
together with a few supporting points on the walls. We compute SIFT matches to estimate homo-
graphies between image pairs (Ii,Ij), which can be used to establish correspondences based on the
known laser point l_3D,i in I_i and its respective 2D position l_2D,i,j in I_j.
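Transferring the laser point's 2D position from I_i into I_j via the estimated homography amounts to a standard projective mapping. A minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def transfer_point(H, p):
    """Map a 2D point p from image I_i into I_j via the 3x3 homography H."""
    q = H @ np.array([p[0], p[1], 1.0])  # lift to homogeneous coordinates
    return q[:2] / q[2]                  # dehomogenize back to pixel coordinates
```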
Using an initial set of 4 images with full correspondences and the laser measurements, we are able
to initialize and calculate an early estimate for our model and the relative camera poses. Then we
iteratively add the remaining cameras and distance measurements and finally refine the poses with a
global bundle adjustment. Since we know the respective distance information to each camera pose,
this estimation is accurate in its scale.
4.1. Image Recording
Since we perform sparse camera pose estimation and reconstruction, the accuracy of the solution
depends upon a few, yet highly significant features which are easily found in images taken from
different perspectives. For a good reconstruction, the significant features should be chosen in a way
such that they lie on the facade and are well-distributed over its surface including the corner points,
e.g. vertices of walls and corners of windows. Figure 1 depicts the data recording process, where
we take RGB images from various view points while measuring the distance of a single point in the
respective image with the LRF.
4.2. Preprocessing and Feature Extraction
To keep our approach as flexible as possible and to reduce the complexity during manual data ac-
quisition, we assume no particular order of the images. Initially all possible image pairs are added
to a working set. We extract SIFT features [9] from gray-scale versions of the images and establish
point correspondences using a FLANN-based matcher [11] followed by Lowe's ratio test to filter outliers. With these correspondences, we robustly estimate a homography between each image pair using RANSAC [5] with a threshold of 1 px. We only want to keep image pairs with a certain overlap in the working set, thus we filter out all pairs with fewer than n = 10 inliers according to the RANSAC estimate and a ratio of inliers to number of matches of < 50%. As a measure for the quality of the remaining image correspondences, we define an error E_i,j for an image pair (I_i, I_j) using the 2D positions P_SIFT,i and P_SIFT,j of their matched features as follows:

E_i,j = mean(||P_SIFT,i − P_SIFT,j||_2), ∀ i, j ∈ N, i ≠ j. (3)
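The pair-filtering criteria and the error of Eq. (3) can be sketched as below, assuming SIFT matching and RANSAC homography estimation have already produced, for each image pair, the matched 2D positions and an inlier mask. The constants mirror the thresholds stated above; all names are illustrative.

```python
import numpy as np

MIN_INLIERS = 10        # n = 10 in the text
MIN_INLIER_RATIO = 0.5  # inliers / matches must be at least 50%

def keep_pair(inlier_mask):
    """Decide whether an image pair stays in the working set."""
    n_inliers = int(np.sum(inlier_mask))
    ratio = n_inliers / len(inlier_mask)
    return n_inliers >= MIN_INLIERS and ratio >= MIN_INLIER_RATIO

def pair_error(p_i, p_j):
    """Eq. (3): mean Euclidean distance between matched 2D feature positions.

    p_i, p_j : (M, 2) arrays of matched SIFT positions in I_i and I_j.
    """
    return float(np.mean(np.linalg.norm(p_i - p_j, axis=1)))
```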