independent equations for the x- and y-position, thus we need at least 3 correspondences to solve the 6 degrees of freedom given by R_k and t_k. As mentioned in Section 4.3, outliers (wrong matches) are possible, thus in practice we use at least 4 correspondences.
In a first step, we compute the reprojection error C of all correspondences using (4) and the initialized camera pose, as well as the mean reprojection error C̄. We then keep all correspondences with an error smaller than 1.5·C̄ or below a fixed threshold. Next, we optimize with the same cost function (4) as for the initial bundle adjustment, with the major difference that only the pose of the newly added camera k is optimized, while the rest of the system M_curr is fixed. After optimization, we again filter bad correspondences with the same approach as described above and perform a second optimization, which is usually very fast due to the already good estimate. We iterate through all images until no more can be added, i.e., the remaining images do not fulfill any of the conditions.
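As a concrete illustration, the following Python sketch outlines the per-camera filtering and refinement loop described above. It assumes a standard pinhole projection and a squared reprojection error as the cost in (4); the helper names (project, refine_new_camera), the Rodrigues-vector pose parametrization, and the absolute error threshold are our own illustrative choices and not taken from the implementation described here.

import numpy as np
import cv2
from scipy.optimize import least_squares

def project(pose, K, X):
    """Project Nx3 points X with pose = (Rodrigues rvec | tvec) and intrinsics K."""
    rvec, tvec = pose[:3], pose[3:]
    x, _ = cv2.projectPoints(X.astype(np.float64), rvec, tvec, K, None)
    return x.reshape(-1, 2)

def reprojection_errors(pose, K, X, x_obs):
    """Per-correspondence Euclidean reprojection error in pixels."""
    return np.linalg.norm(project(pose, K, X) - x_obs, axis=1)

def refine_new_camera(pose_init, K, X, x_obs, abs_thresh=4.0):
    """Filter correspondences, then optimize only the new camera's pose; two passes."""
    pose = np.asarray(pose_init, dtype=float)
    for _ in range(2):                     # filter -> optimize, repeated once
        err = reprojection_errors(pose, K, X, x_obs)
        keep = (err < 1.5 * err.mean()) | (err < abs_thresh)   # 1.5 * mean or fixed bound
        X, x_obs = X[keep], x_obs[keep]
        # Only the 6 pose parameters of camera k are free; the rest of M_curr stays fixed.
        pose = least_squares(lambda p: (project(p, K, X) - x_obs).ravel(),
                             pose, method="lm").x
    return pose, X, x_obs

Keeping the remaining cameras fixed reduces each step to a six-parameter problem, which is why the second optimization pass is typically very fast.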
Global Bundle Adjustment
While keeping the camera system M_curr fixed and only optimizing the new camera k is very fast and gives an estimate of the model structure, it does not replace a global optimization approach. We therefore perform a final global bundle adjustment step, in which all camera poses are optimized. In this case, we take all the correspondences used during the iterative bundle adjustment step and initialize the camera poses with the previously computed rotations and translations. Here, we also use the cost function presented in (4). Similar to the iterative bundle adjustment, we again filter outliers by their reprojection error after optimization, but now impose that the error must be smaller than 1.2·C̄. After this final filtering step, we perform one last global bundle adjustment.
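The final global step can be sketched in the same style, reusing the project and reprojection_errors helpers from the sketch above. All camera poses are now free parameters, while the 3D points stay fixed; the per-camera stacking of correspondences, the function names, and the use of a per-camera mean error for the 1.2·C̄ bound are assumptions for illustration.

import numpy as np
from scipy.optimize import least_squares

def ba_residuals(flat_poses, K, corr):
    """Stacked reprojection residuals over all cameras; corr[i] = (X_i, x_obs_i)."""
    P = flat_poses.reshape(-1, 6)
    return np.concatenate([(project(P[i], K, X) - x).ravel()
                           for i, (X, x) in enumerate(corr)])

def global_bundle_adjustment(poses, K, corr):
    """Refine all camera poses jointly, filter with 1.2 * mean error, refine once more."""
    flat = np.asarray(poses, dtype=float).ravel()
    flat = least_squares(ba_residuals, flat, args=(K, corr), method="lm").x
    P = flat.reshape(-1, 6)
    kept = []                              # tighter outlier bound after the first pass
    for i, (X, x) in enumerate(corr):
        err = reprojection_errors(P[i], K, X, x)
        mask = err < 1.2 * err.mean()      # per-camera mean error (our assumption)
        kept.append((X[mask], x[mask]))
    flat = least_squares(ba_residuals, flat, args=(K, kept), method="lm").x
    return flat.reshape(-1, 6), kept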
5. Results and Discussion
In this section, we present early results of our guided sparse camera pose estimation. We evaluate our approach on two datasets from different buildings, one with a well-textured facade and one with redundant structures. As reference, we use the open-source SfM framework OpenMVG, which can achieve an accuracy of around 1 cm in ideal cases [10]. It utilizes SIFT features and many correspondences to estimate camera poses and a point cloud. OpenMVG chooses the starting views randomly, and in our evaluation we had to start SfM multiple times to obtain a reconstruction. We evaluate the distance between the camera centers generated by the two approaches. OpenMVG estimates the reconstruction only up to scale, thus to measure the distance between cameras metrically, we transform its world coordinate system to ours using a robust similarity transform.
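For reference, such a robust alignment of the two sets of camera centers can be computed with a closed-form Umeyama similarity fit inside a small RANSAC loop, as sketched below. The inlier threshold (5 cm), the function names, and the choice of RANSAC are illustrative assumptions; the exact robust estimator used here is not specified.

import numpy as np

def umeyama(src, dst):
    """Closed-form similarity (s, R, t) minimizing sum ||s * R @ src_i + t - dst_i||^2."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # guard against reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

def robust_similarity(src, dst, iters=200, thresh=0.05, seed=0):
    """RANSAC on minimal 3-point samples of corresponding camera centers (thresh in metres)."""
    rng = np.random.default_rng(seed)
    best_mask, best_count = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        s, R, t = umeyama(src[idx], dst[idx])
        err = np.linalg.norm(src @ (s * R).T + t - dst, axis=1)
        mask = err < thresh
        if mask.sum() > best_count:
            best_mask, best_count = mask, mask.sum()
    return umeyama(src[best_mask], dst[best_mask])  # refit on all inliers of the best model

A call along the lines of s, R, t = robust_similarity(centers_openmvg, centers_ours) would then allow the per-camera distances to be measured in metres.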
Figure 3 shows a histogram, where the bins contain the number of camera-center distances in the respective range in cm. Cameras in the first few bins are close to OpenMVG, while the cameras in the last bin are farther away. Especially in the first experiment we achieve reasonable results, and that with only 90 correspondences compared to OpenMVG's 2969, which is a reduction by a factor of roughly 30. In the second experiment, we only use 56 correspondences compared to 1880 in the reference. The histograms show that we are centimeters away from OpenMVG's reconstruction; nevertheless, we still achieve visually appealing results when reprojecting the laser points into the images (see Fig. 4). Due to the sparse correspondences, even one unfiltered outlier can decrease the final result significantly.