Proceedings of the OAGM&ARW Joint Workshop - Vision, Automation and Robotics, page 21
Fig. 9: Scenario 2 - Comparison of estimated yaw angles (x-axis: estimated distance with VO in m, 0 to 3000; legend: 1. loop, 2. loop, 3. loop).
of the yaw angle, as is necessary for agricultural vehicles.
On the given laptop, the pose estimation takes an average of 0.349 s per stereo pair, which equates to approximately three pose estimations per second.
V. CONCLUSION AND FUTURE WORK
We developed a Visual Odometry (VO) system that is based on
a stereo-camera pair and is capable of estimating the position
and orientation of an agricultural vehicle in GPS-obstructed
environments. We deployed our system on a small truck
and carried out measurements for two different logging-road
scenarios. The results show that an insufficient distribution of
features can lead to an ill-conditioned pose estimation, and
hence to an inaccurately estimated distance and pitch angle.
Because relative motions are concatenated incrementally,
this results in a growing position error. However, our
robust VO system is highly capable of estimating the orientation
(yaw angle) with acceptable accuracy in unstructured
environments. This is especially evident in the first scenario,
in the dense forest, where GPS signal quality is poor.
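The growth of the position error under incremental concatenation can be made concrete with a minimal sketch (illustrative numbers, not values from our measurements): a constant 1 cm translation bias per 1 m relative step accumulates linearly in the chained global pose.

```python
import numpy as np

def relative_step(forward_m, yaw_rad):
    """Homogeneous 4x4 transform: rotate by yaw about z, translate forward."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[0, 3] = forward_m
    return T

# Incremental VO: global pose T_k = T_{k-1} @ dT_k for each stereo pair.
pose = np.eye(4)
for _ in range(100):
    # Each relative estimate carries a small translation bias (illustrative).
    pose = pose @ relative_step(1.0 + 0.01, 0.0)

# 100 steps of a true 1 m each: the estimated x is ~101 m, i.e. 1 m of drift.
print(pose[0, 3])
```

Because the chained poses never receive a global correction, even a small per-step bias is never averaged out, which is why a well-conditioned per-frame estimate matters so much.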
Future work includes improving the distribution of
features and hence the pose estimation. A more uniformly
distributed feature set could be achieved by using different
detector parameters for the upper and lower halves of the
image.
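The per-half parameterization can be sketched as follows. This is a toy gradient-product detector standing in for the actual one, with illustrative thresholds; the point is only the mechanism of detecting per half with different parameters and merging coordinates back into the full-image frame.

```python
import numpy as np

def toy_corner_response(img):
    """Toy corner score: product of absolute x- and y-gradient magnitudes.
    A stand-in for a real detector response, not the detector we used."""
    gx = np.abs(np.diff(img.astype(float), axis=1))[:-1, :]
    gy = np.abs(np.diff(img.astype(float), axis=0))[:, :-1]
    return gx * gy

def detect_split(img, thr_upper=200.0, thr_lower=50.0):
    """Detect features separately per image half, with a lower threshold in
    the (typically feature-poor) lower half, then merge the keypoints."""
    h = img.shape[0] // 2
    ys_u, xs_u = np.nonzero(toy_corner_response(img[:h]) > thr_upper)
    ys_l, xs_l = np.nonzero(toy_corner_response(img[h:]) > thr_lower)
    ys = np.concatenate([ys_u, ys_l + h])  # shift lower-half rows back down
    xs = np.concatenate([xs_u, xs_l])
    return np.stack([xs, ys], axis=1)

# Synthetic image: strong texture on top, weak contrast at the bottom.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (100, 120)).astype(np.uint8)
img[50:] //= 16  # low-contrast lower half
pts = detect_split(img)
```

With a single global threshold, the low-contrast half would contribute almost no features; lowering the threshold only there restores a spread of keypoints over the whole frame.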
Another goal is to reduce the computing time of the VO
to enable an online system. This can mainly be achieved by
parallelizing repeatable tasks such as feature detection
and description.
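Since detection and description are independent per frame, dispatching them to a worker pool is a natural first step. A minimal sketch (the workload function is a placeholder, not our actual pipeline):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def detect_and_describe(frame):
    """Placeholder for an expensive, independent per-frame task such as
    feature detection and description (illustrative workload only)."""
    g = np.abs(np.diff(frame.astype(float), axis=0))
    return float(g.mean())

rng = np.random.default_rng(1)
frames = [rng.integers(0, 256, (240, 320)).astype(np.uint8) for _ in range(4)]

# Sequential baseline.
seq = [detect_and_describe(f) for f in frames]

# The same independent tasks dispatched to a thread pool; map keeps order.
with ThreadPoolExecutor(max_workers=4) as pool:
    par = list(pool.map(detect_and_describe, frames))

assert seq == par  # identical results; wall-clock time drops on real workloads
```

Note that for pure-Python CPU-bound work, Python threads are limited by the GIL; real speedups come when the heavy kernels release it (as numpy and OpenCV do) or when using processes or native threads instead.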
Furthermore, the next steps include incorporating the
IMU data that were recorded simultaneously during
our measurements. We plan to use a data fusion algorithm
such as a Kalman filter to improve the overall accuracy by
combining the VO and IMU data.
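The intended fusion can be illustrated with a one-state Kalman filter on the yaw angle alone: predict by integrating the IMU gyro rate, correct with the VO yaw estimate. All noise parameters and signals below are synthetic, not identified values for our sensors.

```python
import numpy as np

def kalman_yaw(gyro_rates, vo_yaws, dt=0.1, q=1e-4, r=0.04):
    """Minimal one-state Kalman filter for yaw: predict with the gyro rate,
    update with the VO yaw. q, r are illustrative noise variances."""
    x, p = 0.0, 1.0                 # yaw estimate and its variance
    fused = []
    for rate, z in zip(gyro_rates, vo_yaws):
        x += rate * dt              # predict: integrate the gyro rate
        p += q
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with the VO yaw measurement
        p *= 1.0 - k
        fused.append(x)
    return np.array(fused)

# Synthetic run: constant turn at 0.1 rad/s, noisy gyro and VO readings.
rng = np.random.default_rng(2)
dt, n = 0.1, 100
true_yaw = 0.1 * dt * np.arange(1, n + 1)
gyro = 0.1 + rng.normal(0.0, 0.02, n)      # rad/s, small gyro noise
vo = true_yaw + rng.normal(0.0, 0.2, n)    # rad, noisy VO yaw
fused = kalman_yaw(gyro, vo, dt=dt)

rmse = lambda e: float(np.sqrt(np.mean(e ** 2)))
print(rmse(vo - true_yaw), rmse(fused - true_yaw))  # fusion reduces the error
```

The full system would extend the state to position and attitude, but the structure is the same: the IMU drives the prediction at high rate and the VO estimates bound its drift.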
Proceedings of the OAGM&ARW Joint Workshop - Vision, Automation and Robotics. Peter M. Roth, Markus Vincze, Wilfried Kubinger, Andreas Müller, Bernhard Blaschitz, and Svorad Stolc (Eds.). Verlag der Technischen Universität Graz, Wien, 2017. Language: English. License: CC BY 4.0. ISBN 978-3-85125-524-9. 188 pages, 21.0 x 29.7 cm.