Joint Austrian Computer Vision and Robotics Workshop 2020
Page - 6 -
Vision-based Docking of a Mobile Robot

Andreas Kriegler, Wilfried Wöber
UAS Technikum Vienna
{mr18m016,woeber}@technikum-wien.at

Abstract. For mobile robots to be considered autonomous they must reach target locations in a required pose, a procedure referred to as docking. Popular current solutions use LiDARs combined with sizeable docking stations, but these systems struggle by incorrectly detecting dynamic obstacles. This paper instead proposes a vision-based framework for docking a mobile robot. Faster R-CNN is used for detecting arbitrary visual markers. The pose of the robot is estimated using the solvePnP algorithm relating 2D-3D point pairs. Following exhaustive experiments, it is shown that solvePnP gives systematically inaccurate pose estimates in the x-axis pointing to the side. Pose estimates are off by ten to fifty centimeters and could therefore not be used for docking the robot. Insights are provided to circumvent similar problems in future applications.

1. INTRODUCTION

Docking can be understood as the localization and navigation of a robot towards a target location [1]. In contrast to path-planning across larger distances, docking does not require obstacle avoidance methods but instead seeks highly accurate pose estimates [28]. As long as the pose of the robot and the target location are known in a reference coordinate system, path planning algorithms can easily generate control commands. In the xy ground-plane, the pose ~x consists of three degrees of freedom: x, y, and θ as the rotation about its own axis z, and is described using the state at time t

    ~x_t = (x  ẋ  y  ẏ  θ  θ̇)^T_t    (1)

where ẋ, ẏ and θ̇ describe the speed of the robot in x and y and its rotation, respectively. As Thrun et al. [32] write outlining the motion model and measurement model, taking multiple control steps ~u_t with only an initial measurement or observation ~z_t leads to large uncertainties about the robot's pose; they propose a measurement step after every control to restore confidence in the belief bel(~x). These measurements can be non-vision methods such as evaluating detections from LiDAR scans [22] or can come from a camera setup providing visual feedback [6]. Yurtsever et al. [34] show in their survey on automated driving systems (ADS) that computer vision (CV) based approaches to navigation have become increasingly popular. Artificial landmark detection as described by Luo et al. [19] and gradient-based optical flow [20] rival modern non-vision solutions. Classical non-vision systems typically employ LiDAR technology, indoor GPS or wireless fingerprinting [17]. While LiDARs are still widely used commercially (such as MiRs and Robotinos), recent advances in deep learning and their application in the ADS domain are of more scientific interest. Deep Convolutional Neural Networks (CNNs) have proven successful at tackling a variety of perception problems, including object detection [26] and pose estimation

Figure 1. The visual target used for docking. The target location is on the ground in front. The origin for the PnP solvers is in the upper left corner. The logos are roughly 9x3 centimeters in size. The upper right logo was raised during experiments to remove coplanarity.
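The pose-estimation step described in the abstract — solvePnP relating 2D-3D point pairs — can be sketched with OpenCV as below. This is only an illustration of the technique, not the paper's code: the marker geometry, camera intrinsics, and ground-truth pose are invented assumptions. The synthetic detections are produced by projecting the 3D points with a known pose, so the recovered pose can be checked against it; one corner is raised off the z = 0 plane, mirroring the coplanarity fix mentioned in the Figure 1 caption.

```python
import numpy as np
import cv2

# Six marker corner points in the target frame (metres), loosely based on the
# paper's ~9x3 cm logos; two points raised (z != 0) to avoid coplanarity.
object_points = np.array([
    [0.00, 0.00, 0.00],
    [0.09, 0.00, 0.00],
    [0.09, 0.03, 0.00],
    [0.00, 0.03, 0.00],
    [0.00, 0.12, 0.05],   # raised logo corners
    [0.09, 0.12, 0.05],
], dtype=np.float64)

# Assumed pinhole intrinsics with zero lens distortion (illustrative values).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)

# Synthesise "detections" by projecting with a known ground-truth pose...
rvec_gt = np.array([[0.10], [0.20], [0.05]])   # axis-angle rotation
tvec_gt = np.array([[0.05], [-0.02], [1.00]])  # target ~1 m in front of camera
image_points, _ = cv2.projectPoints(object_points, rvec_gt, tvec_gt, K, dist)

# ...then recover the camera pose from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
print(ok, np.allclose(tvec, tvec_gt, atol=1e-4))
```

On noise-free synthetic correspondences the recovered translation matches the ground truth; the paper's finding is that with real detections the x-component drifted by tens of centimeters, which this idealised setup cannot reproduce.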
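The argument attributed to Thrun et al. above — that control steps without measurements inflate pose uncertainty, while a measurement after every control restores confidence in bel(~x) — can be made concrete with a minimal 1-D variance sketch. This is my own illustration, not the paper's or Thrun's code, and the noise values q and r are arbitrary assumptions.

```python
# 1-D belief-variance sketch: prediction (control) adds process noise,
# a Kalman-style measurement update shrinks the variance again.

def predict(var, q=0.04):
    """Control step: process noise q inflates the belief variance."""
    return var + q

def update(var, r=0.01):
    """Measurement step: variance update with sensor noise r."""
    k = var / (var + r)        # Kalman gain
    return (1.0 - k) * var     # equals var * r / (var + r)

var0 = 0.01

# Ten control steps with only the initial measurement: variance grows linearly.
open_loop = var0
for _ in range(10):
    open_loop = predict(open_loop)

# Ten control steps, each followed by a measurement: variance stays bounded.
closed_loop = var0
for _ in range(10):
    closed_loop = update(predict(closed_loop))

print(open_loop, closed_loop)
print(open_loop > 10 * closed_loop)  # → True
```

With these (assumed) noise levels the open-loop variance reaches 0.41 while the measure-after-every-control loop settles below 0.01, which is exactly the motivation for interleaving observations ~z_t with controls ~u_t.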
Title: Joint Austrian Computer Vision and Robotics Workshop 2020
Editor: Graz University of Technology
Location: Graz
Date: 2020
Language: English
License: CC BY 4.0
ISBN: 978-3-85125-752-6
Size: 21.0 x 29.7 cm
Pages: 188
Categories: Informatik, Technik