Joint Austrian Computer Vision and Robotics Workshop 2020
Page 2
Semi-Automatic Generation of Training Data for Neural Networks for 6D Pose Estimation and Robotic Grasping

Johannes Nikolaus Rauer, Mohamed Aburaia, Wilfried Wöber
FH Technikum Wien
{rauer,aburaia,woeber}@technikum-wien.at

Abstract. Machine-learning-based approaches for pose estimation are trained using annotated ground-truth data – images showing the object and information about its pose. In this work an approach to semi-automatically generate 6D-pose-annotated data, using a movable marker and an articulated robot, is presented. A neural network for pose estimation is trained using datasets varying in size and type. The evaluation shows that small datasets recorded in the target domain and supplemented with augmented images lead to more robust results than larger synthetic datasets. The results demonstrate that a mobile manipulator using the proposed pose-estimation system could be deployed in real-life logistics applications to increase the level of automation.

1. Introduction

Production facilities have successfully deployed classic fixed-programmed robots since the 1960s. Due to their inability to perceive the environment, such robots have mostly been used in mass production, where a static setup can be assumed [8]. The production industries' move away from mass production towards highly customized goods requires increased flexibility. Deploying mobile manipulators, a combination of mobile and articulated robots, for intra-logistical transport tasks promises this desired modularity [6]. Since the accuracy achieved by mobile robot navigation is not sufficient to grasp objects, robots need sensors to perceive their surroundings and autonomously detect objects' poses [1]. The most promising approaches for pose estimation are machine-learning-based methods applied to camera data [2]. Deep neural networks are trained using annotated ground-truth data – images showing the object and information about its pose [4].
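Such a ground-truth sample pairs an image with the object's 6D pose, i.e. a 3D translation plus an orientation. A minimal sketch of one annotation record, assuming a camera-frame translation in metres and an (x, y, z, w) quaternion; all field names, paths, and values here are hypothetical, not the paper's actual data format:

```python
# One pose-annotated training sample: an image plus the 6D pose of the
# object it shows. Field names, paths, and values are illustrative only.
sample = {
    "image": "images/000001.png",             # hypothetical file path
    "object": "transport_box",                # hypothetical object label
    "translation_m": [0.12, -0.04, 0.63],     # object position in the camera frame
    "quaternion_xyzw": [0.0, 0.0, 0.0, 1.0],  # object orientation (identity here)
}

# A pose-estimation network is trained to regress the pose fields
# from the image alone.
assert len(sample["translation_m"]) == 3
assert len(sample["quaternion_xyzw"]) == 4
```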
State-of-the-art methods for creating such data use markers rigidly attached to the objects, which have to be removed in cumbersome post-processing [3], or need human annotators that align 3D models to video streams [5]. In this work an approach to semi-automatically generate 6D-pose-annotated training data using an articulated robot is presented.

2. Semi-Automatic Data-Generation

As shown in Figure 1, the object is placed in front of the robot and a fiducial marker is put on it in a defined pose. The pose of the marker with respect to the camera is computed from the captured image and used to calculate the pose of the object with respect to the robot's base. The marker is captured from multiple perspectives and the mean pose is calculated to minimize errors of the camera calibration and marker detection. Afterwards the marker is removed (care must be taken that the object is not displaced) and the robot arm moves around the object to capture images and associated object-pose data automatically. In order to make the data also usable for training neural networks for object detection, the object can be rendered in a virtual environment to calculate segmentation masks. The design minimizes the extent of human labor: it is only necessary to place the marker on the object, capture images of it, and remove it again to enable recording of several thousand training images fully autonomously. Drawbacks are that the process has to be repeated to cover the other half of the orientation space and that the background is static. However, this can be solved by data augmentation.

3. Results & Discussion

Multiple annotated datasets are created using the proposed method and used to train the deep-learning-based 6D pose estimation system DOPE [7]. The annotated training data is split into five equally sized
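The pose chain described in Section 2 (marker pose measured in the camera frame, combined with the camera's calibrated pose in the robot-base frame and the defined marker-to-object offset) can be sketched with homogeneous transforms, including the multi-view averaging step. The numeric poses below are illustrative only, not values from the paper:

```python
import numpy as np

def pose_to_matrix(t, R=None):
    """Build a 4x4 homogeneous transform from translation t and optional 3x3 rotation R."""
    T = np.eye(4)
    if R is not None:
        T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative fixed transforms (all values hypothetical):
T_base_cam = pose_to_matrix([0.50, 0.00, 0.80])     # camera in the robot-base frame
T_marker_obj = pose_to_matrix([0.00, 0.00, -0.05])  # defined marker-to-object offset

# Marker poses detected from several camera perspectives; averaging the
# resulting object poses suppresses calibration and detection noise.
detections = [
    pose_to_matrix([0.00, 0.00, 0.40]),
    pose_to_matrix([0.00, 0.01, 0.41]),
    pose_to_matrix([0.00, -0.01, 0.39]),
]

# Chain: object in base frame = (base->camera) . (camera->marker) . (marker->object)
object_poses = [T_base_cam @ T_cam_marker @ T_marker_obj for T_cam_marker in detections]

# Mean translation over all views; rotation averaging (e.g. via quaternions)
# is omitted since all illustrative rotations are the identity.
mean_t = np.mean([T[:3, 3] for T in object_poses], axis=0)  # ≈ [0.5, 0.0, 1.15]
```

Once this mean object pose in the base frame is fixed, the per-image annotations follow from the robot's forward kinematics: for each capture, the known camera pose yields the object pose in that camera frame without any further marker detection.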
Title: Joint Austrian Computer Vision and Robotics Workshop 2020
Publisher: Graz University of Technology
Place: Graz
Date: 2020
Language: English
License: CC BY 4.0
ISBN: 978-3-85125-752-6
Dimensions: 21.0 x 29.7 cm
Pages: 188
Categories: Computer Science, Technology