Joint Austrian Computer Vision and Robotics Workshop 2020
Page 124
Grasping Point Prediction in Cluttered Environment using Automatically Labeled Data

Stefan Ainetter, Friedrich Fraundorfer
Graz University of Technology
{stefan.ainetter,fraundorfer}@icg.tugraz.at

Abstract. We propose a method to automatically generate high quality ground truth annotations for grasping point prediction and show the usefulness of these annotations by training a deep neural network to predict grasping candidates for objects in a cluttered environment. First, we acquire sequences of RGBD images of a real world picking scenario and leverage the sequential depth information to extract labels for grasping point prediction. Afterwards, we train a deep neural network to predict grasping points, establishing a fully automatic pipeline from acquiring data to a trained network without the need of human annotators. We show in our experiments that our network trained with automatically generated labels delivers high quality results for predicting grasping candidates, on par with a trained network which uses human annotated data. This work lowers the cost and complexity of creating specific datasets for grasping and makes it easy to expand the existing dataset without additional effort.

1. Introduction

Automated grasping is a very active field of research in robotics. The process of having a robot manipulator successfully grasp objects in a cluttered environment is still a challenging problem. Recent state-of-the-art approaches for grasping position computation often use deep learning techniques and supervised learning. However, these methods usually need to be trained on a large amount of labeled data. Therefore, it is of high interest to find techniques to automatically label data for robotic grasping. Previous work [17, 19] focused on using raw RGBD data for automatic object segmentation by leveraging sequential depth information from the scene. However, the segmentation mask is not sufficient as annotation for grasping point prediction, because many state-of-the-art approaches define the grasping proposal using a bounding box representation.

We propose a fully automatic pipeline from raw RGBD data to a system that predicts grasping point candidates using our automatically labeled data for training. Figure 1 shows our workflow. As a practical example, we captured RGBD data from log ordering in the wood industry. We demonstrate the usefulness of our approach by training a deep neural network to predict grasping points using our automatically generated labels as ground truth. The main contributions of this work are:

1. A fully automatic annotation pipeline for grasping point prediction using sequential RGBD data.

2. An automatic annotation method that allows dense labeling of grasping points for graspable objects. Additionally, the annotations contain implicit information about the order of object removal due to the usage of sequential input data. These labels can be directly used for training a supervised learning approach.

3. A deep neural network which is able to predict grasping points in a cluttered environment, solely trained with a small number of automatically labeled images.

2. Related Work

Grasping point detection. The conventional method for grasping point detection uses information about object geometry, physics models and force analytics [1]. With the rise of deep learning, data-driven methods [2] became more common. Methods like [13, 9, 7, 20] use deep neural networks and supervised learning to predict multiple grasping points for a single object. Chu et al. [4] were able
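The method details are not part of this page, but the core idea stated in the abstract and introduction, comparing depth frames recorded before and after an object is picked to obtain a label for the removed object, can be illustrated with a minimal sketch. This is an assumption based on the description above, not the authors' actual pipeline; the function name, the noise threshold, and the use of NumPy are choices made only for this illustration.

import numpy as np

def removed_object_mask(depth_before, depth_after, min_change=0.01):
    """Binary mask of the region where an object was removed between two
    aligned depth frames of the same picking scene.

    depth_before, depth_after: HxW depth maps in meters, same viewpoint
    min_change: minimum depth increase (meters) counted as a real change,
                to suppress sensor noise
    """
    # After an object is picked, the sensor sees surfaces that lie farther
    # away, so the depth value increases inside the removed object's region.
    diff = depth_after - depth_before
    return diff > min_change

# Toy example: a 4x4 scene where picking an object exposes a deeper surface.
before = np.full((4, 4), 0.80)
before[1:3, 1:3] = 0.60            # object lying on top of the pile
after = np.full((4, 4), 0.80)      # same scene after the object was removed
print(removed_object_mask(before, after).astype(int))

As the introduction notes, such a mask alone is not a sufficient annotation for grasping point prediction, since most detectors expect an oriented bounding box grasp proposal; the sketch only covers the depth-difference labeling idea.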
Title: Joint Austrian Computer Vision and Robotics Workshop 2020
Publisher: Graz University of Technology
Place: Graz
Date: 2020
Language: English
License: CC BY 4.0
ISBN: 978-3-85125-752-6
Dimensions: 21.0 x 29.7 cm
Pages: 188
Categories: Informatik, Technik