Web-Books in the Austria-Forum
Joint Austrian Computer Vision and Robotics Workshop 2020
Text of Page 124

Grasping Point Prediction in Cluttered Environment using Automatically Labeled Data

Stefan Ainetter, Friedrich Fraundorfer
Graz University of Technology
{stefan.ainetter,fraundorfer}@icg.tugraz.at

Abstract. We propose a method to automatically generate high quality ground truth annotations for grasping point prediction and show the usefulness of these annotations by training a deep neural network to predict grasping candidates for objects in a cluttered environment. First, we acquire sequences of RGBD images of a real world picking scenario and leverage the sequential depth information to extract labels for grasping point prediction. Afterwards, we train a deep neural network to predict grasping points, establishing a fully automatic pipeline from acquiring data to a trained network without the need of human annotators. We show in our experiments that our network trained with automatically generated labels delivers high quality results for predicting grasping candidates, on par with a trained network which uses human annotated data. This work lowers the cost and complexity of creating specific datasets for grasping and makes it easy to expand the existing dataset without additional effort.

1. Introduction

Automated grasping is a very active field of research in robotics. The process of having a robot manipulator successfully grasp objects in a cluttered environment is still a challenging problem. Recent state-of-the-art methods for grasping position computation often use deep learning techniques and supervised learning. However, these methods usually need to be trained on a large amount of labeled data. Therefore, it is of high interest to find techniques to automatically label data for robotic grasping. Previous work [17, 19] focused on using raw RGBD data for automatic object segmentation by leveraging sequential depth information from the scene. However, the segmentation mask is not sufficient as annotation for grasping point prediction, because many state-of-the-art approaches define the grasping proposal using a bounding box representation.

We propose a fully automatic pipeline from raw RGBD data to a system that predicts grasping point candidates using our automatically labeled data for training. Figure 1 shows our workflow. As a practical example, we captured RGBD data from log ordering in the wood industry. We will demonstrate the usefulness of our approach by training a deep neural network to predict grasping points using our automatically generated labels as ground truth. The main contributions of this work are:

1. A fully automatic annotation pipeline for grasping point prediction using sequential RGBD data.
2. An automatic annotation method that allows dense labeling of grasping points for graspable objects. Additionally, the annotations contain implicit information about the order of object removal due to the usage of sequential input data. These labels can be directly used for training a supervised learning approach.
3. A deep neural network which is able to predict grasping points in a cluttered environment, solely trained with a small number of automatically labeled images.

2. Related Work

Grasping point detection. The conventional method for grasping point detection uses information about object geometry, physics models and force analytics [1]. With the rise of deep learning, data-driven methods [2] became more common. Methods like [13, 9, 7, 20] use deep neural networks and supervised learning to predict multiple grasping points for a single object. Chu et al. [4] were able
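The abstract and introduction describe extracting grasp labels from sequential depth information: between two consecutive frames of a picking sequence, the region where an object was removed can be segmented from the depth difference. The following is a minimal sketch of that idea, not the authors' implementation; the function names, the OpenCV-based post-processing, and the thresholds (1 cm depth change, 500-pixel minimum blob size) are illustrative assumptions.

import numpy as np
import cv2


def removed_object_mask(depth_before, depth_after, min_diff=0.01, min_area=500):
    """Segment the object removed between two consecutive depth frames.

    depth_before, depth_after: HxW float depth images (metres) of the same
    static scene, captured before and after one object was picked.
    min_diff: depth change (m) above which a pixel counts as changed.
    min_area: minimum blob size (pixels) used to discard sensor noise.
    """
    # Where an object was removed, the measured depth increases because the
    # camera now sees the surface that was behind it.
    diff = np.nan_to_num(depth_after - depth_before, nan=0.0)
    changed = (diff > min_diff).astype(np.uint8)

    # Keep only the largest connected component as the object mask.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(changed)
    if num <= 1:
        return None
    areas = stats[1:, cv2.CC_STAT_AREA]
    best = 1 + int(np.argmax(areas))
    if areas[best - 1] < min_area:
        return None
    return (labels == best).astype(np.uint8)


def grasp_annotation_from_mask(mask):
    """Derive an oriented rectangle (cx, cy, w, h, angle) from the mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    (cx, cy), (w, h), angle = cv2.minAreaRect(max(contours, key=cv2.contourArea))
    return cx, cy, w, h, angle

Because each picked object produces one such before/after pair, iterating over the whole picking sequence would yield one annotation per object and, as noted in contribution 2, implicitly record the order in which objects were removed.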
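The related-work discussion points out that many approaches define the grasping proposal as a bounding box rather than a segmentation mask. As an illustration only, and under the common assumption that a grasp is parameterised by centre, gripper opening width, jaw size and in-plane rotation, the oriented rectangle above can be expanded into the four-corner form often used by such representations:

import numpy as np


def grasp_rect_corners(cx, cy, w, h, angle_deg):
    """Convert a (centre, size, angle) grasp rectangle into its four corners."""
    theta = np.deg2rad(angle_deg)
    # Rectangle axes in image coordinates.
    ax = np.array([np.cos(theta), np.sin(theta)])    # along gripper opening
    ay = np.array([-np.sin(theta), np.cos(theta)])   # along gripper jaws
    c = np.array([cx, cy])
    half_w, half_h = w / 2.0, h / 2.0
    return np.stack([
        c - ax * half_w - ay * half_h,
        c + ax * half_w - ay * half_h,
        c + ax * half_w + ay * half_h,
        c - ax * half_w + ay * half_h,
    ])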
Title: Joint Austrian Computer Vision and Robotics Workshop 2020
Editor: Graz University of Technology
Location: Graz
Date: 2020
Language: English
License: CC BY 4.0
ISBN: 978-3-85125-752-6
Size: 21.0 x 29.7 cm
Pages: 188
Categories: Informatik, Technik