Joint Austrian Computer Vision and Robotics Workshop 2020

Text of page 3

Figure 1. Procedure for generating annotated data, using a robot and a movable fiducial marker.

portions and merged to gain datasets containing 20% to 100% (15k images) of all recorded samples. The translational-15mm-error metrics (percentage of tested data for which the translational error is smaller than 15 mm – the accuracy necessary for grasping) [7] in Figure 2 show that using pre-trained models (blue, 6–10) leads to better performance than initializing networks with random weights (red, 1–5). Bigger datasets do not necessarily improve the accuracy, since biased datasets lead to wrong generalizations (e.g. network 5). A relatively small dataset recorded in the target domain achieves better results than a several times larger synthetic dataset (network 12: 15k real + 15k domain-randomized images), especially when extended using data augmentation (network 11: smallest real dataset augmented twice). The rotational errors show similar results, but are generally lower.

Figure 2. Translational errors compared with regard to training time: synthetic data (green), augmented data (cyan), pre-trained (blue) and non-pre-trained networks (red). Bubble size visualizes dataset size.

A qualitative evaluation using a real mobile manipulator confirms that the proposed pose-estimation system could be deployed in real-life logistics applications to increase the level of automation.

References

[1] U. Asif, M. Bennamoun, and F. A. Sohel. RGB-D object recognition and grasp detection using hierarchical cascaded forests. IEEE Transactions on Robotics, 33(3):547–564, 2017.
[2] G. Du, K. Wang, and S. Lian. Vision-based robotic grasping from object localization, pose estimation, grasp detection to motion planning: A review. CoRR, 2019.
[3] M. Garon, D. Laurendeau, and J. F. Lalonde. A framework for evaluating 6-DOF object trackers. In 15th European Conference on Computer Vision – ECCV, pages 608–623, 2018.
[4] I. J. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, Cambridge, MA, USA, 2016.
[5] P. Marion, P. R. Florence, L. Manuelli, and R. Tedrake. Label Fusion: A pipeline for generating ground truth labels for real RGBD data of cluttered scenes. In IEEE International Conference on Robotics and Automation – ICRA, pages 1–8, 2017.
[6] D. Pavlichenko, G. M. García, S. Koo, and S. Behnke. KittingBot: A mobile manipulation robot for collaborative kitting in automotive logistics. In 15th International Conference on Intelligent Autonomous Systems – IAS, pages 849–864, 2018.
[7] J. Tremblay, T. To, B. Sundaralingam, Y. Xiang, D. Fox, and S. Birchfield. Deep object pose estimation for semantic robotic grasping of household objects. In 2nd Annual Conference on Robot Learning – CoRL, pages 306–316, 2018.
[8] J. Wallén. The history of the industrial robot. Technical report from Automatic Control at Linköpings universitet, 2008.
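The 15 mm threshold metric reported above (the share of test samples whose translational pose error stays below the accuracy needed for grasping) is straightforward to reproduce from predicted and ground-truth object translations. The following Python sketch is only an illustration under the assumption that translations are expressed in millimetres; the function name and the dummy data are hypothetical and not taken from the paper.

```python
import numpy as np

def translational_accuracy(t_pred, t_true, threshold_mm=15.0):
    """Fraction of samples whose translational error is below `threshold_mm`.

    t_pred, t_true: arrays of shape (N, 3) with predicted and ground-truth
    object translations, assumed to be given in millimetres.
    """
    errors = np.linalg.norm(np.asarray(t_pred) - np.asarray(t_true), axis=1)
    return float(np.mean(errors < threshold_mm))

# Hypothetical usage with synthetic data standing in for network predictions:
rng = np.random.default_rng(0)
t_true = rng.uniform(-500.0, 500.0, size=(100, 3))       # ground-truth translations
t_pred = t_true + rng.normal(0.0, 10.0, size=(100, 3))   # simulated estimation noise
print(f"translational-15mm accuracy: {translational_accuracy(t_pred, t_true):.1%}")
```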

Title: Joint Austrian Computer Vision and Robotics Workshop 2020
Publisher: Graz University of Technology
Place: Graz
Date: 2020
Language: English
License: CC BY 4.0
ISBN: 978-3-85125-752-6
Dimensions: 21.0 x 29.7 cm
Pages: 188
Categories: Informatics, Technology