Proceedings - OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Page 88
by a model-free segmentation process. The complexity of the recognition task is reduced stepwise by handling large segments before small segments. The input segmentation is refined iteratively by exploiting collected 6DOF model pose information. Recognition and pose estimation rely on object models that are specified by 3D meshes, as shown in Figure 1. Object recognition is bound to certain time constraints; the proposed algorithm, however, does not execute in real time. The utilized segmentation stream uses color information as its main cue, in contrast to object recognition, which has been restricted to geometrical information. Omitting color information in the latter case has been motivated by the surface characteristics of the evaluated object dataset. The proposed algorithm does not necessarily rely on color cues; in general, it can be applied with any adequate point cloud segmentation input.

[Figure 1: The set of object models that are used for recognition and pose estimation: faceplate, separator, pendulum shaft, bolt, angular bolt, sensor, pendulum head.]

2. Related Work

Exploiting low-level processing outcomes in higher-level tasks is a fundamental paradigm in computer vision [4, 8, 19]. At present, there exist many segmentation methods that apply to RGBD data [1, 7, 13, 9]. Global surface descriptors are commonly applied to pre-segmented scenes [4]. In this paper we concentrate on local descriptors [5]; the latter type is more suitable for our dataset, since it is more robust against clutter and occlusion. Model information is frequently used for object tracking in videos. The method proposed in [14] uses model information to track 6DOF poses. An RGBD-based segmentation and tracking approach that uses adaptive surface models is proposed in [10]. Our approach concentrates on a combination of object recognition, pose estimation and segmentation in RGBD images.

3. Background

The following sections provide information about the methods that have been utilized in this paper.
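The stepwise, large-before-small processing described in the introduction can be sketched as a simple loop. This is an illustrative sketch only: the callables `recognize` and `refine` are hypothetical placeholders for the paper's recognition stage and pose-driven segmentation refinement, not an actual API.

```python
def stepwise_recognition(segments, recognize, refine):
    """Handle large segments before small ones and feed collected 6DOF
    poses back into the segmentation, as described in the text.

    segments:  list of point lists (a segment's size is its point count)
    recognize: returns a pose for a segment, or None if no model fits
    refine:    updates the remaining segments given the poses found so far
    Both callables are hypothetical placeholders.
    """
    poses = []
    queue = sorted(segments, key=len, reverse=True)  # largest segment first
    while queue:
        segment = queue.pop(0)
        pose = recognize(segment)
        if pose is not None:
            poses.append(pose)
            # Iterative refinement: re-segment the remainder using known poses.
            queue = sorted(refine(queue, poses), key=len, reverse=True)
    return poses
```

Processing large segments first mirrors the paper's rationale: big segments reduce the complexity of the remaining recognition task before the small ones are attempted.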
Recognition and pose estimation are addressed in the subsequent Section 3.1. Section 3.2 introduces a method that delivers model-free segmentation. The model-based point cloud segmentation that is described in Section 3.3 acts as a baseline for the bottom-up segmentation approach proposed in Section 4.2.

3.1. Point-based Object Recognition and Pose Estimation

At present, there exists a large variety of different object recognition and pose estimation approaches. An appropriate method should be robust against noise which is introduced by the sensor, and it should provide reliable results even in the case of occluded scenes. Scenes are captured by a depth sensor, and object models are represented as point clouds that are sampled from 3D meshes. The method used in this paper estimates 6DOF poses by applying a point-based recognition pipeline [3]. The pipeline is publicly available as part of the Point Cloud Library (PCL) [16]. Figure 2 shows the single steps that are executed in order to recognize the objects in the scene. The first stage extracts keypoints from
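The final stage of such a pipeline aligns model points to scene points with a rigid 6DOF transform. A minimal sketch of that step, using the standard Kabsch algorithm on already-known point correspondences (an assumption for illustration; the PCL pipeline's internal stages such as keypoint matching and hypothesis verification are not reproduced here):

```python
import numpy as np

def rigid_pose_from_correspondences(model_pts, scene_pts):
    """Kabsch algorithm: least-squares rotation R and translation t such
    that scene_pts ~ R @ model_pts + t, given matched (N, 3) point arrays."""
    mc, sc = model_pts.mean(axis=0), scene_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (model_pts - mc).T @ (scene_pts - sc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = sc - R @ mc
    return R, t
```

With exact correspondences of at least three non-collinear points, the recovered rotation and translation reproduce the 6DOF pose exactly; in practice the pipeline's correspondence grouping supplies noisy matches and the estimate is a least-squares fit.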
Title
Proceedings
Subtitle
OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Authors
Peter M. Roth
Kurt Niel
Publisher
Verlag der Technischen Universität Graz
Location
Wels
Date
2017
Language
English
License
CC BY 4.0
ISBN
978-3-85125-527-0
Size
21.0 x 29.7 cm
Pages
248
Keywords
Tagungsband (conference proceedings)
Categories
International
Tagungsbände (conference proceedings)

Table of contents

  1. Learning / Recognition 24
  2. Signal & Image Processing / Filters 43
  3. Geometry / Sensor Fusion 45
  4. Tracking / Detection 85
  5. Vision for Robotics I 95
  6. Vision for Robotics II 127
  7. Poster OAGM & ARW 167
  8. Task Planning 191
  9. Robotic Arm 207