3.3. Model-based Point Cloud Segmentation
A trivial model-based point cloud segmentation results from evaluating the vicinity of recognized object models. Object models are aligned with the scene point cloud by applying point-based methods, as described in section 3.1. It is reasonable to assume that a model point lying close to a scene point indicates that this scene point belongs to a model-explained segment. Spatial decomposition techniques such as kd-trees provide an efficient structure to determine the k closest points of a query point [15]. The set of scene points that are explained by an aligned object model is obtained as follows. Each point sampled from the model point cloud defines k nearest neighbors (kNN) in the scene point cloud. The nearest neighbor search is carried out in a kd-tree that represents the scene point cloud. Choosing the value of k results in a trade-off between segment density and sharpness of the segment edges.
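A minimal sketch of this kNN lookup is given below; the helper name, array shapes, and the optional distance cutoff used to sharpen segment edges are illustrative assumptions, implemented here with scipy's kd-tree rather than the implementation used in the paper.

```python
# Sketch: collect scene points explained by an aligned object model.
# All names and the max_dist cutoff are assumptions for illustration.
import numpy as np
from scipy.spatial import cKDTree

def model_explained_indices(scene_pts, aligned_model_pts, k=5, max_dist=0.01):
    """scene_pts: (N, 3) scene cloud; aligned_model_pts: (M, 3) model points
    already aligned to the scene. Returns indices of explained scene points."""
    tree = cKDTree(scene_pts)                     # kd-tree over the scene cloud
    dists, idx = tree.query(aligned_model_pts, k=k)
    explained = idx[dists <= max_dist]            # keep only sufficiently close neighbors
    return np.unique(explained)                   # each scene point counted once
```

Larger values of k pull more scene points into the segment (higher density), while the distance cutoff keeps the segment edges sharp, mirroring the trade-off described above.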
4. Segment-based Object Recognition and Pose Estimation
Increasing the degree of occlusion in a scene inevitably complicates the segmentation process. Nevertheless, the set of regions produced by the model-free segmentation method described in section 3.2. can preserve a certain amount of object characteristics, even in the occluded case. This motivates a segment-based recognition and pose estimation approach in which the model-free segmentation acts as the main input. Single segments like the one shown in figure 3 are often not expressive enough to apply recognition and pose estimation to them directly, as many of them show little variation in surface-normal orientation. We therefore propose to generate larger surface patches in order to improve the recognition results. Surface patches are created by clustering sets of adjacent segments. Figure 4 provides an overview of how segment-based model poses are generated iteratively in order to refine the model-free segmentation in a bottom-up way. In the remainder of this paper, the terms surface patch and segment are used interchangeably, since a single segment can also act as a simple surface patch.
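The following hedged sketch shows one way adjacent segments could be grouped into surface patches; the adjacency criterion (minimum point-to-point distance below a threshold), the threshold value, and all names are assumptions and not fixed by the text.

```python
# Sketch: merge adjacent segments into larger surface patches via union-find.
import numpy as np
from scipy.spatial import cKDTree

def merge_adjacent_segments(segments, adjacency_dist=0.01):
    """segments: list of (n_i, 3) point arrays; returns a list of surface patches."""
    trees = [cKDTree(s) for s in segments]
    n = len(segments)
    parent = list(range(n))                       # union-find over segment indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            # assumed adjacency test: any point pair closer than the threshold
            d, _ = trees[j].query(segments[i], k=1)
            if d.min() <= adjacency_dist:
                parent[find(i)] = find(j)

    patches = {}
    for i, seg in enumerate(segments):
        patches.setdefault(find(i), []).append(seg)
    return [np.vstack(group) for group in patches.values()]
```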
Figure 4: Iterative application of segment-based object recognition and pose estimation.
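Under strong simplifications, the loop indicated in figure 4 could look like the following sketch; recognize_and_estimate_pose is a hypothetical placeholder for the correspondence-based method of section 3.1. applied to a single patch, and the size-based ordering reflects the bottom-up strategy discussed in section 4.1.

```python
# Sketch of the iterative bottom-up loop of figure 4 (hypothetical interface).
def iterative_segment_based_recognition(patches, recognize_and_estimate_pose):
    hypotheses = []
    # handle large surface patches early, leaving simpler tasks for small segments
    for patch in sorted(patches, key=len, reverse=True):
        result = recognize_and_estimate_pose(patch)   # model id + 6-DoF pose, or None
        if result is not None:
            hypotheses.append(result)
            # explained scene points could then be removed with the kNN check of
            # section 3.3. before regrouping the remaining segments
    return hypotheses
```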
4.1. Adaptive Correspondence Grouping
The correspondence-based recognition and pose estimation method introduced in section 3.1. searches for a set of non-conflicting hypotheses that describe the whole scene at once. In contrast, we propose a segment-based bottom-up strategy. This approach is motivated by two considerations. First, restricting recognition and pose estimation to a surface patch that preserves certain object characteristics can reduce the number of wrong hypotheses. Second, a bottom-up strategy that handles large surface patches early reduces the complexity of the recognition task for smaller segments. The latter consideration gains relevance if the scene is a composition of