model and scene point clouds. In general, keypoints are defined by detecting characteristic surface
points. A simple and efficient alternative is to sample keypoints uniformly from the surface. The
local geometry of each keypoint is described by the Signature of Histograms of Orientations (SHOT)
descriptor [17], which delivers favorable results for the evaluated dataset. PCL provides a variety of
different descriptor implementations. A comprehensive comparison can be found in [5]. Correspondences
are generated by matching scene descriptors against a database of offline-computed model
descriptors. The next step clusters geometrically consistent correspondences into groups. Starting
from a seed correspondence $c_i = \{p_i^m, p_i^s\}$ (where $p_i^m$ and $p_i^s$ denote corresponding keypoints of the model and the scene), geometric consistency of a further correspondence $c_j = \{p_j^m, p_j^s\}$ follows from the relation
$$\bigl|\,\lVert p_i^m - p_j^m \rVert_2 - \lVert p_i^s - p_j^s \rVert_2\,\bigr| < \varepsilon \qquad (1)$$
where $\varepsilon$ defines a distance threshold between the keypoints. A minimum of three correspondences
is required to estimate a 6DOF pose. The absolute orientation step eliminates correspondences that
are not consistent with a unique 6DOF pose.
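Since the pipeline is built on PCL, a minimal sketch of this front end (uniform keypoint sampling, SHOT description, descriptor matching against the offline model database, and geometric consistency grouping) can be assembled from standard PCL components as shown below. This is not the exact implementation used in this paper; the file names, radii, and thresholds are illustrative assumptions.

```cpp
// Minimal sketch of the pipeline front end using standard PCL components.
// File names, radii and thresholds are illustrative assumptions, not the
// parameters used in the paper.
#include <cmath>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/features/normal_3d_omp.h>
#include <pcl/features/shot_omp.h>
#include <pcl/filters/uniform_sampling.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/recognition/cg/geometric_consistency.h>

using PointT      = pcl::PointXYZRGBA;
using NormalT     = pcl::Normal;
using DescriptorT = pcl::SHOT352;
using CloudT      = pcl::PointCloud<PointT>;

// Uniformly sample keypoints from a cloud and describe them with SHOT.
static void describe(const CloudT::Ptr& cloud, CloudT::Ptr& keypoints,
                     pcl::PointCloud<DescriptorT>::Ptr& descriptors)
{
  pcl::PointCloud<NormalT>::Ptr normals(new pcl::PointCloud<NormalT>);
  pcl::NormalEstimationOMP<PointT, NormalT> ne;   // SHOT needs surface normals
  ne.setKSearch(10);
  ne.setInputCloud(cloud);
  ne.compute(*normals);

  pcl::UniformSampling<PointT> us;                // uniform keypoint sampling
  us.setInputCloud(cloud);
  us.setRadiusSearch(0.01);                       // 1 cm sampling radius (assumption)
  us.filter(*keypoints);

  pcl::SHOTEstimationOMP<PointT, NormalT, DescriptorT> shot;
  shot.setRadiusSearch(0.02);                     // 2 cm support radius (assumption)
  shot.setInputCloud(keypoints);
  shot.setInputNormals(normals);
  shot.setSearchSurface(cloud);                   // describe keypoints on the full surface
  shot.compute(*descriptors);
}

int main()
{
  CloudT::Ptr model(new CloudT), scene(new CloudT);
  if (pcl::io::loadPCDFile("model.pcd", *model) < 0 ||    // hypothetical file names
      pcl::io::loadPCDFile("scene.pcd", *scene) < 0)
    return -1;

  CloudT::Ptr model_kp(new CloudT), scene_kp(new CloudT);
  pcl::PointCloud<DescriptorT>::Ptr model_desc(new pcl::PointCloud<DescriptorT>);
  pcl::PointCloud<DescriptorT>::Ptr scene_desc(new pcl::PointCloud<DescriptorT>);
  describe(model, model_kp, model_desc);          // computed offline in the real pipeline
  describe(scene, scene_kp, scene_desc);

  // Match every scene descriptor against the model descriptor database.
  pcl::CorrespondencesPtr corrs(new pcl::Correspondences);
  pcl::KdTreeFLANN<DescriptorT> match_search;
  match_search.setInputCloud(model_desc);
  for (std::size_t i = 0; i < scene_desc->size(); ++i)
  {
    std::vector<int> idx(1);
    std::vector<float> sq_dist(1);
    if (!std::isfinite(scene_desc->at(i).descriptor[0]))
      continue;                                   // skip invalid descriptors
    if (match_search.nearestKSearch(scene_desc->at(i), 1, idx, sq_dist) == 1 &&
        sq_dist[0] < 0.25f)                       // SHOT distance threshold (assumption)
      corrs->emplace_back(idx[0], static_cast<int>(i), sq_dist[0]);
  }

  // Cluster geometrically consistent correspondences (Eq. (1)) into hypotheses.
  std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f>> poses;
  std::vector<pcl::Correspondences> clustered;
  pcl::GeometricConsistencyGrouping<PointT, PointT> gc;
  gc.setGCSize(0.01);                             // consistency threshold, cf. Eq. (1) (assumption)
  gc.setGCThreshold(5);                           // minimum cluster size; at least 3 for a 6DOF pose
  gc.setInputCloud(model_kp);
  gc.setSceneCloud(scene_kp);
  gc.setModelSceneCorrespondences(corrs);
  gc.recognize(poses, clustered);                 // one 6DOF pose hypothesis per cluster
  return 0;
}
```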
The utilized recognition pipeline provides an optional iterative closest point (ICP) refinement step, which can be applied to the recognized hypotheses. The number of ICP iterations has been set to a low value; running more than 5 ICP iterations on the given dataset does not result in significant recognition improvements.
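A minimal sketch of this optional per-hypothesis ICP refinement is given below, assuming the point type and pose list from the previous sketch; the correspondence distance is an illustrative assumption.

```cpp
// Sketch: refine each hypothesised pose with a few ICP iterations against the scene.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/transforms.h>
#include <pcl/registration/icp.h>

using PointT = pcl::PointXYZRGBA;   // same point type as in the previous sketch

void refine_hypotheses(
    const pcl::PointCloud<PointT>::Ptr& model,
    const pcl::PointCloud<PointT>::Ptr& scene,
    std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f>>& poses)
{
  for (auto& pose : poses)
  {
    // Bring the model into the scene frame using the current hypothesis.
    pcl::PointCloud<PointT>::Ptr model_aligned(new pcl::PointCloud<PointT>);
    pcl::transformPointCloud(*model, *model_aligned, pose);

    pcl::IterativeClosestPoint<PointT, PointT> icp;
    icp.setMaximumIterations(5);              // low iteration count, as in the text
    icp.setMaxCorrespondenceDistance(0.01);   // 1 cm, illustrative assumption
    icp.setInputSource(model_aligned);
    icp.setInputTarget(scene);

    pcl::PointCloud<PointT> refined;
    icp.align(refined);
    if (icp.hasConverged())
      pose = icp.getFinalTransformation() * pose;   // compose refinement with hypothesis
  }
}
```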
The final hypothesis verification step determines a set of non-conflicting model hypotheses that are in accordance with the scene point cloud. Hypotheses that result from unexpected objects within the scene have to withstand the following quality measure. An acceptance function evaluates the number of supported model points that are close to scene points, as well as the number of unsupported model points (visible model points that have no counterpart in the scene). A detailed description of the hypothesis verification algorithm utilized in this paper is given in [12].
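The algorithm of [12] jointly reasons over all hypotheses; the sketch below only captures the intuition of the acceptance function described above, i.e. weighing supported against unsupported model points for a single hypothesis. Visibility reasoning is omitted, and the radius and ratio are assumptions.

```cpp
// Sketch of the acceptance-function intuition, not the full algorithm of [12].
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/kdtree/kdtree_flann.h>

using PointT = pcl::PointXYZRGBA;   // same point type as in the previous sketches

// Decide whether a refined hypothesis (model already transformed into the scene
// frame) is supported well enough by the scene.
bool accept_hypothesis(const pcl::PointCloud<PointT>::Ptr& model_in_scene,
                       const pcl::PointCloud<PointT>::Ptr& scene,
                       float inlier_radius = 0.005f,    // 5 mm (assumption)
                       float min_support_ratio = 0.5f)  // (assumption)
{
  pcl::KdTreeFLANN<PointT> scene_tree;
  scene_tree.setInputCloud(scene);

  std::size_t supported = 0;
  std::vector<int> idx;
  std::vector<float> sq_dist;
  for (const auto& p : model_in_scene->points)
    if (scene_tree.radiusSearch(p, inlier_radius, idx, sq_dist, 1) > 0)
      ++supported;                  // model point has a close scene point

  // Remaining model points count as unsupported (no counterpart in the scene).
  const std::size_t unsupported = model_in_scene->size() - supported;
  return supported >= min_support_ratio * static_cast<float>(supported + unsupported);
}
```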
Figure 2: Recognition pipeline used in this paper.
3.2. Model-Free Point Cloud Segmentation
Segmentation results from summarizing interesting and distinguishable image properties. Higher-
level visual tasks like object recognition and pose estimation can benefit from such condensed image
representations. A method that segments the signal of an RGB-D sensor without explicit object model
information has been presented in [1]. Homogeneous regions (segments) are generated by using color
information. In addition, the method exploits depth information in order to support the segmentation
and tracking process. Figure 3 shows two example scenes that have been segmented by this method.
The segmentation result depends on several factors such as scene density, degree of occlusion, object geometry, and lighting conditions.
Figure 3: Point cloud segmentation generated by a color-based, model-free method.
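The method of [1] itself is not reproduced here. As a rough illustration of how color similarity and spatial proximity can be combined into homogeneous regions on RGB-D data, the sketch below uses PCL's color-based region growing, which is a different algorithm than [1] but produces conceptually similar segments; the file name and all thresholds are assumptions.

```cpp
// Illustrative only: PCL's color-based region growing is NOT the method of [1],
// but demonstrates segmenting a colored point cloud into homogeneous regions
// using color similarity and spatial distance.
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/region_growing_rgb.h>

int main()
{
  using Point = pcl::PointXYZRGB;
  pcl::PointCloud<Point>::Ptr cloud(new pcl::PointCloud<Point>);
  if (pcl::io::loadPCDFile("scene_rgb.pcd", *cloud) < 0)   // hypothetical file name
    return -1;

  pcl::search::KdTree<Point>::Ptr tree(new pcl::search::KdTree<Point>);
  pcl::RegionGrowingRGB<Point> rg;
  rg.setInputCloud(cloud);
  rg.setSearchMethod(tree);
  rg.setDistanceThreshold(0.02f);     // max spatial distance between neighbors (assumption)
  rg.setPointColorThreshold(6.0f);    // point-to-point color difference (assumption)
  rg.setRegionColorThreshold(5.0f);   // region-merging color difference (assumption)
  rg.setMinClusterSize(200);          // discard tiny segments (assumption)

  std::vector<pcl::PointIndices> segments;
  rg.extract(segments);               // one PointIndices set per homogeneous region
  return 0;
}
```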