[Fig. 2 graphics: (a) dimensioned point cloud model (units in mm) with DH frames x0/y0/z0 through x6/y6/z6; an annotation notes that frames F4 and F6 are shifted backward by the DH algorithm; (b) kinematic structure; (c) depth image.]
Fig. 2: (a) ABB IRB 120 point cloud model - the (joint) frames are set up by the DH convention. Each link is implemented
as an independent model but coupled to its neighbor via the respective DH transformation; (b) ABB IRB 120 kinematic
structure; (c) Robot depth image captured by two Intel® RealSense R200 cameras at a distance of 1.5 m.
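For reference (the paper itself does not restate it), the classic DH convention couples two consecutive link frames by the homogeneous transformation

\[
{}^{i-1}T_i =
\begin{pmatrix}
\cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\
\sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{pmatrix},
\]

with joint angle \(\theta_i\), link offset \(d_i\), link length \(a_i\), and link twist \(\alpha_i\); this is the transformation by which the link models in Figure 2a are chained.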
captured pattern in comparison to the original one to get the depth information by triangulation. Third, Time-of-Flight
(ToF) cameras [17], where the depth information is measured via the elapsed time of pixel-wise emitted, modulated light
signals reflected by the detectable object.
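For continuous-wave modulated ToF sensors (an assumption about the modulation scheme, which the text does not specify), the depth follows from the measured phase shift \(\Delta\varphi\) between the emitted and the received signal as

\[
d = \frac{c}{4\pi f_{\mathrm{mod}}}\,\Delta\varphi,
\]

where \(c\) is the speed of light and \(f_{\mathrm{mod}}\) the modulation frequency.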
RGB-D cameras provide point clouds (with position information in ℝ³) generated from depth data. Thus, it is
not necessary to search for interest points for orientation estimation, since the detected objects are already available
as 2.5D objects in the workspace. Similar to an image-based approach, a 3D model of an object can be matched to the
point cloud via the Iterative Closest Point (ICP) algorithm [18–20] to find its alignment. It minimizes the distance
between two point clouds, with the requirement that the two point clouds are roughly close to each other (the initial
guess), until they are aligned. The ICP algorithm consists of the following phases:
• Selection of point pairs,
• Matching of these point pairs,
• Rejection of point pairs based on individual criteria,
• Error metric assignment,
• Minimization of the error metric.
With the ICP algorithm, an alignment can be achieved within a few iterations.
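As a minimal sketch of such an alignment (using the Open3D library and placeholder file names, neither of which is prescribed by the paper), a single model could be registered against a captured scene cloud as follows:

```python
import numpy as np
import open3d as o3d

# Load a model point cloud (e.g., exported from CAD) and the captured scene.
# File names are placeholders.
source = o3d.io.read_point_cloud("link_model.pcd")
target = o3d.io.read_point_cloud("scene_cloud.pcd")

# Rough initial guess, e.g., from the robot's nominal pose.
init_guess = np.eye(4)

# Point-to-point ICP; converges in a few iterations if the initial guess is close.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.02,  # 2 cm correspondence search radius
    init=init_guess,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    criteria=o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=30),
)
print(result.transformation)  # 4x4 alignment of the model in the scene
```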
Now, the idea is, instead of matching the whole robot as a rigid body, to split the robot into its links and match
them separately (cf. Figure 1) in an eye-to-hand composition, such that the orientation of its joints can be estimated. In
this case the use of markers can be omitted, since the joint orientation can be calculated from the alignment of the
links to each other, which makes this approach a versatile method for industrial applications. The knowledge
of the robot's kinematic chain gives the possibility of robot pose variation by well-defined joint orientations, as well as the variation of the joint orientation during motion to correct
the trajectory in case of work-flow disturbance. The goal of this approach is a visual servoing concept based on depth sensing,
with potential for collision protection and avoidance in collaborative applications.
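A possible structure of that per-link matching loop, sketched in Python with hypothetical helpers (forward_kinematics, extract_joint_angle, and the per-link model files are assumptions for illustration, not part of the paper):

```python
import numpy as np
import open3d as o3d

link_models = [o3d.io.read_point_cloud(f"link_{i}.pcd") for i in range(7)]  # base + 6 links
scene = o3d.io.read_point_cloud("scene_cloud.pcd")
q_est = np.zeros(6)  # current joint angle estimate

for i, link in enumerate(link_models[1:], start=1):
    # Initial guess for link i from the kinematic chain up to joint i (hypothetical helper).
    T_init = forward_kinematics(q_est, up_to_link=i)
    reg = o3d.pipelines.registration.registration_icp(
        link, scene, max_correspondence_distance=0.02, init=T_init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    # Update the joint estimate from the refined link alignment (hypothetical helper).
    q_est[i - 1] = extract_joint_angle(reg.transformation, joint_index=i)
```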
This paper is organized as follows. In Section II, the applied method is described. The implementation of the
robot's link point cloud models is presented in Section III. The setup and the camera alignment are described in
Section IV. In Section V, the presented work is summarized, and Section VI concludes the paper.
II. METHODS & APPROACH
The goal of the presented approach is to track a manipulator with six Degrees-of-Freedom (DoF) by two RGB-D
imaging systems for joint position perception and visual servoing. For the measurement of the robot's joint alignment,
the cameras are placed in an eye-to-hand composition. This allows capturing the whole manipulator from a wider view
and avoiding occlusions. The depth sensing technology with the highest accuracy for positioning and object matching
is derived by comparing two different camera technologies. Therefore, a structured light camera and a ToF camera
are applied and tested. Before the pose of the robot can be estimated, the positions of both cameras have to be
extrinsically calibrated to obtain a perfectly aligned point cloud from both cameras. The camera calibration is carried out
as a transformation of the camera coordinate system by its physical position relative to the robot's base coordinate
system.
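Expressed with symbols not introduced in the paper, a homogeneous point \(\tilde{p}_{\mathrm{cam},k}\) measured by camera \(k\) is mapped into the robot base frame by its extrinsic transformation

\[
\tilde{p}_{\mathrm{base}} = {}^{\mathrm{base}}T_{\mathrm{cam},k}\,\tilde{p}_{\mathrm{cam},k},
\qquad
{}^{\mathrm{base}}T_{\mathrm{cam},k} =
\begin{pmatrix}
R_k & t_k \\
0 & 1
\end{pmatrix},
\]

where \(R_k\) and \(t_k\) denote the rotation and translation of camera \(k\) relative to the robot base; applying this to both cameras merges their measurements into one consistently aligned point cloud.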
For a matching process of point clouds by an appropriate approximation method like ICP to obtain the robot's joint
positions, as mentioned in Section I, the models of its links, generated from Computer-Aided Design (CAD) files, have to
be prepared. This is done by aligning the link models in the CAD files in their initial position as shown in Figure 2a and