A Visual Servoing Approach for a Six Degrees-of-Freedom Industrial Robot by RGB-D Sensing
Thomas Varhegyi, Martin Melik-Merkumians, Michael Steinegger, Georg Halmetschlager-Funek, and Georg Schitter
Abstract—A visual servoing approach is presented that uses depth images for marker-less robot-pose estimation. By matching a predefined robot model to a captured depth image for each robot link, utilizing an appropriate approximation method such as the Iterative Closest Point (ICP) algorithm, the robot's joint pose can be estimated. The a-priori knowledge of the robot configuration, alignment, and its environment enables joint pose manipulation by a visually servoed system with potential for collision detection and avoidance. The use of two RGB-D cameras makes a more accurate matching of the robot's links feasible while avoiding occlusions. The modeled links are coupled as a kinematic chain by the Denavit-Hartenberg convention and are prevented from diverging during the matching phase by an algorithm for joint pose dependency. The required joint orientation of the robot is calculated by the ICP algorithm to perform a pose correction until its point cloud aligns with the model again. First tests with two structured light cameras indicate that the recognition of the robot's joint positions yields good results, but currently only for slow-motion tasks.
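The per-link matching step described in the abstract rests on an ICP-style rigid alignment between a link's model point cloud and the captured depth data. The paper does not give an implementation; the following Python sketch merely illustrates a single ICP iteration (nearest-neighbor correspondences followed by an SVD-based rigid transform) under assumed names and libraries (NumPy, SciPy), and should not be read as the authors' code.

# Hedged sketch: one ICP iteration aligning a robot-link model cloud to a
# captured depth cloud. Library choices and all names are assumptions; the
# paper does not specify its implementation.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(model, scene):
    """One ICP iteration: rotation R and translation t that move the
    model points (Nx3) towards their nearest neighbors in scene (Mx3)."""
    # 1) Correspondences: closest scene point for every model point.
    tree = cKDTree(scene)
    _, idx = tree.query(model)
    target = scene[idx]
    # 2) Best rigid transform via SVD of the cross-covariance matrix.
    mu_m, mu_t = model.mean(axis=0), target.mean(axis=0)
    H = (model - mu_m).T @ (target - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_m
    return R, t

def icp(model, scene, iterations=30):
    """Iterate until the link model is (approximately) aligned to the scene."""
    aligned = model.copy()
    for _ in range(iterations):
        R, t = icp_step(aligned, scene)
        aligned = aligned @ R.T + t
    return aligned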
I. INTRODUCTION
The fourth industrial revolution involves the use of new robotic technologies for smart and efficient work-flows in an innovative way. Humans will work together with robots side-by-side and integrate them into their everyday work life as collaborative devices. Therefore, collision detection with humans and the environment has to be established, for instance, with pressure-sensitive skins [1, 2] or abnormal force recognition [3, 4], which are two approaches toward collaboration. Another idea is the integration of visual perception [5, 6]. Robots should see where they are, know and see the environment they move in, and know how they can grasp and move without disturbing the work-flow. The focus of this paper lies on the application of computer/machine vision methods for image processing and robot actuation. Vision-based motion control of robots is called visual servoing, where the robot manipulator is operated by evaluating visual information from an eye-to-hand (camera fixed at a workspace position) or an eye-in-hand (camera attached to the robot) configuration [7]. Figure 1 shows the recording of a robot in an eye-to-hand configuration, which is used for the visual servoing approach in this paper. The advantage of visual servoing is that the teach-in procedure of a robot can be omitted, since tool-tip-pose errors caused by a low-accuracy mapping between the tool-tip pose and the joint angles can additionally be corrected.
All authors are with the Automation and Control Institute (ACIN), Vienna University of Technology, A-1040 Vienna, Austria. Contact: melik-merkumians@acin.tuwien.ac.at (corresponding author)
Fig. 1: ABB IRB 120 point cloud model (Base, Link 1–Link 6) overlaid by the captured point cloud from the Intel® RealSense R200.
This visual information can be exploited as position-based or image-based information [8–10]. Position-based detection uses interest points in the image to detect the object position, while image-based detection uses a template image of the desired object to predict how the camera should be aligned with the object.
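For the image-based variant referenced above [8–10], the classic control law computes a camera velocity from the feature error, v = -λ L⁺(s − s*), where L is the interaction matrix of the observed features. The sketch below shows this textbook law for point features with known depth; it is a generic, assumed illustration and not the controller of this paper.

# Hedged sketch of a classic image-based visual servoing (IBVS) control law
# for point features: v = -lambda * pinv(L) * (s - s_desired). Generic
# textbook form; all names and values are assumptions.
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalized image point."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera twist (vx, vy, vz, wx, wy, wz) driving features to `desired`."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error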
So far, mainly 2D cameras have been applied in visual servoing applications [11–13]. The accuracy of the interest point estimation in the image, e.g., of edges or corners, determines how precisely the robot can be positioned with 2D cameras. For objects without distinctive characteristics, such as curved shapes without edges, these kinds of camera systems are not well suited. In this case, depth-sensing cameras are the better choice.
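To make the notion of interest points concrete, a typical corner-based extraction with OpenCV is sketched below; it is a generic illustration of the kind of 2D features such systems rely on and is not taken from the cited works.

# Hedged sketch: extracting corner interest points from a grayscale image,
# as typically used by 2D visual servoing systems. File name and parameter
# values are illustrative assumptions only.
import cv2

image = cv2.imread("workspace.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
corners = cv2.goodFeaturesToTrack(
    image,
    maxCorners=100,      # keep at most the 100 strongest corners
    qualityLevel=0.01,   # relative threshold on the corner response
    minDistance=10,      # minimum pixel spacing between corners
)
# `corners` is an Nx1x2 array of (x, y) pixel coordinates, or None if no
# sufficiently strong corners are found (e.g. for featureless objects).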
RGB-D imaging systems can be separated into three main groups. First, stereo vision systems [14], which are based on two cameras and feature disparity, where the depth information is obtained by triangulation. Second, structured light cameras [15, 16], which follow the same basic principle as stereo vision cameras, but instead of the second camera a projector is used. It emits a light pattern (usually infra-red light) and measures the disparity of the projected pattern.
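Both stereo and structured-light systems recover depth from the measured disparity by triangulation; for rectified views the relation is Z = f·b/d, with focal length f, baseline b, and disparity d. The sketch below illustrates this relation with assumed parameter values only and is not data from the paper.

# Hedged sketch: depth from disparity via triangulation for a rectified
# stereo (or camera/projector) pair. Parameter values are assumptions.
import numpy as np

def disparity_to_depth(disparity, focal_length_px, baseline_m):
    """Z = f * b / d; invalid (zero) disparities map to infinity."""
    disparity = np.asarray(disparity, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(disparity > 0,
                        focal_length_px * baseline_m / disparity,
                        np.inf)

# Example: a 5 px disparity at f = 600 px and a 70 mm baseline gives 8.4 m.
print(disparity_to_depth([5.0, 30.0, 0.0], focal_length_px=600.0, baseline_m=0.07))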