cameras, the depth information along the z-axis can be calculated. The vision sensor in our setup has both stereo cameras fixed in relation to each other, looking slightly inwards, with a rotation around the Y (vertical) axis. Solving Eq. 1 provides the real-world coordinates X, Y and Z of a point seen by the stereo cameras. Inputs (x1, y1) and (x2, y2) are the point coordinates in camera 1 and camera 2, respectively. Variable f is the focal length of the cameras and b defines the baseline (distance) between the stereo cameras. The rotation between the cameras around the Y-axis is defined by θ.
\[
Z_0 = \frac{b}{\tan(\theta)}, \qquad
Z = \frac{b \cdot f}{x_1 - x_2 + \frac{f \cdot b}{Z_0}}, \qquad
X = \frac{x_1 \cdot Z}{f}, \qquad
Y = \frac{y_1 \cdot Z}{f}
\tag{1}
\]
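As a minimal sketch, Eq. 1 maps directly to a few lines of Python. The function below assumes image coordinates already expressed relative to the principal point and in the same units as the focal length; all names are illustrative:

```python
import math

def triangulate(x1, y1, x2, f, b, theta):
    """Stereo triangulation per Eq. 1 for slightly verged cameras.

    x1, y1 : point coordinates in camera 1
    x2     : x-coordinate of the same point in camera 2
    f      : focal length (same units as the image coordinates)
    b      : baseline (distance) between the stereo cameras
    theta  : rotation between the cameras around the Y-axis
    """
    z0 = b / math.tan(theta)               # Z0: depth where the optical axes cross
    z = (b * f) / (x1 - x2 + f * b / z0)   # depth of the observed point
    x = x1 * z / f
    y = y1 * z / f
    return x, y, z
```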
After the charging port is found in the input images, stereo triangulation is used to obtain the 3D real-world coordinates of the port position, providing 5 to 7 reference points depending on the charging port type. Using these points, a perspective transformation is calculated using the least-squares fit method to obtain the exact position and orientation of the charging port in relation to the vision sensor. The least-squares fit for finding the orientation optimises for 3 unknowns (A, B and C), which are later mapped to roll, pitch and yaw angles. The least-squares error function is defined in Eq. 2, where x, y and z are the coordinates of the reference points.
\[
e(A, B, C) = \sum (Ax + By + C - z)^2
\tag{2}
\]
Then, the error function is differentiated and set to
zero, as shown in Eq. 3.
\[
\frac{\partial e}{\partial A} = \sum 2(Ax + By + C - z)\,x = 0, \qquad
\frac{\partial e}{\partial B} = \sum 2(Ax + By + C - z)\,y = 0, \qquad
\frac{\partial e}{\partial C} = \sum 2(Ax + By + C - z) = 0
\tag{3}
\]
The resulting system of linear equations with 3 unknowns is solved to obtain the orientation of the object. This can also be seen as fitting a 3D plane to the given points.
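Setting the derivatives in Eq. 3 to zero yields a 3x3 linear system in A, B and C. A minimal sketch of this plane fit, assuming NumPy and leaving the convention-dependent mapping of the plane parameters to roll, pitch and yaw angles aside, could look as follows:

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of the plane z = A*x + B*y + C to reference points.

    points : (N, 3) array of (x, y, z) reference coordinates, N >= 3.
    Returns (A, B, C); the plane normal is proportional to (A, B, -1).
    """
    pts = np.asarray(points, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    # Design matrix of the linear system arising from Eq. 3; lstsq solves
    # the corresponding normal equations M^T M [A, B, C]^T = M^T z.
    M = np.column_stack([x, y, np.ones_like(x)])
    (A, B, C), *_ = np.linalg.lstsq(M, z, rcond=None)
    return A, B, C
```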
B. Marker-less Eye-to-Hand Calibration
In order to operate the vision sensor and the robot
in the same coordinate system, eye-to-hand calibration
is necessary. The eye-to-hand calibration estimates
the transformation between the vision sensor and the
robot base. Using this transformation, the position of any object detected by the vision sensor can be recalculated into the coordinate system of the robot, allowing the robot to move to, or avoid, that location.
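Assuming the calibration result is available as a 4x4 homogeneous transformation matrix, re-expressing a detected point in the robot base frame is a single matrix product; the names below are illustrative:

```python
import numpy as np

def camera_to_base(T_base_cam, p_cam):
    """Map a point from the camera frame into the robot base frame.

    T_base_cam : 4x4 homogeneous transform from camera to robot base
                 (the eye-to-hand calibration result).
    p_cam      : (X, Y, Z) point from stereo triangulation.
    """
    p_h = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous point
    return (T_base_cam @ p_h)[:3]
```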
Normally, a well-structured object of known size and structure, such as a checkerboard, is used in the calibration process. However, this requires mounting it on the end-effector of the robot and can still result in additional offsets. We instead use the known structure of the connector plug and the previously presented shape-based template matching with orientation estimation to obtain the precise pose. The eye-to-hand calibration is based on an automatic calibration procedure for 3D camera-robot systems, which uses the calibration method proposed by Tsai et al. [15], [21].
The eye-to-hand calibration results in two transformation matrices. The first one defines the position of the vision sensor in relation to the robot base, and the second one defines the position of the end point of the connector plug in relation to the end-effector of the robot.
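Assuming both results are stored as 4x4 homogeneous matrices, the end-effector pose that brings the plug tip onto the detected port follows from a single chain of transformations; a sketch with illustrative names:

```python
import numpy as np

def flange_target(T_base_port, T_ee_plug):
    """End-effector pose aligning the plug tip with the charging port.

    T_base_port : 4x4 pose of the charging port in the robot base frame.
    T_ee_plug   : 4x4 pose of the connector-plug end point in the
                  end-effector frame (the second calibration matrix).
    Alignment requires T_base_ee @ T_ee_plug = T_base_port, hence:
    """
    return T_base_port @ np.linalg.inv(T_ee_plug)
```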
The marker-less eye-to-hand calibration is beneficial if the robot is placed on a moving platform, where the relative position between the vision sensor and the robot can change. It is also useful when the robot has interchangeable end-effector attachments with different connector plugs. In both of these cases, the recalibration procedure can be performed automatically without any reconfiguration.
C. Robot Motion Planning
Given the limited workspace and all the movements being defined by camera measurements, robot control in Cartesian coordinates was used. The MoveIt! framework, containing multiple motion planning algorithms, was used for the initial testing [20]. The best performance in the defined case was demonstrated by the RRT-Connect algorithm, which is based on rapidly exploring random trees [13].
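For illustration, a Cartesian goal can be planned with RRT-Connect through the moveit_commander Python interface; the group name, planner id string and target pose below are assumptions that depend on the local MoveIt! configuration:

```python
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize([])
rospy.init_node("plug_in_planner")

# "manipulator" is the usual group name in UR10 MoveIt! configurations,
# but it is configuration-dependent.
group = moveit_commander.MoveGroupCommander("manipulator")
# The exact planner id comes from the OMPL configuration (ompl_planning.yaml).
group.set_planner_id("RRTConnectkConfigDefault")

# Hypothetical Cartesian goal, e.g. derived from the detected port pose.
target = Pose()
target.position.x, target.position.y, target.position.z = 0.4, 0.2, 0.3
target.orientation.w = 1.0

group.set_pose_target(target)
group.go(wait=True)   # plan with RRT-Connect and execute
group.stop()
group.clear_pose_targets()
```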
In order to get smoother motion execution and more human-like motions, a velocity-based controller was used instead of the standard one provided in ROS. Better performance is achieved by calculating and directly sending speed commands to each of the robot joints, thus reducing the execution start time to 50-70 ms, compared to around 170 ms using the official ROS UR10 drivers [10].
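As one possible realisation, UR robots accept URScript speedj commands over the controller's client interface, which allows streaming joint velocities directly; the following is a sketch under that assumption, with an illustrative robot address:

```python
import socket

ROBOT_IP = "192.168.0.100"  # assumed address of the UR10 controller
URSCRIPT_PORT = 30002       # secondary client interface accepting URScript

def send_joint_speeds(qd, acceleration=1.0, duration=0.1):
    """Stream a joint-velocity command directly to the UR controller.

    qd : six joint velocities in rad/s. URScript's speedj(qd, a, t) runs
    the joints at these speeds, bypassing the trajectory controller.
    """
    cmd = "speedj([%s], %f, %f)\n" % (
        ", ".join("%f" % v for v in qd), acceleration, duration)
    with socket.create_connection((ROBOT_IP, URSCRIPT_PORT), timeout=1.0) as s:
        s.sendall(cmd.encode("ascii"))
```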
D. Plugging-In Procedure
After the pose of the charging port is calculated, a coordinate system is assigned to it, with the origin placed at the center of the port and the Z-axis pointing outwards. Similarly, a coordinate system is assigned to the connector plug, which is held by the robot. The goal of the plug-in procedure is to perfectly align the connector plug with the charging port, so that the last movement is simply a translation along one axis. In order to achieve that, a