Proceedings of the OAGM&ARW Joint Workshop - Vision, Automation and Robotics, page 78
iterations on the test system and the point cloud model cannot be matched. The low accuracy of the ICP algorithm in the experiments at low speeds may be due to the very bumpy surface images from the cameras (Figure 1), which make it difficult to calculate an accurate match. Smoothing the robot’s point cloud with the moving least squares method from the PCL also does not significantly improve the results, since the outliers in the robot’s point cloud surface are too large (cf. Figure 2c) to achieve good results.
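For illustration, the moving least squares smoothing mentioned above (pcl::MovingLeastSquares in the PCL) fits a local weighted polynomial around each point and replaces the point by its projection onto that fit. A minimal one-dimensional sketch of the idea in plain Python (the function name and the Gaussian weighting are illustrative choices, not the PCL implementation):

```python
import math

def mls_smooth_1d(xs, ys, h):
    """Moving-least-squares smoothing with a linear basis.

    For each sample position x, fit a weighted line a + b*(xj - x) over
    all points, with Gaussian weights exp(-(xj - x)^2 / h^2), and
    replace y by the fitted value a (the local line evaluated at x)."""
    smoothed = []
    for x in xs:
        # Accumulate the 2x2 normal equations of the weighted fit.
        s_w = s_wx = s_wxx = s_wy = s_wxy = 0.0
        for xj, yj in zip(xs, ys):
            d = xj - x
            w = math.exp(-(d * d) / (h * h))
            s_w += w
            s_wx += w * d
            s_wxx += w * d * d
            s_wy += w * yj
            s_wxy += w * d * yj
        det = s_w * s_wxx - s_wx * s_wx
        if abs(det) < 1e-12:
            # Degenerate neighbourhood: fall back to the weighted mean.
            smoothed.append(s_wy / s_w)
        else:
            a = (s_wxx * s_wy - s_wx * s_wxy) / det
            smoothed.append(a)
    return smoothed
```

Because the basis is linear, points that already lie on a line are reproduced exactly, while high-frequency noise is averaged out; large outliers, however, still drag the local fit, which matches the observation above.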
VI. CONCLUSION & OUTLOOK
A robot point cloud model generated from CAD data for each robot link has been adopted and linked via the DH convention. A linked motion algorithm is integrated so that each link’s pose depends on the preceding links. The first tests with structured light cameras and the ICP algorithm from the PCL showed moderate results. For the next tests with structured light cameras, the results should be improved by implementing a Levenberg-Marquardt optimizer [23, 24] for an optimized registration. Changing the camera system to ToF cameras is also expected to bring better results with the general ICP algorithm. So far the operation area is limited by the use of only two cameras, because the robot’s tool center point is not detectable everywhere owing to occlusions in the negative y-direction. A remedy would be to place a third camera to the right of the robot. This is feasible with a ToF camera but challenging with a structured light camera due to illumination disturbance from the counterpart. An alignment of 60 degrees between three structured light cameras would be better, since all three cameras would then receive the same disturbance, which is less than if two of the three received it fully. A faster and more general model implementation could be achieved by automatically generating the model from COLLAborative Design Activity (COLLADA) [25] data, which can be produced easily by CAD programs. With COLLADA data (version 1.5.0), not only the geometry parameters but also mechanical parameters such as mass, inertia, and center of mass could be loaded, which is of interest for the robot dynamics. This would remove the model preparation mentioned in Section III and make the application more user-friendly.
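As background on the DH linkage of the model, the pose of each link follows from chaining the standard Denavit-Hartenberg transforms, so every link pose depends on all preceding joint values. A minimal sketch in plain Python (the parameter values and function names are hypothetical, not those of the robot used here):

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg transform from frame i-1 to frame i."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    """4x4 homogeneous matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_params):
    """Chain the per-link DH transforms so each link pose depends on
    all preceding joints; returns the base-to-tool transform."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for theta, d, a, alpha in dh_params:
        T = mat_mul(T, dh_matrix(theta, d, a, alpha))
    return T
```

For a two-link planar arm with unit link lengths and the first joint at 90 degrees, the chained transform places the tool at (0, 2) in the base frame, which is the linked behaviour described above: moving an early joint moves every subsequent link.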
REFERENCES
[1] J. O’Neill, J. Lu, R. Dockter, and T. Kowalewski, “Practical, stretchable smart skin sensors for contact-aware robots in safe and collaborative interactions”, in 2015 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2015, pp. 624–629.
[2] C. Liu, Y. Huang, P. Liu, Y. Zhang, H. Yuan, L. Li, and Y. Ge, “A flexible tension-pressure tactile sensitive sensor array for the robot skin”, in Robotics and Biomimetics (ROBIO), 2014 IEEE International Conference on, IEEE, 2014, pp. 2691–2696.
[3] A. De Luca and R. Mattone, “Sensorless robot collision detection and hybrid force/motion control”, in Robotics and Automation, 2005. ICRA 2005. Proceedings of the 2005 IEEE International Conference on, IEEE, 2005, pp. 999–1004.
[4] K. Kosuge and T. Matsumoto, “Collision detection of manipulator based on adaptive control law”, in Proc. IEEE/ASME Int. Conf. on Advanced Intelligent Mechatronics, 2001, pp. 117–122.
[5] C. Morato, K. N. Kaipa, B. Zhao, and S. K. Gupta,
“Toward safe human robot collaboration by using
multiple kinects based real-time human tracking”,
Journal of Computing and Information Science in
Engineering, vol. 14, no. 1, p. 011006, 2014.
[6] B. Schmidt and L. Wang, “Depth camera based col-
lision avoidance via active robot control”, Journal of
Manufacturing Systems, vol. 33, no. 4, pp. 711–718,
2014.
[7] A. Muis and K. Ohnishi, “Eye-to-hand approach on
eye-in-hand configuration within real-time visual ser-
voing”, IEEE/ASME Transactions on Mechatronics,
vol. 10, no. 4, pp. 404–410, 2005.
[8] F. Janabi-Sharifi, L. Deng, and W. J. Wilson,
“Comparison of basic visual servoing methods”,
IEEE/ASME Transactions on Mechatronics, vol. 16,
no. 5, pp. 967–983, 2011.
[9] B. Thuilot, P. Martinet, L. Cordesses, and J. Gallice,
“Position based visual servoing: Keeping the object
in the field of vision”, in Robotics and Automation,
2002. Proceedings. ICRA ’02. IEEE International Conference on, IEEE, vol. 2, 2002, pp. 1624–1629.
[10] T. Koenig, Y. Dong, and G. N. DeSouza, “Image-based visual servoing of a real robot using a quaternion formulation”, in Robotics, Automation and Mechatronics, 2008 IEEE Conference on, IEEE, 2008, pp. 216–221.
[11] E. Marchand and F. Chaumette, “Visual servoing
through mirror reflection”, in IEEE Int. Conf. on
Robotics and Automation, ICRA’17, 2017.
[12] D. Tsai, D. G. Dansereau, T. Peynot, and P. Corke,
“Image-based visual servoing with light field cam-
eras”, IEEE Robotics and Automation Letters, vol. 2,
no. 2, pp. 912–919, 2017.
[13] N. Shahriari, S. Fantasia, F. Flacco, and G. Oriolo,
“Robotic visual servoing of moving targets”, in In-
telligent Robots and Systems (IROS), 2013 IEEE/RSJ
International Conference on, IEEE, 2013, pp. 77–82.
[14] H. Liu, S. Huang, N. Gao, and Z. Zhang, “Binocular
stereo vision system based on phase matching”, in
SPIE/COS Photonics Asia, International Society for
Optics and Photonics, 2016, 100230S–100230S.
[15] J. Han, L. Shao, D. Xu, and J. Shotton, “Enhanced
computer vision with microsoft kinect sensor: A re-
view”, IEEE Transactions on Cybernetics, vol. 43, no.
5, pp. 1318–1334, 2013.
[16] S. K. Nayar and M. Gupta, “Diffuse structured light”,
in Computational Photography (ICCP), 2012 IEEE
International Conference on, IEEE, 2012, pp. 1–11.
[17] S. Foix, G. Alenya, and C. Torras, “Lock-in time-
of-flight (ToF) cameras: A survey”, IEEE Sensors
Journal, vol. 11, no. 9, pp. 1917–1926, 2011.
Proceedings of the OAGM&ARW Joint Workshop: Vision, Automation and Robotics. Edited by Peter M. Roth, Markus Vincze, Wilfried Kubinger, Andreas Müller, Bernhard Blaschitz, and Svorad Stolc. Verlag der Technischen Universität Graz, Wien, 2017. 188 pages, 21.0 x 29.7 cm. ISBN 978-3-85125-524-9. Licensed under CC BY 4.0.