Operations 1 and 6 are specific to multi-camera systems and are therefore described in more detail in the following sections. The remaining CPU/GPU performance on the hub is available for AI and deep-learning applications.
3. Synchronization
Imaging systems that rely on multiple active sensors inevitably require synchronization. In the case of the multi-ToF platform, synchronization serves two purposes: on the one hand, it avoids interference effects between the sensors; on the other hand, it simplifies the registration of the point clouds produced by the individual frontends. The hub can synchronize multiple frontends by using a hardware trigger to start the acquisition of each frontend. This can be done in a round-robin scheme or by triggering opposite sensors at the same time to avoid interference.
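The following is a minimal sketch of the two trigger schedules, assuming a ring of four frontends and a 30 Hz acquisition rate; fire_trigger and the timing constants are hypothetical stand-ins for the hub's actual trigger hardware.

```python
import time

NUM_SENSORS = 4          # assumed ring of four ToF frontends
FRAME_PERIOD_S = 1 / 30  # assumed per-ring acquisition period (30 Hz)

def fire_trigger(sensor_ids):
    # Placeholder for driving the hub's hardware trigger line(s).
    print(f"trigger -> sensors {sensor_ids}")

def round_robin(cycles=1):
    """One frontend per time slot: no two sensors are ever active together."""
    slot = FRAME_PERIOD_S / NUM_SENSORS
    for _ in range(cycles):
        for i in range(NUM_SENSORS):
            fire_trigger([i])
            time.sleep(slot)

def opposite_pairs(cycles=1):
    """Sensors facing away from each other fire together: their fields of
    view do not overlap, so the cycle time halves without interference."""
    slot = FRAME_PERIOD_S / (NUM_SENSORS // 2)
    for _ in range(cycles):
        for i in range(NUM_SENSORS // 2):
            fire_trigger([i, i + NUM_SENSORS // 2])
            time.sleep(slot)
```

The round-robin schedule is the most conservative choice, while the opposite-pair schedule trades the guarantee of a single active emitter for a shorter acquisition cycle.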
4. Point cloud registration
Each ToF sensor frontend produces a 2D depth map which can be converted into a 3D point cloud. A consistent 360° view of the environment necessitates the registration of these individual point clouds in a common world coordinate system. Through an extrinsic calibration, all sensors of the ring can be combined into a single point cloud, which can be transformed into a robot or world coordinate system given the known positions of the robot's joints.
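As a rough illustration of this pipeline, the sketch below back-projects each depth map with a pinhole camera model and chains the calibrated transforms; the function names, intrinsics tuples, and 4x4 matrices are assumptions for illustration, not the platform's actual API.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a 2D depth map (metres) into camera-frame 3D points,
    assuming a pinhole model with intrinsics fx, fy, cx, cy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop invalid (zero-depth) pixels

def register_ring(depth_maps, intrinsics, extrinsics, T_world_ring):
    """Fuse all frontends into one point cloud in world coordinates.
    extrinsics[i] is the 4x4 ring-from-camera-i transform obtained by
    extrinsic calibration; T_world_ring comes from the robot's joint state."""
    clouds = []
    for depth, K, T_ring_cam in zip(depth_maps, intrinsics, extrinsics):
        pts = depth_to_points(depth, *K)                    # K = (fx, fy, cx, cy)
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])    # homogeneous coords
        clouds.append((T_world_ring @ T_ring_cam @ pts_h.T).T[:, :3])
    return np.vstack(clouds)
```

Because the extrinsic transforms are fixed by the ring geometry, only T_world_ring changes at runtime, so the fusion step reduces to one matrix product per frontend.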
5. Advantages and performance
With today's ToF technology, cameras are capable of detecting objects at high frame rates and with low latencies. Active lighting ensures that data quality is largely independent of ambient conditions. The exact distance measurement accuracy depends on the target's reflectivity and distance, but the user can expect a relative accuracy of 1% of the measured distance, i.e., about ±2 cm at a range of 2 m.
Considering the system's performance in the context of machine learning, ToF cameras provide useful additional information compared to, for example, 2D RGB cameras: objects can be spatially separated more easily using the 3D point cloud, and the corresponding IR greyscale image can be employed when training a network. Training labels can easily be transferred between the four ToF channels (X, Y, Z, and amplitude) at pixel precision.

Figure 2. Depth image (left), with red indicating smaller and green larger distances, and the corresponding IR greyscale image (right).

As a result, ToF cameras reveal more information about the observed scene, but labelling the data does not require additional effort. The recognition performance of deep learning algorithms in particular benefits from an increase in the amount of available data.
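To make the pixel-precise label transfer concrete, here is a small sketch assuming each frame arrives as four pixel-aligned H x W arrays; the names, resolution, and toy mask are hypothetical.

```python
import numpy as np

def transfer_labels(label_mask, x, y, z, amplitude):
    """Apply one 2D boolean mask (drawn on, e.g., the amplitude image)
    to all four pixel-aligned ToF channels; alignment makes this a
    pure indexing step with no reprojection needed."""
    return {name: ch[label_mask]
            for name, ch in (("x", x), ("y", y), ("z", z), ("amp", amplitude))}

# Toy usage with random data standing in for a real frame and annotation.
h, w = 240, 320                                   # assumed sensor resolution
rng = np.random.default_rng(0)
x, y, z = (rng.uniform(-2.0, 2.0, (h, w)) for _ in range(3))
amplitude = rng.uniform(0.0, 1.0, (h, w))
mask = amplitude > 0.5                            # stand-in for an annotated region
labels = transfer_labels(mask, x, y, z, amplitude)
print(labels["z"].shape == labels["amp"].shape)   # True: same pixels in every channel
```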
6. Conclusion
In this paper we have presented a hardware platform which uses multiple ToF sensors and a central processing hub to generate a high-resolution point cloud around an autonomous machine, enabling collaborative and safety functions. Further work will include the synchronization of multiple machines working in close proximity, using a clock synchronization mechanism over state-of-the-art wireless connectivity hardware.