Fig. 5. Distributed sensor setup around the workplace, comprising two laser scanners and one Pico Flexx ToF camera.
The sensors we have chosen are not intended for object/human detection and localization per se, but mainly for distance measurement. Therefore, with both types of sensors, we need to perform some post-processing in order to detect and perceive an approaching object. To make the sensory data analysis and decision making eligible for safety use, we need to reduce the chance of false positives and false negatives in our perception system. Safety-wise, perception scenarios with false negatives (i.e., a human approaching the robot is not detected) are far more dangerous than scenarios with false positives. In the case of a false positive, on the other hand, we may observe instances of unwanted robot speed reduction or even a complete stop, which affects the system performance but not the safety property. For instance, in the case of the ToF camera, the following steps are performed to robustly detect a moving object (a code sketch of the pipeline follows the figure captions below):
• Filtering the depth image: performed using various filtering methods (e.g., a median filter in both the spatial and temporal domain), which mitigate false detections. The filtering steps are shown in Figures 6 and 7, and the resulting filtered depth image is shown in Figure 8.
• Background image: the filtered depth image recorded at startup. The background image is refreshed if no movement is detected for a specific period of time (Figure 8).
• Difference image: subtraction of the background and the current depth image (Figure 8 − Figure 10 = Figure 12).
• Blob detection in the binary difference image (Figure 13). To avoid detecting changes produced by noise, and using prior knowledge of the size of an approaching object (e.g., a human), we adjust the parameters of our blob detector (such as the expected shape and size) so that only the intended moving targets are detected. When at least one blob is detected, there is movement in the workspace, and we can therefore proceed with the next two steps.
• Masking the original depth image: the binary difference image is used as a mask in order to obtain the real depth data of each pixel of the blob that is assumed to be a moving object (Figure 11).
• Final depth information of the detected moving object: the median of the depth values from the masked image. Higher importance is given to closer distances that still cover only a small area in the depth image, such as the intruding arm of a human.

Fig. 6. Original depth images
Fig. 7. Filtered depth images
Fig. 8. Final background image
Fig. 9. Original images with human
Fig. 10. Final filtered depth image
Fig. 11. Masked original depth image
Fig. 12. Difference image
Fig. 13. Blob detection in BW difference image
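The following is a minimal sketch of the ToF pipeline described above, in Python with NumPy and OpenCV. All parameter values (kernel sizes, the difference threshold, the minimum blob area) and function names are illustrative assumptions, not the actual parameters of the system.

# Sketch of the ToF moving-object detection pipeline (Figs. 6-13).
# Kernel sizes, thresholds, and blob limits are assumed placeholders.
import numpy as np
import cv2

SPATIAL_KERNEL = 5      # assumed spatial median-filter kernel (pixels)
TEMPORAL_WINDOW = 5     # assumed temporal median window (frames)
DIFF_THRESHOLD = 0.05   # assumed depth-change threshold (metres)
MIN_BLOB_AREA = 200     # assumed minimum blob area (pixels)

def filter_depth(frames):
    """Median filter in the temporal and spatial domain (Figs. 6-7)."""
    temporal = np.median(np.stack(frames[-TEMPORAL_WINDOW:]), axis=0)
    return cv2.medianBlur(temporal.astype(np.float32), SPATIAL_KERNEL)

def object_depth(background, depth):
    """Return the median depth of a detected moving object, or None."""
    # Difference image: background minus current depth image (Fig. 12).
    diff = np.abs(background - depth)
    binary = (diff > DIFF_THRESHOLD).astype(np.uint8) * 255

    # Blob detection in the binary difference image (Fig. 13); blobs
    # smaller than the expected object size are rejected as noise.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    blobs = [i for i in range(1, n)
             if stats[i, cv2.CC_STAT_AREA] >= MIN_BLOB_AREA]
    if not blobs:
        return None  # no movement detected in the workspace

    # Masking the original depth image: keep real depth values only
    # inside the detected blobs (Fig. 11).
    mask = np.isin(labels, blobs)

    # Final depth information: median of the masked depth values.
    return float(np.median(depth[mask]))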
For the laser scanners, which provide 2D data (they scan a plane, i.e., a cross-section of the 3D space), the process of extracting the distance and coordinates of a detected object is aligned with that of the ToF camera (see the sketch after this list):
• Background data: a median filter applied in the temporal domain to the data collected in the initialization step.
• Difference data: calculated at every time stamp as the subtraction of the background data and the current data.
• Movement in the workspace is detected if the percentage of non-moving points is less than 98.5%.
• Transformation of the depth data from the laser coordinate system to the robot's coordinate system, using the Euclidean distance and taking into account the fixed position of the laser scanner relative to the robot.
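The laser-scanner processing can be sketched in the same way. The 98.5% threshold is taken from the text above; the per-point difference threshold and the planar scanner pose (x, y, theta) are assumptions for illustration.

# Sketch of the 2D laser-scanner processing. The static-point
# fraction is from the text; the other parameters are assumed.
import numpy as np

DIFF_THRESHOLD = 0.05     # assumed per-point range change (metres)
STATIC_FRACTION = 0.985   # fewer static points than this => movement

def build_background(init_scans):
    """Background data: temporal median over initialization scans."""
    return np.median(np.stack(init_scans), axis=0)

def movement_detected(background, scan):
    """Movement if fewer than 98.5% of the points are static."""
    static = np.abs(scan - background) <= DIFF_THRESHOLD
    return static.mean() < STATIC_FRACTION

def distances_in_robot_frame(ranges, angles, scanner_pose):
    """Transform scan points into the robot coordinate system using
    the fixed scanner pose (x, y, theta) and return the Euclidean
    distance of each point to the robot base at the origin."""
    x0, y0, theta = scanner_pose
    xs = x0 + ranges * np.cos(angles + theta)
    ys = y0 + ranges * np.sin(angles + theta)
    return np.hypot(xs, ys)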
At every time stamp, the result of our sensor fusion is the final danger zone. For each sensor, based on the distance of a human, or of any other moving object in the robot's workspace, it is decided in which danger zone the detection happened, and the final danger zone is the worst case over all three sensors. By measuring the separation distance between the object/human and the robot, in constant-speed settings and with the worst-case value taken into account, it is ensured that the robot system never gets closer to the operator than the protective separation distance [12].
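As a sketch of this fusion step, each sensor's measured separation distance can be mapped to a danger zone and the final zone taken as the worst case over all three sensors. The zone boundaries below are illustrative assumptions, not values from the described setup; the zone semantics (speed reduction, complete stop) follow the text above.

# Sketch of the per-time-stamp danger-zone fusion. The zone
# boundaries are illustrative assumptions.
from enum import IntEnum

class Zone(IntEnum):
    SAFE = 0      # full robot speed
    WARNING = 1   # reduced robot speed
    DANGER = 2    # complete stop

# (minimum separation distance in metres, zone); assumed values
ZONE_BOUNDS = [(1.5, Zone.SAFE), (0.8, Zone.WARNING)]

def zone_for_distance(distance):
    """Map a measured separation distance to a danger zone."""
    for bound, zone in ZONE_BOUNDS:
        if distance >= bound:
            return zone
    return Zone.DANGER

def fuse(distances):
    """Final danger zone: the worst case over all sensor readings."""
    return max(zone_for_distance(d) for d in distances)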