Proceedings - OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Page 158
Text of Page 158

origin for ray-tracing is defined at the robot hull front, based on the robot's corrected pose. The local map is used instead of the current measurements alone because it also contains obstacles remembered from before, even if they are already inside the sensor's blind zone. The robot turns to face the user and moves towards the sitting position up to a distance given by the closest projected measurement within an angle around the detected user, considering a safety margin. Fig. 2 illustrates this.

[Figure 2. Obtaining the distance to move towards the user. Left: ray tracing on the local map. The current virtual scan when the head is looking up is depicted in blue, while the scan obtained from ray tracing is shown in yellow. The ray-tracing scan is considered to be obtained from the frontal part of the robot (excluding the bumper, which is taken into account separately). The obstacle in front of the robot is not detected by the head sensor but is present in the remembered local map and therefore in the scan obtained by ray tracing. Right: simplified diagram of how the distance to move is computed. The minimum distance from ray tracing is denoted d and m is the safety margin.]

For enhanced flexibility, reliability, and better adaptation to different users, after this process is executed the robot asks whether to come even closer. When there is a positive answer the robot moves 15 cm more, completely blind now and trusting the user's input; the question and subsequent movement can be repeated up to three times. The user can reply by means of voice, gestures, or the touch screen commands. The robot remembers if it moved closer and, if so, moves backwards again before starting a new navigation task.

The described method for approaching the user was developed for the call robot function, but it can also be applied in other tasks and contexts. For instance, we also incorporated it at the end of a fitness function scenario so that the robot comes closer to the user for further interaction after exercising.

4.5. Pick up object

For this function, navigation to a goal obtained from the user's pointing gesture is needed first. Using autonomous navigation in this way provides much wider applicability and can be useful for picking up objects not directly inside the robot's field of view. The static map of the environment is also used for checking whether a detected object is too close to walls or furniture, since that means risk of collision and picking up the object is not possible then. Once the precomputed pose is reached, fine positioning with respect to the detected object is performed, based on discrete motion commands with sufficient accuracy. More details about the pick-up algorithms can be found in [6].
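To make the distance computation of Fig. 2 concrete, the following is a minimal sketch in Python. It assumes the ray-traced scan is available as (angle, range) pairs relative to the robot's front; the function name and interface are hypothetical and not part of the paper.

    def distance_to_move(scan, user_bearing, half_window, safety_margin):
        # scan: list of (angle, range) pairs from ray tracing on the
        # local map; angles in radians relative to the robot's front.
        # user_bearing: bearing of the detected user in the same frame.
        # Keep only rays inside the angular window around the user.
        in_window = [r for (a, r) in scan
                     if abs(a - user_bearing) <= half_window]
        if not in_window:
            return 0.0                      # no measurement: do not move
        d = min(in_window)                  # closest projected measurement d
        return max(0.0, d - safety_margin)  # stop short by the margin m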
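The "come even closer" dialogue can be read as a small interaction loop. The sketch below assumes hypothetical ask_user and move_forward interfaces; only the 15 cm step, the limit of three repetitions, and remembering the distance for backing up later come from the text.

    def come_closer(robot, ask_user, step=0.15, max_rounds=3):
        # Blind refinement after the sensor-based approach: the robot
        # trusts the user's reply (voice, gesture or touch screen).
        moved = 0.0
        for _ in range(max_rounds):
            if not ask_user("Should I come closer?"):
                break
            robot.move_forward(step)  # 15 cm, no sensing involved
            moved += step
        return moved  # remembered so the robot can move backwards
                      # again before starting a new navigation task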
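For the wall/furniture check in Section 4.5, one plausible reading is a clearance test on the static occupancy grid around the detected object. This is a sketch under that assumption; the map representation and the clearance threshold are illustrative, not from the paper.

    import math

    def object_is_pickable(static_map, resolution, obj_cell, clearance):
        # static_map: 2D grid, True where a cell is occupied (wall or
        # furniture). resolution: metres per cell. obj_cell: (row, col)
        # of the detected object. Reject the pick-up if any occupied
        # cell lies within `clearance` metres of the object, since that
        # means risk of collision.
        radius = int(math.ceil(clearance / resolution))
        rows, cols = len(static_map), len(static_map[0])
        r0, c0 = obj_cell
        for r in range(max(0, r0 - radius), min(rows, r0 + radius + 1)):
            for c in range(max(0, c0 - radius), min(cols, c0 + radius + 1)):
                if static_map[r][c] and \
                   math.hypot(r - r0, c - c0) * resolution <= clearance:
                    return False
        return True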
Proceedings OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Title: Proceedings
Subtitle: OAGM & ARW Joint Workshop 2016 on "Computer Vision and Robotics"
Authors: Peter M. Roth, Kurt Niel
Publisher: Verlag der Technischen Universität Graz
Location: Wels
Date: 2017
Language: English
License: CC BY 4.0
ISBN: 978-3-85125-527-0
Size: 21.0 x 29.7 cm
Pages: 248
Keywords: Conference proceedings (Tagungsband)
Categories: International, Conference proceedings (Tagungsbände)

Table of contents

  1. Learning / Recognition 24
  2. Signal & Image Processing / Filters 43
  3. Geometry / Sensor Fusion 45
  4. Tracking / Detection 85
  5. Vision for Robotics I 95
  6. Vision for Robotics II 127
  7. Poster OAGM & ARW 167
  8. Task Planning 191
  9. Robotic Arm 207