Joint Austrian Computer Vision and Robotics Workshop 2020
Seite - 24 -
facing camera, the autonomously detected people, or the local and global cost maps for the autonomous drive. In addition, the GUI can also be used for tele-operation of the robot arm in case the connection to the control panel fails, so that a redundant tele-operation system is available (6).

3. Control panel (Figure 3 (3)): The control panel is used to tele-operate Robbie. It handles the steering of the base platform as well as of the robot arm. Furthermore, the autonomous "come-home" functionality can be started and stopped.

2.4. Implementation - Software

The Robot Operating System (ROS) [14] is used as high-level API to evaluate sensor data and control actuators. To improve the tele-operation process, a GUI plug-in for rviz [19] has been developed, which displays all sensor data and enables tele-operation of the robot arm (see Figure 3).

For 2D and 3D mapping the open-source frameworks Cartographer [15] and Octomap [16] were implemented. The main advantage of the Cartographer algorithm is its ability to detect and calculate loop closures online with graph optimization, which minimizes the absolute translation and rotation errors during map generation [15]. Octomap, on the other hand, uses a probabilistic estimation of the occupancy of 3D space and represents the environment as an octree consisting of occupied voxels [16]. Figure 4 visualizes a 2D and a 3D map generated with these SLAM approaches using the LIDAR sensors listed in Section 2.2, recorded during the EnRicH 2019 trial.

To overcome the need for manual victim recognition and mapping, a ROS package based on Octomap and YOLO-ROS [2], a convolutional neural network (CNN) for object recognition in RGB images, was developed. By utilizing ray casting and the bounding box, the x and y coordinates of the victim are calculated via the 6-DOF transformation between the map frame and the RGB camera frame.
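The bounding-box-to-map projection described above can be sketched as follows. This is a minimal stand-in, not the paper's implementation: the camera intrinsics are assumed placeholder values, the range along the ray would in practice come from Octomap ray casting, and the transform from a tf lookup.

```python
import numpy as np

# Hypothetical RGB camera intrinsics (assumed values, not from the paper).
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point in pixels

def victim_map_position(bbox, depth, T_map_cam):
    """Project the centre of a detection bounding box into the map frame.

    bbox      -- (x_min, y_min, x_max, y_max) in pixels
    depth     -- range along the viewing ray, e.g. from Octomap ray casting (m)
    T_map_cam -- 4x4 homogeneous transform from camera frame to map frame
    """
    u = (bbox[0] + bbox[2]) / 2.0
    v = (bbox[1] + bbox[3]) / 2.0
    # Back-project the pixel through the pinhole model into a unit ray.
    ray_cam = np.array([(u - CX) / FX, (v - CY) / FY, 1.0])
    p_cam = depth * ray_cam / np.linalg.norm(ray_cam)
    # Apply the 6-DOF map<-camera transform to obtain map coordinates.
    p_map = T_map_cam @ np.append(p_cam, 1.0)
    return p_map[:3]
```

With an identity transform and a box centred on the principal point, the victim lies straight ahead of the camera at the given range.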
Detected victims are visualized on the 2D and 3D maps of the GUI. Since two different sensors are used to calculate the position of a victim, thermal imaging cameras can also be used for victim detection.

Furthermore, a ROS package for autonomous driving has been developed, which uses the move-base-flex framework [26], a flexible navigation framework, and SMACH [3], a task-level architecture for building complex robot behaviors. Currently two path planners are implemented. The Timed Elastic Band (TEB) planner takes travel time into account: movement is not calculated from the simulated forces within a virtual elastic band, but by optimizing the travel time and the path [28]. The TEB planner calculates several feasible paths and selects the fastest one. If the planner does not reach the target, recovery behaviours are called. After each behaviour call the planner tries to reach the target again. If the target is still not reachable, the next recovery behaviour is called. After all three implemented behaviours have been executed, the local scheduler is switched to the Dynamic Window Approach (DWA) algorithm. The DWA breaks the global plan up into smaller windows, whereby only the current and the next window are used to calculate the path [11]. The velocities within the next window are calculated from the current robot speed, the possible acceleration of the robot, and the objects to be avoided. The target tolerance of the DWA planner is increased to ensure that the target position can be reached. If the planner cannot reach the specified target, the recovery behaviours are called as with the TEB planner. If the system still cannot reach the target after calling all recovery behaviours, the execution of the local and global planners is terminated. The SMACH script then returns an error and waits for a new target. The first implemented recovery behaviour clears the cost maps, the second moves the robot back by 0.3 m or for 5 seconds, and the third turns the robot 360° on the spot.
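The planner/recovery fallback scheme described above can be sketched as a simple loop: after every failed planning attempt the next recovery behaviour runs and planning is retried; once all recoveries are exhausted, control passes to the next planner, and only when all planners fail is an error reported. The function and callback names here are illustrative, not the actual move-base-flex or SMACH API.

```python
from typing import Callable, List

def navigate(planners: List[Callable[[], bool]],
             recoveries: List[Callable[[], None]]) -> bool:
    """Try each planner (e.g. TEB first, then DWA); between failed
    attempts run the recovery behaviours in order (clear cost maps,
    back up, rotate 360 degrees). Returns True once any attempt
    reaches the target, False if every combination fails."""
    for plan in planners:
        if plan():                      # first attempt with this planner
            return True
        for recover in recoveries:      # escalate through recoveries
            recover()
            if plan():                  # retry after each recovery
                return True
    return False                        # SMACH would report an error here
```

This mirrors the described behaviour: three recoveries per planner, TEB handed over to DWA, and termination with an error only after both planners and all recoveries are exhausted.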
During the exploration the radioactivity is continuously measured with the radiometer. After exploration, the nuclear radiation of the area around the driven path is estimated using a Gaussian process. The amount of radiation is then visualized and overlaid on the 2D map together with a legend; the radioactivity is also visualized in the 3D map. Section 3 now introduces the results achieved with this system concept during the 2019 EnRicH trial.

3. Results and Discussion

Table 2 evaluates the System Readiness Level (SRL) of the mobile robot of the UAS Technikum Vienna based on the survey evaluated in [7, 24]. Using the survey results of [7] and [24], Robbie's SRL is defined as 9/10, since only the IP65 resistance could not be fulfilled.

Figure 4 visualizes the environment mapped dur-
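The Gaussian-process interpolation of the radiation measurements along the driven path can be sketched as below. The paper does not state the kernel or hyperparameters, so an RBF kernel with assumed length scale and noise is used here purely for illustration.

```python
import numpy as np

def gp_predict(X_train, y_train, X_query, length=1.0, noise=1e-3):
    """Gaussian-process regression with an RBF kernel: estimate the
    radiation level at query positions from measurements taken at
    positions along the driven path (kernel choice is an assumption).

    X_train -- (n, 2) measurement positions on the 2D map
    y_train -- (n,)  radiometer readings at those positions
    X_query -- (m, 2) grid positions to estimate, e.g. for the overlay
    """
    def rbf(A, B):
        # Squared Euclidean distances between all pairs of points.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)

    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf(X_query, X_train)
    # Posterior mean of the GP at the query points.
    return K_s @ np.linalg.solve(K, y_train)
```

Evaluating `gp_predict` on a dense grid around the path would yield the values that are colour-coded and overlaid on the 2D map.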
Title
Joint Austrian Computer Vision and Robotics Workshop 2020
Publisher
Graz University of Technology
Place
Graz
Date
2020
Language
English
License
CC BY 4.0
ISBN
978-3-85125-752-6
Dimensions
21.0 x 29.7 cm
Pages
188
Categories
Computer Science
Technology