4.2. Analysis of Human Behaviors, Emotions and Actions
Humans generally interact with robots in the same way they might interact with other people, establishing social relationships and emotional ties with them [4, 8]. As industrial robots enable human and robot workers to work side by side as collaborators in manufacturing tasks, a fundamental issue concerns the development of methods to assess the user's experience with a robot and to understand how humans feel during their interaction with it [27]. Furthermore, human-related variables are essential for the evaluation of human-interaction metrics [26]. To work seamlessly and efficiently with their human counterparts, robots must similarly rely on predictions of the human worker's behavior, emotions, task-specific actions and intent in order to plan their own actions. In [12], for instance, an anticipatory control method using a human-in-the-loop architecture was implemented that enables robots to anticipate the actions of their human partners from observed gaze patterns and to proactively perform task actions according to these predictions.
The project CollRob focuses on advancing models of human-related variables that directly refer to the evaluation of levels of autonomy in human-robot interaction, such as situation awareness, trust and workload, which have a long history in the automation literature [3, 6, 31]. CollRob undertakes to elaborate situation awareness in the manufacturing domain of human-robot interaction (HRI) on the basis of human attention measures. It specifically considers the dynamic estimation of current and predicted gaze in the context of collaboration affordances. Affordances have already been thoroughly studied in robot control [22]. From the human worker's viewpoint in the manufacturing domain, however, affordances refer to relations between the human and the manufacturing environment that, through a collection of stimuli, afford the opportunity for the worker to perform an interaction. CollRob intends to estimate various levels of human attention in the 3D environment [20] – in the context of collaboration affordances – and to derive decision-making parameters from these estimates: low levels of situation awareness would call for reduced speed in safety-critical task processing, whereas high levels would allow increasing the throughput or placing more emphasis on production-quality related processing. As a first step, CollRob developed efficient, robust and low-cost methods for the continuous localization of human gaze in industrial work cells. One approach estimates gaze directly from eye tracking glasses based on the visual recognition of artificial random dot markers [28]. In addition, a spatiotemporal model of attention was developed that estimates human gaze solely from egocentric vision [21]. Further activities in the frame of human behavior measurement will focus on estimating the worker's context from psychophysiological measurements [27] and on developing a metric for HRI situation awareness in the manufacturing domain.
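To make the derivation of decision-making parameters more concrete, the following minimal sketch illustrates how a continuously estimated situation-awareness score might be mapped to robot speed and quality emphasis. All names, thresholds and parameter values are illustrative assumptions, not part of CollRob's published implementation.

```python
from dataclasses import dataclass

@dataclass
class RobotTaskParameters:
    """Control parameters derived from the worker's estimated situation awareness."""
    speed_scale: float      # fraction of nominal robot speed (0..1)
    quality_weight: float   # emphasis on production-quality related processing (0..1)

def derive_task_parameters(situation_awareness: float) -> RobotTaskParameters:
    """Map an estimated situation-awareness score in [0, 1] to task parameters.

    Low awareness  -> reduce speed in safety-critical task processing.
    High awareness -> allow higher throughput or more quality-related processing.
    Thresholds below are illustrative assumptions only.
    """
    if situation_awareness < 0.3:          # worker likely unaware of the robot
        return RobotTaskParameters(speed_scale=0.4, quality_weight=0.3)
    elif situation_awareness < 0.7:        # partial awareness
        return RobotTaskParameters(speed_scale=0.7, quality_weight=0.5)
    else:                                  # worker attentive to the shared task
        return RobotTaskParameters(speed_scale=1.0, quality_weight=0.8)

# Example: a gaze-based attention estimate feeds the decision step.
params = derive_task_parameters(situation_awareness=0.82)
print(params)   # RobotTaskParameters(speed_scale=1.0, quality_weight=0.8)
```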
4.3. Resource Management including Dynamic Task Optimization and Decision Making
Human-robot collaboration requires robust and time-aware dynamic planning and scheduling strategies for robot-human teams. In the last few years, key contributions to making robot-human team collaboration more fluent stem from [25, 18]. Within the CollRob project, we will focus on the development of algorithms that handle geometric issues, consider time-based planning strategies and provide a robust implementation. Currently, our focus is on finding a robot model for geometric and time-aware scheduling strategies in the scenario of building a puzzle together; a sketch of such a scheduling strategy is given below. Our long-term research goal is to deeply integrate social interaction models (e.g., the fair distribution of work among team members, conflict resolution strategies for individual team members, etc.). A key issue will be how to model such aspects.
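As an illustration of the kind of time-aware scheduling we have in mind, the sketch below assigns puzzle pieces to whichever team member, human or robot, becomes available first. The agent names, duration estimates and the greedy assignment policy are assumptions made for illustration only and do not represent the CollRob scheduler.

```python
import heapq

def schedule_puzzle(pieces, durations):
    """Greedy time-aware assignment of puzzle pieces to a human-robot team.

    pieces    : list of piece identifiers, in the order they must be placed
    durations : dict agent -> dict piece -> estimated placement time in seconds
    Returns a list of (start_time, agent, piece) tuples.
    """
    # Priority queue of (time the agent becomes free, agent name).
    available = [(0.0, agent) for agent in durations]
    heapq.heapify(available)

    schedule = []
    for piece in pieces:
        free_at, agent = heapq.heappop(available)        # earliest available team member
        schedule.append((free_at, agent, piece))
        heapq.heappush(available, (free_at + durations[agent][piece], agent))
    return schedule

# Illustrative estimates: the robot is slower on fiddly corner pieces.
durations = {
    "human": {"corner_1": 4.0, "edge_1": 6.0, "center_1": 5.0},
    "robot": {"corner_1": 9.0, "edge_1": 5.0, "center_1": 4.0},
}
for start, agent, piece in schedule_puzzle(["corner_1", "edge_1", "center_1"], durations):
    print(f"t={start:4.1f}s  {agent:5s} places {piece}")
```

A full scheduler would additionally have to respect geometric constraints of the shared workspace and the social interaction aspects mentioned above; the greedy policy shown here only captures the time-aware core of the problem.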