Pflegeroboter, p. 185: Implementing Responsible Research …
the end of human civilisation or maybe even humanity, as represented in much popular
culture and science fiction output.
In this paper the focus is on care robots, i.e., robotic devices that provide or support
the human provision of care or aspects thereof. There is significant overlap with medical
robots as well as social robots, such as artificial companions. The exact delineation of
care robots is less important here than the concerns that one can find in the literature.
Elsewhere we have described these concerns in more detail (Stahl and Coeckelbergh
2016). In this paper I only briefly recapitulate what these concerns are, to provide the
background of the ethical problems that RRI needs to address and that BS 8611 should
be sensitive to. We distinguish between three types of ethical concerns, each with a set of
individual problems.
The first set of issues arises from a critical evaluation of the visions that drive healthcare technology and of their implications for society. These include the replacement of
human beings and the implications that such replacement has for labour. For instance, in
research concerning the development of robots for the elderly, robots are often presented as a response to demographic challenges (Fischinger et al. 2015). But it is not clear
that such technological solutions can or should be the answer to this problem. It is similarly not clear to what degree this really constitutes an economic problem and a threat
to employment and what the consequences for human care work would be. A second
example is the replacement of humans and its implications for the quality of care; the
dehumanization of care. An important fear in discussions about robots in healthcare is
that robots may replace human care givers, and that this may not only put these people out of a job but also remove the capacity for “warm”, “human” care from the care process. It is highly doubtful, for instance, whether robots could ever be truly empathic (Stahl et al.
2014) or have emotions (Coeckelbergh 2010). There is the concern that elderly people
are abandoned, handed over to robots (Sparrow and Sparrow 2006) devoid of human contact (Sharkey and Sharkey 2010). Concerns arise with regard to the potential objectification of both care givers and care receivers.
A second set of issues has less to do with the idea of replacement and more with
human-robot interaction in healthcare. A key issue discussed in this respect is autonomy.
While autonomy comes in degrees and not all healthcare robots are autonomous, the
concept of machine autonomy is often seen as problematic. In addition to the question
of human replacement, it raises fundamental questions about the appropriateness of autonomous machines and the degree to which autonomy would be acceptable. In practical
terms this raises questions about liability and responsibility. It is open to debate which
roles and tasks should be undertaken by robots and to what degree it is legitimate to
provide them with autonomy. At one extreme of the spectrum of possible answers
to this one can find fully autonomous robots that interact with care receivers without
human input. In this case one could argue that robots should be endowed with a capacity
to undertake ethical reasoning (Anderson and Anderson 2015; Wallach and Allen 2008).
However, the very possibility of constructing such ethical reasoning in machines is contested. Increased use of robots in care and the possibility of robots acting increasingly