of an imperative to avoid collisions that follows from deontological frameworks such as
Asimov’s laws coupled with a relative weighing of costs for mobility and traffic laws provides a reasonable starting point.
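To make that combination concrete, the following is a minimal sketch of a hard, deontological no-collision rule layered over a weighed sum of soft costs for mobility and traffic-law adherence. The weights, cost terms and function names are illustrative assumptions, not the framework developed in this chapter.

# A minimal sketch: a deontological no-collision rule dominating a
# consequentialist weighing of soft costs. All weights and cost terms
# below are hypothetical placeholders, not values from the chapter.

COLLISION_COST = float("inf")  # colliding plans are inadmissible outright
W_MOBILITY = 1.0               # weight on deviating from the desired path/speed
W_TRAFFIC_LAW = 10.0           # weight on bending a traffic law

def trajectory_cost(collides: bool, mobility_penalty: float,
                    law_penalty: float) -> float:
    """Score one candidate trajectory; lower is better."""
    if collides:
        return COLLISION_COST  # the no-collision imperative dominates
    return W_MOBILITY * mobility_penalty + W_TRAFFIC_LAW * law_penalty

def best_trajectory(candidates):
    """Pick the candidate with the lowest cost; colliding plans always lose."""
    return min(candidates, key=lambda c: trajectory_cost(*c))

Under such a scheme, a trajectory that bends a traffic law, say by briefly crossing a lane boundary, can still win whenever every law-abiding alternative collides, which is precisely the trade-off the weighing is meant to express.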
Moving forward, Asimov’s laws raise another point worth considering. The Second Law, which requires the robot to obey human commands, cannot override the First Law. Thus, the need to protect human life outweighs the priority given to human commands. All autonomous
vehicles with which the authors are familiar have an emergency stop switch or “big red
button” that returns control to the driver when desired. The existence of such a switch implies
that human authority ultimately overrules the autonomous system since the driver can take
control at any time. Placing the ultimate authority with the driver clearly conflicts with the subordinate priority that Asimov’s laws assign to obeying human commands. This raises an interesting
question: Is it ethical for an autonomous vehicle to return control to the human driver if the
vehicle predicts that a collision with the potential for damage or injury is imminent?
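The tension can be stated as a one-line policy choice. The sketch below is purely illustrative; the flag and function names are assumptions, and both branches are options the text poses rather than a recommendation.

# Hypothetical "big red button" handler. The flag decides whether human
# authority (as in today's emergency-stop switches) or a First-Law-style
# duty to protect life wins when the two conflict.

HUMAN_AUTHORITY_FIRST = True  # current practice: the driver can always take over

def on_takeover_request(collision_imminent: bool) -> str:
    """Respond to the driver pressing the takeover or emergency-stop switch."""
    if not collision_imminent or HUMAN_AUTHORITY_FIRST:
        return "hand control to the driver"
    # Asimov-style ordering: protecting life overrides the human command.
    return "retain control and attempt to mitigate"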
The situation is further complicated by the limitations of machine perception. The human
and the vehicle will no doubt perceive the situation differently. The vehicle has the advantage
of 360-degree sensing and likely a greater ability to perceive objects in the dark. The human has the advantage of bringing the brain’s interpretive power and accumulated experience to bear when perceiving and interpreting the situation. In the event of a conflict between these two views in a dilemma
situation, can the human take control at will? Is a human being – who has perhaps been
attending to other tasks in the car besides driving – capable of gaining situational awareness
quickly enough to make this decision and then apply the proper throttle, brake or steering
commands to guide the car safely?
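One way to frame the timing part of that question is to compare the predicted time to collision against a rough takeover budget. The sketch below does so with invented constants; actual human reaction times vary widely and would need empirical grounding.

# Hypothetical timing gate for a handover. The constants are assumptions
# chosen for illustration, not measured human-factors data.

REORIENT_S = 1.5  # assumed time to rebuild situational awareness
ACTUATE_S = 0.5   # assumed time to apply throttle, brake or steering

def handover_plausible(time_to_collision_s: float,
                       driver_attentive: bool) -> bool:
    """True if the driver could plausibly act before the predicted impact."""
    budget = ACTUATE_S if driver_attentive else REORIENT_S + ACTUATE_S
    return time_to_collision_s > budget

With these assumed numbers, a distracted driver needs at least two seconds of warning, a margin that a late collision prediction may simply not offer.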
The question of human override is essentially a deontological consideration; the ultimate
authority must either lie with the machine or with the human. The choice is not obvious; both approaches have been applied, for instance, to automation and fly-by-wire systems in commercial aircraft. The ultimate answer for automated vehicles probably depends upon
whether society comes to view these machines as simply more capable cars or robots with
their own sense of agency and responsibility. If we expect the cars to bear the responsibility for their actions and make ethical decisions, we may need to be prepared to cede more control to them. Gaining the trust needed to do that will no doubt require a certain transparency about their programmed priorities and a belief that the decisions made in critical situations are reasonable, ethical and acceptable to society.
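The two aviation philosophies alluded to above can be caricatured in a few lines, roughly, hard envelope protection in which the automation has the final word versus soft limits that the pilot can push through. The names and logic below are an illustrative assumption, not a description of any particular aircraft’s control laws.

# Caricature of the two final-authority models mentioned in the text.
# Purely illustrative; real fly-by-wire control laws are far richer.

from enum import Enum

class FinalAuthority(Enum):
    MACHINE = "machine"  # hard limits: unsafe commands are clipped
    HUMAN = "human"      # soft limits: warn, but let the human prevail

def resolve(command_is_safe: bool, authority: FinalAuthority) -> str:
    """Arbitrate between a human command and the machine's safety envelope."""
    if command_is_safe or authority is FinalAuthority.HUMAN:
        return "execute the human command"
    return "clip the command to the safe envelope"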