Autonomes Fahren - Technische, rechtliche und gesellschaftliche Aspekte
Implementable Ethics for Autonomous Vehicles
5.1 Introduction
As agents moving through an environment that includes a range of other road users – from pedestrians and cyclists to other human or automated drivers – automated vehicles continuously interact with the humans around them. The nature of these interactions is a result of the programming in the vehicle and the priorities placed there by the programmers. Just as human drivers display a range of driving styles and preferences, automated vehicles represent a broad canvas on which the designers can craft the response to different driving scenarios. These scenarios can be dramatic, such as plotting a trajectory in a dilemma situation when an accident is unavoidable, or more routine, such as determining a proper following distance from the vehicle ahead or deciding how much space to give a pedestrian standing at the corner. In all cases, however, the behavior of the vehicle and its control algorithms will ultimately be judged not by statistics or test track performance but by the standards and ethics of the society in which they operate.
In the literature on robot ethics, it remains arguable whether artificial agents without free will can truly exhibit moral behavior [1]. However, it seems certain that other road users and society will interpret the actions of automated vehicles and the priorities placed by their programmers through an ethical lens. Whether in a court of law or the court of public opinion, the control algorithms that determine the actions of automated vehicles will be subject to close scrutiny after the fact if they result in injury or damage. In a less dramatic, if no less important, manner, the way these vehicles move through the social interactions that define traffic on a daily basis will strongly influence their societal acceptance. This places a considerable responsibility on the programmers of automated vehicles to ensure their control algorithms collectively produce actions that are legally and ethically acceptable to humans.
An obvious question then arises: can automated vehicles be designed a priori to embody not only the laws but also the ethical principles of the society in which they operate? In particular, can ethical frameworks and rules derived for human behavior be implemented as control algorithms in automated vehicles? The goal of this chapter is to identify a path through which ethical considerations such as those outlined by Lin, Bekey and Abney [2] and Goodall [3] from a philosophical perspective can be mapped all the way to appropriate choices of steering, braking and acceleration of an automated vehicle. Perhaps surprisingly, the translation between philosophical concepts and their mathematical equivalents in control theory proves to be straightforward. Very direct analogies can be drawn between the frameworks of consequentialism and deontological ethics in philosophy and the use of cost functions and constraints in optimal control theory. These analogies enable any ethical principle that can be described as a cost or a rule to be implemented in a control algorithm alongside other objectives. The challenge then becomes determining which principles are best described as a comparative weighting of costs from a consequentialist perspective and which form the more absolute rules of deontological ethics.
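The analogy can be made concrete with a minimal sketch. The following is an illustrative toy planner, not the chapter's actual control algorithm: all function names, candidate sets, weights and distances are assumptions chosen for demonstration. Consequentialist trade-offs appear as weighted terms in a cost function, while a deontological rule appears as a hard constraint that disqualifies a candidate action outright, regardless of its cost.

```python
def plan_lateral_offset(candidates, pedestrian_offset, lane_limit,
                        w_comfort=1.0, w_clearance=4.0):
    """Pick a lateral offset (in meters) for the vehicle's path.

    Consequentialist part: a weighted sum of a comfort cost (deviation
    from the lane center) and a clearance cost (proximity to a pedestrian
    standing at pedestrian_offset).
    Deontological part: candidates whose magnitude exceeds lane_limit
    (e.g. crossing into the oncoming lane) are excluded as a hard
    constraint, however low their cost would be.
    All weights and geometry here are illustrative assumptions.
    """
    best, best_cost = None, float("inf")
    for y in candidates:
        if abs(y) > lane_limit:
            continue                      # hard rule: candidate is infeasible
        comfort = y ** 2                  # penalize deviation from lane center
        clearance = 1.0 / max(abs(y - pedestrian_offset), 0.1)  # penalize proximity
        cost = w_comfort * comfort + w_clearance * clearance
        if cost < best_cost:
            best, best_cost = y, cost
    return best


# Example: pedestrian 1.5 m to the left; the planner shifts right,
# but never beyond the 1.5 m lane boundary.
offset = plan_lateral_offset([x * 0.25 for x in range(-8, 9)],
                             pedestrian_offset=-1.5, lane_limit=1.5)
```

Tuning `w_clearance` against `w_comfort` changes how strongly the vehicle favors pedestrian clearance over a centered path, which is exactly the comparative weighting of costs discussed above; the lane-boundary check, by contrast, cannot be traded away at any weight.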
Examining this question from the mathematical perspective of deriving control laws for
a vehicle leads to the conclusion that no single ethical framework appears sufficient. This
Funded by the Daimler und Benz Stiftung