5.4 Constraints and Deontological Ethics
From a mathematical perspective, dilemma situations represent infeasible cases: there is no choice of control inputs that can satisfy all of the
constraints placed on the vehicle motion. The more constraints that are layered on the vehicle
motion, the greater the possibility of encountering a dilemma situation where some constraint
must be violated. Clearly, the vehicle must be programmed to do something in these situations
beyond merely determining that no ideal action exists. A common approach when solving constrained optimization problems is to implement a constraint as a “soft constraint” using a slack variable [15]. The constraint normally holds, but when the problem becomes infeasible, the solver allows it to be violated at a very high cost. In this way, the system is guaranteed to find some
solution to the problem and will make its best effort to reduce constraint violation. A hierarchy
of constraints can be enforced by placing higher weights on the costs of violating certain con-
straints relative to others. The vehicle then operates according to deontological rules or con-
straints until it reaches a dilemma situation; in such situations, the weight or hierarchy placed
on different constraints resolves the dilemma, again drawing on a consequentialist approach.
This becomes a hybrid framework for ethics in the presence of infeasibility, consistent with
approaches suggested philosophically by Lin and others [2, 4, 12] and addressing some of the
limitations Goodall [3] described with using a single ethical framework.
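To make the soft-constraint mechanism concrete, the following sketch uses the cvxpy modelling library to pose a toy lateral-offset problem in which a lane-keeping constraint and an obstacle-clearance constraint cannot both be satisfied. The scenario, variable names, numbers, and weights are illustrative assumptions rather than anything specified in this chapter; they simply show how slack variables and weighted penalties let a solver return an answer, and how the relative weights encode a constraint hierarchy, when the hard problem is infeasible.

```python
# Minimal sketch of soft constraints with slack variables, using cvxpy.
# All numbers, names, and weights are illustrative assumptions.
import cvxpy as cp

lane_half_width = 1.5    # stay within +/- 1.5 m of the lane centre
required_offset = 2.0    # obstacle demands at least 2.0 m lateral offset
                         # -> the two hard constraints are jointly infeasible

offset = cp.Variable()               # lateral offset to be chosen
s_lane = cp.Variable(nonneg=True)    # slack on the lane-keeping constraint
s_obst = cp.Variable(nonneg=True)    # slack on the obstacle-clearance constraint

# Higher weight = higher place in the hierarchy (violated only as a last resort).
w_lane, w_obst = 1.0, 100.0

constraints = [
    offset <= lane_half_width + s_lane,    # soft lane-keeping constraint
    offset >= required_offset - s_obst,    # soft obstacle-clearance constraint
]

# Nominal cost (stay near the lane centre) plus weighted slack penalties.
cost = cp.square(offset) + w_lane * s_lane + w_obst * s_obst
cp.Problem(cp.Minimize(cost), constraints).solve()

print(f"offset = {offset.value:.2f} m, "
      f"lane slack = {s_lane.value:.2f} m, "
      f"clearance slack = {s_obst.value:.2f} m")
```

Because the clearance slack carries the far larger weight, the solver crosses the lane boundary (positive lane slack) rather than reduce the clearance to the obstacle, mirroring the hierarchical resolution of an otherwise infeasible dilemma described above.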
So what is an appropriate hierarchy of rules that can provide a deontological basis for
ethical actions of automated vehicles? Perhaps the best known hierarchy of deontological
rules for automated systems is the Three Laws of Robotics postulated by science fiction
writer Isaac Asimov [16], which state:
1. A robot may not injure a human being or, through inaction, allow a human being
to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders
would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict
with the First or Second Law.
These rules do not comprise a complete ethical framework and would not be sufficient for
ethical behavior in an autonomous vehicle. In fact, many of Asimov’s plotlines involved
conflicts when resolving these rules into actions in real situations. However, this simple
framework works well to illustrate several of the ethical considerations that can arise, be-
ginning with the First Law. This law emphasizes the fundamental value of human life and
the duty of a robot to protect it. While such a law is not necessarily applicable to robotic
drones that could be used in warfare [12], it seems highly valuable for automated vehicles.
The potential to reduce accidents and fatalities is a major motivation for the development
and deployment of automated vehicles. Thus placing the protection of human life at the top
of a hierarchy of rules for automated vehicles, analogous to the placement in Asimov’s
laws, seems justified.
The exact wording of Asimov’s First Law does present some challenges, however. In
particular, the emphasis on the robot’s duty to avoid injuring humans assumes that the robot