Why Ethics Matters for Autonomous Cars
In these and many other cases, there may not be enough time to hand control back to the
driver. Some simulation experiments suggest that human drivers need up to 40 seconds to
regain situation awareness, depending on the distracting activity, e.g., reading or napping
– far longer than the 1–2 seconds of reaction time required for typical accident scenarios
[38, 18]. This means that the car must be responsible for making decisions when it is unreasonable to expect a timely transfer of control back to the human, and again braking might
not be the most responsible action.
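The arithmetic behind this point is simple enough to state as a check. The function below is a minimal sketch of my own, not anything from the cited studies; it only uses the chapter's figures of roughly 1–2 seconds before a typical collision versus up to 40 seconds to regain situation awareness.

```python
def handover_is_feasible(time_to_hazard_s: float, takeover_time_s: float) -> bool:
    """A handover is only defensible if the human can plausibly
    regain situation awareness before the hazard arrives."""
    return time_to_hazard_s >= takeover_time_s

# Figures from the text: typical accidents unfold in ~1-2 s, but a
# distracted driver may need up to ~40 s to re-engage.
print(handover_is_feasible(2.0, 40.0))  # False: the car must decide itself
```

Whenever this check fails, responsibility for the decision cannot be handed back in time and rests with the machine.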
One possible reply is that, while imperfect, braking could successfully avoid the majority of emergency situations a robot car may find itself in, even if it regrettably makes things
worse in a small number of cases. The benefits far outweigh the risks, presumably, and the
numbers speak for themselves. Or do they? I will discuss the dangers of morality by math
throughout this chapter.
Braking and other responses in the service of crash-avoidance won’t be enough, because
crash-avoidance is not enough. Some accidents are unavoidable – such as when an animal
or pedestrian darts out in front of your moving car – and therefore autonomous cars will
need to engage in crash-optimization as well. Optimizing crashes means to choose the
course of action that will likely lead to the least amount of harm, and this could mean a
forced choice between two evils, for instance, choosing to strike either the eight-year-old
girl or the 80-year-old grandmother in my first scenario above.
4.1.2 Crash-optimization means targeting
There may be reasons, by the way, to prefer choosing to run over the eight-year-old girl
that I have not yet mentioned. If the autonomous car were most interested in protecting
its own occupants, then it would make sense to choose a collision with the lightest object
possible (the girl). If the choice were between two vehicles, then the car should be
programmed to prefer striking a lighter vehicle (such as a Mini Cooper or motorcycle)
rather than a heavier one (such as a sport utility vehicle (SUV) or truck) in an adjacent lane
[18, 34].
On the other hand, if the car were charged with protecting other drivers and pedestrians
over its own occupants – not an unreasonable imperative – then it should be programmed
to prefer a collision with the heavier vehicle rather than the lighter one. If vehicle-to-vehicle
(V2V) and vehicle-to-infrastructure (V2I) communications are rolled out (or V2X to refer
to both), or if an autonomous car can identify the specific models of other cars on the road,
then it seems to make sense to collide with a safer vehicle (such as a Volvo SUV that has a
reputation for safety) over a car not known for crash-safety (such as a Ford Pinto that’s
prone to exploding upon impact).
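The two targeting policies just described can be made concrete in a few lines. The sketch below is purely illustrative and entirely my own framing: the `Obstacle` fields, the masses, and the safety ratings are invented stand-ins, not data or an algorithm proposed in this chapter.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    name: str
    mass_kg: float        # estimated mass of the potential collision target
    safety_rating: float  # 0..1, how well the target protects its own occupants

def choose_target(obstacles: list[Obstacle], protect_own_occupants: bool) -> Obstacle:
    """Pick a collision target under one of the two policies in the text:
    privilege the car's own occupants (strike the lightest obstacle), or
    privilege others (strike whatever best protects its own occupants)."""
    if protect_own_occupants:
        return min(obstacles, key=lambda o: o.mass_kg)
    return max(obstacles, key=lambda o: (o.safety_rating, o.mass_kg))

# Invented example values for vehicles of the kinds named in the text:
lanes = [Obstacle("motorcycle", 200.0, 0.2), Obstacle("Volvo SUV", 2200.0, 0.9)]
print(choose_target(lanes, protect_own_occupants=True).name)   # motorcycle
print(choose_target(lanes, protect_own_occupants=False).name)  # Volvo SUV
```

Either branch is a one-line policy; it is the choice of branch, not the code, that carries the moral weight, which is precisely the chapter's point about targeting.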
This strategy may be both legally and ethically better than the previous one of jealously
protecting the car’s own occupants. It could minimize lawsuits, because any injury to others
would be less severe. Also, because the driver is the one who introduced the risk to society
– operating an autonomous vehicle on public roads – the driver may be legally obligated,
Autonomes Fahren
Technische, rechtliche und gesellschaftliche Aspekte
Funded by the Daimler und Benz Stiftung