Autonomes Fahren - Technische, rechtliche und gesellschaftliche Aspekte
Why Ethics Matters for Autonomous Cars
In another thought-experiment [15, 18, 33], your robotic car is stopped at an intersection
and waits patiently for the children who are crossing in front of you. Your car detects a
pickup truck coming up behind you, about to cause a rear-end collision with you. The crash
would likely damage your car to some degree and perhaps cause minor injury to you, such
as whiplash, but certainly not death. To avoid this harm, your car is programmed to dash
out of the way, if it can do so safely. In this case, your car can easily turn right at the inter-
section and avoid the rear-end collision. It follows this programming, but in doing so, it
clears a path for the truck to continue through the intersection, killing a couple of children and
seriously injuring others.
Was this the correct way to program an autonomous car? In most cases of an impending
rear-end collision, probably yes. But in this particular case, the design decision meant
saving you from minor injury at the expense of the serious injury and death of several children,
and this hardly seems the right choice. In an important respect, you (or the car) are
responsible for their deaths: you (or the car) killed the children by removing an obstruction
that prevented harm from falling upon them, just as you would be responsible for a person’s
death if you removed a shield he was holding in front of a stream of gunfire. And killing
innocent people has legal and moral ramifications.
As with the self-sacrifice scenario above, it might be that in the same situation today, in
a human-driven car, you would make the same decision to save yourself from injury, if you
were to see a fast-approaching vehicle about to slam into you. That is, the result might not
change if a human made the on-the-spot decision. But, again, it is one thing to make such
a judgment in the panic of the moment, and quite another, less forgivable, thing for a programmer
– far removed from the scene and a year or more in advance – to create a cost function that
resulted in these deaths. Either the programmer did so deliberately, or she did it uninten-
tionally, unaware that this was a possibility. If the former, then this could be construed as
premeditated homicide; and if the latter, gross negligence.
Either way is very bad for the programmer and perhaps an inherent risk in the business,
when one attempts to replicate human decision-making in a broad range of dynamic
scenarios. Sometimes, an autonomous car may be faced with a “no-win” scenario, putting
the programmer in a difficult but all too real position. To mitigate this risk, industry may
do well to set expectations not only with users but also with broader society, educating the
public that they could become victims even if they are neither operating nor riding in a robot car, and that perhaps
this is justified by a greater public or overall good.
4.2.4 Trolley problems
One of the most iconic thought-experiments in ethics is the trolley problem [4, 8, 11, 47],
and this is one that may now occur in the real world, if autonomous vehicles come to be.
Indeed, driverless trains are already operating in dozens of cities worldwide and could bring
this scene to life [24]. The classical dilemma involves a runaway trolley (or train) that is
about to run over and kill five unaware people standing on the tracks. Looking at the scene
Funded by the Daimler und Benz Stiftung