in: Autonomes Fahren – Technische, rechtliche und gesellschaftliche Aspekte
Regulation and the Risk of Inaction
27.1 Introduction
27.1.1 In Context
Two complex and conflicting objectives shape altruistic regulation of human activity:
maximizing net social good and mitigating incidental individual loss. Eminent domain
provides a superficially simple example: To build a road that benefits ten thousand people,
a government evicts – and compensates – the ten people whose homes are in the way. But
in many cases, individual loss is not fully compensable, most strikingly when that loss involves death: Whatever her actual detriment, a person who dies cannot be “made whole.”
And indeed, more than 30,000 people lose their lives on US roadways every year while
more than 300 million obtain some direct or indirect benefit from motorized transport.
The promise that vehicle automation holds for highway safety raises difficult questions
about regulation’s social and individual objectives. Analyzing either objective requires
topical and temporal definition of a manageable system in which costs and benefits can be
identified, valued, and compared. With respect to net social good, what is the statistical
value of a human life? Is a reduction in organ donations a “cost” of safer highways? Could
aggressive deployment of particular technologies cause a backlash that ultimately undermines safety? Similarly, with respect to individual loss, how should injury or death be
valued? Should culpability affect compensation? Who is entitled to it? The particular
answers to these questions may depend on the domain – law, economics, ethics, the social
sciences – from which they are drawn.
Vehicle automation exposes tension between the social and individual objectives. Externalities frequently accompany innovation: Inventors impose costs that they need not or cannot bear and create benefits that they cannot capture. Compensation of incidental injury may be one such cost, and socially desirable innovations like automation might be subsidized by shielding them from it. Calibrating net social good and individual loss can also create moral hazard: Safety might be discounted by innovators who are legally or effectively exempt from rules and immunized from lawsuits or by consumers who are assured of compensation for injury.
This tension exists against two related background conditions. The first is a preference
for the status quo – a tendency that is reflected in administrative law, in tort law, and inter-
nationally in the precautionary principle. Many vehicle fatalities appear only in local
obituaries, but a single automated vehicle fatality would end up on national front pages.
The second is a failure by imperfectly probabilistic humans to accurately perceive risk.
Drivers who speed around blind corners but fear traveling over bridges demonstrate this
tendency to underestimate some risks and overestimate others.
This complex regulatory context leads to two fundamental questions: How should risk
be allocated in the face of significant uncertainty – and who should decide? The range of
actors includes the legislative, executive, and judicial branches of national and subnational
governments, companies, standards organizations, consumers, and the public at large.
Regulation can be prospective or retrospective, but it cannot be nonexistent: Administrative
Funded by the Daimler und Benz Stiftung