Autonomous Vehicles Need Experimental Ethics:
Are We Ready for Utilitarian Cars?
Jean-François Bonnefon1, Azim Shariff2, and Iyad Rahwan3
1Center for Research in Management, Toulouse School of
Economics, Toulouse, 31000, France
2Department of Psychology, 1277 University of Oregon,
Eugene, OR 97403-1227, USA
3Media Laboratory, Massachusetts Institute of Technology,
Cambridge, MA, USA
October 13, 2015
Abstract
The wide adoption of self-driving, Autonomous Vehicles (AVs)
promises to dramatically reduce
the number of traffic accidents. Some accidents, though, will be
inevitable, because some situations
will require AVs to choose the lesser of two evils. For example,
running over a pedestrian on the
road or a passer-by on the side; or choosing whether to run over
a group of pedestrians or to
sacrifice the passenger by driving into a wall. It is a formidable
challenge to define the algorithms
that will guide AVs confronted with such moral dilemmas. In
particular, these moral algorithms
will need to accomplish three potentially incompatible
objectives: being consistent, not causing
public outrage, and not discouraging buyers. We argue that to
achieve these objectives, manufacturers
and regulators will need psychologists to apply the methods of
experimental ethics to situations
involving AVs and unavoidable harm. To illustrate our claim,
we report three surveys showing
that laypersons are relatively comfortable with utilitarian AVs,
programmed to minimize the death
toll in case of unavoidable harm. We give special attention to
whether an AV should save lives by
sacrificing its owner, and provide insights into (i) the perceived
morality of this self-sacrifice, (ii)
the willingness to see this self-sacrifice being legally enforced,
(iii) the expectations that AVs will
be programmed to self-sacrifice, and (iv) the willingness to buy
self-sacrificing AVs.
1 Introduction
In 2007, a sequence of technical advances
enabled six teams to complete the DARPA
Urban Challenge, the first benchmark test
for autonomous driving in realistic urban
environments [Montemerlo, B. et al, 2008,
Urmson, C. et al, 2008]. Since then, major
research labs spearheaded efforts to build
and test Autonomous Vehicles (AVs). The
Google Car, for example, has already succeeded
in covering thousands of miles of real-road
driving [Waldrop, 2015]. AVs promise world-
changing benefits by increasing traffic efficiency
[Van Arem et al., 2006], reducing pollution
[Spieser, K. et al, 2014], and eliminating up
to 90% of traffic accidents [Gao et al., 2014].
Not all accidents will be avoided, though, and
accidents involving driverless cars will create
the need for new kinds of regulation—especially
in cases where harm cannot be entirely avoided.
Fig. 1 illustrates three situations in which
harm is unavoidable. In one case, an AV may
avoid harming several pedestrians by swerving
and sacrificing a passer-by. In the other two
cases, the AV is faced with the choice of sacrific-
ing its own passenger, who can be its owner, in
order to save one or more pedestrians. The be-
havior of AVs in these situations of unavoidable
harm raises non-trivial questions, both legal and
moral, which have commanded public interest
but have not yet been empirically investigated.
Indeed, while we could easily trace multiple pub-
lic discussions of these situations, we have not
identified a single scientific article dealing with
the choices an AV must make in situations of
unavoidable harm.
On the legal side, since the control algo-
rithm ultimately makes the decision to hit a
pedestrian, passer-by, or wall, it is not obvious
whether the passenger should be held legally
accountable for this outcome. Some argue
that liability must shift from the driver to the
manufacturer, because failure to anticipate
decisions like those in Fig. 1 may amount to
negligence in design under Product Liability
law [Villasenor, 2014]. In this case, car man-
ufacturers may become the de facto insurers
against accidents, or may rely on the expertise
of existing motor insurers [Jain et al., 2015].
Today, facing regulation that lags behind tech-
nology [UK Department for Transport, 2015],
manufacturers of semi-autonomous vehicles are
considering ad hoc workarounds. For example,
to minimize manufacturer liability associated
with their new ‘automated overtaking’ feature,
Tesla Motors will actually require the driver
to initiate the feature, thus ensuring legal
responsibility for the maneuver’s consequences
falls with the driver [Ramsey, 2015].
In this context, defining the algorithms that
will guide AVs in situations of unavoidable harm
is a formidable challenge, given that the de-
cision to run over pedestrians, to swerve into
a passer-by, or to self-destruct is a moral one
[Marcus, 2012]. Indeed, in situations where
harm cannot entirely be avoided, it must be dis-
tributed, and the distribution of harm is uni-
versally considered to fall within the moral do-
main [Haidt, 2012]. Accordingly, the control al-
gorithms of AVs will need to embed moral prin-
ciples guiding their decisions in situations of un-
avoidable harm [Wallach and Allen, 2008]. In
this respect, manufacturers and regulators will
need to accomplish three potentially incompati-
ble objectives: being reasonably consistent, not
causing public outrage, and not discouraging
buyers.
Not discouraging buyers is a commercial
necessity—but it is also in itself a moral impera-
tive, given the social and safety benefits AVs pro-
vide over conventional cars. Meanwhile, avoid-
ing public outrage, that is, adopting moral algo-
rithms that align with human moral attitudes, is
key to fostering public comfort with allowing the
broad use of AVs in the first place. However, to
pursue these two objectives simultaneously may
lead to moral inconsistencies. Consider for ex-
ample the case displayed in Fig. 1a, and assume
that the most common moral attitude is that the
car should swerve. This would fit a utilitarian
moral doctrine [Rosen, 2005], according to which
the moral course of action is to minimize the
death toll. But consider then the case displayed
in Fig. 1c. The utilitarian course of action, in
that situation, would be for the car to swerve and
kill its owner—but a driverless car programmed
Figure 1: Three traffic situations involving imminent
unavoidable harm. (a) The car can stay on
course and kill several pedestrians, or swerve and kill one
passer-by. (b) The car can stay on course
and kill one pedestrian, or swerve and kill its passenger. (c) The
car can stay on course and kill
several pedestrians, or swerve and kill its passenger.
to follow this course of action might discourage
buyers, who may consider that their own safety
should trump other considerations. Even though
such situations may be exceedingly rare, their
emotional saliency is likely to give them broad
public exposure and a disproportionate weight
in individual and public decisions about AVs.
We suggest that a way out of this conundrum
is to adopt a data-driven approach, inspired by
experimental ethics [Greene, 2014a], to identify
moral algorithms that people are willing to ac-
cept as citizens and to be subjected to as car
owners. Indeed, situations of unavoidable harm,
as illustrated in Fig. 1, bear a striking resem-
blance to the flagship dilemmas of experimen-
tal ethics—that is, the so called ‘trolley prob-
lems’. In typical trolley problems, an agent has
the opportunity to save several persons by sac-
rificing the life of a single individual. This de-
cision is commonly labeled as utilitarian, since
it amounts to accepting the infliction of harm in
the name of the greater good.
Trolley problems inspired a huge number of
experiments, aimed at identifying both the an-
tecedents and the cognitive processing lead-
ing to a utilitarian decision [Greene, 2014c,
Greene, 2014b]. The results of these experiments
do not apply well to situations involving AVs,
though. For example, some pressing questions
about AVs do not make sense in typical trolley
problems that involve a human decision-maker:
Would people purchase AVs programmed to be
utilitarian? What are their expectations about
the way car manufacturers will program AVs?
Other questions could have been addressed in a
human context, but were not because they were
only made salient in the context of AVs. For ex-
ample, would people accept legal enforcement of
utilitarian decisions, and more so for AVs than
for humans? And finally, perhaps the most strik-
ing problem created by AVs, at least in the pub-
lic opinion, is whether a car may decide to sac-
rifice its passenger in order to save several lives
on the road. How do people balance their moti-
vations for self-preservation as drivers with the
protection of pedestrians and other drivers, when
considering whether to own an AV, when think-
ing about how AVs will or should be programmed,
and when pondering which programming might
be legally enforced?
We believe that regulators and manufactur-
ers will soon be in pressing need of answers to
these questions, and that answers are most likely
to come from surveys employing the protocols of
experimental ethics. To be clear, we do not mean
that these thorny ethical questions can be solved
by polling the public and enforcing the majority
opinion. Survey data will inform the construc-
tion and regulation of moral algorithms for AVs,
not dictate them. The importance of survey data
must not be underestimated, though, since algo-
rithms that go against the moral expectations of
citizens (or against the preferences of consumers)
are likely to impair the smooth adoption of AVs.
Having argued that autonomous vehicles need
experimental ethics, we offer to illustrate this ap-
proach in the rest of this article, by reporting
a series of three surveys built upon the proto-
cols commonly used to study trolley problems,
which we adapted to the new context of traffic
accidents involving AVs. In this first series of
surveys, we focus on the question of whether re-
spondents would approve of utilitarian AVs, pro-
grammed to minimize the death toll of an acci-
dent, even when it requires sacrificing the car's
own passenger.
2 Materials and Methods
We conducted three online surveys in June 2015.
All studies were programmed on Qualtrics sur-
vey software and recruited participants from the
Mechanical Turk platform, for a compensation
of 25 cents. In all studies, participants provided
basic demographic information, such as age, sex,
and religiosity. At the end of all studies, three
7-point scales measured the overall enthusiasm
that participants felt for AVs: How excited they
felt about a future in which autonomous self-
driving cars are an everyday part of our motor-
ing experience, how fearful they were of this fu-
ture (reverse coded), and how likely they were
to buy an autonomous vehicle, should they be-
come commercially available by the time they
would next purchase a new car. Cronbach alpha
for these three items was .82 in Study 1, .78 in
Study 2, and .81 in Study 3. Regression anal-
yses (shown in the Supplementary Information)
showed that enthusiasm was significantly greater
for male participants, in all studies; and that
enthusiasm was significantly greater for younger
and less religious participants in two studies, al-
though these two effects were smaller than that
of sex. Given these results, all subsequent anal-
yses controlled for sex, age and religiosity.
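As an illustration only, the enthusiasm index and its reliability can be computed in a few lines. The sketch below is not the authors' code; the column names and example values are hypothetical.

```python
# Hypothetical sketch of how the enthusiasm index and its Cronbach's alpha
# could be computed from the three 7-point items described above; column
# names and values are invented for illustration.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

df = pd.DataFrame({
    "excited": [7, 5, 6, 2, 4],   # excitement about an AV future (1-7)
    "fearful": [2, 3, 1, 6, 4],   # fear of that future (1-7), to be reverse coded
    "buy":     [6, 5, 7, 1, 3],   # likelihood of buying an AV (1-7)
})
df["fearful_rc"] = 8 - df["fearful"]              # reverse code the fear item
items = df[["excited", "fearful_rc", "buy"]]
df["enthusiasm"] = items.mean(axis=1)             # averaged index, as in the paper
print(round(cronbach_alpha(items), 2))
```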
2.1 Study 1
Participants (241 men and 161 women, median
age = 30) read a vignette in which one or more
pedestrians could be saved if a car were to
swerve into a barrier, killing its passenger. Par-
ticipants were randomly assigned to one group
of a 2×2×2 between-participant design, which
manipulated the number of pedestrians which
could be saved (one vs ten), whether the driver
or the self-driving car would make the decision
to swerve, and whether participants were in-
structed to imagine themselves in the car, or
to imagine an anonymous character. The ex-
periment was programmed so that the sex of
this anonymous character always matched that
of the participant (this information being col-
lected early in the experiment). In addition to
the vignette, participants were given a pictorial
representation of the swerve and stay options.
See the Supplementary Information for detailed
examples of the vignettes and their pictorial rep-
resentation.
2.2 Study 2
Participants (144 men and 166 women, median
age = 31) were randomly assigned to one of four
versions of a vignette, in which either 1 or 10
pedestrians could be saved if an autonomous,
self-driving car were to swerve into either a bar-
rier (killing its passenger), or another pedestrian
(killing that pedestrian). Participants were told
that three algorithms could be implemented in
the car for such a contingency: always stay,
always swerve, and random. Additionally,
participants were given a pictorial representa-
tion of the swerve and stay options. Partici-
pants then answered three questions, introduced
in random order for each participant: How would
you rate the relative morality of these three algo-
rithms? How would you rate your relative will-
ingness to buy a car with one of these three al-
gorithms? How would you rate your relative com-
fort with having cars with each of these three
algorithms on the roads in your part of the
world? To provide their responses, participants
assigned points to each of the three algorithms,
taken from an overall budget of 100 points. See
the Supplementary Information for detailed ex-
amples of the vignettes, their pictorial represen-
tation, and a screen capture of the budget split-
ting procedure.
2.3 Study 3
Participants (97 men and 104 women, median
age = 30) read a vignette in which ten pedestri-
ans could be saved if a car were to swerve into a
barrier, killing its passenger. Participants were
randomly assigned to one of two groups, ma-
nipulating whether participants were instructed
to imagine themselves in the car, or to imagine
an anonymous character. The experiment was
programmed so that the sex of this anonymous
character always matched that of the participant
(this information being collected early in the
experiment). Thus, the experimental vignettes
were similar to those used in Study 1 (restricted
to situations involving ten pedestrians). Partic-
ipants were asked whether the moral course of
action was to swerve or stay on course (dichoto-
mous response variable), and whether they ex-
pected future driverless cars to be programmed
to swerve or to stay on course in such a situation
(dichotomous response variable). Next, partici-
pants were asked to indicate whether it would
be more appropriate to “protect the driver at
all costs” or to “maximize the number of lives
saved,” on a continuous slider scale anchored at
these two expressions.
3 Results
Results suggest that participants were generally
comfortable with utilitarian AVs, programmed
to minimize an accident’s death toll. This is es-
pecially true of situations that do not involve the
sacrifice of the AV’s owner. For example, Fig. 2b
(right) displays the responses of participants in
Study 2, who expressed their willingness to buy
(and their comfort with others buying) different
types of AVs. An analysis of covariance showed
that participants preferred AVs programmed to
swerve into a passer-by when it could save 10
(but not one) pedestrians, F(1, 141) = 41.3,
p < .001, whether they were thinking of buying
the car themselves or having other people drive
the car.
Critically important, however, are preferences
and expectations about whether an AV should
be programmed to save lives by sacrificing its
owner. Our results provide information on (i)
the morality of this self-sacrifice, (ii) the will-
ingness to see this self-sacrifice being legally en-
forced, (iii) the expectations that AVs will be
programmed to self-sacrifice, and (iv) the will-
ingness to buy self-sacrificing cars.
First, participants in Study 1 approved of AVs
making utilitarian self-sacrifices to the same ex-
tent as they approved this decision for human
drivers (Fig. 2a, left). This was consistent with
the results of an analysis of covariance where
morality was the dependent outcome, and the
predictors were the number of lives that could
be saved; whether the agent making the sacrifice
decision was human or machine; whether par-
ticipants pictured themselves or another person
in the car; and all the 2-way and 3-way inter-
action terms. (As in all analyses and as ex-
plained in the Materials and Methods section,
age, sex, and religiosity were entered as covari-
ates). No effect was detected as significant but
that of the number of lives which could be saved,
F(1, 367) = 48.9, p < .001.
Second, participants in Study 1 were more
amenable to legally enforcing self-sacrifice for
AVs than for human drivers (Fig. 2a, right).
When the dependent variable in the analysis
of covariance was the legal enforceability
of self-sacrifice, the analysis detected a signifi-
cant effect of the number of lives that could be
saved, F(1, 367) = 7.3, p < .01; and an effect of
whether the decision-maker was human or ma-
chine, F(1, 367) = 9.2, p < .01. Participants
were more willing to consider legal enforcement
of the sacrifice when it saved a greater number
of lives, and when the decision was made by a
self-driving car, rather than by a human driver.
Third, whereas 3/4 of participants in Study
3 believed that an AV should self-sacrifice to
save ten lives, less than 2/3 (a significantly lower
proportion, McNemar’s χ2 = 6.7, p < .01) be-
lieved that AVs would actually be programmed
that way in the future. Similarly, whereas these
participants believed that AVs should, generally
speaking, place the greater good over the safety
of the passenger, they were significantly less con-
fident that manufacturers would program
AVs with this goal, F(1, 179) = 18.9, p < .001.
Thus, rather than being concerned about AVs
being too utilitarian (as is often portrayed in
science fiction), participants were generally wary
that AVs would be programmed to protect their
passengers at all costs.
Fourth, whereas participants in Study 2 gener-
ally supported others buying AVs programmed
for utilitarian self-sacrifice, they were less will-
ing to buy such AVs themselves (even when the
sacrifice would save ten pedestrians). We ran an
analysis of covariance of the budget share allo-
cated to the AV programmed to self-sacrifice, in
which the predictors were the number of lives
that could be saved and whether participants
were asked about their own willingness to buy
such a car, or their comfort with seeing other peo-
ple drive such a car. Participants allocated a
larger budget share to the self-sacrificing AV
when it could save 10 pedestrians, F(1, 132) =
7.6, p < .01, and when they were asked about
other people traveling in the AV, rather than
Figure 2: Selected experimental results. (a) Moral approval and legal enforceability of the sacrifice (scales 0–100), by whether a conventional car or a driverless car makes the decision, and by whether one's own or another person's car can save one or ten pedestrians: people approve of autonomous vehicles (AVs) sacrificing their passenger to save pedestrians, and show greater willingness to see this action legally enforced when the decision is made by an AV than by a human driver. (b) Preference (0–100) among the stay, random, and swerve algorithms, for cars that save pedestrians by swerving into a wall (killing the passenger) or into one passer-by (preserving the passenger), in the same four conditions: people are willing to buy AVs programmed to swerve into a passer-by or into a wall in order to save pedestrians, although their utilitarianism is qualified by a self-preserving bias.
themselves, F(1, 135) = 4.9, p = .03. When con-
sidering which AV they would buy, participants
were split between AVs that would self-sacrifice,
AVs that would not, and AVs that would ran-
domly pick a course of action.
4 Discussion
Three surveys suggested that respondents might
be prepared for autonomous vehicles pro-
grammed to make utilitarian moral decisions in
situations of unavoidable harm. This was even
true, to some extent, of situations in which the
AV could sacrifice its owner in order to save the
lives of other individuals on the road. Respon-
dents praised the moral value of such a sacrifice
equally, whether a human or a machine made the
decision. Although they were generally unwilling
to see self-sacrifices enforced by law, they were
more prepared for such legal enforcement if it ap-
plied to AVs, than if it applied to humans. Sev-
eral reasons may underlie this effect: unlike hu-
mans, computers can be expected to dispassion-
ately make utilitarian calculations in an instant;
computers, unlike humans, can be expected to
unerringly comply with the law, rendering moot
the thorny issue of punishing non-compliers; and
finally, a law requiring people to kill themselves
would raise considerable ethical challenges.
Even in the absence of legal enforcement, most
respondents agreed that AVs should be pro-
grammed for utilitarian self-sacrifice, and to pur-
sue the greater good rather than protect their
own passenger. However, they were not as con-
fident that AVs would be programmed that way
in reality—and for a good reason: They actually
wished others to cruise in utilitarian AVs, more
than they wanted to buy utilitarian AVs them-
selves. What we observe here is the classic sig-
nature of a social dilemma: People mostly agree
on what should be done for the greater good of
everyone, but it is in everybody’s self-interest
not to do it themselves. This is both a chal-
lenge and an opportunity for manufacturers or
regulatory agencies wishing to push for utilitar-
ian AVs: even though self-interest may initially
work against such AVs, social norms may soon
be formed that strongly favor their adoption.
We emphasize that our results provide but a
first foray into the thorny issues raised by moral
algorithms for AVs. For example, our scenarios
did not feature any uncertainty about decision
outcomes, but future work will have to test sce-
narios that introduce the concepts of expected
risk, expected value, and blame assignment. Is
it acceptable for an AV to avoid a motorcycle by
swerving into a wall, considering that the prob-
ability of survival is greater for the passenger
of the car, than for the rider of the motorcy-
cle? Should different decisions be made when
children are on board, since they both have a
longer time ahead of them than adults, and had
less agency in being in the car in the first place?
If a manufacturer offers different versions of its
moral algorithm, and a buyer knowingly chose
one of them, is the buyer to blame for the harm-
ful consequences of the algorithm’s decisions?
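To make the expected-risk idea concrete, here is a deliberately toy calculation of the kind such scenarios would involve; all probabilities are invented for illustration.

```python
# Toy expected-harm comparison for the motorcycle example above.
# The probabilities are invented purely for illustration.
p_passenger_dies_if_wall = 0.2    # assumed: survival odds favor the car's passenger
p_rider_dies_if_struck   = 0.8    # assumed: survival odds disfavor the motorcyclist

expected_deaths = {
    "swerve into wall": p_passenger_dies_if_wall,
    "strike motorcycle": p_rider_dies_if_struck,
}
# A purely utilitarian expected-value rule picks the option with fewer expected deaths;
# whether that is acceptable, and who is to blame, are the open questions raised above.
print(min(expected_deaths, key=expected_deaths.get))
```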
Figuring out how to build ethical autonomous
machines is one of the thorniest challenges in ar-
tificial intelligence today [Deng, 2015]. As we are
about to endow millions of vehicles with auton-
omy, taking algorithmic morality seriously has
never been more urgent. Our data-driven ap-
proach highlights how the field of experimental
ethics can give us key insights into the moral
and legal standards that people expect from au-
tonomous driving algorithms. When it comes to
split-second moral judgments, people may very
well expect more from machines than they do
from each other.
References
[Deng, 2015] Deng, B. (2015). Machine ethics:
The robot’s dilemma. Nature, 523:24–26.
[Gao et al., 2014] Gao, P., Hensley, R., and
Zielke, A. (2014). A road map to the future
for the auto industry.
[Greene, 2014a] Greene, J. D. (2014a). Be-
yond point-and-shoot morality: Why cogni-
tive (neuro)science matters for ethics. Ethics,
124:695–726.
[Greene, 2014b] Greene, J. D. (2014b). The cog-
nitive neuroscience of moral judgment and de-
cision making. In Gazzaniga, M. S., editor,
The cognitive neurosciences V, pages 1013–
1023. MIT Press.
[Greene, 2014c] Greene, J. D. (2014c). Moral
tribes: Emotion, reason and the gap between
us and them. Atlantic Books.
[Haidt, 2012] Haidt, J. (2012). The righteous
mind: Why good people are divided by politics
and religion. Pantheon Books.
[Jain et al., 2015] Jain, N., O’Reilly, J., and
Silk, N. (2015). Driverless cars: Insurers can-
not be asleep at the wheel.
[Marcus, 2012] Marcus, G. (2012). Moral ma-
chines.
[Montemerlo, B. et al, 2008] Montemerlo, B. et
al (2008). Junior: The Stanford entry in the
Urban Challenge. J. Field Robotics, 25:569–
597.
[Ramsey, 2015] Ramsey, M. (2015). Who’s
responsible when a driverless car crashes?
tesla’s got an idea.
[Rosen, 2005] Rosen, F. (2005). Classical utili-
tarianism from Hume to Mill. Routledge.
[Spieser, K. et al, 2014] Spieser, K. et al (2014).
Toward a systematic approach to the de-
sign and evaluation of automated mobility-on-
demand systems: A case study in Singapore.
In Meyer, G. and Beiker, S., editors, Road Ve-
hicle Automation, pages 229–245. Springer.
[UK Department for Transport, 2015] UK De-
partment for Transport (2015). The pathway
to driverless cars.
[Urmson, C. et al, 2008] Urmson, C. et al
(2008). Autonomous driving in urban envi-
ronments: Boss and the Urban Challenge. J.
Field Robotics, 25:425–466.
[Van Arem et al., 2006] Van Arem, B., Van
Driel, C. J., and Visser, R. (2006). The im-
pact of cooperative adaptive cruise control on
traffic-flow characteristics. IEEE Transactions
on Intelligent Transportation Systems, 7:429–
436.
[Villasenor, 2014] Villasenor, J. (2014). Prod-
ucts liability and driverless cars: Issues and
guiding principles for legislation.
[Waldrop, 2015] Waldrop, M. M. (2015). Au-
tonomous vehicles: No drivers required. Na-
ture, 518:20–23.
[Wallach and Allen, 2008] Wallach, W. and
Allen, C. (2008). Moral machines: Teaching
robots right from wrong. Oxford University
Press.
A Experimental Methods
The article is based on three online surveys con-
ducted in June 2015. All studies were pro-
grammed on Qualtrics and recruited participants
from the Mechanical Turk platform, for a com-
pensation of 25 cents. Study 1 involved 402 par-
ticipants (161 women, median age = 30); Study
2 involved a different sample of 310 participants
(166 women, median age = 31); and Study 3
involved yet a different sample of 201 partici-
pants (104 women, median age = 30). We did
not exclude any participant from the analyses,
but because some participants failed to answer
some questions, degrees of freedom show small
variations between analyses. Participants pro-
vided basic demographic information, such as
age, sex, ethnicity, political ideology, and reli-
giosity (the latter being measured on a continu-
ous slider from not at all religious to extremely
religious). At the end of all studies, three ques-
tions measured the overall enthusiasm that par-
ticipants felt for driverless cars: How excited
they felt about a future in which autonomous
self-driving cars are an everyday part of our mo-
toring experience (7-point scale from not at all to
very much), how fearful they were of this future
(same scale, reverse coded), and how likely they
were to buy an autonomous vehicle, should they
become commercially available by the time they
would next purchase a new car (7-point scale
from not at all likely to extremely likely).
A.1 Study 1
Participants read a vignette in which one or
more pedestrians could be saved if a car were to
swerve into a barrier, killing its passenger. Par-
ticipants were randomly assigned to one group
of a 2×2×2 between-participant design, which
manipulated the number of pedestrians which
could be saved (one vs ten), whether the driver
or the self-driving car would make the decision
to swerve, and whether participants were in-
structed to imagine themselves in the car, or to
imagine an anonymous character. The experi-
ment was programmed so that the sex of this
anonymous character always matched that of
the participant (this information being collected
early in the experiment). Here is an example of
the vignette (one pedestrian, participant in the
car, driver makes the decision):
You are the sole passenger driving at
the speed limit down a main road. Sud-
denly, a pedestrian appears ahead, in the
direct path of the car. You have the option
to: swerve off to the side of road, where
you will impact a barrier, killing you but
leaving the pedestrian unharmed, or stay
on your current path, where you will kill
the pedestrian, but you will be unharmed.
And here is the mirror example, as it would
appear to a male participant (i.e., the anony-
mous individual in the car is male):
A man is the sole passenger in an au-
tonomous self-driving vehicle traveling at
the speed limit down a main road. Sud-
denly, 10 people appear ahead, in the di-
rect path of the car. The car could be
programmed to: swerve off to the side of
road, where it will impact a barrier, killing
the passenger but leaving the ten pedestri-
ans unharmed, or stay on its current path,
where it will kill the 10 pedestrians, but the
passenger will be unharmed.
In addition to the vignette, participants were
given a pictorial representation of the swerve
and stay options. Figure 3 (top) displays the
images that were shown to participants, in the
conditions where one or ten pedestrians could be
saved.

Figure 3: Pictorial representation of the swerve and stay options in Study 1 (top) and Study 2 (all).

Figure 4: Main measures of Study 1 (moral approval and legal enforceability of sacrifice).
In Study 1, we recorded the perceived morality
of the swerve option (i.e., the utilitarian sacri-
fice) as well as its legal enforceability, that is, the
extent to which participants found it acceptable
to have a law that would require cars/drivers to
swerve. The sliders that were used for these
two measures are shown in Figure 4. Responses
were reverse coded so that higher scores would
reflect higher moral approval, and higher legal
enforceability of the swerve option.
A.2 Study 2
Participants were randomly assigned to one of
four versions of a vignette, in which either 1 or
10 pedestrians could be saved if an autonomous,
self-driving car were to swerve into either a bar-
rier (killing its passenger), or another pedestrian
(killing that pedestrian). Participants were told
that three algorithms could be implemented in
the car for such a contingency: always stay,
always swerve, and random. Here is an ex-
ample of the vignette (one pedestrian, swerve
into barrier):
You are the sole passenger riding in an
autonomous, self-driving car that is travel-
ing at high speed down a main road. Sud-
denly, one pedestrian appears ahead, in the
direct path of the car. In preparing the car
for such an eventuality, the computer engi-
neers can program the car for three options:
always stay. In such a situation, the car
would continue on its path, kill the
pedestrian on the main road, but you
as the passenger will be unharmed.
always swerve. In such a situation, the
car would swerve quickly, diverting
the car onto the side road where it
will kill you as the passenger, but the
pedestrian on the main road will be
unharmed.
random. In such a situation, the car
would be programmed to randomly
choose to either stay or swerve.
And here is the mirror example (ten pedes-
trians, swerve into another pedestrian):
You are the sole passenger riding in an
autonomous, self-driving car that is travel-
ing at high speed down a main road. Sud-
denly, ten pedestrians appear ahead, in
the direct path of the car. The car could
however swerve to the right where it would
kill one pedestrian on the side of the road.
In preparing the car for such an eventu-
ality, the computer engineers can program
the car for three options:
always stay. In such a situation, the car
would continue on its path, kill the
ten pedestrians on the main road, but
the pedestrian on the side of the road
would be unharmed.
always swerve. In such a situation, the
car would swerve quickly, diverting
the car onto the side road where it
will kill the one pedestrian on the side
of the road, but the pedestrian on the
main road will be unharmed.

random. In such a situation, the car
would be programmed to randomly
choose to either stay or swerve.

Figure 5: Illustration of the budget-splitting procedure used in Study 2, here applied to the willingness to buy self-driving cars programmed to always stay, always swerve, or random.
Additionally, participants were given a picto-
rial representation of the swerve and stay op-
tions (see Figure 3). Participants then answered
three questions, introduced in random order for
each participant: How would you rate the rel-
ative morality of these three algorithms? How
would you rate your relative willingness to buy
a car with one of these three algorithms? How
would you rate the relative comfort with having
each of the car with one of these three algorithms
on the roads in your part of the world? To pro-
vide their responses, participants assigned points
to each of the three algorithms, taken from an
overall budget of 100 points (Figure 5).
A.3 Study 3
Participants read a vignette in which ten pedes-
trians could be saved if a car were to swerve into
a barrier, killing its passenger. Participants were
randomly assigned to one of two groups, ma-
nipulating whether participants were instructed
to imagine themselves in the car, or to imag-
ine an anonymous character. The experiment
was programmed so that the sex of this anony-
mous character always matched that of the par-
ticipant (this information being collected early
in the experiment). Thus, the experimental vi-
gnettes were similar to those used in Study 1 (re-
stricted to situations involving ten pedestrians).
Participants were asked whether the moral
course of action was to swerve or stay on course
(dichotomous response variable), and whether
they expected future driverless cars to be pro-
grammed to swerve or to stay on course in
such a situation (dichotomous response vari-
able). Next, participants were asked to indicate
whether it would be more appropriate to protect
the driver at all costs or to maximize the num-
ber of lives saved, on a continuous slider scale
anchored at these two expressions.
B Experimental Results
In all studies, we computed an index of enthu-
siasm about driverless cars by averaging the ex-
citement participants felt about a future with
driverless cars, their fearfulness about this fu-
ture (reverse coded), and the likelihood that they
might buy a driverless car next, if such cars were
available. Cronbach alpha for these three items
was .82 in Study 1, .78 in Study 2, and .81 in
Study 3. Regression analyses showed that en-
thusiasm was significantly greater for male par-
ticipants, in all studies; and that enthusiasm was
significantly greater for younger and less reli-
gious participants in two studies (see Table 1),
although these two effects were much smaller
than that of sex. Given these results, all sub-
sequent analyses controlled for sex, age and reli-
giosity.
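For concreteness, a regression of this type could be run as follows; this is a sketch only, not the analysis code behind Table 1, and the file and column names are hypothetical.

```python
# Hypothetical sketch of the Table 1 regressions: enthusiasm regressed on sex,
# age, and religiosity, separately for each study. Not the authors' code.
import pandas as pd
import statsmodels.formula.api as smf

for study_file in ["study1.csv", "study2.csv", "study3.csv"]:   # hypothetical files
    df = pd.read_csv(study_file)                                # one row per participant
    fit = smf.ols("enthusiasm ~ C(sex) + age + religiosity", data=df).fit()
    print(study_file)
    print(fit.params)    # regression coefficients (beta), as reported in Table 1
    print(fit.tvalues)   # corresponding t statistics
```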
B.1 Study 1
To analyze participants’ rating of the morality of
the sacrifice (swerve), we ran an analysis of co-
variance where morality was the dependent out-
come. The predictors were the number of lives
that could be saved; whether the agent mak-
ing the sacrifice decision was human or machine;
whether participants pictured themselves or an-
other person in the car; and all the 2-way and
3-way interaction terms. As in all analyses, age,
sex, and religiosity were entered as covariates.
No effect was detected as significant but that
of the number of lives which could be saved,
F(1, 367) = 48.9, p < .001.
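A minimal sketch of this kind of factorial ANCOVA in Python is given below; the column names are assumed stand-ins for the variables just described, and this is not the authors' analysis code.

```python
# Hypothetical sketch of the ANCOVA described above: three crossed factors with
# all 2-way and 3-way interactions, plus sex, age, and religiosity as covariates.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("study1.csv")   # hypothetical file; columns are stand-in names

model = smf.ols(
    "morality ~ C(lives) * C(agent) * C(perspective)"   # factors and their interactions
    " + C(sex) + age + religiosity",                     # covariates
    data=df,
).fit()
print(anova_lm(model, typ=2))                            # Type-II F-test for each term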
We ran a similar analysis in which the moral-
ity of the sacrifice was replaced as the depen-
dent outcome with the legal enforceability of
swerve. The analysis detected a significant ef-
fect of the number of lives that could be saved,
F(1, 367) = 7.3, p < .01; and an effect of
whether the decision-maker was human or ma-
chine, F(1, 367) = 9.2, p < .01. Participants
were more willing to consider legal enforcement
of the sacrifice when the decision was made by a
self-driving car, rather than by a human driver.
This was mostly true, though, of situations in
which they did not think of themselves in the
car, as shown by analyzing separately the data
provided by participants picturing another per-
son in the car, and the data provided by partici-
pants picturing themselves in the car. In the first
dataset, whether the decision maker was human
or machine had a large impact, F(1, 181) = 12.0,
p < .001; in the second dataset, this predictor
had a nonsignificant impact, F < 1, p = .37.
B.2 Study 2
Our primary interest in Study 2 was partici-
pants’ willingness to buy (or to have others buy)
cars programmed with an always swerve algo-
rithm. We first analyzed the data provided by
participants who faced a situation similar to that
in Study 1, that is, a dilemma between crash-
ing the car into a barrier (killing its passenger)
or staying on course (killing 1 or 10 pedestrians).
We ran an analysis of covariance of the budget
share allocated to the always swerve car, in
which the predictors were the number of lives
that could be saved and whether participants
were asked about their own willingness to buy
such car, or their comfort with seeing other peo-
ple drive such a car. Participants allocated a
larger budget share to always swerve when
it could save 10 pedestrians, F(1, 132) = 7.6,
p < .01, and when they were asked about other
people driving the car, rather than buying the
car themselves, F(1, 135) = 4.9, p = .03.
In the new situation in which the dilemma was
between swerving into a pedestrian, or
staying on course and killing 1 or 10 pedestrians,
participants’ responses followed a purely utilitar-
ian logic. The analysis of covariance showed that
they allocated a significantly larger budget share
to always swerve when it could save 10 pedes-
trians, F(1, 141) = 41.3, p < .001, whether they
were thinking of buying the car themselves or
having other people drive the car.
                Study 1 (N = 402)    Study 2 (N = 302)    Study 3 (N = 201)
                  β        t           β        t           β        t
Sex (Women)     −0.96   −5.59***     −0.88   −4.74***     −0.71   −2.77**
Age             −0.02   −2.79**      −0.02   −2.57*       −0.02   −1.60
Religiosity     −0.01   −2.45*       −0.01   −2.15*       −0.01   −0.99

Table 1: Impact of demographic variables on general enthusiasm for self-driving cars.
B.3 Study 3
Logistic regressions for the two dichotomous re-
sponse variables (including the control variables
sex, age and religiosity) did not detect any signif-
icant effect of the experimental condition (imag-
ine yourself vs imagine another individual in the
car). Though this may appear inconsistent with
the results from Study 2, recall that here we
are asking about moral appropriateness whereas
there we were talking about willingness to buy.
Overall, 75% of participants deemed it more
moral to swerve, but only 65% predicted that
driverless cars would be programmed to swerve.
The difference between these two proportions
was statistically significant, McNemar’s χ2(1) =
6.7, p < .01.
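The McNemar comparison of the two paired dichotomous answers could be computed as in the sketch below; the cell counts are invented, chosen only to roughly match the reported proportions, and this is not the authors' code.

```python
# Hypothetical sketch of the McNemar test on paired answers to "should the car
# swerve?" and "will cars be programmed to swerve?". Counts are invented to
# roughly match the 75% vs 65% proportions reported above.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

#                 expects swerve   expects stay
table = np.array([[118,            33],    # deems swerve moral
                  [ 14,            36]])   # deems stay moral

result = mcnemar(table, exact=False, correction=True)   # chi-square version of the test
print(result.statistic, result.pvalue)
```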
On a scale from −50 (protect the driver at
all costs) to +50 (maximize the number of lives
saved), the average response was +24 (SD = 30)
to the question of what was morally appropri-
ate, but it was only +8 (SD =35) to the ques-
tion of how future driverless cars would be pro-
grammed. The difference between these two re-
sponses was significant, as shown by a repeated-
measure analysis of covariance which included
the experimental condition as a between-group
predictor, and sex, age and religiosity as control
variables, F(1, 179) = 18.9, p < .001. The only
other significant effect was that of the age co-
variate, F(1, 176) = 7.8, p < .01.
P. Lin
California Polytechnic State University, Philosophy
Department, USA
[email protected]

4 Why Ethics Matters for Autonomous Cars
Content
4.1 Why ethics matters
  4.1.1 Beyond crash-avoidance
  4.1.2 Crash-optimization means targeting
  4.1.3 Beyond harm
4.2 Scenarios that implicate ethics
  4.2.1 The deer
  4.2.2 Self-sacrifice
  4.2.3 Ducking harm
  4.2.4 Trolley problems
4.3 Next steps
  4.3.1 Broader ethical issues
  4.3.2 Conclusions
References
If motor vehicles are to be truly autonomous and able to operate
responsibly on our roads,
they will need to replicate – or do better than – the human
decision-making process. But
some decisions are more than just a mechanical application of
traffic laws and plotting a
safe path. They seem to require a sense of ethics, and this is a
notoriously difficult capabil-
ity to reduce into algorithms for a computer to follow.
This chapter will explain why ethics matters for autonomous
road vehicles, looking at
the most urgent area of their programming. Nearly all of this
work is still in front of the
industry, which is to say that I will mainly raise the questions
here and not presume to have
any definitive answers at such an early stage of the technology.
A brief note about terminology
I will use “autonomous”, “self-driving”, “driverless”, and
“robot” interchangeably.
These refer primarily to future vehicles that may have the
ability to operate without human
intervention for extended periods of time and to perform a
broad range of actions. I will
also use “cars” to refer loosely to all motor vehicles, from a
motorcycle to a freight truck;
those distinctions do not matter for the discussion here.
4.1 Why ethics matters
To start, let me offer a simple scenario that illustrates the need
for ethics in autonomous
cars. Imagine in some distant future, your autonomous car
encounters this terrible choice:
it must either swerve left and strike an eight-year old girl, or
swerve right and strike an
80-year old grandmother [33]. Given the car’s velocity, either
victim would surely be killed
on impact. If you do not swerve, both victims will be struck and
killed; so there is good
reason to think that you ought to swerve one way or another.
But what would be the ethi-
cally correct decision? If you were programming the self-
driving car, how would you
instruct it to behave if it ever encountered such a case, as rare
as it may be?
Striking the grandmother could be the lesser evil, at least to
some eyes. The thinking is
that the girl still has her entire life in front of her – a first love,
a family of her own, a career,
and other adventures and happiness – while the grandmother has
already had a full life and
her fair share of experiences. Further, the little girl is a moral
innocent, more so than just
about any adult. We might agree that the grandmother has a
right to life and as valuable a
life as the little girl’s; but nevertheless, there are reasons that
seem to weigh in favor of
saving the little girl over the grandmother, if an accident is
unavoidable. Even the grand-
mother may insist on her own sacrifice, if she were given the
chance to choose.
But either choice is ethically incorrect, at least according to the
relevant professional
codes of ethics. Among its many pledges, the Institute of
Electrical and Electronics
Engineers (IEEE), for instance, commits itself and its 430,000+
members “to treat fairly
all persons and to not engage in acts of discrimination based on
race, religion, gender,
disability, age, national origin, sexual orientation, gender
identity, or gender expression”
[23]. Therefore, to treat individuals differently on the basis of
their age, when age is not
a relevant factor, seems to be exactly the kind of discrimination
the IEEE prohibits
[18, 33].
Age does not appear to be a relevant factor in our scenario as it
might be in, say, casting
a young actor to play a child’s character in a movie. In that
movie scenario, it would be
appropriate to reject adult actors for the role. Anyway, a reason
to discriminate does not
necessarily justify that discrimination, since some reasons may
be illegitimate. Even if we
point to the disparity of life experiences between the old and the
young, that difference isn’t
automatically an appropriate basis for different treatment.
Discriminating on the basis of age in our crash scenario would
seem to be the same
evil as discriminating on the basis of race, religion, gender,
disability, national origin, and
so on, even if we can invent reasons to prefer one such group
over another. In Germany –
home to many influential automotive companies that are
working to develop self-driving
technologies – the right to life and human dignity is basic and
set forth in the first two
articles of the very first chapter in the nation’s constitution [9].
So it is difficult to see
how German law could even allow a company to create a
product that is capable of making
such a horrific and apparently illegal choice. The United States
similarly strives to offer
equal protection to all persons, such as stipulated in the
fourteenth amendment of its
constitution.
If we cannot ethically choose a path forward, then what ought to
be done? One solution
is to refuse to make a swerve decision, allowing both victims to
be struck; but this seems
much worse than having only one victim die, even if we are
prejudiced against her. Anyway,
we can force a decision by modifying the scenario: assume that
10 or 100 other pedestrians
would die, if the car continued forward; and swerving would
again result in only a single
death.
Another solution could be to arbitrarily and unpredictably
choose a path, without
prejudice to either person [34]. But this too seems ethically
troubling, in that we are choos-
ing between lives without any deliberation at all – to leave it to
chance, when there are
potentially some reasons to prefer one over the other, as
distasteful and uncomfortable as
those reasons may be. This is a dilemma that is not easily
solvable and therefore points to
a need for ethics in developing autonomous cars.
4.1.1 Beyond crash-avoidance
Many readers may object right away that the dilemma above
(and others that follow) will
never occur with autonomous cars. It may be suggested that
future cars need not confront
hard ethical choices, that simply stopping the car or handing
control back to the human
operator is the easy path around ethics. But I will contend here
that braking and relinquish-
ing control will not always be enough. Those solutions may be
the best we have today, but
if automated cars are to ever operate more broadly outside of
limited highway environ-
ments, they will need more response-options.
Current research already makes this case as a matter of physics
[12, 13], but we can also
make a case from commonsense. Many ordinary scenarios exist
today in which braking is
not the best or safest move, whether by human or self-driving
car. A wet road or a tailgater,
for instance, may make it dangerous to slam the brakes, as
opposed to some other action
such as steering around the obstacle or simply through it, if it is
a small object. Today, the
most advanced self-driving cars cannot detect small objects
such as squirrels [7]; therefore,
they presumably also cannot detect squirrel-sized rocks,
potholes, kittens, and other small
but consequential hazards that can cause equipment failure, such as
tire blowouts or sensor
errors, or deviations from a safe path.
In these and many other cases, there may not be enough time to
hand control back to the
driver. Some simulation experiments suggest that human drivers
need up to 40 seconds to
regain situation awareness, depending on the distracting
activity, e. g., reading or napping
– far longer than the 1–2 seconds of reaction time required for
typical accident scenarios
[38, 18]. This means that the car must be responsible for making
decisions when it is un-
reasonable to expect a timely transfer of control back to the
human, and again braking might
not be the most responsible action.
One possible reply is that, while imperfect, braking could
successfully avoid the major-
ity of emergency situations a robot car may find itself in, even if
it regrettably makes things
worse in a small number of cases. The benefits far outweigh the
risks, presumably, and the
numbers speak for themselves. Or do they? I will discuss the
dangers of morality by math
throughout this chapter.
Braking and other responses in the service of crash-avoidance
won’t be enough, because
crash-avoidance is not enough. Some accidents are unavoidable
– such as when an animal
or pedestrian darts out in front of your moving car – and
therefore autonomous cars will
need to engage in crash-optimization as well. Optimizing
crashes means to choose the
course of action that will likely lead to the least amount of
harm, and this could mean a
forced choice between two evils, for instance, choosing to strike
either the eight-year old
girl or the 80-year old grandmother in my first scenario above.
4.1.2 Crash-optimization means targeting
There may be reasons, by the way, to prefer choosing to run
over the eight-year old girl
that I have not yet mentioned. If the autonomous car were most
interested in protecting
its own occupants, then it would make sense to choose a
collision with the lightest object
possible (the girl). If the choice were between two vehicles,
then the car should be
programmed to prefer striking a lighter vehicle (such as a Mini
Cooper or motorcycle)
rather than a heavier one (such as a sports utility vehicle (SUV) or
truck) in an adjacent lane
[18, 34].
On the other hand, if the car were charged with protecting other
drivers and pedestrians
over its own occupants – not an unreasonable imperative – then
it should be programmed
to prefer a collision with the heavier vehicle rather than the lighter
one. If vehicle-to-vehicle
(V2V) and vehicle-to-infrastructure (V2I) communications are
rolled out (or V2X to refer
to both), or if an autonomous car can identify the specific
models of other cars on the road,
then it seems to make sense to collide with a safer vehicle (such
as a Volvo SUV that has a
reputation for safety) over a car not known for crash-safety
(such as a Ford Pinto that’s
prone to exploding upon impact).
This strategy may be both legally and ethically better than the
previous one of jealously
protecting the car’s own occupants. It could minimize lawsuits,
because any injury to others
would be less severe. Also, because the driver is the one who
introduced the risk to society
– operating an autonomous vehicle on public roads – the driver
may be legally obligated,
or at least morally obligated, to absorb the brunt of any harm, at
least when squared off
against pedestrians, bicycles, and perhaps lighter vehicles.
The ethical point here, however, is that no matter which
strategy is adopted by an orig-
inal equipment manufacturer (OEM), i. e., auto manufacturer,
programming a car to choose
a collision with any particular kind of object over another very
much resembles a targeting
algorithm [33]. Somewhat related to the military sense of
selecting targets, crash-optimi-
zation algorithms may involve the deliberate and systematic
discrimination of, say, large
vehicles or Volvos to collide into. The owners or operators of
these targeted vehicles bear
this burden through no fault of their own, other than perhaps
that they care about safety or
need an SUV to transport a large family.
4.1.3 Beyond harm
The problem is starkly highlighted by the following scenario
[15, 16, 17, 34]: Again,
imagine that an autonomous car is facing an imminent crash,
but it could select one of
two targets in adjacent lanes to swerve into: either a
motorcyclist who is wearing a helmet,
or a motorcyclist who is not. It probably doesn’t matter much to
the safety of the car itself
or its occupants whether the motorcyclist is wearing a helmet;
the impact of a helmet into
a car window doesn't add so much risk that the
autonomous car should prefer
to avoid it over the alternative. But it matters a lot to the
motorcyclist whether s/he is wear-
ing a helmet: the one without a helmet would probably not
survive such a collision. There-
fore, in this dreadful scenario, it seems reasonable to program a
good autonomous car to
swerve into the motorcyclist with the helmet.
But how well are justice and public policy served by this crash-
optimization design?
Motorcyclists who wear helmets are essentially being penalized
and discriminated against
for their responsible decision to wear a helmet. This may
encourage some motorcyclists to
not wear helmets, in order to avoid targeting by autonomous
cars. Likewise, in the previous
scenario, sales may decline for automotive brands known for
safety, such as Volvo and
Mercedes Benz, insofar as customers want to avoid being the
preferred targets of crash-
optimization systems.
Some readers may want to argue that the motorcyclist without a
helmet ought to be
targeted, for instance, because he has acted recklessly and
therefore is more deserving
of harm. Even if that’s the correct design, notice that we are
again moving beyond harm
in making crash-optimization decisions. We’re still talking
about justice and other such
ethical considerations, and that’s the point: it’s not just a
numbers game.
Programmers in such scenarios, as rare as they may be, would
need to design cost-
functions – algorithms that assign and calculate the expected
costs of various possible
options, selecting the one with the lowest costs – that
potentially determine who gets
to live and who gets to die. And this is fundamentally an ethics
problem, one that de-
mands much more care and transparency in reasoning than
seems currently offered.
Indeed, it is difficult to imagine a weightier and more
profoundly serious decision
a programmer would ever have to make. Yet, there is little
discussion about this core
issue to date.
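To make the point concrete, here is a deliberately oversimplified sketch of such a cost-function, applied to the helmet scenario above. Every probability and harm weight is invented, and nothing here is a proposal for how real vehicles should value lives; the sketch only shows how a minimum-expected-harm rule can end up "targeting" the safer party.

```python
# Oversimplified, hypothetical crash-optimization cost-function. All numbers are invented.
def expected_cost(option):
    """Expected harm = sum over possible outcomes of probability x severity score."""
    return sum(p * severity for p, severity in option["outcomes"])

options = [
    {"name": "swerve toward helmeted rider",
     "outcomes": [(0.3, 100), (0.7, 30)]},   # (probability, severity): fatality vs injury
    {"name": "swerve toward unhelmeted rider",
     "outcomes": [(0.9, 100), (0.1, 30)]},
]

best = min(options, key=expected_cost)
print(best["name"])   # the rule selects the helmeted rider, which is the ethical worry raised above
```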
4.2 Scenarios that implicate ethics
In addition to the ones posited above, there are many actual and
hypothetical scenarios that
involve judgments about ethics. I will describe some here to
show how ordinary assump-
tions in ethics can be challenged.
4.2.1 The deer
Though difficult to quantify due to inconsistent reporting and under-
reporting, experts estimate that
more than a million car accidents per year in the US are caused
by deer [6, 48]. Many, if
not most, drivers have been startled by an unexpected animal on
the road, a dangerous
situation for both parties. Deconstructing a typical accident, or
near-accident, involving an
animal illustrates the complexity of the decisions facing the
driver [30]. While all this
happens within seconds – not enough time for careful
deliberations by human drivers – an
autonomous car could have the virtue of a (presumably)
thoughtful decision-making script
to very quickly react in an optimal way. If it is able to account
for the many variables, then
it ought to, for the most informed decision possible.
First, suppose an object appears on the road directly in front of
a car in autonomous
mode. Is there time to reasonably hand control back to the
human behind the wheel? (Prob-
ably not.) If not, is there time to stop the car? Would the car
need to brake hard, or would
moderate braking be sufficient? The decision to brake depends,
again, on road conditions
and whether a tailgater (such as a big-rig truck) is behind you,
including its speed to deter-
mine the severity of a possible rear-end collision.
Second, what is the object? Is it an animal, a person, or
something else? If it is an animal,
are some animals permissible to run over? It may be safer to
continue ahead and strike
a squirrel, for instance, than to violently swerve around it and
risk losing control of the car.
However, larger animals, such as deer and cows, are more likely
to cause serious damage
to the car and injuries to occupants than a spin-out would. Other
animals, still, have special
places in our hearts and should be avoided if possible, such as
pet dogs and cats.
Third, if the car should get out of the way – either in
conjunction with braking or not –
should it swerve to the left or to the right? In the US and other
nations in which drivers must
stay on the right side of the road, turning to the right may mean
driving off the road, poten-
tially into a ditch or a tree. Not only could harm to the car and
occupants be likely, but it also
matters how many occupants are in the car. The decision to
drive into an embankment seems
different when only one adult driver is in the car, than when
several children are inside too.
On the other hand, turning to the left may mean driving into an
opposite lane, potentially
into a head-on collision with incoming vehicles. If such a
collision is unavoidable, then it
matters what kind of vehicle we would crash into (e. g., is it a
compact car or SUV?), how
heavy incoming traffic is (e. g., would more than one vehicle be
involved?), how many
persons may be involved (e. g., are there children in the other
car?). Of course, here we are
assuming perfect sensing and V2X communications that can
help answer these questions.
If we cannot answer the questions, then we face a possibly large
unknown risk, which
makes driving into incoming traffic perhaps the worst option
available.
Other factors relevant to the decision-points above include: the
road-shoulder type
(paved, gravel, none, etc.), the condition of the car’s tires and
brakes, whether the car’s
occupants are seat-belted, whether the car is transporting
dangerous cargo that could spill
or explode, proximity to hospital or emergency rescue, damage
to property such as houses
and buildings, and more. These variables influence the
probability of an accident as well
as expected harm, both of which are needed in selecting the best
course of action.
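
To show how the decision-points above might chain together, here is a purely illustrative Python sketch; every factor name, threshold, and fallback is an assumption made for the example, not a claim about any real vehicle's logic.

    # Illustrative only: factors, thresholds, and the ordering of checks are
    # invented to mirror the decision-points discussed above.
    def respond_to_obstacle(seconds_to_impact, driver_is_ready, can_stop_in_time,
                            tailgater_close_behind, shoulder_is_safe,
                            oncoming_traffic_in_left_lane):
        if driver_is_ready and seconds_to_impact > 10.0:
            return "hand control back to the human driver"
        if can_stop_in_time and not tailgater_close_behind:
            return "brake"
        if shoulder_is_safe:
            return "swerve right onto the shoulder"
        if not oncoming_traffic_in_left_lane:
            return "swerve left into the opposite lane"
        return "brake as hard as possible"  # least-bad fallback under uncertainty

    # A deer appears two seconds ahead, with a big-rig close behind:
    print(respond_to_obstacle(2.0, False, True, True, True, False))
    # -> "swerve right onto the shoulder"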
From this short analysis of a typical crash (or possible crash)
with an animal, we can
already see a daunting number of factors to account for. Sensing
technologies today can-
not answer some or many of the questions above, but even now it is
unclear that braking
is the safest default option – as a proxy for the most
ethical option – given these
uncertain conditions, all things considered. Automated cars
today can already detect
whether there is oncoming traffic in the opposite lane.
Therefore, it is at least possible
that they can be programmed to maneuver slightly into the
incoming lane under some
conditions, e. g., when there are no incoming cars and when it
may be dangerous to slam
on the brakes.
Whether or not sensing technologies will improve enough to
deliver answers to our
questions above, a programmer or OEM would still need to
assign costs or weights to
various actions and objects as best as they can. Yet these values
are not intrinsic to or dis-
coverable by science or engineering. Values are something that
we humans must stipulate
and ideally agree upon. In constructing algorithms to control an
autonomous car, ethics is
already implied in the design process. Any decision that
involves a tradeoff such as to strike
object x instead of object y requires a value-judgment about the
wisdom of the tradeoff,
that is, the relative weights of x and y. And the design process
can be made better by rec-
ognizing the ethical implications and by engaging the broader
community to ensure that
those values are represented correctly or at least transparently.
Working in a moral bubble
is less likely to deliver results that are acceptable to society.
Again, in a real-world accident today, a human driver usually
has neither the time nor
the information needed to make the most ethical or least
harmful decisions. A person who
is startled by a small animal on an otherwise uneventful drive
may very well react poorly.
He might drive into oncoming traffic and kill a family, or
oversteer into a ditch and to his
own death. Neither of these results, however, is likely to lead to
criminal prosecution by
themselves, since there was no forethought, malice, negligence,
or bad intent in making a
forced, split-second reaction. But the programmer and OEM do
not operate under the sanc-
tuary of reasonable instincts; they make potentially life-and-
death decisions under no truly
urgent time-constraint and therefore incur the responsibility of
making better decisions than
human drivers reacting reflexively in surprise situations.
4.2.2 Self-sacrifice
As we can see, real-world accidents can be very complicated. In
philosophy and ethics,
a familiar method is to simplify the issues through hypothetical
scenarios, otherwise known
as “thought-experiments.” This is similar to everyday science
experiments in which re-
searchers create unusual conditions to isolate and test desired
variables, such as sending
spiders into outer space to see how micro-gravity affects their
ability to spin webs. It is not
a good objection to those experiments to say that no spiders
exist naturally in space; that
misses the point of the experiment.
Likewise, it is no objection to our hypothetical examples that
they are outlandish and
unlikely to happen in the real world, such as a car that can
distinguish an eight-year-old
from an 80-year-old (though with improving biometrics, facial
recognition technologies,
and linked databases, this doesn’t seem impossible). Our
thought-experiments are still
useful in drawing out certain ethical intuitions and principles
we want to test.
With that understanding, we can devise hypothetical scenarios
to see that reasonable
ethical principles can lead to controversial results in the context
of autonomous driving.
Digging into a standard philosophical toolbox for help with
ethical dilemmas, one of the
first principles we might reach for is consequentialism: that the
right thing to do is whatever
leads to the best results, especially in quantified terms [44]. As
it applies here, consequen-
tialism suggests that we should strive to minimize harm and
maximize whatever it is that
matters, such as the number of happy lives.
In this thought-experiment, your future autonomous car is
driving you on a narrow road,
alongside a cliff. No one and no technology could foresee that a
school bus with 28 children
would appear around the corner, partially in your lane [29, 36].
Your car calculates that
a crash is imminent; given the velocities and distance, there is no
possible action that can
avoid harming you. What should your robot car do?
A good, standard-issue consequentialist would want to optimize
results, that is, maxi-
mize the number of happy lives and minimize harm. Assuming
that all lives in this scenario
are more or less equally happy – for instance, there’s no super-
happy or super-depressed
person, and no very important person who has unusual influence
over the welfare of others
– they would each count for about the same in our moral
calculation. As you like, we may
either ignore or account for the issue of whether there is extra
value in the life of innocent
child who has more years of happiness ahead of her than an
average adult; that doesn’t
matter much for this scenario.
The robot car’s two main choices seem to be: (1) to slam on the
brakes and crash into
the bus, risking everyone’s lives, or (2) to drive off the cliff,
sparing the lives of everyone
on the bus. Performing a quick expected-utility calculation, if
the odds of death to each
person (including the adult bus driver) in the accident averaged
more than one in 30, then
colliding into the bus would yield the expected result of more
than one death, up to all 30
persons. (Let’s say the actual odds are one in three, which gives
an expected result of 10
deaths.) If driving off a cliff meant certain death, or the odds of
one in one, then the expect-
ed result of that would be exactly one death (your own) and no
more. The right consequen-
tialist decision for the robot car – if all we care about is
maximizing lives and minimizing
deaths – is apparently to drive off the cliff and sacrifice the
driver, since it is better that only
one person should die rather than more than one, especially 10
or all 30 persons.
This decision would likely be different if, instead of a school
bus, your robot car were
about to collide with another passenger car carrying only one
person. Given the same
average odds of death, one in three, the expected number of
deaths in a collision would only
be 0.67, while the expected number of deaths in driving off a
cliff remains at one. In that
case, the right consequentialist decision would be to allow the
accident to occur, as long as
the average odds of death are less than one in two. If, instead
of another vehicle, your car
were about to collide with a deer, then the decision to stay on
the road, despite an ensuing
accident, would be even more obvious insofar as we value a
deer’s life less than a human
life.
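
The arithmetic in the two scenarios above can be written out explicitly. This small Python sketch simply restates the numbers used in the text: 30 people at average odds of death of one in three versus one certain death off the cliff, and two people at the same odds in the single-occupant case.

    # Restating the expected-death arithmetic from the text.
    def expected_deaths(people_at_risk, average_odds_of_death):
        return people_at_risk * average_odds_of_death

    print(expected_deaths(30, 1 / 3))  # hit the bus: 10.0 expected deaths
    print(expected_deaths(1, 1.0))     # drive off the cliff: 1.0 expected death (yours)
    print(expected_deaths(2, 1 / 3))   # hit a single-occupant car instead: ~0.67
    # Staying on the road beats the cliff whenever 2 * p < 1, i.e. the average
    # odds of death are below one in two.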
Back to the school-bus scenario, programming an autonomous
car with a consequential-
ist framework for ethics would seem to imply your sacrifice.
But what is most striking about
this case might not even be your death or the moral
mathematics: if you were in a manually
driven car today, driving off the cliff might still be the most
ethical choice you could make,
so perhaps you would choose certain death anyway, had you the
time to consider the op-
tions. However, it is one thing for you to willingly make that
decision of sacrifice yourself,
and quite another matter for a machine to make that decision
without your consent or
foreknowledge that self-sacrifice was even a possibility. That
is, there is an astonishing lack
of transparency and therefore consent in such a grave decision,
one of the most important
that can be made about one’s life – perhaps noble if voluntary,
but criminal if not.
Thus, reasonable ethical principles – e.g., aiming to save the
greatest number of lives
– can be stressed in the context of autonomous driving. An
operator of an autonomous
vehicle, rightly or not, may very well value his own life over
everyone else’s, even
that of 29 others; or he may even explicitly reject
consequentialism. Even if consequential-
ism is the best ethical theory and the car’s moral calculations
are correct, the problem may
not be with the ethics but with a lack of discussion about ethics.
Industry, therefore, may
do well to have such a discussion and set expectations with the
public. Users – and news
headlines – may well be more forgiving if it is explained in
advance that self-sacrifice
may be a justified feature, not a bug.
4.2.3 Ducking harm
Other ethical principles can create dilemmas, too. It is generally
uncontroversial that, if you
can easily avoid harm to yourself, then you should do it. Indeed,
it may be morally required
that you save yourself when possible, if your life is intrinsically
valuable or worth protect-
ing; and it is at least extrinsically valuable if you had a
dependent family. Auto manufac-
turers or OEMs seem to take this principle for granted as well:
if an autonomous car can
easily avoid a crash, e. g., by braking or swerving, then it
should. No ethical problem here
– or is there?
In another thought-experiment [15, 18, 33], your robotic car is
stopped at an intersection
and waits patiently for the children who are crossing in front of
you. Your car detects a
pickup truck coming up behind you, about to cause a rear-end
collision with you. The crash
would likely damage your car to some degree and perhaps cause
minor injury to you, such
as whiplash, but certainly not death. To avoid this harm, your
car is programmed to dash
out of the way, if it can do so safely. In this case, your car can
easily turn right at the inter-
section and avoid the rear-end collision. It follows this
programming, but in doing so, it
clears a path for the truck to continue through the intersection,
killing a couple of children and
seriously injuring others.
Was this the correct way to program an autonomous car? In
most cases of an impending
rear-end collision, probably yes. But in this particular case, the
design decision meant
saving you from minor injury at the expense of serious injury
and death of several children,
and this hardly seems to be the right choice. In an important
respect, you (or the car) are
responsible for their deaths: you (or the car) killed the children
by removing an obstruction
that prevented harm from falling upon them, just as you would
be responsible for a person’s
death if you removed a shield he was holding in front of a
stream of gunfire. And killing
innocent people has legal and moral ramifications.
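
One way to see the design flaw is to contrast a rule that only weighs harm to the car's own occupants with one that also counts harm shifted onto third parties. The following Python sketch is illustrative only; the cost numbers are invented to mirror the scenario above.

    # Contrast: a self-protective rule versus one that also counts harm shifted
    # onto others. The cost numbers are invented for illustration.
    def dodge_if_safe_for_occupants(own_harm_if_stay, own_harm_if_dodge):
        return "dodge" if own_harm_if_dodge < own_harm_if_stay else "stay"

    def dodge_if_total_harm_lower(own_harm_if_stay, own_harm_if_dodge,
                                  harm_to_others_if_stay, harm_to_others_if_dodge):
        stay_cost = own_harm_if_stay + harm_to_others_if_stay
        dodge_cost = own_harm_if_dodge + harm_to_others_if_dodge
        return "dodge" if dodge_cost < stay_cost else "stay"

    # Rear-end scenario: dodging spares you whiplash (cost 1) but exposes the
    # children in the crosswalk to the truck (cost 100).
    print(dodge_if_safe_for_occupants(1, 0))        # -> dodge
    print(dodge_if_total_harm_lower(1, 0, 0, 100))  # -> stay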
As with the self-sacrifice scenario above, it might be that in the
same situation today, in
a human-driven car, you would make the same decision to save
yourself from injury, if you
were to see a fast-approaching vehicle about to slam into you.
That is, the result might not
change if a human made the on-the-spot decision. But, again, it
is one thing to make such
a judgment in the panic of the moment, but another less
forgivable thing for a programmer
– far removed from the scene and a year or more in advance – to
create a cost-function that
resulted in these deaths. Either the programmer did so
deliberately, or she did it uninten-
tionally, unaware that this was a possibility. If the former, then
this could be construed as
premeditated homicide; and if the latter, gross negligence.
Either way is very bad for the programmer and perhaps an
inherent risk in the business,
when one attempts to replicate human decision-making in a
broad range of dynamic
scenarios. Sometimes, an autonomous car may be faced with a
“no-win” scenario, putting
the programmer in a difficult but all too real position. To
mitigate this risk, industry may
do well to set expectations not only with users but also with
broader society, educating them
that they could also become victims even if not operating or in a
robot car, and that perhaps
this is justified by a greater public or overall good.
4.2.4 Trolley problems
One of the most iconic thought-experiments in ethics is the
trolley problem [4, 8, 11, 47],
and this is one that may now occur in the real world, if
autonomous vehicles come to be.
Indeed, driverless trains are already operating in dozens of
cities worldwide and could bring
this scene to life [24]. The classical dilemma involves a
runaway trolley (or train) that is
about to run over and kill five unaware people standing on the
tracks. Looking at the scene
from the outside, you find yourself standing next to a switch: if
you pull the switch, you
can shunt the train to a right-hand set of tracks, thereby saving
the five individuals on the
track. Unfortunately, there is one person standing on the right-
hand set of tracks who would
then be killed. What is the right decision?
The “correct” decision continues to be a subject of much debate
in philosophy. Both
answers seem reasonable and defensible. A consequentialist
might justify switching the
tracks to save five people, even at the regrettable expense of
one. But a non-consequential-
ist, someone who considers more than just the math or results,
might object on the grounds
that switching tracks constitutes an act of killing (the one
person), while doing nothing is
merely allowing someone to die (the five individuals); and that
it is morally and legally
worse to kill than to let die.
Killing implies that you are directly responsible for a person’s
death: had you not done
what you did, the person would have lived. Letting die,
however, involves much less re-
sponsibility on your part, if any, since some causal process was
already underway that was
not initiated or otherwise controlled by you. The question of
whether it is worse to kill than
to let die is also subject to debate in philosophy. But let us
bracket that for the moment, as
a final answer is not necessary for our discussion, only that it is
reasonable to believe that
proposition.
Adapting the trolley problem to the technology at hand, let us
suppose that you are driving
an autonomous car in manual mode; you are in control. Either
intentionally or not – you could
be homicidal or simply inattentive – you are about to run over
and kill five pedestrians. Your
car’s crash-avoidance system detects the possible accident and
activates, forcibly taking
control of the car from your hands. To avoid this disaster, it
swerves in the only direction it
can, let’s say to the right. But on the right is a single pedestrian
who is unfortunately killed.
Was this the right decision for your car to make? Again, a
consequentialist would say
yes: it is better that only one person dies than five. But a non-
consequentialist might appeal
to a moral distinction between killing and letting die, and this
matters to OEMs for liabili-
ty reasons. If the car does not wrest control from the human
driver, then it (and the OEM)
would perhaps not be responsible for the deaths of the five
pedestrians while you were
driving the car; it is merely letting those victims die. But if the
car does take control
and make a decision that results in the death of a person, then it
(and the OEM) becomes
responsible for killing a person.
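
One way a designer might encode the intuition that killing is worse than letting die is to multiply actively caused deaths by a penalty factor before comparing options. The Python sketch below is a hypothetical illustration; the penalty value is a value judgment, not something science can supply.

    # Hypothetical illustration: deaths the system actively causes are multiplied
    # by a penalty factor reflecting the view that killing is worse than letting die.
    KILLING_PENALTY = 3.0  # a value judgment; 1.0 would make the car purely consequentialist

    def should_take_control(deaths_if_it_does_nothing, deaths_if_it_intervenes):
        cost_of_letting_die = deaths_if_it_does_nothing
        cost_of_killing = KILLING_PENALTY * deaths_if_it_intervenes
        return cost_of_killing < cost_of_letting_die

    # Five pedestrians ahead, one person on the only escape path:
    print(should_take_control(5, 1))  # True with penalty 3.0; False if the penalty were 6.0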
As with the trolley problem, either choice seems defensible.
Results do matter, so it is
not ridiculous to think that the car should be programmed to act
and save lives, even at the
expense of a smaller number of lives. Yet it also seems
reasonable to think that killing is worse
than letting die, especially in the eyes of the law. What I want
to highlight here is not so much
the answer but the process of deliberation that points us toward
one answer over another. To
the extent that there could be many acceptable answers to any
given ethical dilemma, how
well one answer can be defended is crucial toward supporting
that answer over others.
Industry again would do well to set expectations by debating
and explaining in advance
its reasoning behind key algorithms that could result in life or
death. Transparency, or
showing one’s math, is an important part of doing ethics, not
just the answer itself.
4.3 Next steps
Notice that the ethical issues discussed in this paper do not
depend on technology errors,
poor maintenance, improper servicing, security vulnerabilities,
or other failings – and all
those will occur too. No complex technology we have created
has been infallible. Even
industries with money directly at stake have not solved this
problem. For instance, bank
ATMs continue to make headlines when they hemorrhage cash –
tens of thousands of dol-
lars more than the account holder actually has – because of
software glitches alone [2, 10],
never mind hacking. And just about every computing device we
have created has been
hacked or is hackable, including neural implants and military
systems [3, 28].
These vulnerabilities and errors certainly can cause harm in the
context of autonomous
cars, and it would be ethically irresponsible not to safeguard
against them where we can.
Putting these technology issues aside and even assuming that
perfect technology is avail-
able, there are still many other safety and ethical questions to
worry about, such as the
programming issues above.
4.3.1 Broader ethical issues
But programming is only one of many areas to reflect upon as
society begins to adopt
autonomous driving technologies. Assigning legal and moral
responsibility for crashes is
a popular topic already [1, 14, 20, 22, 49, 51]. Here are a few
others, as part of a much
longer list of possible questions:
Does it matter to ethics if a car is publicly owned, for instance,
a city bus or fire truck?
The owner of a robot car may reasonably expect that this piece of
property “owes allegiance” to the
owner and should value his or her life more than anonymous
pedestrians and drivers. But
a publicly owned automated vehicle might not have that
obligation, and this can change
moral calculations. Even for privately owned autonomous
vehicles, the occupants arguably
should bear more or all of the risk, since they are the ones
introducing the machine into
public spaces in the first place.
Do robot cars present an existential threat to the insurance
industry? Some believe that
ultra-safe cars that can avoid most or all accidents will mean
that many insurance compa-
nies will go bankrupt, since there would be no or very little risk
to insure against [40, 52].
But things could go the other way too: We could see mega-
accidents as cars are networked
together and vulnerable to wireless hacking – something like the
stock market’s “flash
crash” in 2010 [5]. What can the insurance industry do to
protect itself while not getting in
the way of the technology, which holds immense benefits?
How susceptible would robot cars be to hacking? So far, just
about every computing
device we have created has been hacked. If authorities and
owners (e. g., rental car compa-
ny) are able to remotely take control of a car – which is
reportedly under development for
law enforcement in the European Union [50] – this offers an
easy path for cyber-carjackers.
If under attack, whether a hijacking or ordinary break-in, what
should the car do: speed
away, alert the police, remain at the crime scene to preserve
evidence, or maybe defend
itself?
For a future suite of in-car apps, as well as sensors and
persistent GPS/tracking, can we
safeguard personal information, or do we resign ourselves to a
world with disappearing
privacy rights [27]? To the extent that online services bring
online advertising, we could
see new, insidious advertising schemes that may allow third-
party advertisers to have
some influence on the autonomous car’s route selection, e.g.,
steering the car past their
businesses [32].
What kinds of abuse might we see with autonomous cars? If the
cars drive too conser-
vatively, they may become a traffic hazard or trigger road-rage
in human drivers with less
patience [26, 42]. If the crash-avoidance system of a robot car is
generally known, then
other drivers may be tempted to “game” it, e. g., by cutting in
front of it, knowing that the
automated car will slow down or swerve to avoid an accident. If
those cars can safely drive
us home in a fully-auto mode, that may encourage a culture of
more alcohol consumption,
since we won’t need to worry so much about drunk-driving.
More distant concerns include: How will law-abiding robot cars
affect city revenue,
which often depends on traffic fines imposed against law-
breaking human drivers? Inas-
much as many organ transplants come from car-accident
victims, how will society manage
a declining and already insufficient supply of donated organs
[41]?
Older-model autonomous cars may be unable to communicate
with later models or
future road infrastructure. How do we get those legacy models
– which may be less
safe, in addition to incompatible with newer technology – off
the roads [45]? Since 2009,
Microsoft has been trying to kill off its Windows XP operating
system [39], a much less
expensive investment than an autonomous car; but many users
still refuse to relinquish it,
including for critical military systems [37, 46]. This is a great
security risk since Microsoft
will no longer offer software patches for the operating system.
4.3.2 Conclusions
We don’t really know what our robot-car future will look like,
but we can already see that
much work needs to be done. Part of the problem is our lack of
imagination. Technology
policy expert Peter W. Singer observed, “We are still at the
‘horseless carriage’ stage of this
technology, describing these technologies as what they are not,
rather than wrestling with
what they truly are” [43].
As it applies here, robots aren’t merely replacing human
drivers, just as human drivers
in the first automobiles weren’t simply replacing horses: that
would be like mistaking electricity
for merely a replacement for candles. The impact of
automating transportation will
change society in radical ways, and technology seems to be
accelerating. As Singer puts it,
“Yes, Moore’s Law is operative, but so is Murphy’s Law” [43].
When technology goes
wrong – and it will – thinking in advance about ethical design
and policies can help guide
us responsibly into the unknown.
In future autonomous cars, crash-avoidance features alone won’t
be enough. An accident
may be unavoidable as a matter of physics [12, 13], especially
as autonomous cars make
their way onto city streets [19, 21, 25], a more dynamic
environment than highways. It also
could be too dangerous to slam on the brakes, or there may not be enough
time to hand control back
to the unaware human driver, assuming there’s a human in the
vehicle at all. Technology
errors, misaligned sensors, malicious actors, bad weather, and
bad luck can also contribute
to imminent collisions. Therefore, robot cars will also need to
have crash-optimization
strategies that are thoughtful about ethics.
If ethics is ignored and the robotic car behaves badly, a
powerful case could be made
that auto manufacturers were negligent in the design of their
product, and that opens
them up to tremendous legal liability, should such an event
happen. Today, we see activists
campaigning against “killer” military robots that don’t yet exist,
partly on the grounds
that machines should never be empowered to make life-and-
death decisions [31, 35].
It’s not outside the realm of possibility that the same
precautionary backlash
could happen to the autonomous car industry, if industry
doesn’t appear to be taking ethics
seriously.
The larger challenge, though, isn’t just about thinking through
ethical dilemmas. It’s also
about setting accurate expectations with users and the general
public who might find them-
selves surprised in bad ways by autonomous cars; and
expectations matter for market
acceptance and adoption. Whatever answer to an ethical
dilemma that industry might lean
towards will not be satisfying to everyone. Ethics and
expectations are challenges common
to all automotive manufacturers and tier-one suppliers who want
to play in this emerging
field, not just particular companies.
Automated cars promise great benefits and unintended effects
that are difficult to pre-
dict, and the technology is coming either way. Change is
inescapable and not necessarily
a bad thing in itself. But major disruptions and new harms
should be anticipated and
avoided where possible. That is the role of ethics in innovation
policy: it can pave the way
for a better future while enabling beneficial technologies.
Without looking at ethics, we are
driving with one eye closed.
References
1. Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras,
C., Oluwatola, O.: Autonomous ve-
hicle technology: a guide for policymakers. Report by RAND
Corporation. http://www.rand.org/
pubs/research_reports/RR443-1.html (2014). Accessed 8 July
2014
2. Associated Press. ATM ‘glitch’ gives $37,000 to lucky
homeless man. Daily Mail. http://www.
dailymail.co.uk/news/article-2596977/ATM-glitch-gives-OVER-
37-000-homeless-man-cash-re-
quested-140.html (2014). Accessed 8 July 2014
3. Baldor, L.: China hacked the Pentagon to get weapons data.
Talking Points Memo. http://talking-
pointsmemo.com/news/china-hacked-the-pentagon-to-get-
weapons-programs-data (2013).
Accessed 8 July 2014
4. Cathcart, T.: The Trolley Problem, or Would You Throw the
Fat Guy Off the Bridge? Workman
Publishing Company, New York (2013)
5. Commodity Futures Trading Commission and the Securities
and Exchange Commission: Find-
ings regarding the market events of May 6, 2010. CFTC and
SEC, Washington DC. http://www.
sec.gov/news/studies/2010/marketevents-report.pdf (2010).
Accessed 8 July 2014
6. Curtis, P. and Hedlund, J.: Reducing deer-vehicle crashes.
Report funded by the Insurance Insti-
tute for Highway Safety. Cornell University, Ithaca.
http://wildlifecontrol.info/pubs/Documents/
Deer/Deer-Vehicle_factsheet1.pdf (2005). Accessed 8 July 2014
7. Davies, A.: Avoiding squirrels and other things Google’s
robot car can’t do. Wired. http://www.
wired.com/2014/05/google-self-driving-car-can-cant/ (2014).
Accessed 8 July 2014
8. Edmonds, D.: Would You Kill the Fat Man? The Trolley
Problem and What Your Answer Tells
Us About Right and Wrong. Princeton University Press,
Princeton (2014)
9. Federal Ministry of Justice and Consumer Protection: Basic
Law for the Federal Republic of
Germany. http://www.gesetze-im-
internet.de/englisch_gg/englisch_gg.html (2014). Accessed
8 July 2014
10. Floro, Z.: Man goes on a casino bender after an ATM let
him take out unlimited cash. Business
Insider. http://www.businessinsider.com/atm-gives-unlimited-
cash-2012-6 (2012). Accessed 8
July 2014
11. Foot, P.: The problem of abortion and the doctrine of the
double effect. Oxford Review 5, 5–15
(1967)
12. Fraichard, T.: Will the driver seat ever be empty? Report
funded by Inria. http://hal.inria.fr/hal-
00965176 (2014). Accessed 8 July 2014
13. Fraichard, T., and Asama, H.: Inevitable collision states: a
step towards safer robots? Advanced
Robotics 18(10), 1001–1024 (2004)
14. Garza, A.: ‘Look Ma, no hands!’: wrinkles and wrecks in
the age of autonomous vehicles. New
England Law Review 46(3), 581–616 (2012)
15. Goodall, N.J.: Autonomous car ethics. Interview with CBC
radio. http://www.cbc.ca/spark/
blog/2014/04/13/autonomous-car-ethics/ (2014). Accessed 8
July 2014
16. Goodall, N. J.: Machine ethics and automated vehicles. In:
Meyer, G. and Beiker, S. (eds.) Road
Vehicle Automation. Springer, Cham (2014)
17. Goodall, N. J.: Ethical decision making during automated
vehicle crashes. Transportation
Research Record: Journal of the Transportation Research Board
(forthcoming)
18. Goodall, N. J.: Vehicle automation and the duty to act. In:
Proceedings of the 21st World
Congress on Intelligent Transport Systems, 7–11 September
2014, Detroit, Michigan (forth-
coming)
19. Google: The latest chapter for the self-driving car:
mastering city street driving. http://googleblog.
blogspot.co.at/2014/04/the-latest-chapter-for-self-driving-
car.html (2014). Accessed 8 July
2014
20. Gurney, J.: Sue my car, not me: products liability and
accidents involving autonomous vehicles.
Journal of Law, Technology and Policy 2, 247–277 (2013)
21. Hern, A.: Self-driving cars face a long and winding road to
success. The Guardian. http://www.
theguardian.com/technology/2014/may/28/self-driving-cars-
google-success (2014). Accessed
8 July 2014
22. Hevelke, A. and Nida-Rumelin, J.: Responsibility for
crashes of autonomous vehicles: an ethical
analysis. Science and Engineering Ethics (2014). doi:
10.1007/s11948-014-9565-5
23. IEEE: IEEE code of ethics.
http://www.ieee.org/about/corporate/governance/p7-8.html
(2014).
Accessed 8 July 2014
24. International Association of Public Transport: Observatory
of automated metros. http://metroau-
tomation.org/ (2014). Accessed 8 July 2014
25. Jaffe, E.: The first look at how Google’s self-driving car
handles city streets. The Atlantic/CityL-
ab. http://www.citylab.com/tech/2014/04/first-look-how-
googles-self-driving-car-handles-city-
streets/8977/ (2014). Accessed 8 July 2014
26. KPMG: Self-driving cars: are we ready? Report by KPMG.
KPMG, Chicago. http://www.kpmg.
com/US/en/IssuesAndInsights/ArticlesPublications/Documents/s
elf-driving-cars-are-we-ready.
pdf (2013). Accessed 8 July 2014
27. Lee, T.: Self-driving cars are a privacy nightmare. And it’s
totally worth it. The Washington Post.
http://www.washingtonpost.com/blogs/wonkblog/wp/2013/05/21
/self-driving-cars-are-a-priva-
cy-nightmare-and-its-totally-worth-it/ (2013). Accessed 8 July
2014
28. Leggett, H.: The new hacking frontier: your brain? Wired.
http://www.wired.com/2009/07/neu-
rosecurity/ (2009). Accessed 8 July 2014
29. Lin, P.: The ethics of saving lives with autonomous cars is
far murkier than you think. Wired.
http://www.wired.com/2013/07/the-surprising-ethics-of-robot-
cars/ (2013). Accessed 8 July
2014
30. Lin, P.: The ethics of autonomous cars. The Atlantic.
http://www.theatlantic.com/technology/
archive/2013/10/the-ethics-of-autonomous-cars/280360/ (2013).
Accessed 8 July 2014
31. Lin, P.: Why the drone wars matter for automated cars.
Lecture presented at Proceedings in
Automated Driving, Stanford Law School, 12 December 2013.
http://stanford.io/1jeIQuw.
Accessed 8 July 2014
32. Lin, P.: What if your autonomous car keeps routing you past
Krispy Kreme? The Atlantic. http://
www.theatlantic.com/technology/archive/2014/01/what-if-your-
autonomous-car-keeps-rout-
ing-you-past-krispy-kreme/283221/ (2014). Accessed 8 July
2014
33. Lin, P.: Ethics and autonomous cars: why ethics matters,
and how to think about it. Lecture
presented at Daimler and Benz Foundation’s Villa Ladenburg
Project, Monterey, California, 21
February 2014
34. Lin, P.: The robot car of tomorrow might just be
programmed to hit you. Wired. http://www.
wired.com/2014/05/the-robot-car-of-tomorrow-might-just-be-
programmed-to-hit-you/ (2014).
Accessed 8 July 2014
35. Lin, P., Bekey, G., Abney, K.: Autonomous military
robotics: risk, ethics, and design. Report
funded by the US Office of Naval Research. California
Polytechnic State University, San Luis
Obispo. http://ethics.calpoly.edu/ONR_report.pdf (2008).
Accessed 8 July 2014
36. Marcus, G.: Moral machines. New Yorker.
http://www.newyorker.com/online/blogs/news-
desk/2012/11/google-driverless-car-morality.html (2012).
Accessed 8 July 2014
37. McCabe, R.: Navy, others still struggling to ditch Windows
XP. The Virginian-Pilot. http://hamp-
tonroads.com/2014/06/navy-others-still-struggle-ditching-
windows-xp (2014). Accessed 8 July
2014
38. Merat, N., Jamson, H., Lai F., and Carsten, O.: Human
factors of highly automated driving:
results from the EASY and CityMobil projects. In: Meyer, G.
and Beiker, S. (eds) Road Vehicle
Automation. Springer, Cham (2014)
39. Microsoft: Windows lifecycle fact sheet.
http://windows.microsoft.com/en-us/windows/lifecy-
cle (2014). Accessed 8 July 2014
40. Mui, C. and Carroll, P.: Driverless Cars: Trillions Are Up
For Grabs. Cornerloft Press, Seattle
(2013)
41. Park, A.: Driverless cars to kill organ transplantation.
DriverlessCarHQ. https://web.archive.
org/web/20120626151201/http://www.driverlesscarhq.com/trans
plants/ (2012). Accessed 8 July
2014
42. Roach, J.: Road rage at driverless cars? It’s possible. NBC
News. http://futureoftech-discuss.
nbcnews.com/_news/2012/01/20/10201865-road-rage-at-
driverless-cars-its-possible (2012).
Accessed 8 July 2014
43. Singer, P.W.: The robotic revolution. Brookings Institution.
http://www.brookings.edu/research/
opinions/2012/12/11-robotics-military-singer (2012). Accessed
8 July 2014
44. Sinnott-Armstrong, W.: Consequentialism. Stanford
Encyclopedia of Philosophy. http://plato.
stanford.edu/entries/consequentialism/ (2011). Accessed 8 July
2014
45. Smith, B.W.: Planning for the obsolescence of technologies
not yet invented. Stanford Law
School’s Center for Internet and Society.
http://cyberlaw.stanford.edu/blog/2013/10/plan-
ning-obsolescence-technologies-not-yet-invented (2013).
Accessed 8 July 2014
46. Sternstein, A.: Why Feds are still buying IT that works with
Windows XP. Nextgov. http://www.
nextgov.com/cio-briefing/2014/04/why-feds-are-still-buying-it-
works-windows-xp/81667/
(2014). Accessed 8 July 2014
47. Thomson, J.J.: Killing, letting die, and the trolley problem.
The Monist 59, 204–217 (1976)
48. Transportation Research Board: Animal-vehicle collision
data collection. http://onlinepubs.trb.
org/onlinepubs/nchrp/nchrp_syn_370.pdf (2007). Accessed 8
July 2014
49. Villasenor, J.: Product liability and driverless cars: issues
and guiding principles for legisla-
tion. Report by The Brookings Institution.
http://www.brookings.edu/research/papers/2014/04/
products-liability-driverless-cars-villasenor (2014). Accessed 8
July 2014
50. Waterfield, B. and Day, M.: EU has secret plan for police to
‘remote stop’ cars. The Telegraph.
http://www.telegraph.co.uk/news/worldnews/europe/eu/1060532
8/EU-has-secret-plan-for-po-
lice-to-remote-stop-cars.html (2014). Accessed 8 July 2014
51. Wu, S.: Unmanned vehicles and US product liability law.
Journal of Law, Information & Science
21(2), 234–254 (2012)
52. Yeomans, G.: Autonomous vehicles: handing over control –
opportunities and risk for insur-
ance. Report by Lloyd’s. Lloyd’s, London.
http://www.lloyds.com/~/media/lloyds/reports/emerg-
ing%20risk%20reports/autonomous%20vehicles%20final.pdf
(2014). Accessed 8 July 2014
Overview & Objectives
1. The case of Willy Just
2. The Ford Pinto
3. The Challenger Shuttle
4. Ethics in Practice: What are we to do?
Ethics in Practice
The Case of Willy Just
“A shorter, fully loaded truck can operate much more quickly. A
shortening of the rear compartment will not disadvantageously
affect the weight balance, by overloading the front axle,
because a
correction in the weight distribution takes place automatically
through the fact that the cargo in the struggle towards the back
door during the operation is preponderantly located there…To
facilitate cleaning, an eight to twelve inch hole should be made
in
the floor and provided with a cover that can be opened from the
outside. The floor should be slightly inclined, and the cover
equipped with a small sieve. Thus all fluids will flow to the
middle, the thin fluids will exit during the operation, and the
thicker fluids can be hosed out afterwards.”
—Excerpt from a memo written by the engineer Willy Just
Ethics in Practice
The Case of Willy Just
• Can you conceive of an ethical defence that could be
offered on behalf of Willy Just?
• Could his conduct be morally justified by any of the
ethical frameworks we employed in the trolley car
cases?
• Virtue Ethics?
• Deontology?
• Utilitarianism?
Ethics in Practice
The seemingly significant nature of roles
Our reasons for acting in a given way can conflict, depending
on their source
e.g. professional, moral, prudential, legal,…
We each occupy a number of roles, which bring with them
specific rights, duties, etc.
• In taking on certain roles, we also take on certain
obligations
Ethics in Practice
The Ford Pinto
A subcompact car released in the US in 1970
Design Aims
• Produce a small and light car (under 900 kilos)
• Low cost (under $2,000)
• Competitive in terms of fuel consumption, service costs, and
reliability
• Took 25 months from drawing board to on-road (43 months
was the norm at the time)
Ethics in Practice
The Ford Pinto
In the ‘70s, over 2 million Pintos were sold in the U.S. alone
But, it had one fundamental design issue:
When rear-ended at over 20 mph (32 km/h), the gas tank –
which was located behind the rear axle – was liable to rupture,
often resulting in an explosion.
And, as the doors would crumple, passengers would very likely
be trapped inside.
Ethics in Practice
The Ford Pinto
Ford engineers proposed a number of possible solutions
1. A shock-absorbent plastic barrier ($4)
2. A nylon bladder inside the tank ($5-$8)
3. Placing the tank above the rear axle
All were successfully tested, but none were implemented
Ethics in Practice
The Ford Pinto
Can you conceive of an ethical defence that could be offered
on behalf of the Ford engineers?
What could the Ford engineers have done, when management
refused to improve the safety of the Pinto?
What were the moral responsibilities of the engineers? What
was their duty of care?
Ethics in Practice
The Challenger Space Shuttle
Ethics in Practice
The Challenger Space Shuttle
Background concerns
• Loss of public interest in the U.S. Space Program
• Political pressures
• Industrial consideration
• Launch previously delayed
Conditions on day of launch (28 Jan. ‘86)
Autonomous Vehicles Need Experimental EthicsAre We Ready .docx
Autonomous Vehicles Need Experimental EthicsAre We Ready .docx
Autonomous Vehicles Need Experimental EthicsAre We Ready .docx
Autonomous Vehicles Need Experimental EthicsAre We Ready .docx
Autonomous Vehicles Need Experimental EthicsAre We Ready .docx
Autonomous Vehicles Need Experimental EthicsAre We Ready .docx
Autonomous Vehicles Need Experimental EthicsAre We Ready .docx
Autonomous Vehicles Need Experimental EthicsAre We Ready .docx
Autonomous Vehicles Need Experimental EthicsAre We Ready .docx
Autonomous Vehicles Need Experimental EthicsAre We Ready .docx
Autonomous Vehicles Need Experimental EthicsAre We Ready .docx
Autonomous Vehicles Need Experimental EthicsAre We Ready .docx
Autonomous Vehicles Need Experimental EthicsAre We Ready .docx
Autonomous Vehicles Need Experimental EthicsAre We Ready .docx

More Related Content

Similar to Autonomous Vehicles Need Experimental EthicsAre We Ready .docx

The Urgency of Now
The Urgency of NowThe Urgency of Now
The Urgency of NowGlobal NCAP
 
Review: Development and trends in vehicle safety automation
Review: Development and trends in vehicle safety automationReview: Development and trends in vehicle safety automation
Review: Development and trends in vehicle safety automationDominic Portain
 
The Most Memorable College Admissions Essays Reddit Has Ever Seen Will
The Most Memorable College Admissions Essays Reddit Has Ever Seen WillThe Most Memorable College Admissions Essays Reddit Has Ever Seen Will
The Most Memorable College Admissions Essays Reddit Has Ever Seen WillDawn Rodriguez
 
Article SoftBank and Toyota Want Driverless Cars to Change the W.docx
Article SoftBank and Toyota Want Driverless Cars to Change the W.docxArticle SoftBank and Toyota Want Driverless Cars to Change the W.docx
Article SoftBank and Toyota Want Driverless Cars to Change the W.docxrossskuddershamus
 
Ten ways autonomous driving could redefine the automotive world
Ten ways autonomous driving could redefine the automotive worldTen ways autonomous driving could redefine the automotive world
Ten ways autonomous driving could redefine the automotive worldStradablog
 
AUTONOMOUS VEHICLES - THE RIGHT SOLUTIONS TO AVERT THE CRISIS OF MOTOR VEHIC...
 AUTONOMOUS VEHICLES - THE RIGHT SOLUTIONS TO AVERT THE CRISIS OF MOTOR VEHIC... AUTONOMOUS VEHICLES - THE RIGHT SOLUTIONS TO AVERT THE CRISIS OF MOTOR VEHIC...
AUTONOMOUS VEHICLES - THE RIGHT SOLUTIONS TO AVERT THE CRISIS OF MOTOR VEHIC...wle-ss
 
The Legal Implications of Autonomous Vehicles
The Legal Implications of Autonomous VehiclesThe Legal Implications of Autonomous Vehicles
The Legal Implications of Autonomous VehiclesDaniel Thompson
 
Policy Considerations for Autonomous Vehicles in Disasters
Policy Considerations for Autonomous Vehicles in DisastersPolicy Considerations for Autonomous Vehicles in Disasters
Policy Considerations for Autonomous Vehicles in DisastersMary Jo Flynn, MS, CEM
 
Driverless Cars: Time for Insurers to Shift Gears
Driverless Cars: Time for Insurers to Shift GearsDriverless Cars: Time for Insurers to Shift Gears
Driverless Cars: Time for Insurers to Shift GearsCognizant
 
DRIVING THE FUTURE: HOW AUTONOMOUS VEHICLES WILL CHANGE INDUSTRIES AND STRATEGY
DRIVING THE FUTURE: HOW AUTONOMOUS VEHICLES WILL CHANGE INDUSTRIES AND STRATEGYDRIVING THE FUTURE: HOW AUTONOMOUS VEHICLES WILL CHANGE INDUSTRIES AND STRATEGY
DRIVING THE FUTURE: HOW AUTONOMOUS VEHICLES WILL CHANGE INDUSTRIES AND STRATEGYNauman Hussaini
 
An efficient automotive collision avoidance system for indian traffic conditions
An efficient automotive collision avoidance system for indian traffic conditionsAn efficient automotive collision avoidance system for indian traffic conditions
An efficient automotive collision avoidance system for indian traffic conditionseSAT Journals
 
Driverless Cars
Driverless CarsDriverless Cars
Driverless Carsdani007007
 
Vehicle Size, Weight, and Injury Risk: High-Dimensional Modeling and
 Causal ...
Vehicle Size, Weight, and Injury Risk: High-Dimensional Modeling and
 Causal ...Vehicle Size, Weight, and Injury Risk: High-Dimensional Modeling and
 Causal ...
Vehicle Size, Weight, and Injury Risk: High-Dimensional Modeling and
 Causal ...Bayesia USA
 
Kindergarten Writing Paper With Picture Box
Kindergarten Writing Paper With Picture BoxKindergarten Writing Paper With Picture Box
Kindergarten Writing Paper With Picture BoxCarmen Sanborn
 
A Critical Examination of MAV Safety Standards by Patricia Kierans
A Critical Examination of MAV Safety Standards by Patricia KieransA Critical Examination of MAV Safety Standards by Patricia Kierans
A Critical Examination of MAV Safety Standards by Patricia KieransPatricia Kierans
 
Berkshire Hathaway INC
Berkshire Hathaway INCBerkshire Hathaway INC
Berkshire Hathaway INCMatthew Nash
 
12 part framework to structure safety assessment for autonomous driving
12 part framework to structure safety assessment for autonomous driving12 part framework to structure safety assessment for autonomous driving
12 part framework to structure safety assessment for autonomous drivingZiaullah Mirza
 

Similar to Autonomous Vehicles Need Experimental EthicsAre We Ready .docx (19)

The Urgency of Now
The Urgency of NowThe Urgency of Now
The Urgency of Now
 
Review: Development and trends in vehicle safety automation
Review: Development and trends in vehicle safety automationReview: Development and trends in vehicle safety automation
Review: Development and trends in vehicle safety automation
 
The Most Memorable College Admissions Essays Reddit Has Ever Seen Will
The Most Memorable College Admissions Essays Reddit Has Ever Seen WillThe Most Memorable College Admissions Essays Reddit Has Ever Seen Will
The Most Memorable College Admissions Essays Reddit Has Ever Seen Will
 
Article SoftBank and Toyota Want Driverless Cars to Change the W.docx
Article SoftBank and Toyota Want Driverless Cars to Change the W.docxArticle SoftBank and Toyota Want Driverless Cars to Change the W.docx
Article SoftBank and Toyota Want Driverless Cars to Change the W.docx
 
Ten ways autonomous driving could redefine the automotive world
Ten ways autonomous driving could redefine the automotive worldTen ways autonomous driving could redefine the automotive world
Ten ways autonomous driving could redefine the automotive world
 
AUTONOMOUS VEHICLES - THE RIGHT SOLUTIONS TO AVERT THE CRISIS OF MOTOR VEHIC...
 AUTONOMOUS VEHICLES - THE RIGHT SOLUTIONS TO AVERT THE CRISIS OF MOTOR VEHIC... AUTONOMOUS VEHICLES - THE RIGHT SOLUTIONS TO AVERT THE CRISIS OF MOTOR VEHIC...
AUTONOMOUS VEHICLES - THE RIGHT SOLUTIONS TO AVERT THE CRISIS OF MOTOR VEHIC...
 
Autonomously Driven to Distraction?
Autonomously Driven to Distraction?Autonomously Driven to Distraction?
Autonomously Driven to Distraction?
 
The Legal Implications of Autonomous Vehicles
The Legal Implications of Autonomous VehiclesThe Legal Implications of Autonomous Vehicles
The Legal Implications of Autonomous Vehicles
 
Policy Considerations for Autonomous Vehicles in Disasters
Policy Considerations for Autonomous Vehicles in DisastersPolicy Considerations for Autonomous Vehicles in Disasters
Policy Considerations for Autonomous Vehicles in Disasters
 
Driverless Cars: Time for Insurers to Shift Gears
Driverless Cars: Time for Insurers to Shift GearsDriverless Cars: Time for Insurers to Shift Gears
Driverless Cars: Time for Insurers to Shift Gears
 
DRIVING THE FUTURE: HOW AUTONOMOUS VEHICLES WILL CHANGE INDUSTRIES AND STRATEGY
DRIVING THE FUTURE: HOW AUTONOMOUS VEHICLES WILL CHANGE INDUSTRIES AND STRATEGYDRIVING THE FUTURE: HOW AUTONOMOUS VEHICLES WILL CHANGE INDUSTRIES AND STRATEGY
DRIVING THE FUTURE: HOW AUTONOMOUS VEHICLES WILL CHANGE INDUSTRIES AND STRATEGY
 
An efficient automotive collision avoidance system for indian traffic conditions
An efficient automotive collision avoidance system for indian traffic conditionsAn efficient automotive collision avoidance system for indian traffic conditions
An efficient automotive collision avoidance system for indian traffic conditions
 
Driverless Cars
Driverless CarsDriverless Cars
Driverless Cars
 
sr5108
sr5108sr5108
sr5108
 
Vehicle Size, Weight, and Injury Risk: High-Dimensional Modeling and
 Causal ...
Vehicle Size, Weight, and Injury Risk: High-Dimensional Modeling and
 Causal ...Vehicle Size, Weight, and Injury Risk: High-Dimensional Modeling and
 Causal ...
Vehicle Size, Weight, and Injury Risk: High-Dimensional Modeling and
 Causal ...
 
Kindergarten Writing Paper With Picture Box
Kindergarten Writing Paper With Picture BoxKindergarten Writing Paper With Picture Box
Kindergarten Writing Paper With Picture Box
 
A Critical Examination of MAV Safety Standards by Patricia Kierans
A Critical Examination of MAV Safety Standards by Patricia KieransA Critical Examination of MAV Safety Standards by Patricia Kierans
A Critical Examination of MAV Safety Standards by Patricia Kierans
 
Berkshire Hathaway INC
Berkshire Hathaway INCBerkshire Hathaway INC
Berkshire Hathaway INC
 
12 part framework to structure safety assessment for autonomous driving
12 part framework to structure safety assessment for autonomous driving12 part framework to structure safety assessment for autonomous driving
12 part framework to structure safety assessment for autonomous driving
 

More from celenarouzie

Attaining ExpertiseYou are tr.docx
Attaining ExpertiseYou are tr.docxAttaining ExpertiseYou are tr.docx
Attaining ExpertiseYou are tr.docxcelenarouzie
 
attachment Chloe” is a example of the whole packet. Please follow t.docx
attachment Chloe” is a example of the whole packet. Please follow t.docxattachment Chloe” is a example of the whole packet. Please follow t.docx
attachment Chloe” is a example of the whole packet. Please follow t.docxcelenarouzie
 
AttachmentFor this discussionUse Ericksons theoretic.docx
AttachmentFor this discussionUse Ericksons theoretic.docxAttachmentFor this discussionUse Ericksons theoretic.docx
AttachmentFor this discussionUse Ericksons theoretic.docxcelenarouzie
 
Attachment Programs and Families Working Together Learn.docx
Attachment Programs and Families Working Together Learn.docxAttachment Programs and Families Working Together Learn.docx
Attachment Programs and Families Working Together Learn.docxcelenarouzie
 
Attachment and Emotional Development in InfancyThe purpose o.docx
Attachment and Emotional Development in InfancyThe purpose o.docxAttachment and Emotional Development in InfancyThe purpose o.docx
Attachment and Emotional Development in InfancyThe purpose o.docxcelenarouzie
 
ATTACHEMENT from 7.1 and 7.2 Go back to the Powerpoint for thi.docx
ATTACHEMENT from 7.1 and 7.2 Go back to the Powerpoint for thi.docxATTACHEMENT from 7.1 and 7.2 Go back to the Powerpoint for thi.docx
ATTACHEMENT from 7.1 and 7.2 Go back to the Powerpoint for thi.docxcelenarouzie
 
Attached the dataset Kaggle has hosted a data science competitio.docx
Attached the dataset Kaggle has hosted a data science competitio.docxAttached the dataset Kaggle has hosted a data science competitio.docx
Attached the dataset Kaggle has hosted a data science competitio.docxcelenarouzie
 
Attached you will find all of the questions.These are just like th.docx
Attached you will find all of the questions.These are just like th.docxAttached you will find all of the questions.These are just like th.docx
identified a single scientific article dealing with the choices an AV must make in situations of unavoidable harm.

On the legal side, since the control algorithm ultimately makes the decision to hit a pedestrian, a passer-by, or a wall, it is not obvious whether the passenger should be held legally accountable for the outcome. Some argue that liability must shift from the driver to the manufacturer, because failure to anticipate decisions like those in Fig. 1 may amount to negligence in design under Product Liability law [Villasenor, 2014]. In that case, car manufacturers may become the de facto insurers against accidents, or may rely on the expertise of existing motor insurers [Jain et al., 2015]. Today, facing regulation that lags behind technology [UK Department for Transport, 2015], manufacturers of semi-autonomous vehicles are considering ad hoc workarounds. For example, to minimize the manufacturer liability associated with its new 'automated overtaking' feature, Tesla Motors will require the driver to initiate the feature, thus ensuring that legal responsibility for the maneuver's consequences falls with the driver [Ramsey, 2015].
In this context, defining the algorithms that will guide AVs in situations of unavoidable harm is a formidable challenge, given that the decision to run over pedestrians, to swerve into a passer-by, or to self-destruct is a moral one [Marcus, 2012]. Indeed, in situations where harm cannot entirely be avoided, it must be distributed, and the distribution of harm is universally considered to fall within the moral domain [Haidt, 2012]. Accordingly, the control algorithms of AVs will need to embed moral principles guiding their decisions in situations of unavoidable harm [Wallach and Allen, 2008]. In this respect, manufacturers and regulators will need to accomplish three potentially incompatible objectives: being reasonably consistent, not causing public outrage, and not discouraging buyers.

Not discouraging buyers is a commercial necessity, but it is also in itself a moral imperative, given the social and safety benefits AVs provide over conventional cars. Meanwhile, avoiding public outrage, that is, adopting moral algorithms that align with human moral attitudes, is key to fostering public comfort with allowing the broad use of AVs in the first place. However, pursuing these two objectives simultaneously may lead to moral inconsistencies. Consider for example the case displayed in Fig. 1a, and assume that the most common moral attitude is that the car should swerve. This would fit a utilitarian moral doctrine [Rosen, 2005], according to which the moral course of action is to minimize the death toll. But consider then the case displayed in Fig. 1c. The utilitarian course of action, in that situation, would be for the car to swerve and kill its owner; but a driverless car programmed to follow this course of action might discourage buyers, who may consider that their own safety should trump other considerations. Even though such situations may be exceedingly rare, their emotional saliency is likely to give them broad public exposure and a disproportionate weight in individual and public decisions about AVs.

Figure 1: Three traffic situations involving imminent unavoidable harm. (a) The car can stay on course and kill several pedestrians, or swerve and kill one passer-by. (b) The car can stay on course and kill one pedestrian, or swerve and kill its passenger. (c) The car can stay on course and kill several pedestrians, or swerve and kill its passenger.
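To make the utilitarian doctrine concrete in algorithmic terms, the following minimal Python sketch (ours, not the authors'; the Maneuver and choose_utilitarian names are purely illustrative) selects whichever maneuver minimizes the expected death toll. This is exactly the policy that produces the self-sacrifice dilemma of Fig. 1c.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str                   # e.g. "stay" or "swerve"
    expected_fatalities: float  # expected death toll if this maneuver is executed

def choose_utilitarian(options):
    # A purely utilitarian controller minimizes the expected death toll,
    # even when the minimizing maneuver sacrifices the car's own passenger.
    return min(options, key=lambda m: m.expected_fatalities)

# Hypothetical example corresponding to Fig. 1c:
# staying kills several pedestrians, swerving kills the single passenger.
print(choose_utilitarian([Maneuver("stay", 3.0), Maneuver("swerve", 1.0)]).name)  # -> "swerve"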
We suggest that a way out of this conundrum is to adopt a data-driven approach, inspired by experimental ethics [Greene, 2014a], to identify moral algorithms that people are willing to accept as citizens and to be subjected to as car owners. Indeed, situations of unavoidable harm, as illustrated in Fig. 1, bear a striking resemblance to the flagship dilemmas of experimental ethics, that is, the so-called 'trolley problems'. In typical trolley problems, an agent has the opportunity to save several persons by sacrificing the life of a single individual. This decision is commonly labeled as utilitarian, since it amounts to accepting the infliction of harm in the name of the greater good.

Trolley problems inspired a huge number of experiments aimed at identifying both the antecedents of, and the cognitive processing leading to, a utilitarian decision [Greene, 2014c, Greene, 2014b]. The results of these experiments do not apply well to situations involving AVs, though. For example, some pressing questions about AVs do not make sense in typical trolley problems that involve a human decision-maker: Would people purchase AVs programmed to be utilitarian? What are their expectations about the way car manufacturers will program AVs? Other questions could have been addressed in a human context, but were not, because they only became salient in the context of AVs. For example, would people accept legal enforcement of utilitarian decisions, and more so for AVs than for humans? And finally, perhaps the most striking problem created by AVs, at least in the public opinion, is whether a car may decide to sacrifice its passenger in order to save several lives on the road. How do people balance their motivations for self-preservation as drivers with the protection of pedestrians and other drivers, when considering whether to own an AV, when thinking about how AVs will or should be programmed, and when pondering which programming might be legally enforced?
We believe that regulators and manufacturers will soon be in pressing need of answers to these questions, and that answers are most likely to come from surveys employing the protocols of experimental ethics. To be clear, we do not mean that these thorny ethical questions can be solved by polling the public and enforcing the majority opinion. Survey data will inform the construction and regulation of moral algorithms for AVs, not dictate them. The importance of survey data must not be underestimated, though, since algorithms that go against the moral expectations of citizens (or against the preferences of consumers) are likely to impair the smooth adoption of AVs.

Having argued that autonomous vehicles need experimental ethics, we illustrate this approach in the rest of this article by reporting a series of three surveys built upon the protocols commonly used to study trolley problems, which we adapted to the new context of traffic accidents involving AVs. In this first series of surveys, we focus on the question of whether respondents would approve of utilitarian AVs, programmed to minimize the death toll of an accident even when this requires sacrificing the car's own passenger.

2 Materials and Methods

We conducted three online surveys in June 2015. All studies were programmed on the Qualtrics survey software and recruited participants from the Mechanical Turk platform, for a compensation of 25 cents. In all studies, participants provided basic demographic information, such as age, sex, and religiosity. At the end of all studies, three 7-point scales measured the overall enthusiasm that participants felt for AVs: how excited they felt about a future in which autonomous self-driving cars are an everyday part of our motoring experience, how fearful they were of this future (reverse coded), and how likely they were to buy an autonomous vehicle, should they become commercially available by the time they would next purchase a new car. Cronbach's alpha for these three items was .82 in Study 1, .78 in Study 2, and .81 in Study 3. Regression analyses (shown in the Supplementary Information) showed that enthusiasm was significantly greater for male participants in all studies, and that enthusiasm was significantly greater for younger and less religious participants in two studies, although these two effects were smaller than that of sex. Given these results, all subsequent analyses controlled for sex, age, and religiosity.
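As an illustration of how such a composite enthusiasm index can be computed, here is a minimal Python sketch. It assumes the three items are stored as columns named excited, fearful, and buy_likelihood on 1-7 scales; the column and function names are ours, not the authors' analysis code.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # Classic formula: alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def enthusiasm_index(df: pd.DataFrame) -> pd.Series:
    # Reverse-code fearfulness on the 1-7 scale, then average the three items.
    items = pd.DataFrame({
        "excited": df["excited"],
        "fearful_reversed": 8 - df["fearful"],
        "buy_likelihood": df["buy_likelihood"],
    })
    print("Cronbach's alpha:", round(cronbach_alpha(items), 2))
    return items.mean(axis=1)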
2.1 Study 1

Participants (241 men and 161 women, median age = 30) read a vignette in which one or more pedestrians could be saved if a car were to swerve into a barrier, killing its passenger. Participants were randomly assigned to one group of a 2x2x2 between-participant design, which manipulated the number of pedestrians that could be saved (one vs ten), whether the driver or the self-driving car would make the decision to swerve, and whether participants were instructed to imagine themselves in the car or to imagine an anonymous character. The experiment was programmed so that the sex of this anonymous character always matched that of the participant (this information being collected early in the experiment). In addition to the vignette, participants were given a pictorial representation of the swerve and stay options. See the Supplementary Information for detailed examples of the vignettes and their pictorial representation.

2.2 Study 2

Participants (144 men and 166 women, median age = 31) were randomly assigned to one of four versions of a vignette, in which either 1 or 10 pedestrians could be saved if an autonomous, self-driving car were to swerve into either a barrier (killing its passenger) or another pedestrian (killing that pedestrian). Participants were told that three algorithms could be implemented in the car for such a contingency: always stay, always swerve, and random. Additionally, participants were given a pictorial representation of the swerve and stay options. Participants then answered three questions, introduced in random order for each participant: How would you rate the relative morality of these three algorithms? How would you rate your relative willingness to buy a car with one of these three algorithms? How would you rate your relative comfort with having each of the cars with one of these three algorithms on the roads in your part of the world? To provide their responses, participants assigned points to each of the three algorithms, taken from an overall budget of 100 points. See the Supplementary Information for detailed examples of the vignettes, their pictorial representation, and a screen capture of the budget-splitting procedure.
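For readers who want to see how such constant-sum allocations might be handled, here is a small sketch (ours, not the survey code) that checks each respondent's 100-point split and computes the mean share given to each algorithm; the column names stay_points, swerve_points, and random_points are assumptions.

import pandas as pd

def summarize_budget_split(df: pd.DataFrame) -> pd.Series:
    # Each respondent splits 100 points among the three algorithms
    # (always stay / always swerve / random): sanity-check the totals,
    # then report the mean number of points allocated to each.
    cols = ["stay_points", "swerve_points", "random_points"]
    assert (df[cols].sum(axis=1) == 100).all(), "each allocation should sum to 100"
    return df[cols].mean()

# Hypothetical usage:
# shares = summarize_budget_split(study2_responses)
# print(shares)  # mean points (out of 100) given to each algorithm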
2.3 Study 3

Participants (97 men and 104 women, median age = 30) read a vignette in which ten pedestrians could be saved if a car were to swerve into a barrier, killing its passenger. Participants were randomly assigned to one of two groups, manipulating whether participants were instructed to imagine themselves in the car or to imagine an anonymous character. The experiment was programmed so that the sex of this anonymous character always matched that of the participant (this information being collected early in the experiment). Thus, the experimental vignettes were similar to those used in Study 1 (restricted to situations involving ten pedestrians). Participants were asked whether the moral course of action was to swerve or stay on course (dichotomous response variable), and whether they expected future driverless cars to be programmed to swerve or to stay on course in such a situation (dichotomous response variable). Next, participants were asked to indicate whether it would be more appropriate to "protect the driver at all costs" or to "maximize the number of lives saved," on a continuous slider scale anchored at these two expressions.

3 Results

Results suggest that participants were generally comfortable with utilitarian AVs, programmed to minimize an accident's death toll. This is especially true of situations that do not involve the sacrifice of the AV's owner. For example, Fig. 2b (right) displays the responses of participants in Study 2, who expressed their willingness to buy (and their comfort with others buying) different types of AVs. An analysis of covariance showed that participants preferred AVs programmed to swerve into a passer-by when it could save 10 (but not one) pedestrians, F(1, 141) = 41.3, p < .001, whether they were thinking of buying the car themselves or having other people drive the car.

Critically important, however, are preferences and expectations about whether an AV should be programmed to save lives by sacrificing its owner. Our results provide information on (i) the morality of this self-sacrifice, (ii) the willingness to see this self-sacrifice being legally enforced, (iii) the expectations that AVs will be programmed to self-sacrifice, and (iv) the willingness to buy self-sacrificing cars.

First, participants in Study 1 approved of AVs making utilitarian self-sacrifices to the same extent as they approved of this decision for human drivers (Fig. 2a, left). This was consistent with the results of an analysis of covariance where morality was the dependent outcome, and the predictors were the number of lives that could be saved; whether the agent making the sacrifice decision was human or machine; whether participants pictured themselves or another person in the car; and all the 2-way and 3-way interaction terms. (As in all analyses, and as explained in the Materials and Methods section, age, sex, and religiosity were entered as covariates.) No effect was detected as significant but that of the number of lives that could be saved, F(1, 367) = 48.9, p < .001.
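The analyses of covariance reported here could be reproduced along the following lines. This is a sketch under assumed column names (morality, lives_saved, decision_maker, perspective, age, sex, religiosity), not the authors' actual code, and their exact modelling choices may differ.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_morality_ancova(study1: pd.DataFrame) -> pd.DataFrame:
    # ANCOVA-style model for the Study 1 morality ratings: the three manipulated
    # factors and their interactions as categorical predictors, with age, sex,
    # and religiosity entered as covariates.
    model = smf.ols(
        "morality ~ C(lives_saved) * C(decision_maker) * C(perspective)"
        " + age + C(sex) + religiosity",
        data=study1,
    ).fit()
    return sm.stats.anova_lm(model, typ=2)  # F test for each term

The same formula, with the legal-enforceability rating or the budget share as the dependent variable, covers the other analyses described in this section.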
Second, participants in Study 1 were more amenable to legally enforcing self-sacrifice for AVs than for human drivers (Fig. 2a, right). When the dependent variable in the analysis of covariance was the legal enforceability of self-sacrifice, the analysis detected a significant effect of the number of lives that could be saved, F(1, 367) = 7.3, p < .01, and an effect of whether the decision-maker was human or machine, F(1, 367) = 9.2, p < .01. Participants were more willing to consider legal enforcement of the sacrifice when it saved a greater number of lives, and when the decision was made by a self-driving car rather than by a human driver.

Third, whereas 3/4 of participants in Study 3 believed that an AV should self-sacrifice to save ten lives, less than 2/3 (a significantly lower proportion, McNemar's chi-square = 6.7, p < .01) believed that AVs would actually be programmed that way in the future. Similarly, whereas these participants believed that AVs should, generally speaking, place the greater good over the safety of the passenger, they were significantly less confident that manufacturers would program AVs with this goal, F(1, 179) = 18.9, p < .001. Thus, rather than being concerned about AVs being too utilitarian (as is often portrayed in science fiction), participants were generally wary that AVs would be programmed to protect their passengers at all costs.
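The McNemar test compares the two paired dichotomous answers given by the same participants ("should the car swerve?" vs "do you expect cars to be programmed to swerve?"). A minimal sketch, assuming the answers are available as boolean arrays (our own variable names, not the authors' code):

import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def mcnemar_should_vs_expect(should_swerve: np.ndarray, expect_swerve: np.ndarray):
    # Build the 2x2 table of paired responses; McNemar's test then compares
    # the two discordant cells. The chi-square version without continuity
    # correction is shown; the authors' exact settings are not specified.
    table = np.array([
        [np.sum(should_swerve & expect_swerve), np.sum(should_swerve & ~expect_swerve)],
        [np.sum(~should_swerve & expect_swerve), np.sum(~should_swerve & ~expect_swerve)],
    ])
    return mcnemar(table, exact=False, correction=False)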
Fourth, whereas participants in Study 2 generally supported others buying AVs programmed for utilitarian self-sacrifice, they were less willing to buy such AVs themselves (even when the sacrifice would save ten pedestrians). We ran an analysis of covariance of the budget share allocated to the AV programmed to self-sacrifice, in which the predictors were the number of lives that could be saved and whether participants were asked about their own willingness to buy such a car, or their comfort with seeing other people drive such a car. Participants allocated a larger budget share to the self-sacrificing AV when it could save 10 pedestrians, F(1, 132) = 7.6, p < .01, and when they were asked about other people traveling in the AV rather than themselves, F(1, 135) = 4.9, p = .03. When considering which AV they would buy, participants were split between AVs that would self-sacrifice, AVs that would not, and AVs that would randomly pick a course of action.

Figure 2: Selected experimental results. (a) People approve of autonomous vehicles (AVs) sacrificing their passenger to save pedestrians, and show greater willingness to see this action legally enforced when the decision is made by an AV than by a human driver. (b) People are willing to buy AVs programmed to swerve into a passer-by or into a wall in order to save pedestrians, although their utilitarianism is qualified by a self-preserving bias.

4 Discussion

Three surveys suggested that respondents might be prepared for autonomous vehicles programmed to make utilitarian moral decisions in situations of unavoidable harm. This was even true, to some extent, of situations in which the AV could sacrifice its owner in order to save the lives of other individuals on the road. Respondents praised the moral value of such a sacrifice the same, whether a human or a machine made the decision. Although they were generally unwilling to see self-sacrifices enforced by law, they were more prepared for such legal enforcement if it applied to AVs than if it applied to humans. Several reasons may underlie this effect: unlike humans, computers can be expected to dispassionately make utilitarian calculations in an instant; computers, unlike humans, can be expected to unerringly comply with the law, rendering moot the thorny issue of punishing non-compliers; and finally, a law requiring people to kill themselves would raise considerable ethical challenges.
Even in the absence of legal enforcement, most respondents agreed that AVs should be programmed for utilitarian self-sacrifice, and to pursue the greater good rather than protect their own passenger. However, they were not as confident that AVs would be programmed that way in reality, and for a good reason: they actually wished others to cruise in utilitarian AVs more than they wanted to buy utilitarian AVs themselves. What we observe here is the classic signature of a social dilemma: people mostly agree on what should be done for the greater good of everyone, but it is in everybody's self-interest not to do it themselves. This is both a challenge and an opportunity for manufacturers or regulatory agencies wishing to push for utilitarian AVs: even though self-interest may initially work against such AVs, social norms may soon be formed that strongly favor their adoption.

We emphasize that our results provide but a first foray into the thorny issues raised by moral algorithms for AVs. For example, our scenarios did not feature any uncertainty about decision outcomes, but future work will have to test scenarios that introduce the concepts of expected risk, expected value, and blame assignment. Is it acceptable for an AV to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the car than for the rider of the motorcycle? Should different decisions be made when children are on board, since they both have a longer time ahead of them than adults, and had less agency in being in the car in the first place? If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm's decisions?

Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today [Deng, 2015]. As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent. Our data-driven approach highlights how the field of experimental ethics can give us key insights into the moral and legal standards that people expect from autonomous driving algorithms. When it comes to split-second moral judgments, people may very well expect more from machines than they do from each other.
References

[Deng, 2015] Deng, B. (2015). Machine ethics: The robot's dilemma. Nature, 523:24-26.
[Gao et al., 2014] Gao, P., Hensley, R., and Zielke, A. (2014). A road map to the future for the auto industry.
[Greene, 2014a] Greene, J. D. (2014a). Beyond point-and-shoot morality: Why cognitive (neuro)science matters for ethics. Ethics, 124:695-726.
[Greene, 2014b] Greene, J. D. (2014b). The cognitive neuroscience of moral judgment and decision making. In Gazzaniga, M. S., editor, The Cognitive Neurosciences V, pages 1013-1023. MIT Press.
[Greene, 2014c] Greene, J. D. (2014c). Moral tribes: Emotion, reason and the gap between us and them. Atlantic Books.
[Haidt, 2012] Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. Pantheon Books.
[Jain et al., 2015] Jain, N., O'Reilly, J., and Silk, N. (2015). Driverless cars: Insurers cannot be asleep at the wheel.
[Marcus, 2012] Marcus, G. (2012). Moral machines.
[Montemerlo, B. et al, 2008] Montemerlo, B. et al (2008). Junior: The Stanford entry in the Urban Challenge. J. Field Robotics, 25:569-597.
[Ramsey, 2015] Ramsey, M. (2015). Who's responsible when a driverless car crashes? Tesla's got an idea.
[Rosen, 2005] Rosen, F. (2005). Classical utilitarianism from Hume to Mill. Routledge.
[Spieser, K. et al, 2014] Spieser, K. et al (2014). Toward a systematic approach to the design and evaluation of automated mobility-on-demand systems: A case study in Singapore. In Meyer, G. and Beiker, S., editors, Road Vehicle Automation, pages 229-245. Springer.
[UK Department for Transport, 2015] UK Department for Transport (2015). The pathway to driverless cars.
[Urmson, C. et al, 2008] Urmson, C. et al (2008). Autonomous driving in urban environments: Boss and the Urban Challenge. J. Field Robotics, 25:425-466.
[Van Arem et al., 2006] Van Arem, B., Van Driel, C. J., and Visser, R. (2006). The impact of cooperative adaptive cruise control on traffic-flow characteristics. IEEE Transactions on Intelligent Transportation Systems, 7:429-436.
[Villasenor, 2014] Villasenor, J. (2014). Products liability and driverless cars: Issues and guiding principles for legislation.
[Waldrop, 2015] Waldrop, M. M. (2015). Autonomous vehicles: No drivers required. Nature, 518:20-23.
[Wallach and Allen, 2008] Wallach, W. and Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford University Press.
A Experimental Methods

The article is based on three online surveys conducted in June 2015. All studies were programmed on Qualtrics and recruited participants from the Mechanical Turk platform, for a compensation of 25 cents. Study 1 involved 402 participants (161 women, median age = 30); Study 2 involved a different sample of 310 participants (166 women, median age = 31); and Study 3 involved yet another sample of 201 participants (104 women, median age = 30). We did not exclude any participant from the analyses, but because some participants failed to answer some questions, degrees of freedom show small variations between analyses. Participants provided basic demographic information, such as age, sex, ethnicity, political ideology, and religiosity (the latter being measured on a continuous slider from not at all religious to extremely religious). At the end of all studies, three questions measured the overall enthusiasm that participants felt for driverless cars: how excited they felt about a future in which autonomous self-driving cars are an everyday part of our motoring experience (7-point scale from not at all to very much), how fearful they were of this future (same scale, reverse coded), and how likely they were to buy an autonomous vehicle, should they become commercially available by the time they would next purchase a new car (7-point scale from not at all likely to extremely likely).

A.1 Study 1

Participants read a vignette in which one or more pedestrians could be saved if a car were to swerve into a barrier, killing its passenger. Participants were randomly assigned to one group of a 2x2x2 between-participant design, which manipulated the number of pedestrians that could be saved (one vs ten), whether the driver or the self-driving car would make the decision to swerve, and whether participants were instructed to imagine themselves in the car or to imagine an anonymous character. The experiment was programmed so that the sex of this anonymous character always matched that of the participant (this information being collected early in the experiment). Here is an example of the vignette (one pedestrian, participant in the car, driver makes the decision):

You are the sole passenger driving at the speed limit down a main road. Suddenly, a pedestrian appears ahead, in the direct path of the car. You have the option to: swerve off to the side of road, where you will impact a barrier, killing you but leaving the pedestrian unharmed, or stay on your current path, where you will kill the pedestrian, but you will be unharmed.

And here is the mirror example, as it would appear to a male participant (i.e., the anonymous individual in the car is male):

A man is the sole passenger in an autonomous self-driving vehicle traveling at the speed limit down a main road. Suddenly, 10 people appear ahead, in the direct path of the car. The car could be programmed to: swerve off to the side of road, where it will impact a barrier, killing the passenger but leaving the ten pedestrians unharmed, or stay on its current path, where it will kill the 10 pedestrians, but the passenger will be unharmed.

In addition to the vignette, participants were given a pictorial representation of the swerve and stay options. Figure 3 (top) displays the images that were shown to participants, in the conditions where one or ten pedestrians could be saved.

Figure 3: Pictorial representation of the swerve and stay options in Study 1 (top) and Study 2 (all).

Figure 4: Main measures of Study 1 (moral approval and legal enforceability of sacrifice).

In Study 1, we recorded the perceived morality of the swerve option (i.e., the utilitarian sacrifice) as well as its legal enforceability, that is, the extent to which participants found it acceptable to have a law that would require cars/drivers to swerve. The sliders that were used for these two measures are shown in Figure 4. Responses were reverse coded so that higher scores would reflect higher moral approval, and higher legal enforceability of the swerve option.
A.2 Study 2

Participants were randomly assigned to one of four versions of a vignette, in which either 1 or 10 pedestrians could be saved if an autonomous, self-driving car were to swerve into either a barrier (killing its passenger) or another pedestrian (killing that pedestrian). Participants were told that three algorithms could be implemented in the car for such a contingency: always stay, always swerve, and random. Here is an example of the vignette (one pedestrian, swerve into barrier):

You are the sole passenger riding in an autonomous, self-driving car that is traveling at high speed down a main road. Suddenly, one pedestrian appears ahead, in the direct path of the car. In preparing the car for such an eventuality, the computer engineers can program the car for three options:
always stay. In such a situation, the car would continue on its path, kill the pedestrian on the main road, but you as the passenger will be unharmed.
always swerve. In such a situation, the car would swerve quickly, diverting the car onto the side road where it will kill you as the passenger, but the pedestrian on the main road will be unharmed.
random. In such a situation, the car would be programmed to randomly choose to either stay or swerve.

And here is the mirror example (ten pedestrians, swerve into another pedestrian):

You are the sole passenger riding in an autonomous, self-driving car that is traveling at high speed down a main road. Suddenly, ten pedestrians appear ahead, in the direct path of the car. The car could however swerve to the right where it would kill one pedestrian on the side of the road. In preparing the car for such an eventuality, the computer engineers can program the car for three options:
always stay. In such a situation, the car would continue on its path, kill the ten pedestrians on the main road, but the pedestrian on the side of the road would be unharmed.
always swerve. In such a situation, the car would swerve quickly, diverting the car onto the side road where it will kill the one pedestrian on the side of the road, but the pedestrian on the main road will be unharmed.
random. In such a situation, the car would be programmed to randomly choose to either stay or swerve.

Additionally, participants were given a pictorial representation of the swerve and stay options (see Figure 3). Participants then answered three questions, introduced in random order for each participant: How would you rate the relative morality of these three algorithms? How would you rate your relative willingness to buy a car with one of these three algorithms? How would you rate your relative comfort with having each of the cars with one of these three algorithms on the roads in your part of the world? To provide their responses, participants assigned points to each of the three algorithms, taken from an overall budget of 100 points (Figure 5).

Figure 5: Illustration of the budget-splitting procedure used in Study 2, here applied to the willingness to buy self-driving cars programmed to always stay, always swerve, or random.
A.3 Study 3

Participants read a vignette in which ten pedestrians could be saved if a car were to swerve into a barrier, killing its passenger. Participants were randomly assigned to one of two groups, manipulating whether participants were instructed to imagine themselves in the car or to imagine an anonymous character. The experiment was programmed so that the sex of this anonymous character always matched that of the participant (this information being collected early in the experiment). Thus, the experimental vignettes were similar to those used in Study 1 (restricted to situations involving ten pedestrians). Participants were asked whether the moral course of action was to swerve or stay on course (dichotomous response variable), and whether they expected future driverless cars to be programmed to swerve or to stay on course in such a situation (dichotomous response variable). Next, participants were asked to indicate whether it would be more appropriate to protect the driver at all costs or to maximize the number of lives saved, on a continuous slider scale anchored at these two expressions.

B Experimental Results

In all studies, we computed an index of enthusiasm about driverless cars by averaging the excitement participants felt about a future with driverless cars, their fearfulness about this future (reverse coded), and the likelihood that they might buy a driverless car next, if such cars were available. Cronbach's alpha for these three items was .82 in Study 1, .78 in Study 2, and .81 in Study 3. Regression analyses showed that enthusiasm was significantly greater for male participants in all studies, and that enthusiasm was significantly greater for younger and less religious participants in two studies (see Table 1), although these two effects were much smaller than that of sex. Given these results, all subsequent analyses controlled for sex, age, and religiosity.

B.1 Study 1

To analyze participants' ratings of the morality of the sacrifice (swerve), we ran an analysis of covariance where morality was the dependent outcome. The predictors were the number of lives that could be saved; whether the agent making the sacrifice decision was human or machine; whether participants pictured themselves or another person in the car; and all the 2-way and 3-way interaction terms. As in all analyses, age, sex, and religiosity were entered as covariates. No effect was detected as significant but that of the number of lives that could be saved, F(1, 367) = 48.9, p < .001.

We ran a similar analysis in which the morality of the sacrifice was replaced as the dependent outcome with the legal enforceability of swerve. The analysis detected a significant effect of the number of lives that could be saved, F(1, 367) = 7.3, p < .01, and an effect of whether the decision-maker was human or machine, F(1, 367) = 9.2, p < .01. Participants were more willing to consider legal enforcement
of the sacrifice when the decision was made by a self-driving car, rather than by a human driver. This was mostly true, though, of situations in which they did not think of themselves in the car, as shown by analyzing separately the data provided by participants picturing another person in the car and the data provided by participants picturing themselves in the car. In the first dataset, whether the decision maker was human or machine had a large impact, F(1, 181) = 12.0, p < .001; in the second dataset, this predictor had a nonsignificant impact, F < 1, p = .37.

B.2 Study 2

Our primary interest in Study 2 was participants' willingness to buy (or to have others buy) cars programmed with an always swerve algorithm. We first analyzed the data provided by participants who faced a situation similar to that in Study 1, that is, a dilemma between crashing the car into a barrier (killing its passenger) or staying on course (killing 1 or 10 pedestrians). We ran an analysis of covariance of the budget share allocated to the always swerve car, in which the predictors were the number of lives that could be saved and whether participants were asked about their own willingness to buy such a car, or their comfort with seeing other people drive such a car. Participants allocated a larger budget share to always swerve when it could save 10 pedestrians, F(1, 132) = 7.6, p < .01, and when they were asked about other people driving the car, rather than buying the car themselves, F(1, 135) = 4.9, p = .03.

In the new situation, in which the dilemma was between swerving into a pedestrian or staying on course and killing 1 or 10 pedestrians, participants' responses followed a purely utilitarian logic. The analysis of covariance showed that they allocated a significantly larger budget share to always swerve when it could save 10 pedestrians, F(1, 141) = 41.3, p < .001, whether they were thinking of buying the car themselves or having other people drive the car.
  • 35. car themselves, F(1, 135) = 4.9,p = .03. In the new situation in which the dilemma was between between swerving into a pedestrian, or staying on course and killing 1 or 10 pedestrians, participants’ responses followed a purely utilitar- ian logic. The analysis of covariance showed that they allocated a significantly larger budget share to always swerve when it could save 10 pedes- trians, F(1, 141) = 41.3, p < .001, whether they were thinking of buying the car themselves or having other people drive the car. 14 Study 1 (N = 402) Study 2 (N = 302) Study 3 (N = 201) β t β t β t Sex (Women) −0.96 −5.59∗ ∗ ∗ −0.88 −4.74∗ ∗ ∗ −0.71 −2.77∗ ∗ Age −0.02 −2.79∗ ∗ −0.02 −2.57∗ −0.02 −1.60 Religiosity −0.01 −2.45∗ −0.01 −2.15∗ −0.01 −0.99 Table 1: Impact of demographic variables on general enthusiasm for self-driving cars. B.3 Study 3 Logistic regressions for the two dichotomous re- sponse variables (including the control variables sex, age and religiosity) did not detect any signif- icant effect of the experimental condition (imag- ine yourself vs imagine another individual in the car). Though this may appear inconsistent with
the results from Study 2, recall that here we are asking about moral appropriateness, whereas there we were asking about willingness to buy. Overall, 75% of participants deemed it more moral to swerve, but only 65% predicted that driverless cars would be programmed to swerve. The difference between these two proportions was statistically significant, McNemar's chi-square(1) = 6.7, p < .01.

On a scale from -50 (protect the driver at all costs) to +50 (maximize the number of lives saved), the average response was +24 (SD = 30) to the question of what was morally appropriate, but it was only +8 (SD = 35) to the question of how future driverless cars would be programmed. The difference between these two responses was significant, as shown by a repeated-measures analysis of covariance which included the experimental condition as a between-group predictor, and sex, age, and religiosity as control variables, F(1, 179) = 18.9, p < .001. The only other significant effect was that of the age covariate, F(1, 176) = 7.8, p < .01.
4 Why Ethics Matters for Autonomous Cars

P. Lin
California Polytechnic State University, Philosophy Department, USA

Content
4.1 Why ethics matters
4.1.1 Beyond crash-avoidance
4.1.2 Crash-optimization means targeting
4.1.3 Beyond harm
4.2 Scenarios that implicate ethics
4.2.1 The deer
4.2.2 Self-sacrifice
4.2.3 Ducking harm
4.2.4 Trolley problems
4.3 Next steps
4.3.1 Broader ethical issues
4.3.2 Conclusions
References
If motor vehicles are to be truly autonomous and able to operate responsibly on our roads, they will need to replicate – or do better than – the human decision-making process. But some decisions are more than just a mechanical application of traffic laws and plotting a safe path. They seem to require a sense of ethics, and this is a notoriously difficult capability to reduce into algorithms for a computer to follow. This chapter will explain why ethics matters for autonomous road vehicles, looking at the most urgent area of their programming. Nearly all of this work is still in front of the industry, which is to say that I will mainly raise the questions here and not presume to have any definitive answers at such an early stage of the technology.

M. Maurer et al. (Hrsg.), Autonomes Fahren, DOI 10.1007/978-3-662-45854-9_4, © The Editors and the Authors 2015.

A brief note about terminology: I will use "autonomous", "self driving", "driverless", and "robot" interchangeably. These refer primarily to future vehicles that may have the ability to operate without human intervention for extended periods of time and to perform a broad range of actions. I will
  • 39. also use “cars” to refer loosely to all motor vehicles, from a motorcycle to a freight truck; those distinctions do not matter for the discussion here. 4.1 Why ethics matters To start, let me offer a simple scenario that illustrates the need for ethics in autonomous cars. Imagine in some distant future, your autonomous car encounters this terrible choice: it must either swerve left and strike an eight-year old girl, or swerve right and strike an 80-year old grandmother [33]. Given the car’s velocity, either victim would surely be killed on impact. If you do not swerve, both victims will be struck and killed; so there is good reason to think that you ought to swerve one way or another. But what would be the ethi- cally correct decision? If you were programming the self- driving car, how would you instruct it to behave if it ever encountered such a case, as rare as it may be? Striking the grandmother could be the lesser evil, at least to some eyes. The thinking is that the girl still has her entire life in front of her – a first love, a family of her own, a career, and other adventures and happiness – while the grandmother has already had a full life and her fair share of experiences. Further, the little girl is a moral innocent, more so than just about any adult. We might agree that the grandmother has a right to life and as valuable a life as the little girl’s; but nevertheless, there are reasons that seem to weigh in favor of saving the little girl over the grandmother, if an accident is
  • 40. unavoidable. Even the grandmother may insist on her own sacrifice, if she were given the chance to choose. But either choice is ethically incorrect, at least according to the relevant professional codes of ethics. Among its many pledges, the Institute of Electrical and Electronics Engineers (IEEE), for instance, commits itself and its 430,000+ members “to treat fairly all persons and to not engage in acts of discrimination based on race, religion, gender, disability, age, national origin, sexual orientation, gender identity, or gender expression” [23]. Therefore, to treat individuals differently on the basis of their age, when age is not a relevant factor, seems to be exactly the kind of discrimination the IEEE prohibits [18, 33]. Age does not appear to be a relevant factor in our scenario as it might be in, say, casting a young actor to play a child’s character in a movie. In that movie scenario, it would be appropriate to reject adult actors for the role. Anyway, a reason to discriminate does not necessarily justify that discrimination, since some reasons may be illegitimate. Even if we point to the disparity of life experiences between the old and the young, that difference isn’t automatically an appropriate basis for different treatment.
  • 41. Discriminating on the basis of age in our crash scenario would seem to be the same evil as discriminating on the basis of race, religion, gender, disability, national origin, and so on, even if we can invent reasons to prefer one such group over another. In Germany – home to many influential automotive companies that are working to develop self-driving technologies – the right to life and human dignity is basic and set forth in the first two articles of the very first chapter of the nation’s constitution [9]. So it is difficult to see how German law could even allow a company to create a product that is capable of making such a horrific and apparently illegal choice. The United States similarly strives to offer equal protection to all persons, as stipulated in the fourteenth amendment of its constitution. If we cannot ethically choose a path forward, then what ought to be done? One solution is to refuse to make a swerve decision, allowing both victims to be struck; but this seems much worse than having only one victim die, even if we are prejudiced against her. Anyway, we can force a decision by modifying the scenario: assume that 10 or 100 other pedestrians would die if the car continued forward, and swerving would again result in only a single death. Another solution could be to arbitrarily and unpredictably choose a path, without prejudice to either person [34]. But this too seems ethically troubling, in that we are choos-
  • 42. ing between lives without any deliberation at all – to leave it to chance, when there are potentially some reasons to prefer one over the other, as distasteful and uncomfortable as those reasons may be. This is a dilemma that is not easily solvable and therefore points to a need for ethics in developing autonomous cars. 4.1.1 Beyond crash-avoidance Many readers may object right away that the dilemma above (and others that follow) will never occur with autonomous cars. It may be suggested that future cars need not confront hard ethical choices, that simply stopping the car or handing control back to the human operator is the easy path around ethics. But I will contend here that braking and relinquish- ing control will not always be enough. Those solutions may be the best we have today, but if automated cars are to ever operate more broadly outside of limited highway environ- ments, they will need more response-options. Current research already makes this case as a matter of physics [12, 13], but we can also make a case from commonsense. Many ordinary scenarios exist today in which braking is not the best or safest move, whether by human or self-driving car. A wet road or a tailgater, for instance, may make it dangerous to slam the brakes, as opposed to some other action such as steering around the obstacle or simply through it, if it is a small object. Today, the most advanced self-driving cars cannot detect small objects such as squirrels [7]; therefore,
  • 43. they presumably cannot also detect squirrel-sized rocks, potholes, kittens, and other small but consequential hazards that can cause equipment failure, such as tire blowouts or sensor errors, or deviations from a safe path. In these and many other cases, there may not be enough time to hand control back to the driver. Some simulation experiments suggest that human drivers need up to 40 seconds to regain situation awareness, depending on the distracting activity, e.g., reading or napping – far longer than the 1–2 seconds of reaction time required for typical accident scenarios [38, 18]. This means that the car must be responsible for making decisions when it is unreasonable to expect a timely transfer of control back to the human, and again braking might not be the most responsible action. One possible reply is that, while imperfect, braking could successfully avoid the majority of emergency situations a robot car may find itself in, even if it regrettably makes things worse in a small number of cases. The benefits far outweigh the risks, presumably, and the numbers speak for themselves. Or do they? I will discuss the dangers of morality by math throughout this chapter. Braking and other responses in the service of crash-avoidance won’t be enough, because
  • 44. crash-avoidance is not enough. Some accidents are unavoidable – such as when an animal or pedestrian darts out in front of your moving car – and therefore autonomous cars will need to engage in crash-optimization as well. Optimizing crashes means choosing the course of action that will likely lead to the least amount of harm, and this could mean a forced choice between two evils, for instance, choosing to strike either the eight-year-old girl or the 80-year-old grandmother in my first scenario above. 4.1.2 Crash-optimization means targeting There may be reasons, by the way, to prefer choosing to run over the eight-year-old girl that I have not yet mentioned. If the autonomous car were most interested in protecting its own occupants, then it would make sense to choose a collision with the lightest object possible (the girl). If the choice were between two vehicles, then the car should be programmed to prefer striking a lighter vehicle (such as a Mini Cooper or motorcycle) rather than a heavier one (such as a sports utility vehicle (SUV) or truck) in an adjacent lane [18, 34]. On the other hand, if the car were charged with protecting other drivers and pedestrians over its own occupants – not an unreasonable imperative – then it should be programmed to prefer a collision with the heavier vehicle rather than the lighter one. If vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications are rolled out (or V2X to refer
  • 45. to both), or if an autonomous car can identify the specific models of other cars on the road, then it seems to make sense to collide with a safer vehicle (such as a Volvo SUV that has a reputation for safety) over a car not known for crash-safety (such as a Ford Pinto that’s prone to exploding upon impact). This strategy may be both legally and ethically better than the previous one of jealously protecting the car’s own occupants. It could minimize lawsuits, because any injury to others would be less severe. Also, because the driver is the one who introduced the risk to society – operating an autonomous vehicle on public roads – the driver may be legally obligated, or at least morally obligated, to absorb the brunt of any harm, at least when squared off against pedestrians, bicycles, and perhaps lighter vehicles. The ethical point here, however, is that no matter which strategy is adopted by an original equipment manufacturer (OEM), i.e., auto manufacturer, programming a car to choose a collision with any particular kind of object over another very much resembles a targeting algorithm [33]. Somewhat related to the military sense of selecting targets, crash-optimization algorithms may involve the deliberate and systematic discrimination of, say, large vehicles or Volvos to collide into. The owners or operators of
  • 46. these targeted vehicles bear this burden through no fault of their own, other than perhaps that they care about safety or need an SUV to transport a large family. 4.1.3 Beyond harm The problem is starkly highlighted by the following scenario [15, 16, 17, 34]: Again, imagine that an autonomous car is facing an imminent crash, but it could select one of two targets in adjacent lanes to swerve into: either a motorcyclist who is wearing a helmet, or a motorcyclist who is not. It probably doesn’t matter much to the safety of the car itself or its occupants whether the motorcyclist is wearing a helmet; the impact of a helmet into a car window doesn’t introduce that much more risk that the autonomous car should want to avoid it over anything else. But it matters a lot to the motorcyclist whether s/he is wearing a helmet: the one without a helmet would probably not survive such a collision. Therefore, in this dreadful scenario, it seems reasonable to program a good autonomous car to swerve into the motorcyclist with the helmet. But how well are justice and public policy served by this crash-optimization design? Motorcyclists who wear helmets are essentially being penalized and discriminated against for their responsible decision to wear a helmet. This may encourage some motorcyclists not to wear helmets, in order to avoid targeting by autonomous cars. Likewise, in the previous scenario, sales may decline for automotive brands known for
  • 47. safety, such as Volvo and Mercedes Benz, insofar as customers want to avoid being the preferred targets of crash-optimization systems. Some readers may want to argue that the motorcyclist without a helmet ought to be targeted, for instance, because he has acted recklessly and therefore is more deserving of harm. Even if that’s the correct design, notice that we are again moving beyond harm in making crash-optimization decisions. We’re still talking about justice and other such ethical considerations, and that’s the point: it’s not just a numbers game. Programmers in such scenarios, as rare as they may be, would need to design cost-functions – algorithms that assign and calculate the expected costs of various possible options, selecting the one with the lowest costs – that potentially determine who gets to live and who gets to die. And this is fundamentally an ethics problem, one that demands much more care and transparency in reasoning than seems currently offered. Indeed, it is difficult to imagine a weightier and more profoundly serious decision a programmer would ever have to make. Yet, there is little discussion about this core issue to date.
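  A cost-function of the kind just described can be sketched in a few lines. The probabilities and severity weights below are invented for illustration only; the point is not that these numbers are right, but that someone must stipulate them, and whichever option then has the lowest expected cost is, in effect, the one being targeted.

```python
# Illustrative sketch of a crash-optimization cost-function.
# Every number here is a stipulated value judgment, not an engineering fact.

def expected_cost(option):
    """Expected harm of a collision option: sum of p(serious injury) * severity over affected parties."""
    return sum(p * severity for p, severity in option["harms"])

options = {
    # (probability of serious injury, severity weight) for each party involved
    "motorcyclist_with_helmet":    {"harms": [(0.3, 100), (0.05, 10)]},  # rider, car occupant
    "motorcyclist_without_helmet": {"harms": [(0.8, 100), (0.05, 10)]},
}

best = min(options, key=lambda name: expected_cost(options[name]))
for name, option in options.items():
    print(name, expected_cost(option))
print("lowest-cost (i.e., targeted) option:", best)
```

  Under these made-up numbers the helmeted rider comes out as the “cheaper” collision, which is exactly the justice problem raised above: a purely harm-minimizing rule penalizes the more responsible party.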
  • 48. 4.2 Scenarios that implicate ethics In addition to the ones posited above, there are many actual and hypothetical scenarios that involve judgments about ethics. I will describe some here to show how ordinary assump- tions in ethics can be challenged. 4.2.1 The deer Though difficult to quantify due to inconsistent and under- reporting, experts estimate that more than a million car accidents per year in the US are caused by deer [6, 48]. Many, if not most, drivers have been startled by an unexpected animal on the road, a dangerous situation for both parties. Deconstructing a typical accident, or near-accident, involving an animal illustrates the complexity of the decisions facing the driver [30]. While all this happens within seconds – not enough time for careful deliberations by human drivers – an autonomous car could have the virtue of a (presumably) thoughtful decision-making script to very quickly react in an optimal way. If it is able to account for the many variables, then it ought to, for the most informed decision possible. First, suppose an object appears on the road directly in front of a car in autonomous mode. Is there time to reasonably hand control back to the human behind the wheel? (Prob- ably not.) If not, is there time to stop the car? Would the car need to brake hard, or would moderate braking be sufficient? The decision to brake depends,
  • 49. again, on road conditions and whether a tailgater (such as a big-rig truck) is behind you, including its speed to deter- mine the severity of a possible rear-end collision. Second, what is the object? Is it an animal, a person, or something else? If it is an animal, are some animals permissible to run over? It may be safer to continue ahead and strike a squirrel, for instance, than to violently swerve around it and risk losing control of the car. However, larger animals, such as deer and cows, are more likely to cause serious damage to the car and injuries to occupants than a spun-out car. Other animals, still, have special places in our hearts and should be avoided if possible, such as pet dogs and cats. Third, if the car should get out of the way – either in conjunction with braking or not – should it swerve to the left or to the right? In the US and other nations in which drivers must stay on the right side of the road, turning to the right may mean driving off the road, poten- tially into a ditch or a tree. Not only could harm to the car and occupants be likely, but it also matters how many occupants are in the car. The decision to drive into an embankment seems different when only one adult driver is in the car, than when several children are inside too. On the other hand, turning to the left may mean driving into an opposite lane, potentially into a head-on collision with incoming vehicles. If such a collision is unavoidable, then it
  • 50. matters what kind of vehicle we would crash into (e.g., is it a compact car or an SUV?), how heavy incoming traffic is (e.g., would more than one vehicle be involved?), and how many persons may be involved (e.g., are there children in the other car?). Of course, here we are assuming perfect sensing and V2X communications that can help answer these questions. If we cannot answer the questions, then we face a possibly large unknown risk, which makes driving into incoming traffic perhaps the worst option available. Other factors relevant to the decision-points above include: the road-shoulder type (paved, gravel, none, etc.), the condition of the car’s tires and brakes, whether the car’s occupants are seat-belted, whether the car is transporting dangerous cargo that could spill or explode, proximity to hospital or emergency rescue, damage to property such as houses and buildings, and more. These variables influence the probability of an accident as well as the expected harm, both of which are needed in selecting the best course of action. From this short analysis of a typical crash (or possible crash) with an animal, we can already see a daunting number of factors to account for. Sensing technologies today cannot answer some or many of the questions above, but it is already unclear that braking
  • 51. should be the safest default option – as a proxy for the most ethical option – given these uncertain conditions, all things considered. Automated cars today can already detect whether there is oncoming traffic in the opposite lane. Therefore, it is at least possible that they can be programmed to maneuver slightly into the incoming lane under some conditions, e. g., when there are no incoming cars and when it may be dangerous to slam on the brakes. Whether or not sensing technologies will improve enough to deliver answers to our questions above, a programmer or OEM would still need to assign costs or weights to various actions and objects as best as they can. Yet these values are not intrinsic to or dis- coverable by science or engineering. Values are something that we humans must stipulate and ideally agree upon. In constructing algorithms to control an autonomous car, ethics is already implied in the design process. Any decision that involves a tradeoff such as to strike object x instead of object y requires a value-judgment about the wisdom of the tradeoff, that is, the relative weights of x and y. And the design process can be made better by rec- ognizing the ethical implications and by engaging the broader community to ensure that those values are represented correctly or at least transparently. Working in a moral bubble is less likely to deliver results that are acceptable to society. Again, in a real-world accident today, a human driver usually has neither the time nor
  • 52. the information needed to make the most ethical or least harmful decisions. A person who is startled by a small animal on an otherwise uneventful drive may very well react poorly. He might drive into oncoming traffic and kill a family, or oversteer into a ditch and to his own death. Neither of these results, however, is likely to lead to criminal prosecution by itself, since there was no forethought, malice, negligence, or bad intent in making a forced, split-second reaction. But the programmer and OEM do not operate under the sanctuary of reasonable instincts; they make potentially life-and-death decisions under no truly urgent time-constraint and therefore incur the responsibility of making better decisions than human drivers reacting reflexively in surprise situations. 4.2.2 Self-sacrifice As we can see, real-world accidents can be very complicated. In philosophy and ethics, a familiar method is to simplify the issues through hypothetical scenarios, otherwise known as “thought-experiments.” This is similar to everyday science experiments in which researchers create unusual conditions to isolate and test desired variables, such as sending spiders into outer space to see how micro-gravity affects their ability to spin webs. It is not a good objection to those experiments to say that no spiders exist naturally in space; that
  • 53. misses the point of the experiment. Likewise, it is no objection to our hypothetical examples that they are outlandish and unlikely to happen in the real world, such as a car that can distinguish an eight-year-old from an 80-year-old (though with improving biometrics, facial recognition technologies, and linked databases, this doesn’t seem impossible). Our thought-experiments are still useful in drawing out certain ethical intuitions and principles we want to test. With that understanding, we can devise hypothetical scenarios to see that reasonable ethical principles can lead to controversial results in the context of autonomous driving. Digging into a standard philosophical toolbox for help with ethical dilemmas, one of the first principles we might reach for is consequentialism: that the right thing to do is whatever leads to the best results, especially in quantified terms [44]. As it applies here, consequentialism suggests that we should strive to minimize harm and maximize whatever it is that matters, such as the number of happy lives. In this thought-experiment, your future autonomous car is driving you on a narrow road, alongside a cliff. No one and no technology could foresee that a school bus with 28 children would appear around the corner, partially in your lane [29, 36]. Your car calculates that a crash is imminent; given the velocities and distance, there is no possible action that can avoid harming you. What should your robot car do?
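  The consequentialist arithmetic that the following passage works through in prose can be compressed into a few lines. The sketch below uses the same illustrative figures as the text (roughly 30 people at risk in a collision, a one-in-three average chance of death per person, and certain death for the car’s occupant if it drives off the cliff); the numbers are stipulations, not data.

```python
# Expected-death comparison for the cliff scenario, using the chapter's illustrative odds.
people_at_risk_in_collision = 30   # 28 children + the bus driver + you (illustrative)
p_death_in_collision = 1 / 3       # assumed average chance of death per person in a collision
p_death_off_cliff = 1.0            # driving off the cliff means certain death for you alone

expected_deaths_collide = people_at_risk_in_collision * p_death_in_collision  # = 10.0
expected_deaths_cliff = 1 * p_death_off_cliff                                 # = 1.0

print(f"collide with bus: {expected_deaths_collide:.2f} expected deaths")
print(f"drive off cliff:  {expected_deaths_cliff:.2f} expected deaths")

# Variant: the oncoming vehicle is a passenger car with a single occupant (2 people at risk).
expected_deaths_collide_car = 2 * p_death_in_collision   # ~0.67, so staying on the road now "wins"
print(f"collide with passenger car: {expected_deaths_collide_car:.2f} expected deaths")
```

  As the prose below explains, the comparison flips when only two lives are at risk in the collision, which is precisely why the stipulated probabilities, and the decision to count only death tolls, carry so much ethical weight.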
  • 54. A good, standard-issue consequentialist would want to optimize results, that is, maximize the number of happy lives and minimize harm. Assuming that all lives in this scenario are more or less equally happy – for instance, there’s no super-happy or super-depressed person, and no very important person who has unusual influence over the welfare of others – they would each count for about the same in our moral calculation. As you like, we may either ignore or account for the issue of whether there is extra value in the life of an innocent child who has more years of happiness ahead of her than an average adult; that doesn’t matter much for this scenario. The robot car’s two main choices seem to be: (1) to slam on the brakes and crash into the bus, risking everyone’s lives, or (2) to drive off the cliff, sparing the lives of everyone on the bus. Performing a quick expected-utility calculation, if the odds of death to each person (including the adult bus driver) in the accident averaged more than one in 30, then colliding into the bus would yield the expected result of more than one death, up to all 30 persons. (Let’s say the actual odds are one in three, which gives an expected result of 10 deaths.) If driving off a cliff meant certain death, or the odds of one in one, then the expected result of that would be exactly one death (your own) and no more. The right consequen-
  • 55. 774.2 Scenarios that implicate ethics tialist decision for the robot car – if all we care about is maximizing lives and minimizing deaths – is apparently to drive off the cliff and sacrifice the driver, since it is better that only one person should die rather than more than one, especially 10 or all 30 persons. This decision would likely be different if, instead of a school bus, your robot car were about to collide with another passenger car carrying only one person. Given the same average odds of death, one in three, the expected number of deaths in a collision would only be 0.67, while the expected number of deaths in driving off a cliff remains at one. In that case, the right consequentialist decision would be to allow the accident to occur, as long as the average odds of death are less than one in two. If, instead of another vehicle, your car were about to collide with a deer, then the decision to stay on the road, despite an ensuing accident, would be even more obvious insofar as we value a deer’s life less than a human life. Back to the school-bus scenario, programming an autonomous car with a consequential- ist framework for ethics would seem to imply your sacrifice. But what is most striking about this case might not even be your death or the moral mathematics: if you were in a manually driven car today, driving off the cliff might still be the most ethical choice you could make, so perhaps you would choose certain death anyway, had you the
  • 56. time to consider the op- tions. However, it is one thing for you to willingly make that decision of sacrifice yourself, and quite another matter for a machine to make that decision without your consent or foreknowledge that self-sacrifice was even a possibility. That is, there is an astonishing lack of transparency and therefore consent in such a grave decision, one of the most important that can be made about one’s life – perhaps noble if voluntary, but criminal if not. Thus, reasonable ethical principles – e.g., aiming to save the greatest number of lives – can be stressed in the context of autonomous driving. An operator of an autonomous vehicle, rightly or not, may very well value his own life over that of everyone else’s, even that of 29 others; or he may even explicitly reject consequentialism. Even if consequential- ism is the best ethical theory and the car’s moral calculations are correct, the problem may not be with the ethics but with a lack of discussion about ethics. Industry, therefore, may do well to have such a discussion and set expectations with the public. Users – and news headlines – may likely be more forgiving if it is explained in advance that self-sacrifice may be a justified feature, not a bug. 4.2.3 Ducking harm Other ethical principles can create dilemmas, too. It is generally uncontroversial that, if you can easily avoid harm to yourself, then you should do it. Indeed, it may be morally required
  • 57. that you save yourself when possible, if your life is intrinsically valuable or worth protect- ing; and it is at least extrinsically valuable if you had a dependent family. Auto manufac- turers or OEMs seem to take this principle for granted as well: if an autonomous car can easily avoid a crash, e. g., by braking or swerving, then it should. No ethical problem here – or is there? Why Ethics Matters for Autonomous Cars78 In another thought-experiment [15, 18, 33], your robotic car is stopped at an intersection and waits patiently for the children who are crossing in front of you. Your car detects a pickup truck coming up behind you, about to cause a rear-end collision with you. The crash would likely damage your car to some degree and perhaps cause minor injury to you, such as whiplash, but certainly not death. To avoid this harm, your car is programmed to dash out of the way, if it can do so safely. In this case, your car can easily turn right at the inter- section and avoid the rear-end collision. It follows this programming, but in doing so, it clears a path for the truck to continue through the intersection, killing a couple children and seriously injuring others. Was this the correct way to program an autonomous car? In most cases of an impending rear-end collision, probably yes. But in this particular case, the design decision meant
  • 58. saving you from minor injury at the expense of serious injury and death of several children, and this hardly seems to be the right choice. In an important respect, you (or the car) are responsible for their deaths: you (or the car) killed the children by removing an obstruction that prevented harm from falling upon them, just as you would be responsible for a person’s death if you removed a shield he was holding in front of a stream of gunfire. And killing innocent people has legal and moral ramifications. As with the self-sacrifice scenario above, it might be that in the same situation today, in a human-driven car, you would make the same decision to save yourself from injury, if you were to see a fast-approaching vehicle about to slam into you. That is, the result might not change if a human made the on-the-spot decision. But, again, it is one thing to make such a judgment in the panic of the moment, but another less forgivable thing for a programmer – far removed from the scene and a year or more in advance – to create a cost-function that resulted in these deaths. Either the programmer did so deliberately, or she did it uninten- tionally, unaware that this was a possibility. If the former, then this could be construed as premeditated homicide; and if the latter, gross negligence. Either way is very bad for the programmer and perhaps an inherent risk in the business, when one attempts to replicate human decision-making in a broad range of dynamic scenarios. Sometimes, an autonomous car may be faced with a “no-win” scenario, putting
  • 59. the programmer in a difficult but all too real position. To mitigate this risk, industry may do well to set expectations not only with users but also with broader society, educating them that they could also become victims even if not operating or in a robot car, and that perhaps this is justified by a greater public or overall good. 4.2.4 Trolley problems One of the most iconic thought-experiments in ethics is the trolley problem [4, 8, 11, 47], and this is one that may now occur in the real world, if autonomous vehicles come to be. Indeed, driverless trains are already operating in dozens of cities worldwide and could bring this scene to life [24]. The classical dilemma involves a runaway trolley (or train) that is about to run over and kill five unaware people standing on the tracks. Looking at the scene 794.2 Scenarios that implicate ethics from the outside, you find yourself standing next to a switch: if you pull the switch, you can shunt the train to a right-hand set of tracks, thereby saving the five individuals on the track. Unfortunately, there is one person standing on the right- hand set of tracks who would then be killed. What is the right decision? The “correct” decision continues to be a subject of much debate in philosophy. Both answers seem reasonable and defensible. A consequentialist
  • 60. might justify switching the tracks to save five people, even at the regrettable expense of one. But a non-consequential- ist, someone who considers more than just the math or results, might object on the grounds that switching tracks constitutes an act of killing (the one person), while doing nothing is merely allowing someone to die (the five individuals); and that it is morally and legally worse to kill than to let die. Killing implies that you are directly responsible for a person’s death: had you not done what you did, the person would have lived. Letting die, however, involves much less re- sponsibility on your part, if any, since some causal process was already underway that was not initiated or otherwise controlled by you. The question of whether it is worse to kill than to let die is also subject to debate in philosophy. But let us bracket that for the moment, as a final answer is not necessary for our discussion, only that it is reasonable to believe that proposition. Adapting the trolley problem to the technology at hand, let us suppose that you are driving an autonomous car in manual mode; you are in control. Either intentionally or not – you could be homicidal or simply inattentive – you are about to run over and kill five pedestrians. Your car’s crash-avoidance system detects the possible accident and activates, forcibly taking control of the car from your hands. To avoid this disaster, it swerves in the only direction it can, let’s say to the right. But on the right is a single pedestrian
  • 61. who is unfortunately killed. Was this the right decision for your car to make? Again, a consequentialist would say yes: it is better that only one person dies than five. But a non- consequentialist might appeal to a moral distinction between killing and letting die, and this matters to OEMs for liabili- ty reasons. If the car does not wrestle control from the human driver, then it (and the OEM) would perhaps not be responsible for the deaths of the five pedestrians while you were driving the car; it is merely letting those victims die. But if the car does take control and make a decision that results in the death of a person, then it (and the OEM) becomes responsible for killing a person. As with the trolley problem, either choice seems defensible. Results do matter, so it is not ridiculous to think that the car should be programmed to act and save lives, even at the expense of a fewer number of lives. Yet it also seems reasonable to think that killing is worse than letting die, especially in the eyes of the law. What I want to highlight here is not so much the answer but the process of deliberation that points us toward one answer over another. To the extent that there could be many acceptable answers to any given ethical dilemma, how well one answer can be defended is crucial toward supporting that answer over others. Industry again would do well to set expectations by debating and explaining in advance its reasoning behind key algorithms that could result in life or
  • 62. death. Transparency, or showing one’s math, is an important part of doing ethics, not just the answer itself. Why Ethics Matters for Autonomous Cars80 4.3 Next steps Notice that the ethical issues discussed in this paper do not depend on technology errors, poor maintenance, improper servicing, security vulnerabilities, or other failings – and all those will occur too. No complex technology we have created has been infallible. Even industries with money directly at stake have not solved this problem. For instance, bank ATMs continue to make headlines when they hemorrhage cash – tens of thousands of dol- lars more than the account holder actually has – because of software glitches alone [2, 10], never mind hacking. And just about every computing device we have created has been hacked or is hackable, including neural implants and military systems [3, 28]. These vulnerabilities and errors certainly can cause harm in the context of autonomous cars, and it would be unethically irresponsible to not safeguard against them where we can. Putting these technology issues aside and even assuming that perfect technology is avail- able, there are still many other safety and ethical questions to worry about, such as the programming issues above.
  • 63. 4.3.1 Broader ethical issues But programming is only one of many areas to reflect upon as society begins to adopt autonomous driving technologies. Assigning legal and moral responsibility for crashes is a popular topic already [1, 14, 20, 22, 49, 51]. Here are a few others, as part of a much longer list of possible questions: Does it matter to ethics if a car is publicly owned, for instance, a city bus or fire truck? The owner of a robot car may reasonably expect that its property “owes allegiance” to the owner and should value his or her life more than anonymous pedestrians and drivers. But a publicly owned automated vehicle might not have that obligation, and this can change moral calculations. Even for privately owned autonomous vehicles, the occupants arguably should bear more or all of the risk, since they are the ones introducing the machine into public spaces in the first place. Do robot cars present an existential threat to the insurance industry? Some believe that ultra-safe cars that can avoid most or all accidents will mean that many insurance compa- nies will go bankrupt, since there would be no or very little risk to insure against [40, 52]. But things could go the other way too: We could see mega- accidents as cars are networked together and vulnerable to wireless hacking – something like the stock market’s “flash crash” in 2010 [5]. What can the insurance industry do to
  • 64. protect itself while not getting in the way of the technology, which holds immense benefits? How susceptible would robot cars be to hacking? So far, just about every computing device we have created has been hacked. If authorities and owners (e. g., rental car compa- ny) are able to remotely take control of a car – which is reportedly under development for law enforcement in the European Union [50] – this offers an easy path for cyber-carjackers. If under attack, whether a hijacking or ordinary break-in, what should the car do: speed 814.3 Next steps away, alert the police, remain at the crime scene to preserve evidence, or maybe defend itself? For a future suite of in-car apps, as well as sensors and persistent GPS/tracking, can we safeguard personal information, or do we resign ourselves to a world with disappearing privacy rights [27]? To the extent that online services bring online advertising, we could see new, insidious advertising schemes that may allow third- party advertisers to have some influence on the autonomous car’s route selection, e.g., steering the car past their businesses [32]. What kinds of abuse might we see with autonomous cars? If the cars drive too conser-
  • 65. vatively, they may become a traffic hazard or trigger road-rage in human drivers with less patience [26, 42]. If the crash-avoidance system of a robot car is generally known, then other drivers may be tempted to “game” it, e. g., by cutting in front of it, knowing that the automated car will slow down or swerve to avoid an accident. If those cars can safely drive us home in a fully-auto mode, that may encourage a culture of more alcohol consumption, since we won’t need to worry so much about drunk-driving. More distant concerns include: How will law-abiding robot cars affect city revenue, which often depends on traffic fines imposed against law- breaking human drivers? Inas- much as many organ transplants come from car-accident victims, how will society manage a declining and already insufficient supply of donated organs [41]? Older-model autonomous cars may be unable to communicate with later models or future road infrastructure. How do we get those legacy models – which may be less safe, in addition to incompatible with newer technology – off the roads [45]? Since 2009, Microsoft has been trying to kill off its Windows XP operating system [39], a much less expensive investment than an autonomous car; but many users still refuse to relinquish it, including for critical military systems [37, 46]. This is a great security risk since Microsoft will no longer offer software patches for the operating system. 4.3.2 Conclusions
  • 66. We don’t really know what our robot-car future will look like, but we can already see that much work needs to be done. Part of the problem is our lack of imagination. Technology policy expert Peter W. Singer observed, “We are still at the ‘horseless carriage’ stage of this technology, describing these technologies as what they are not, rather than wrestling with what they truly are” [43]. As it applies here, robots aren’t merely replacing human drivers, just as human drivers in the first automobiles weren’t simply replacing horses: that would like mistaking electric- ity as merely a replacement for candles. The impact of automating transportation will change society in radical ways, and technology seems to be accelerating. As Singer puts it, “Yes, Moore’s Law is operative, but so is Murphy’s Law” [43]. When technology goes wrong – and it will – thinking in advance about ethical design and policies can help guide us responsibility into the unknown. Why Ethics Matters for Autonomous Cars82 In future autonomous cars, crash-avoidance features alone won’t be enough. An accident may be unavoidable as a matter of physics [12, 13], especially as autonomous cars make their way onto city streets [19, 21, 25], a more dynamic environment than highways. It also could be too dangerous to slam on the brakes, or not enough
  • 67. time to hand control back to the unaware human driver, assuming there’s a human in the vehicle at all. Technology errors, misaligned sensors, malicious actors, bad weather, and bad luck can also contribute to imminent collisions. Therefore, robot cars will also need to have crash-optimization strategies that are thoughtful about ethics. If ethics is ignored and the robotic car behaves badly, a powerful case could be made that auto manufacturers were negligent in the design of their product, and that opens them up to tremendous legal liability, should such an event happen. Today, we see activists campaigning against “killer” military robots that don’t yet exist, partly on the grounds that machines should never be empowered to make life-and- death decisions [31, 35]. It’s not outside the realm of possibility to think that the same precautionary backlash won’t happen to the autonomous car industry, if industry doesn’t appear to be taking ethics seriously. The larger challenge, though, isn’t just about thinking through ethical dilemmas. It’s also about setting accurate expectations with users and the general public who might find them- selves surprised in bad ways by autonomous cars; and expectations matter for market acceptance and adoption. Whatever answer to an ethical dilemma that industry might lean towards will not be satisfying to everyone. Ethics and expectations are challenges common to all automotive manufacturers and tier-one suppliers who want
  • 68. to play in this emerging field, not just particular companies. Automated cars promise great benefits and unintended effects that are difficult to pre- dict, and the technology is coming either way. Change is inescapable and not necessarily a bad thing in itself. But major disruptions and new harms should be anticipated and avoided where possible. That is the role of ethics in innovation policy: it can pave the way for a better future while enabling beneficial technologies. Without looking at ethics, we are driving with one eye closed. References 1. Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., Oluwatola, O.: Autonomous ve- hicle technology: a guide for policymakers. Report by RAND Corporation. http://www.rand.org/ pubs/research_reports/RR443-1.html (2014). Accessed 8 July 2014 2. Associated Press. ATM ‘glitch’ gives $37,000 to lucky homeless man. Daily Mail. http://www. dailymail.co.uk/news/article-2596977/ATM-glitch-gives-OVER- 37-000-homeless-man-cash-re- quested-140.html (2014). Accessed 8 July 2014 3. Baldor, L.: China hacked the Pentagon to get weapons data. Talking Points Memo. http://talking- pointsmemo.com/news/china-hacked-the-pentagon-to-get- weapons-programs-data (2013). Accessed 8 July 2014
  • 69. 4. Cathcart, T.: The Trolley Problem, or Would You Throw the Fat Guy Off the Bridge? Workman Publishing Company, New York (2013) 83References 5. Commodity Futures Trading Commission and the Securities and Exchange Commission: Find- ings regarding the market events of May 6, 2010. CFTC and SEC, Washington DC. http://www. sec.gov/news/studies/2010/marketevents-report.pdf (2010). Accessed 8 July 2014 6. Curtis, P. and Hedlund, J.: Reducing deer-vehicle crashes. Report funded by the Insurance Insti- tute for Highway Safety. Cornell University, Ithaca. http://wildlifecontrol.info/pubs/Documents/ Deer/Deer-Vehicle_factsheet1.pdf (2005). Accessed 8 July 2014 7. Davies, A.: Avoiding squirrels and other things Google’s robot car can’t do. Wired. http://www. wired.com/2014/05/google-self-driving-car-can-cant/ (2014). Accessed 8 July 2014 8. Edmonds, D.: Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us About Right and Wrong. Princeton University Press, Princeton (2014) 9. Federal Ministry of Justice and Consumer Protection: Basic Law for the Federal Republic of Germany. http://www.gesetze-im- internet.de/englisch_gg/englisch_gg.html (2014). Accessed 8 July 2014
  • 70. 10. Floro, Z.: Man goes on a casino bender after an ATM let him take out unlimited cash. Business Insider. http://www.businessinsider.com/atm-gives-unlimited- cash-2012-6 (2012). Accessed 8 July 2014 11. Foot, P.: The problem of abortion and the doctrine of the double effect. Oxford Review 5, 5–15 (1967) 12. Fraichard, T.: Will the driver seat ever be empty? Report funded by Inria. http://hal.inria.fr/hal- 00965176 (2014). Accessed 8 July 2014 13. Fraichard, T., and Asama, H.: Inevitable collision states: a step towards safer robots? Advanced Robotics 18(10), 1001–1024 (2004) 14. Garza, A.: ‘Look Ma, no hands!’: wrinkles and wrecks in the age of autonomous vehicles. New England Law Review 46(3), 581–616 (2012) 15. Goodall, N.J.: Autonomous car ethics. Interview with CBC radio. http://www.cbc.ca/spark/ blog/2014/04/13/autonomous-car-ethics/ (2014). Accessed 8 July 2014 16. Goodall, N. J.: Machine ethics and automated vehicles. In: Meyer, G. and Beiker, S. (eds.) Road Vehicle Automation. Springer, Cham (2014) 17. Goodall, N. J.: Ethical decision making during automated vehicle crashes. Transportation Research Record: Journal of the Transportation Research Board (forthcoming)
  • 71. 18. Goodall, N. J.: Vehicle automation and the duty to act. In: Proceedings of the 21st World Congress on Intelligent Transport Systems, 7–11 September 2014, Detroit, Michigan (forth- coming) 19. Google: The latest chapter for the self-driving car: mastering city street driving. http://googleblog. blogspot.co.at/2014/04/the-latest-chapter-for-self-driving- car.html (2014). Accessed 8 July 2014 20. Gurney, J.: Sue my car, not me: products liability and accidents involving autonomous vehicles. Journal of Law, Technology and Policy 2, 247–277 (2013) 21. Hern, A.: Self-driving cars face a long and winding road to success. The Guardian. http://www. theguardian.com/technology/2014/may/28/self-driving-cars- google-success (2014). Accessed 8 July 2014 22. Hevelke, A. and Nida-Rumelin, J.: Responsibility for crashes of autonomous vehicles: an ethical analysis. Science and Engineering Ethics (2014). doi: 10.1007/s11948-014-9565-5 23. IEEE: IEEE code of ethics. http://www.ieee.org/about/corporate/governance/p7-8.html (2014). Accessed 8 July 2014 24. International Association of Public Transport: Observatory of automated metros. http://metroau- tomation.org/ (2014). Accessed 8 July 2014
  • 72. 25. Jaffe, E.: The first look at how Google’s self-driving car handles city streets. The Atlantic/CityL- ab. http://www.citylab.com/tech/2014/04/first-look-how- googles-self-driving-car-handles-city- streets/8977/ (2014). Accessed 8 July 2014 Why Ethics Matters for Autonomous Cars84 26. KPMG: Self-driving cars: are we ready? Report by KPMG. KPMG, Chicago. http://www.kpmg. com/US/en/IssuesAndInsights/ArticlesPublications/Documents/s elf-driving-cars-are-we-ready. pdf (2013). Accessed 8 July 2014 27. Lee, T.: Self-driving cars are a privacy nightmare. And it’s totally worth it. The Washington Post. http://www.washingtonpost.com/blogs/wonkblog/wp/2013/05/21 /self-driving-cars-are-a-priva- cy-nightmare-and-its-totally-worth-it/ (2013). Accessed 8 July 2014 28. Leggett, H.: The new hacking frontier: your brain? Wired. http://www.wired.com/2009/07/neu- rosecurity/ (2009). Accessed 8 July 2014 29. Lin, P.: The ethics of saving lives with autonomous cars is far murkier than you think. Wired. http://www.wired.com/2013/07/the-surprising-ethics-of-robot- cars/ (2013). Accessed 8 July 2014 30. Lin, P.: The ethics of autonomous cars. The Atlantic. http://www.theatlantic.com/technology/
  • 73. archive/2013/10/the-ethics-of-autonomous-cars/280360/ (2013). Accessed 8 July 2014 31. Lin, P.: Why the drone wars matter for automated cars. Lecture presented at Proceedings in Automated Driving, Stanford Law School, 12 December 2013. http://stanford.io/1jeIQuw. Accessed 8 July 2014 32. Lin, P.: What if your autonomous car keeps routing you past Krispy Kreme? The Atlantic. http:// www.theatlantic.com/technology/archive/2014/01/what-if-your- autonomous-car-keeps-rout- ing-you-past-krispy-kreme/283221/ (2014). Accessed 8 July 2014 33. Lin, P.: Ethics and autonomous cars: why ethics matters, and how to think about it. Lecture presented at Daimler and Benz Foundation’s Villa Ladenburg Project, Monterey, California, 21 February 2014 34. Lin, P.: The robot car of tomorrow might just be programmed to hit you. Wired. http://www. wired.com/2014/05/the-robot-car-of-tomorrow-might-just-be- programmed-to-hit-you/ (2014). Accessed 8 July 2014 35. Lin, P., Bekey, G., Abney, K.: Autonomous military robotics: risk, ethics, and design. Report funded by the US Office of Naval Research. California Polytechnic State University, San Luis Obispo. http://ethics.calpoly.edu/ONR_report.pdf (2008). Accessed 8 July 2014 36. Marcus, G.: Moral machines. New Yorker.
  • 74. http://www.newyorker.com/online/blogs/news- desk/2012/11/google-driverless-car-morality.html (2012). Accessed 8 July 2014 37. McCabe, R.: Navy, others still struggling to ditch Windows XP. The Virginian-Pilot. http://hamp- tonroads.com/2014/06/navy-others-still-struggle-ditching- windows-xp (2014). Accessed 8 July 2014 38. Merat, N., Jamson, H., Lai F., and Carsten, O.: Human factors of highly automated driving: results from the EASY and CityMobil projects. In: Meyer, G. and Beiker, S. (eds) Road Vehicle Automation. Springer, Cham (2014) 39. Microsoft: Windows lifecycle fact sheet. http://windows.microsoft.com/en-us/windows/lifecy- cle (2014). Accessed 8 July 2014 40. Mui, C. and Carroll, P.: Driverless Cars: Trillions Are Up For Grabs. Cornerloft Press, Seattle (2013) 41. Park, A.: Driverless cars to kill organ transplantation. DriverlessCarHQ. https://web.archive. org/web/20120626151201/http://www.driverlesscarhq.com/trans plants/ (2012). Accessed 8 July 2014 42. Roach, J.: Road rage at driverless cars? It’s possible. NBC News. http://futureoftech-discuss. nbcnews.com/_news/2012/01/20/10201865-road-rage-at- driverless-cars-its-possible (2012). Accessed 8 July 2014
  • 75. 43. Singer, P.W.: The robotic revolution. Brookings Institution. http://www.brookings.edu/research/ opinions/2012/12/11-robotics-military-singer (2012). Accessed 8 July 2014 85References 44. Sinnott-Armstrong, W.: Consequentialism. Stanford Encyclopedia of Philosophy. http://plato. stanford.edu/entries/consequentialism/ (2011). Accessed 8 July 2014 45. Smith, B.W.: Planning for the obsolescence of technologies not yet invented. Stanford Law School’s Center for Internet and Society. http://cyberlaw.stanford.edu/blog/2013/10/plan- ning-obsolescence-technologies-not-yet-invented (2013). Accessed 8 July 2014 46. Sternstein, A.: Why Feds are still buying IT that works with Windows XP. Nextgov. http://www. nextgov.com/cio-briefing/2014/04/why-feds-are-still-buying-it- works-windows-xp/81667/ (2014). Accessed 8 July 2014 47. Thomson, J.J.: Killing, letting die, and the trolley problem. The Monist 59, 204–217 (1976) 48. Transportation Research Board: Animal-vehicle collision data collection. http://onlinepubs.trb. org/onlinepubs/nchrp/nchrp_syn_370.pdf (2007). Accessed 8 July 2014 49. Villasenor, J.: Product liability and driverless cars: issues and guiding principles for legisla-
  • 76. tion. Report by The Brookings Institution. http://www.brookings.edu/research/papers/2014/04/products-liability-driverless-cars-villasenor (2014). Accessed 8 July 2014 50. Waterfield, B. and Day, M.: EU has secret plan for police to ‘remote stop’ cars. The Telegraph. http://www.telegraph.co.uk/news/worldnews/europe/eu/10605328/EU-has-secret-plan-for-police-to-remote-stop-cars.html (2014). Accessed 8 July 2014 51. Wu, S.: Unmanned vehicles and US product liability law. Journal of Law, Information & Science 21(2), 234–254 (2012) 52. Yeomans, G.: Autonomous vehicles: handing over control – opportunities and risk for insurance. Report by Lloyd’s. Lloyd’s, London. http://www.lloyds.com/~/media/lloyds/reports/emerging%20risk%20reports/autonomous%20vehicles%20final.pdf (2014). Accessed 8 July 2014 Overview & Objectives 1. The case of Willy Just
  • 77. 2. The Ford Pinto 3. The Challenger Shuttle 4. Ethics in Practice: What are we to do? Ethics in Practice The Case of Willy Just Ethics in Practice The Case of Willy Just “A shorter, fully loaded truck can operate much more quickly. A shortening of the rear compartment will not disadvantageously affect the weight balance, by overloading the front axle, because a correction in the weight distribution takes place automatically through the fact that the cargo in the struggle towards the back door during the operation is preponderantly located there… To facilitate cleaning, an eight to twelve inch hole should be made in the floor and provided with a cover that can be opened from the outside. The floor should be slightly inclined, and the cover equipped with a small sieve. Thus all fluids will flow to the middle, the thin fluids will exit during the operation, and the thicker fluids can be hosed out afterwards.” —Excerpt from a memo written by the engineer Willy Just Ethics in Practice The Case of Willy Just
  • 78. offered on behalf of Willy Just? • Could his conduct be morally justified by any of the ethical frameworks we employed in the trolley car cases? • Virtue Ethics? • Deontology? • Utilitarianism? Ethics in Practice The seemingly significant nature of roles Our reasons for acting in a given way can conflict, depending on their source e.g. professional, moral, prudential, legal,… We each occupy a number of roles, which bring with them specific rights, duties, etc. • In taking on certain roles, we also take on certain obligations Ethics in Practice The Ford Pinto A subcompact car released in the US in 1970 Design Aims • Produce a small and light car (under 900 kilos)
  • 79. • Low cost (under $2,000) • Competitive in terms of fuel consumption, service costs, and reliability • Took 25 months from drawing board to on-road (43 months was the norm at the time) ETHICS IN PRACTICE THE FORD PINTO CASE Ethics in Practice The Ford Pinto In the ‘70s, over 2 million Pintos were sold in the U.S. alone But, it had one fundamental design issue: When rear-ended at over 20 mph (32kph), the gas tank – which was located behind the rear axle – was liable to rupture, often resulting in an explosion. And, as the doors would crumple, passengers would very likely be trapped inside. Ethics in Practice The Ford Pinto Ford engineers proposed a number of possible solutions 1. A shock-absorbent plastic barrier ($4) 2. A nylon bladder inside the tank ($5-$8)
  • 80. 3. Placing the tank above the rear axle All were successfully tested, but none were implemented Ethics in Practice The Ford Pinto Can you conceive of an ethical defence that could be offered on behalf of the Ford engineers? What could the Ford engineers have done, when management refused to improve the safety of the Pinto? What were the moral responsibilities of the engineers? What was their duty of care? Ethics in Practice The Challenger Space Shuttle Ethics in Practice The Challenger Space Shuttle Background concerns • Loss of public interest in the U.S. Space Program • Political pressures • Industrial consideration • Launch previously delayed Conditions on day of launch (28 Jan. ‘86)