Running head: PHILOSOPHICAL AND ETHICAL IMPACTS ON AUTONOMOUS VEHICLES
Philosophical and Ethical Impacts On Autonomous Vehicles
Corey Messer
Liberty University
Abstract
With vehicles becoming more automated each year, completely self-driven cars known as
Autonomous Vehicles (AV) will soon be coexisting with human-driven vehicles. With AV on the
leading edge of automotive technology, eventually an AV will come upon an unavoidable
circumstance in which human life will be lost. The action the AV chooses in such a
circumstance is guided by the moral algorithm engineered into the AV. This moral algorithm,
guided by public opinion, will instill different ethical and philosophical concepts. Among the
possible ethical concepts, Consequentialism, Deontology, and Virtue Ethics can play a major role
in the creation of these moral algorithms. From these ethical concepts come different philosophers
and philosophies: Jeremy Bentham and John Stuart Mill supporting Utilitarianism, Immanuel
Kant a major proponent of Deontology, and Friedrich Nietzsche who championed Egoism. All of
these philosophers can influence how automotive companies create their moral algorithms. Beyond
ethical and philosophical concepts, biblical and economic arguments for a certain moral
algorithm are discussed. With the help of public opinion and these ethical and philosophical
concepts, some automotive companies are now leaning toward a Utilitarian moral algorithm, and
soon a moral algorithm for AV will be adopted and accepted into today's society.
Philosophical and Ethical Impacts On Autonomous Vehicles
Within the last decade the automobile industry has made significant progress integrating
automation into vehicles. Currently, cars have intelligent cruise control, parallel
parking algorithms, automatic overtaking, lane guidance control, and many other features. Within
the next decade many vehicles will be equipped with self-driving capabilities. These vehicles are
known as Autonomous Vehicles (AV), and they are the leading technology in today's car
industry. Companies such as Google, Tesla, Audi, and Delphi have created AV that are being
driven on roads today. In the near future, these AV will come upon an unavoidable circumstance
where a decision has to be made and human life will be lost. The action the AV chooses
in such a circumstance is guided by the moral algorithm engineered into the AV. With the help
of public opinion and philosophical, ethical, biblical, and economic understanding, the correct
moral algorithm for these AV can be determined.
Literature Review
Bonnefon et al.'s (2015) article explored the science of experimental ethics dealing with
public opinion on ethical decisions made by AV. Jean-Francois Bonnefon and colleagues
at the Toulouse School of Economics in France questioned several hundred workers on
Amazon's Mechanical Turk, a crowdsourced internet marketplace and polling service. The
participants were given scenarios in which one or more pedestrians could be saved if an AV were
to swerve into a barrier, killing its occupants or a pedestrian. Throughout the survey the number
of pedestrians being saved was varied, and the participants were asked to imagine themselves as
the occupants of the AV. The study found that, in general, people are comfortable with the idea
that self-driving vehicles should be programmed to minimize the number of deaths in an accident.
accident. Also, participants wished others to cruise in AV, more than they wanted to buy AV
PHILOSOPHICAL AND EITHICAL IMPACTS ON AUTONOMOUS VEHICLES 4
vehicles themselves. Therefore, people are in favor of cars that sacrifice the occupants to save
other lives, as long as they do not have to drive one themselves.
Bonnefon and his colleagues admit that this is only the beginning of working through
this moral maze. Other factors will have to be sorted out, such as the nature of uncertainty and
the assignment of blame or fault after an AV accident. The study raises further
questions, such as whether it is acceptable for an AV to avoid a motorcycle by swerving into a wall,
considering that the probability of survival is greater for the passenger of the AV than for the
motorcyclist. Should different decisions be made when children are on board, since they have a
longer expected lifespan than adults? If a manufacturer offers different versions of the moral algorithm, and
a buyer knowingly chooses one of them, is the buyer to blame for the consequences of the
algorithm's decision? This paper by Bonnefon and his co-workers is shaping the ethics that will
be engineered into future AV. Many articles have been written discussing different philosophies
that could be used to create such a moral algorithm.
Discussion
One of these articles, published by MIT Technology Review, raised the question of how an AV should be
programmed to act in the event of an unavoidable accident. Should the AV minimize the loss of
life, at the sacrifice of its occupants; protect the occupants at all costs; or choose between
these extremes at random? The moral algorithm in AV will also affect how AV are
accepted into society. Drawing on Bonnefon's research on public opinion, MIT Technology Review proposed its
own ethical AV case in which an AV is headed toward a crowd of 10 people crossing the road;
the AV cannot stop in time, but it can avoid killing the 10 people by steering into a wall. This
collision with the wall would kill the driver and the occupants (MIT, 2015).
Ethical Concepts
The action chosen by the AV will be based on different ethical and philosophical
influences. Three possible ethical concepts that could be adopted fall under Normative
Ethics. "Normative Ethics is a branch of ethics that establishes how things should or ought to be,
how to value them, which things are good or bad, and which actions are right or wrong"
(Maston, 2008). Normative Ethics also creates a set of rules governing human conduct and how
humans ought to act in a given situation.
Consequentialism. The first of these ethical concepts is Consequentialism, which argues
that the morality of an action is contingent on the action's outcome or result (Maston, 2008).
Therefore, the morally right action is the one that produces a good outcome. Within this
concept is Utilitarianism, which is one option when creating a moral algorithm for the AV,
because it will choose the action that leads to the most happiness for the greatest
number of people. Also to be considered is Hedonism, similar to Utilitarianism in that it aims to
maximize pleasure. Given an unavoidable situation, the AV would therefore choose the action
that would be most pleasurable or most desired by the people. For instance, if the President of
the United States were riding in an AV, his life would be saved over the life of a homeless man
in the street, because this would be the action desired by the greatest number of people.
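To make this concrete, a Utilitarian moral algorithm could be sketched as a function that scores each possible maneuver by its expected loss of life and picks the maneuver that minimizes it. The code below is a hypothetical illustration, not an actual automotive implementation; the maneuver names and death probabilities are invented for the example.

```python
# Hypothetical sketch of a Utilitarian moral algorithm: each candidate
# maneuver lists the people affected and each person's probability of
# dying. The "right" action minimizes expected deaths.

def expected_deaths(outcome):
    """Sum of death probabilities over everyone affected by a maneuver."""
    return sum(p_death for person, p_death in outcome)

def choose_maneuver(options):
    """Pick the maneuver minimizing expected deaths (pure utilitarianism)."""
    return min(options, key=lambda name: expected_deaths(options[name]))

# Invented scenario: swerve into a barrier (occupant likely dies) versus
# continue straight into two pedestrians (both likely die).
options = {
    "swerve_into_barrier": [("occupant", 0.9)],
    "continue_straight": [("pedestrian_1", 0.8), ("pedestrian_2", 0.8)],
}

print(choose_maneuver(options))  # swerve: 0.9 expected deaths versus 1.6
```

A real system would of course weigh far more factors, and as discussed below, a designer could swap in age or life expectancy as additional weights in the scoring function.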
Deontology. The second ethical concept to be discussed is Deontology, which focuses on
the action that is most right based on a set of duties. These duties could follow Isaac
Asimov's laws of robotics, where the action chosen by the moral algorithm would be right
because it was chosen out of a duty to follow written rights or laws. Raul Rojas wrote a paper
on the four laws of robotic cars, basing these four laws on Asimov's laws of robotics.
The science fiction writer Isaac Asimov wrote about worlds where humans and
robots coexisted. In order to control these robots with intelligence and self-awareness, Asimov
wrote three laws by which they should be governed: (1) a robot may not injure a human being or,
through inaction, allow a human being to come to harm; (2) a robot must obey the orders given to it
by human beings, except where such orders would conflict with the First Law; and (3) a robot must
protect its own existence as long as such protection does not conflict with the First or Second
Laws. With AV growing in popularity, Asimov's robotic laws could be applied
to AV. As Rojas writes: (1) a car may not injure a human being or, through inaction, allow a human
being to come to harm; (2) a car must obey the traffic rules, except when they would conflict with
the First Law; (3) a car must obey the orders given to it by human beings, except where such orders
would conflict with the First or Second Laws; and (4) a car must protect its own existence as long as
such protection does not conflict with the First, Second, or Third Laws.
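Rojas's four laws can be read as a strict priority ordering: a lower-numbered law always overrides a higher-numbered one. As a hedged sketch (the predicates below are invented placeholders, not real vehicle-control APIs), a Deontological moral algorithm might score candidate actions against the laws in priority order:

```python
# Hypothetical sketch of Rojas's four laws as an ordered rule filter.
# Each law is a predicate on a candidate action; the fields checked are
# invented stand-ins for real perception and planning checks.

def law_1(action):  # may not injure a human, or allow harm by inaction
    return not action["harms_human"]

def law_2(action):  # obey traffic rules, unless that conflicts with Law 1
    return action["obeys_traffic_rules"]

def law_3(action):  # obey human orders, unless conflicting with Laws 1-2
    return action["obeys_orders"]

def law_4(action):  # protect the car itself, unless conflicting with Laws 1-3
    return action["protects_self"]

LAWS = [law_1, law_2, law_3, law_4]

def rank(action):
    """Score an action by which laws it satisfies, highest-priority first.
    Tuples compare lexicographically, so Law 1 dominates Law 2, and so on."""
    return tuple(law(action) for law in LAWS)

def choose_action(actions):
    return max(actions, key=lambda name: rank(actions[name]))

# Invented scenario: braking hard breaks a traffic rule (stopping in an
# intersection) but harms no one; continuing obeys the rules but harms a
# pedestrian. Law 1 outranks Law 2, so the car brakes.
actions = {
    "brake_hard": {"harms_human": False, "obeys_traffic_rules": False,
                   "obeys_orders": True, "protects_self": True},
    "continue": {"harms_human": True, "obeys_traffic_rules": True,
                 "obeys_orders": True, "protects_self": True},
}
print(choose_action(actions))  # brake_hard
```

The design choice here is the lexicographic ordering: unlike the Utilitarian sum, no amount of satisfying lower laws can outweigh a violation of a higher one.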
These robotic laws can give way to a moral algorithm influenced by Natural Rights
Theory, advanced by Thomas Hobbes and John Locke, which holds that humans have
absolute, natural rights. These natural rights would then be the basis of the moral
algorithm, and the AV would save the most life possible, and the longest life possible,
thereby being persuaded to save the children in the car versus the adults in the street. Immanuel
Kant is also a large proponent of ideas that fall under Deontology, where duty determines the
action. If the AV was programmed to save the people in the car, and that was its duty, then that
would be the right action to choose.
Virtue Ethics. The last ethical concept under Normative Ethics is Virtue Ethics.
Virtue Ethics is based on the character of a virtuous person: the action chosen would be the action
that a good person would do. Many say it would be the action your average mother would tell you to
do. With this philosophy instilled into the moral algorithm of AV, the virtuous action
would be saving the kids in the car versus the lives of the adults on the street. Plato and Aristotle
first advocated this philosophy, holding that it can lead to happiness and the good life.
Philosophical Concepts
The previously mentioned ethical concepts are born from different philosophies and
philosophers: Jeremy Bentham and John Stuart Mill, who brought Utilitarian philosophy
to its full formation; Immanuel Kant, who supported Deontology; and
Friedrich Nietzsche, who upheld Egoism. Each of these philosophers can play a role in the
creation of a moral algorithm for AV.
Jeremy Bentham and John Stuart Mill. Utilitarianism holds that an action is right if it
leads to the most happiness for the greatest number of people, where happiness is defined as the
maximum amount of pleasure and the minimal amount of pain within the given decision. For AV,
a moral algorithm that determines the most happiness in a given unavoidable circumstance would
follow Bentham's and Mill's Utilitarian mindset.
This might look like the AV crashing into a parked car on the side of the street versus
hitting a mother and daughter who stepped into the street without looking. Following the
Utilitarian philosophy, the moral algorithm for the AV would take into account life
expectancy, lifespan, and the number of lives taken when determining the proper course of action.
Another example of the Utilitarian algorithm would have the AV crashing into a building
versus hitting a motorcyclist, because the probability of the passenger in the AV surviving the
crash would be greater than that of the motorcyclist. When everyone cannot be happy, Immanuel Kant's
philosophies can be introduced into the moral algorithm.
Looking at Schinzinger's and Martin's book Introduction to Engineering Ethics,
the engineering code of ethics holds serving and protecting the public paramount (2000). If, for
AV, public safety is the highest priority, then it would be logical that the safety of the most
people would be the decision the moral algorithm chooses. Holding the safety of the most
people paramount falls into the Utilitarian moral algorithm, which is the basis for many
automotive companies when designing their moral algorithms.
Immanuel Kant. Deontology looks to a person's motives to determine the right or
wrong action. "To act in the morally right way, people must act according to duty, and it is the
motives of the person who carries out the action that make it right or wrong," writes Maston
(2008). This philosophy for a moral algorithm, upheld by Immanuel Kant, might be more
acceptable to someone buying an AV, because the AV would be in right standing in saving the
passengers: the motives of the AV were right and pure in saving their lives. This
philosophy would also follow Isaac Asimov's laws of robotics, where an AV
would have pure motives and its actions would be right because the AV carried out
Asimov's laws, and therefore was in right standing.
Friedrich Nietzsche. When automobile companies want to make AV marketable, they
might choose to put some Egoist philosophical ideas into their moral algorithms. Friedrich
Nietzsche was a large proponent of Egoism, which holds that the right action is the one that
maximizes the good for oneself. This philosophy might also be very attractive to AV buyers,
because they can be assured that their lives will not be lost: in any unavoidable
circumstance the moral algorithm would hold the safety of the passengers paramount. On the
other hand, the downside of this philosophy is whether the passengers can live with the choices
the AV will make. For example, with an Egoist ideology, the AV would run over the children in
the street rather than crash into the parked cars along the side of the street, because this choice
is the safest option for the passengers on board. If this were the action chosen, could the
passengers live with the action made by the AV? Each of these philosophers can influence how
automobile companies write their moral algorithms.
Biblical Persuasion
From the biblical context, all human life is important and held to the same value. "There
is neither Jew nor Greek, there is neither slave nor free, there is no male and female, for you are
all one in Christ Jesus" (Galatians 3:28, ESV). "So Peter opened his mouth and said: 'Truly I
understand that God shows no partiality'" (Acts 10:34, ESV). When determining the moral
algorithm from a biblical perspective, sacrificing those who are weak in the views of our society
would not be an option. God sees everyone as equal; therefore the homeless man in the street
and the child in the AV are both held to the same value. This rules out the option of
sacrificing those deemed less worthy of life in order to save those of higher social
priority.
Economic Persuasion
Whatever moral algorithm automotive companies choose to create, the public will also
be persuaded economically to buy an AV. With an AV there will be fewer
accidents, injuries, and fatalities, which will save owners money. Google has reported 11
accidents over the past six years, and the Google car was never at fault. Owners of an AV will
also receive fewer traffic tickets, because an AV will not exceed the speed limit or run
through stop signs as a human driver can, saving buyers money and the hassle of handling
traffic violations. With fewer accidents and tickets the cost of insurance will decrease, which
means owners of AV will save money on insurance. Insurance for the
human-driven Tesla Model S costs $1,942 per year, versus $388 per year for the self-driven
Model S (Read, 2015). Over ten years with yearly compounding and a 2%
interest rate, owners of a self-driven Tesla Model S could save over $17,000 in insurance
costs alone. With continuous compounding, as many investment companies calculate it,
buyers of a self-driven Tesla Model S could save close to $19,000.
According to Read (2015), insurance for AV would on average be around $1,000 cheaper
than for human-driven vehicles. From an economic standpoint alone, AV will become very attractive to
buyers looking to save money.
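The $17,000 figure above can be checked with a short future-value calculation. This sketch treats the $1,554 annual difference between the two quoted premiums as an ordinary annuity compounded yearly at 2%, which is one plausible reading of the stated assumptions:

```python
# Future value of the yearly insurance savings, read as an ordinary
# annuity: $1,942 (human-driven) minus $388 (self-driven) saved each
# year for 10 years, compounded annually at 2%.

human_driven_premium = 1942.0
self_driven_premium = 388.0
annual_saving = human_driven_premium - self_driven_premium  # $1,554

rate = 0.02
years = 10

# Standard ordinary-annuity future value: S * ((1 + r)^n - 1) / r
future_value = annual_saving * ((1 + rate) ** years - 1) / rate
print(round(future_value, 2))  # just over $17,000, matching the text
```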
Conclusion and Final Thoughts
When determining the correct moral algorithm for AV there is no single right or wrong
answer, but public opinion will help determine a moral algorithm that is acceptable to
society. With the help of Bonnefon and the science of experimental ethics, AV will have moral
algorithms that draw from many different ethical and philosophical ideas. Public opinion
and public acceptance will be the final word on how these moral algorithms are created. In
the near future AV will coexist with human-driven vehicles, and Asimov's robotic laws might
not be as fictional as once thought.
References
Bonnefon, J., Shariff, A., & Rahwan, I. (2015). Autonomous Vehicles Need Experimental
Ethics: Are We Ready for Utilitarian Cars?
Maston, L. (2008). Ethics - By Branch / Doctrine - The Basics of Philosophy. Retrieved
November 5, 2015.
Read, R. (2015, August 13). How Much Cash Will An Autonomous Car Save You? More Than
$1,000 Per Year. Retrieved November 5, 2015.
Rojas, R. (n.d.). I, Car: The Four Laws of Robotic Cars. Retrieved November 4, 2015.
Schinzinger, R., & Martin, M. (2000). Introduction to engineering ethics. Boston: McGraw Hill.
Why self-driving cars must be programmed to kill. (2015, October 22). MIT Technology
Review. Retrieved November 3, 2015.

More Related Content

Similar to Autonomous Vehicles

Kimethics in a world of robots 2013
Kimethics in a world of robots 2013Kimethics in a world of robots 2013
Kimethics in a world of robots 2013AlexandruBuruc
 
Toward machines that behave ethically better than humans do
Toward machines that behave ethically better than humans doToward machines that behave ethically better than humans do
Toward machines that behave ethically better than humans doMatthijs Pontier
 
Rights of Machine
Rights of MachineRights of Machine
Rights of MachineNilaRehin
 
Ethics within the_code_the_machine_the_o
Ethics within the_code_the_machine_the_oEthics within the_code_the_machine_the_o
Ethics within the_code_the_machine_the_oPayson Johnston
 
Writing A Summary In 3 Steps
Writing A Summary In 3 StepsWriting A Summary In 3 Steps
Writing A Summary In 3 StepsRachel Walters
 
The Ethics of Artificial Intelligence
The Ethics of Artificial IntelligenceThe Ethics of Artificial Intelligence
The Ethics of Artificial IntelligenceKarl Seiler
 
Ielts Writing Sample Essay Task 1
Ielts Writing Sample Essay Task 1Ielts Writing Sample Essay Task 1
Ielts Writing Sample Essay Task 1Katie Stewart
 
Full Course - Law and Regulation of Machine Intelligence - Bar Ilan Universit...
Full Course - Law and Regulation of Machine Intelligence - Bar Ilan Universit...Full Course - Law and Regulation of Machine Intelligence - Bar Ilan Universit...
Full Course - Law and Regulation of Machine Intelligence - Bar Ilan Universit...Nicolas Petit
 
Gianmarco Veruggio. Roboethics on Skolkovo Robotics
Gianmarco Veruggio. Roboethics on Skolkovo RoboticsGianmarco Veruggio. Roboethics on Skolkovo Robotics
Gianmarco Veruggio. Roboethics on Skolkovo RoboticsAlbert Yefimov
 
Ethics in Advanced Robotics Article Presentatio
Ethics in Advanced Robotics Article PresentatioEthics in Advanced Robotics Article Presentatio
Ethics in Advanced Robotics Article PresentatioMehmet Çağrı Aksoy
 
Robbie the robot goes (w)rong!
Robbie the robot goes (w)rong!Robbie the robot goes (w)rong!
Robbie the robot goes (w)rong!lilianedwards
 
Perspectives on Ethics of AI Computer Science∗ Benjamin .docx
Perspectives on Ethics of AI Computer Science∗  Benjamin .docxPerspectives on Ethics of AI Computer Science∗  Benjamin .docx
Perspectives on Ethics of AI Computer Science∗ Benjamin .docxkarlhennesey
 
Perspectives on Ethics of AI Computer Science∗ Benjamin .docx
Perspectives on Ethics of AI Computer Science∗  Benjamin .docxPerspectives on Ethics of AI Computer Science∗  Benjamin .docx
Perspectives on Ethics of AI Computer Science∗ Benjamin .docxssuser562afc1
 
The paper must have the following subheadings which is not include.docx
The paper must have the following subheadings which is not include.docxThe paper must have the following subheadings which is not include.docx
The paper must have the following subheadings which is not include.docxoreo10
 
Running head ROBOTIC SURGERY TECHNOLOGY1OPERATING SYSTE.docx
Running head ROBOTIC SURGERY TECHNOLOGY1OPERATING SYSTE.docxRunning head ROBOTIC SURGERY TECHNOLOGY1OPERATING SYSTE.docx
Running head ROBOTIC SURGERY TECHNOLOGY1OPERATING SYSTE.docxtoltonkendal
 
ethical challenges for robotics and automation engineering.pptx
ethical challenges for robotics and automation engineering.pptxethical challenges for robotics and automation engineering.pptx
ethical challenges for robotics and automation engineering.pptxRimyaF
 
Presentation on Artificial Intelligence
Presentation on Artificial IntelligencePresentation on Artificial Intelligence
Presentation on Artificial IntelligenceIshwar Bulbule
 
Response Paper Due Monday, February 6th Write an .docx
Response Paper  Due Monday, February 6th  Write an .docxResponse Paper  Due Monday, February 6th  Write an .docx
Response Paper Due Monday, February 6th Write an .docxronak56
 
How To Write Recommendation In Thesis - Amanda Rutherf
How To Write Recommendation In Thesis - Amanda RutherfHow To Write Recommendation In Thesis - Amanda Rutherf
How To Write Recommendation In Thesis - Amanda RutherfCasey Hudson
 
Autonomous weapon systems, quo vadis
Autonomous weapon systems, quo vadisAutonomous weapon systems, quo vadis
Autonomous weapon systems, quo vadisKaran Khosla
 

Similar to Autonomous Vehicles (20)

Kimethics in a world of robots 2013
Kimethics in a world of robots 2013Kimethics in a world of robots 2013
Kimethics in a world of robots 2013
 
Toward machines that behave ethically better than humans do
Toward machines that behave ethically better than humans doToward machines that behave ethically better than humans do
Toward machines that behave ethically better than humans do
 
Rights of Machine
Rights of MachineRights of Machine
Rights of Machine
 
Ethics within the_code_the_machine_the_o
Ethics within the_code_the_machine_the_oEthics within the_code_the_machine_the_o
Ethics within the_code_the_machine_the_o
 
Writing A Summary In 3 Steps
Writing A Summary In 3 StepsWriting A Summary In 3 Steps
Writing A Summary In 3 Steps
 
The Ethics of Artificial Intelligence
The Ethics of Artificial IntelligenceThe Ethics of Artificial Intelligence
The Ethics of Artificial Intelligence
 
Ielts Writing Sample Essay Task 1
Ielts Writing Sample Essay Task 1Ielts Writing Sample Essay Task 1
Ielts Writing Sample Essay Task 1
 
Full Course - Law and Regulation of Machine Intelligence - Bar Ilan Universit...
Full Course - Law and Regulation of Machine Intelligence - Bar Ilan Universit...Full Course - Law and Regulation of Machine Intelligence - Bar Ilan Universit...
Full Course - Law and Regulation of Machine Intelligence - Bar Ilan Universit...
 
Gianmarco Veruggio. Roboethics on Skolkovo Robotics
Gianmarco Veruggio. Roboethics on Skolkovo RoboticsGianmarco Veruggio. Roboethics on Skolkovo Robotics
Gianmarco Veruggio. Roboethics on Skolkovo Robotics
 
Ethics in Advanced Robotics Article Presentatio
Ethics in Advanced Robotics Article PresentatioEthics in Advanced Robotics Article Presentatio
Ethics in Advanced Robotics Article Presentatio
 
Robbie the robot goes (w)rong!
Robbie the robot goes (w)rong!Robbie the robot goes (w)rong!
Robbie the robot goes (w)rong!
 
Perspectives on Ethics of AI Computer Science∗ Benjamin .docx
Perspectives on Ethics of AI Computer Science∗  Benjamin .docxPerspectives on Ethics of AI Computer Science∗  Benjamin .docx
Perspectives on Ethics of AI Computer Science∗ Benjamin .docx
 
Perspectives on Ethics of AI Computer Science∗ Benjamin .docx
Perspectives on Ethics of AI Computer Science∗  Benjamin .docxPerspectives on Ethics of AI Computer Science∗  Benjamin .docx
Perspectives on Ethics of AI Computer Science∗ Benjamin .docx
 
The paper must have the following subheadings which is not include.docx
The paper must have the following subheadings which is not include.docxThe paper must have the following subheadings which is not include.docx
The paper must have the following subheadings which is not include.docx
 
Running head ROBOTIC SURGERY TECHNOLOGY1OPERATING SYSTE.docx
Running head ROBOTIC SURGERY TECHNOLOGY1OPERATING SYSTE.docxRunning head ROBOTIC SURGERY TECHNOLOGY1OPERATING SYSTE.docx
Running head ROBOTIC SURGERY TECHNOLOGY1OPERATING SYSTE.docx
 
ethical challenges for robotics and automation engineering.pptx
ethical challenges for robotics and automation engineering.pptxethical challenges for robotics and automation engineering.pptx
ethical challenges for robotics and automation engineering.pptx
 
Presentation on Artificial Intelligence
Presentation on Artificial IntelligencePresentation on Artificial Intelligence
Presentation on Artificial Intelligence
 
Response Paper Due Monday, February 6th Write an .docx
Response Paper  Due Monday, February 6th  Write an .docxResponse Paper  Due Monday, February 6th  Write an .docx
Response Paper Due Monday, February 6th Write an .docx
 
How To Write Recommendation In Thesis - Amanda Rutherf
How To Write Recommendation In Thesis - Amanda RutherfHow To Write Recommendation In Thesis - Amanda Rutherf
How To Write Recommendation In Thesis - Amanda Rutherf
 
Autonomous weapon systems, quo vadis
Autonomous weapon systems, quo vadisAutonomous weapon systems, quo vadis
Autonomous weapon systems, quo vadis
 

Autonomous Vehicles

  • 1. Running head: PHILOSOPHICAL AND EITHICAL IMPACTS ON AUTONOMOUS 1 Philosophical and Ethical Impacts On Autonomous Vehicles Corey Messer Liberty University
  • 2. PHILOSOPHICAL AND EITHICAL IMPACTS ON AUTONOMOUS VEHICLES 2 Abstract With vehicles becoming more automated each year, very soon completely self-driven cars knows as Autonomous Vehicles (AV) will be coexisting with human driven vehicles. With AV on the breaking edge of automotive technology, eventually a AV will come upon an unavoidable circumstance where human life will be loss. The correct action the AV chooses in such a circumstance is guided by the moral algorithm engineered into the AV. This moral algorithm, guided by public opinion, will instill different ethical and philosophical concepts. Among the possible ethical concepts Consequentialism, Deontology, and Virtue Ethics can play a major role in the creation of the moral algorithms. From the ethical concepts are different philosophers and philosophies such as, Jeremy Bentham and John Stuart Mill supporting Utilitarianism, Immanuel Kant a big proponent for Deontology, and Friedrich Nietzsche who fashioned Egoism, all these philosophers can persuade how automotive companies create their moral algorithm. Beside ethical and philosophical concepts, biblical and economical persuasion for a certain moral algorithm is discussed. With the help of public opinion, ethical and philosophical concepts, some automotive companies are now leaning towards a Utilitarian moral algorithm, but soon a moral algorithm for AV will be adopted and acceptable into today’s society.
  • 3. PHILOSOPHICAL AND EITHICAL IMPACTS ON AUTONOMOUS VEHICLES 3 Philosophical and Ethical Impacts On Autonomous Vehicles Within the last decade the automobile industry has made significant progress dealing with automation being integrated into vehicles. Currently cars have intelligent cruise control, parallel parking algorithms, automatic overtaking, lane guidance control and many other features. Within the next decade many vehicles will be equipped with self-driving capabilities. These vehicles are known as Autonomous Vehicles (AV), and they are the leading technology in today’s car industry. Companies such as Google, Tesla, Audi, and Delphi have created AV that are being driven on roads today. These AV in the near future will come upon an unavoidable circumstance, where a decision has to be made and human life will be loss. The correct action the AV chooses in such a circumstance is guided by the moral algorithm engineered into the AV. With the help from public opinion, philosophical, ethical biblical and economical understanding the correct moral algorithm for these AV can be determined. Literature Review Bonnefon et al’s (2015) article explored the science of experimental ethics dealing with public opinion on ethical decisions made by AV. Jean-Francois Bonnefon, and some colleagues at the Toulouse School of Economics in France, questioned several hundred workers on Amazon’s Mechanical Turk, a crowdsourcing internet market place polling service. The participants were given scenarios in which one or more pedestrians could be saved if an AV were to swerve into a barrier, killing its occupants or a pedestrian. Throughout the survey the number of pedestrians being saved were varied, and the participants were asked to imagine themselves as the occupants of the AV. The study found that in general people are comfortable with the idea that self-driving vehicles should be programmed to minimize the amount of death involved in an accident. 
Also, participants wished others to cruise in AV, more than they wanted to buy AV
  • 4. PHILOSOPHICAL AND EITHICAL IMPACTS ON AUTONOMOUS VEHICLES 4 vehicles themselves. Therefore, people are in favor of cars that sacrifice the occupants to save other lives, as long as they do not have to drive one themselves. Bonnefon and his colleagues admit that this is the beginning process of working through this moral maze. Other factors will have to be sorted out such as the nature of uncertainty and the assignment of blame or fault after an AV accident has happened. This study raises more questions such as the acceptability for an AV to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the AV verses the motorcyclist. Should different decisions be made when children are on board, since they have a longer lifespan then adults. If a manufactures offers different version of the moral algorithm, and a buyer knowingly chooses one of them, is the buyer to blame for the consequences of the algorithm’s decision? This paper by Bonnefon and his co-workers are shaping the ethics the will be engineering into future AV. Many articles have been written discussing different philosophies that could be used to create such a moral algorithm. Discussion One of these articles was written by MIT, raising the question on how an AV should be programmed to act in the event of an unavoidable accident. Should the AV minimize the loss of life, at the sacrifice of it’s occupants, or protect the occupants at all costs, or choose between these extremes at random? Also, the moral algorithm in AV will impact the way AV are accepted into society. Looking at Bonnefon’s research on public opinion, MIT proposed their own ethical AV case where a AV is headed toward a crowd of 10 people crossing the road, and the AV cannot stop in time, but can avoid killing 10 people by steering into the wall. This collision with the wall would kill the driver, and the occupants (MIT, 2015).
  • 5. PHILOSOPHICAL AND EITHICAL IMPACTS ON AUTONOMOUS VEHICLES 5 Ethical Concepts The action chosen by the AV will be based on different ethical and philosophical influences. Three possible ethical concepts that could be adopted would fall under the Normative Ethics. “Normative Ethics is a branch of ethics that establishes how things should or ought to be, how to value them, which things are good or bad, and which action are right or wrong”, Maston (2008). Also, Normative ethics creates a set of rules governing human conduct, and how humans have to act in a given situation. Consequentialism. The first of these ethical concepts is Consequentialism, which argues that the morality of an action is contingent on the action’s outcome or result Maston (2008). Therefore, the morally right action is the action that produces a good outcome. Within this concept is Utilitarianism, which is one option when creating a moral algorithm for the AV, because it will choose the right action that will lead to the most happiness for the greatest number of people. Also to be considered is Hedonism, similar to Utilitarianism as it aims to maximize the most amount of pleasure. Therefore, given the unpredictable situation the AV would choose the right action that would be pleasurable or most desired by the people. For instance the President of the United States were riding in an AV, his life would be saved over the life of the homeless man in the street, because this would be the pleasurable or desired action among the most amount of people. Deontology. The second ethical concept to be discussed is Deontology, which focuses on the action that is most right, being based off a set of duties. These duties could follow Isaac Asimov’s laws of robotics, where the action chosen by the moral algorithm would be right, because it was chosen out of a duty to follow the written rights or laws. 
Raul Rojas wrote a paper proposing four laws of robotic cars, modeled on Asimov's laws of robotics.
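Taken together, these laws (quoted in the paragraph that follows) form a strict priority ordering: a lower-numbered law always overrides a higher-numbered one. A minimal, purely illustrative sketch of how such an ordering might gate an AV's candidate maneuvers; the predicates and action names here are invented placeholders, not anything from Rojas's paper:

```python
# Hypothetical priority ordering over candidate maneuvers, in the
# spirit of Rojas's four laws. Each law is a placeholder predicate;
# a lower-priority law is enforced only when it does not conflict
# with (i.e., eliminate everything permitted by) the laws above it.
def choose(candidates, laws):
    for law in laws:
        satisfying = [a for a in candidates if law(a)]
        if satisfying:  # skip any law that no candidate can obey
            candidates = satisfying
    return candidates

# Placeholder predicates standing in for real perception/planning checks.
harms_no_human = lambda a: a != "hit_pedestrian"   # First Law
obeys_traffic = lambda a: a != "run_red_light"     # Second Law
obeys_passenger = lambda a: True                   # Third Law (no order given)
preserves_car = lambda a: a != "hit_wall"          # Fourth Law

laws = [harms_no_human, obeys_traffic, obeys_passenger, preserves_car]
print(choose(["hit_pedestrian", "run_red_light", "brake_hard"], laws))
# ['brake_hard']
```

The "except where such orders would conflict" clauses in the laws correspond to the conflict check in the loop: a lower law never overrides what a higher law has already ruled out.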
The science fiction writer Isaac Asimov wrote about worlds in which humans and robots coexisted. To control these intelligent, self-aware robots, Asimov wrote three laws by which they should be governed:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

With AV growing in popularity among manufacturers, Asimov's robotic laws can be adapted to cars. As Rojas writes:

1. A car may not injure a human being or, through inaction, allow a human being to come to harm.
2. A car must obey the traffic rules, except when they would conflict with the First Law.
3. A car must obey the orders given to it by human beings, except where such orders would conflict with the First or Second Laws.
4. A car must protect its own existence as long as such protection does not conflict with the First, Second, or Third Laws.

These robotic laws give way to a moral algorithm influenced by natural rights theory, advanced by Thomas Hobbes and John Locke, which holds that humans have absolute, natural rights. These natural rights would be the basis of the moral algorithm, and the AV would save the most lives possible, and the longest lives possible, thereby being persuaded to save the children in the car versus the adults in the street. Immanuel Kant is also a large proponent of ideas that fall under deontology, where duty determines the action: if the AV were programmed to save the people in the car, and that was its duty, then that would be the right action to choose.

Virtue Ethics. The last ethical concept under normative ethics is virtue ethics.
Virtue ethics is based on the character of a virtuous person: the action chosen would be the action a good person would do, or, as many put it, the action your average mother would tell you to do. With this philosophy instilled in the moral algorithm of an AV, the virtuous action would be saving the children in the car versus the adults in the street. Plato and Aristotle first advocated this philosophy as one that can lead to happiness and the good life.

Philosophical Concepts

The ethical concepts above are born of different philosophies and philosophers: Jeremy Bentham and John Stuart Mill, who brought utilitarianism to its full formation; Immanuel Kant, who supported deontology; and Friedrich Nietzsche, who upheld egoism. Each of these philosophers can play a role in the creation of a moral algorithm for AV.

Jeremy Bentham and John Stuart Mill. Utilitarianism holds that an action is right if it leads to the most happiness for the greatest number of people, where happiness means creating the maximum amount of pleasure and the minimum amount of pain. A moral algorithm that determines the greatest happiness in a given unpredictable circumstance would follow Bentham's and Mill's utilitarian mindset. This might look like the AV crashing into a parked car on the side of the street rather than hitting a mother and daughter who stepped into the street without looking. Following utilitarian philosophy, the moral algorithm would take into account life expectancy, lifespan, and the number of lives taken when determining the proper course of action. In another example, a utilitarian algorithm would have the AV crash into a building rather than hit a motorcyclist, because the probability of the AV's passenger surviving the crash is greater than the motorcyclist's. When everyone cannot be happy, Immanuel Kant's philosophy can be introduced into the moral algorithm.
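The weighting just described (number of lives, remaining lifespan, and probability of survival) can be made concrete as an expected-life-years cost that the algorithm minimizes. The probabilities and life expectancies below are invented purely to show the shape of such a calculation, not taken from any real system:

```python
# Hypothetical expected-life-years cost for a candidate maneuver.
# Each person at risk is a pair: (probability_of_death, remaining_life_years).
def expected_life_years_lost(people_at_risk):
    return sum(p_death * years for p_death, years in people_at_risk)

# Invented numbers for the motorcycle example in the text.
swerve_into_wall = [(0.2, 40.0)]  # occupant, well protected by the car
hit_motorcyclist = [(0.9, 35.0)]  # motorcyclist, little protection

costs = {
    "swerve_into_wall": expected_life_years_lost(swerve_into_wall),
    "hit_motorcyclist": expected_life_years_lost(hit_motorcyclist),
}
print(min(costs, key=costs.get))  # swerve_into_wall
```

Because the occupant's probability of death is much lower, the expected life-years lost by swerving (0.2 × 40 = 8) is far smaller than by hitting the motorcyclist (0.9 × 35 = 31.5), so the utilitarian cost function chooses the wall.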
In Schinzinger and Martin's Introduction to Engineering Ethics, the engineering code of ethics holds serving and protecting the public paramount (2000). For AV, public safety is the highest priority, so it is logical that the moral algorithm would choose the safety of the greatest number of people. Holding the safety of the most people paramount falls into the utilitarian moral algorithm, which is the basis many automotive companies use when designing their moral algorithms.

Immanuel Kant. Deontology looks to a person's motives to determine the right or wrong action. "To act in the morally right way, people must act according to duty, and the motives of a person who carries out the action that makes them right or wrong," writes Maston (2008). This philosophy, upheld by Immanuel Kant, might be more acceptable to someone buying an AV, because the AV would be in right standing when saving its passengers: its motives in doing so were right and pure. This philosophy would also follow Isaac Asimov's laws of robotics, where an AV's actions are right because it carried them out from a pure motive of obeying the laws.

Friedrich Nietzsche. When automobile companies want to make AV marketable, they might fold some egoist ideas into their moral algorithms. Friedrich Nietzsche was a large proponent of egoism, which holds that the right action is the one that maximizes the good for oneself. This philosophy might be very attractive to AV buyers because they can be assured that their lives will not be lost; in any unpredictable circumstance, the moral algorithm would hold the safety of the passengers paramount.
On the other hand, the downfall of this philosophy is whether the passengers can live with the choices the AV makes. For example, with an egoist ideology, the AV would run over the children in the street rather than crash into the parked cars along the side of the street, because that choice is the safest option for the passengers on board. If that were the action chosen, could the passengers live with it? Each of these philosophers can influence how automobile companies write their moral algorithms.

Biblical Persuasion

From the biblical context, all human life is important and held to the same value. "There is neither Jew nor Greek, there is neither slave nor free, there is no male and female, for you are all one in Christ Jesus" (Galatians 3:28, ESV). "So Peter opened his mouth and said: 'Truly I understand that God shows no partiality'" (Acts 10:34, ESV). When determining the moral algorithm, from a biblical perspective, sacrificing those whom our society views as weak would not be an option. God sees everyone as equal; the homeless man in the street and the child in the AV are held to the same value. This rules out sacrificing those deemed less worthy of life in order to save those of higher social priority.

Economic Persuasion

Whatever moral algorithm automotive companies choose, the public will also be persuaded economically to buy an AV. With an AV there will be fewer accidents, injuries, and fatalities, which will save owners money. Google has reported 11 accidents over the past six years, and the Google car was never at fault. AV owners will also receive fewer traffic tickets, because an AV will not exceed the speed limit or run through stop signs the way a human driver can, saving buyers the money and hassle of handling traffic violations. With fewer accidents and tickets, the need for insurance coverage will decrease, so AV owners will save on insurance as well. Insurance for the human-driven Tesla Model S costs $1,942 per year, versus $388 per year for the self-driving Model S (Read, 2015). Over ten years, with yearly compounding at a 2% interest rate, owners of a self-driving Tesla Model S could save over $17,000 in insurance costs alone; switching to continuous compounding raises that figure only slightly. According to Read (2015), insurance for an AV would on average be around $1,000 per year cheaper than for a human-driven vehicle. From an economic standpoint alone, AV will be very attractive to buyers looking to save money.

Conclusion and Final Thoughts

When determining the correct moral algorithm for AV, there is no single right or wrong answer, but public opinion will help determine a moral algorithm that is acceptable to society. With help from Bonnefon and the science of experimental ethics, AV will have moral algorithms that draw from many different ethical and philosophical ideas, and public acceptance will be the final arbiter of how those algorithms are created. In the near future AV will coexist with human-driven vehicles, and Asimov's robotic laws might not be as fictional as once thought.
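As a postscript to the economic discussion above, the ten-year insurance saving can be reproduced with a future-value-of-an-annuity calculation: an annual saving of $1,942 − $388 = $1,554, compounded yearly at 2% for ten years.

```python
# Check the ~$17,000 ten-year insurance saving cited in the text.
annual_saving = 1942 - 388  # $1,554 per year (Read, 2015 figures)
rate, years = 0.02, 10

# Future value of an ordinary annuity: PMT * ((1 + r)^n - 1) / r
future_value = annual_saving * ((1 + rate) ** years - 1) / rate
print(round(future_value))  # 17016
```

Switching to continuous compounding changes the total by only a few hundred dollars, which is why the text's estimate is stated as "over $17,000" rather than a larger figure.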
References

Bonnefon, J., Shariff, A., & Rahwan, I. (2015). Autonomous vehicles need experimental ethics: Are we ready for utilitarian cars?

Maston, L. (2008). Ethics - By branch/doctrine - The basics of philosophy. Retrieved November 5, 2015.

MIT Technology Review. (2015, October 22). Why self-driving cars must be programmed to kill. Retrieved November 3, 2015.

Read, R. (2015, August 13). How much cash will an autonomous car save you? More than $1,000 per year. Retrieved November 5, 2015.

Rojas, R. (n.d.). I, car: The four laws of robotic cars. Retrieved November 4, 2015.

Schinzinger, R., & Martin, M. (2000). Introduction to engineering ethics. Boston: McGraw Hill.