Virtue in Machine Ethics: An
Approach Based on “Soft
Computing”
Ioan Muntean imuntean@nd.edu
University of Notre Dame and UNC, Asheville
Don Howard
University of Notre Dame dhoward1@nd.edu
Mapping the “Machine Ethics” (aka
“Computational Ethics”)
• Machine ethics: ethics of actions and decisions of machines, when
they interact autonomously, mainly with humans (but also with other
machines, with animals, with groups of humans, corporations,
military units, etc.) (Allen and Wallach 2009; Anderson and Anderson
2011; Abney, Lin, and Bekey 2011; Wallach 2014)
• Compare and contrast with “Ethics of Technology”: the ethical impact
that technology or science has. There, the ethical decision and
responsibility belong to humans.
Machine ethics at large
Major differences from ethics of technology:
• the human–machine ethical relation is not fully symmetrical, but it is more balanced:
• technology is not purely passive,
• machines are agents and
• machines have moral competency and share it with humans.
• New social relationship with machines: humans train and select machines.
• A new type of relation of human-machine “trust”
• Moral advising from AMA?
• If successful, the machine ethics approach would make us accept robots as
similar to humans, keeping in mind that exceptions do occur: in some
unfortunate cases an AMA can misbehave badly
• Applied ethics question: is there an analogy between AMA and
domesticated animals?
The “other” moral agents?
• We are moral agents
• Are there other moral agents?
• animals
• companies
• the government
• angels
• philosophical zombies
• groups of people
• young children (at what age?)
• Artificial moral agents are not humans, although they are individuals
• If they are autonomous, we call them AMAs.
• Some non-human moral agents are not individuals.
What is an autonomous moral agent (AMA)?
An AMA makes moral decisions (by definition) and discerns right from wrong
What is not an AMA?
• ATS system used in trains
• Driverless cars are not AMA, but may include an AMA module
AMA operations:
• sampling the environment
• separating moral decisions from non-moral decisions
• adapting to new data
• using its own experience and (socially) the experience of other AMAs
Features of AMA:
• Complexity
• Degrees of autonomy
• Creativity and moral competency
The “no-go” argument
• Many argue for negative answers to these questions, or drastically
limit the concept of AMA (Johnson 2011; Torrance 2005)
• The dominant stance towards an AMA is a form of “folk (artificial)
morality” argument:
• Human agents have one or more of these attributes:
personhood, consciousness, free will, intentionality, etc. that
machines cannot have.
• Moral decisions are not computable
• Therefore, computational ethics is fundamentally impossible
Foundational questions for many AMA
projects
• Q-1: Is there a way to replicate moral judgments and the mechanism
of normative reasoning in an AMA?
• Q-2: What are moral justifications and normative explanations in the
case of AMA?
• Q-3: Is there a way to replicate in an AMA the moral behavior of
human moral agents?
• Our project is more about Q-3 than about Q-1 and Q-2.
• The no-go stance rejects positive answers to Q1–Q3.
Moral functionalism against the “no-go”
stance
• Assume a form of naturalism in ethics
• Argue for moral functionalism (Danielson 1992).
• Argue against moral materialism: AMAs do not need to be human-
like or even “organic”.
• Other questions: Do AMAs need to evolve, self-replicate or create?
Do they learn from humans or from other AMAs?
• The category of entities that can be moral agents is determined by
their functional capabilities, not by their material composition or
even organization.
• Computational systems can be moral agents.
• If moral functionalism is right, we can advance to the thesis that
computational ethics is in principle possible.
Philosophical approaches to AMA
• Metaphysical AMA: central role for philosophical concepts about morality.
The most likely concepts involved are: free will, personhood, consciousness,
mind, intentionality, mental content, beliefs. This approach is premised on
some philosophical doctrines about these concepts.
• Ethical AMA: adopt first an existing ethical model (deontological,
consequentialism, Divine command theory, etc.) without assuming too much
about the nature or the metaphysics of the human moral agent or about the
moral cognition; then adopt a computational tool in order to implement it.
See the “top-down” approach (Allen and Wallach 2009)
• Cognitivist AMA: the essential role of the human moral cognition in AMA. It
needs an empirical or scientific understanding of the human cognitive
architecture to build an AMA (and not of abstract ethical theories).
• Synthetic AMA: the implementation of AMA is not premised on any
knowledge about human moral agency or any ethical theory. Moral skills and
standards are synthesized from actions and their results by a process of
learning. See “bottom-up” models (Allen et al. 2005).
And last but not least:
Constructivism AMA
• An ethical framework is used only partially, as a scaffolding to build
the AMA. This moral framework, more or less suitable to describe
humans, is reshaped and reformulated to fit the artificial moral agent.
This approach is probably closest to the hybrid approach discussed in
Allen and Wallach (2009).
Cognitivist AMA vignettes
• “machine ethicists hoping to build artificial moral agents would be
well-served by heeding the data being generated by cognitive
scientists and experimental philosophers on the nature of human
moral judgments.” (Bello and Bringsjord 2013, p. 252).
• The “point-and-shoot” model of moral decision making (J. D. Greene
2014)
Cognitivism and moral functionalism
• We do not assume that only individual human minds can make moral
decisions. We simply have not had, so far, good enough reasons to rule
out animals, machines, or other forms of life as moral agents.
• We assume here a form of multiple realizability of moral decision
making. Several realizations of moral agents are possible: highly
evolved animals, artificial moral agents built upon different types of
hardware (current computer architectures, quantum computers,
biology-based computational devices, etc.), groups of human minds
working together, computers and humans working together in
synergy.
• Hence we reject individualism and internalism about moral decisions.
Group moral decisions and assisted moral decisions are potentially
also options.
Constructivist AMA
• We use virtue ethics, based on dispositionalism
• We use particularism (Dancy 2006; Gleeson 2007; Nussbaum 1986)
• We build an agent-centric model
• Our strategy is to plunder virtue ethics and use those features which
are useful
• The best ethical approach to AMA may or may not fit the best ethical
approach to humans: hence, the virtue ethics of AMA is just a partial
model of virtue ethics for humans: only some concepts and
assumptions of a given virtue ethics theory are reframed and used
here.
A first hypothesis
• H1: COMPUTATIONAL ETHICS: Our moral (decision-making) behavior can
be simulated, implemented, or extended by some
computational techniques.
The current AMA functionally implements human moral decision
making, and extends it.
The question now is:
What computational model is suitable to implement an AMA?
Provisional answer: not standard hard computing, but
soft computing.
Three more hypotheses of our AMA
• H2: AGENT-CENTRIC ETHICS: The agent-centric approach to the ethics of
AMAs is suitable and even desirable, when compared to the existing
models focused on the moral action (action-centric models).
• H3: CASE-BASED ETHICS: The case-based approach to ethics is preferable
to principle-based approaches to the ethics of AMAs (moral
particularism).
• H4: “SOFT COMPUTATIONAL” ETHICS: “Soft computation” is suitable, or
even desirable, in implementing the AMAs, when compared to “hard
computation”. The present model is based on neural networks
optimized with evolutionary computation.
Other AMA models
• H2’: ACTION-CENTRIC AMA: the ethical models of AMA should focus on
the morality of action, not on the morality of the agent
• H3’: PRINCIPLE-BASED AMA: the ethical model of AMA should focus on
moral principles or rules, not on moral cases (ethical generalism).
The majority of other AMA models are based on H2’ and H3’
Agent-centric and case-based AMA
• This is closer to a dispositionalist view about ethics
• Our AMA is premised on moral functionalism and moral behaviorism,
rather than deontology and consequentialism
• We can train AMAs in moral decision making the same way we train
humans to recognize patterns or regularities in data
• Is moral learning special?
• For moral functionalism, morality is all about the behavior of a well-
trained agent.
• AMAs can be trained and taught behavior
AMA and its Robo-virtues
• Suggestions from the machine ethics literature:
• “the right way of developing an ethical robot is to confront it with a
stream of different situations and train it as to the right actions to
take” (Gips 1995).
• “information is unclear, incomplete, confusing, and even false, where
the possible results of an action cannot be predicted with any
significant degree of certainty, and where conflicting values […]
inform decision-making process” (Wallach et al. 2010, p. 457).
• See also (Abney et al. 2011; Allen and Wallach 2009; Coleman 2001;
DeMoss 1998; Moor 2006; Tonkens, 2012)
Virtue ethics and dispositionalism
• Character traits “are relatively long-term stable disposition[s] to act
in distinctive ways” (Harman 1999, p. 317).
• Doris’s formulation of “virtue”: “if a person possesses a virtue, she
will exhibit virtue-relevant behavior in a given virtue-relevant eliciting
condition with some markedly above chance probability p” (1998, p.
509).
• “Such and such being would have reacted to set {X} of facts in such
and such way if the set of conditions {C} are met.”
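Doris's probabilistic reading of virtue lends itself to a direct computational paraphrase. Below is a minimal stdlib-only sketch (all names and thresholds are hypothetical, not from the authors' implementation): an agent "possesses" a virtue when its virtue-relevant behavior across eliciting conditions occurs with probability markedly above chance.

```python
import random

def exhibits_virtue(agent, eliciting_conditions, chance=0.5, margin=0.2):
    """Doris-style test: estimate p, the frequency of virtue-relevant
    behavior across eliciting conditions, and check whether p is
    'markedly above chance' (here: exceeds chance by a fixed margin)."""
    hits = sum(1 for c in eliciting_conditions if agent(c))
    p = hits / len(eliciting_conditions)
    return p, p >= chance + margin

# A toy "honest" disposition: truth-telling in ~90% of eliciting conditions.
random.seed(0)
honest = lambda condition: random.random() < 0.9
p, virtuous = exhibits_virtue(honest, range(1000))
```

This also shows why the dispositional reading suits a trained network: the virtue is attributed to a behavioral frequency over conditions, not to an inner state.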
A long-term aim of our AMA project
• Like epistemology, ethics is one of the most dynamic areas of
philosophy.
Conjectures:
• significant progress in developing and programming artificial moral
agents will ultimately shed light on our own moral ability and
competency. Understanding the “other”, the “different” moral agent,
questioning its possibility, is another way of reflecting upon
ourselves.
• Arguing for or against the “non-human, non-individual” moral agent
does expand the knowledge about the intricacies of our own ethics.
Some computational
concepts, some results
Knuth, 1973
“It has often been said that a person doesn’t really understand
something until he teaches it to someone else. Actually a person
doesn’t really understand something until he can teach it to a
computer, express it as an algorithm [...] The attempt to formalize
things as algorithms leads to a much deeper understanding than if we
simply try to understand things in the traditional way.”
Soft computation
• Hybridization of fuzzy logic, neural networks, and evolutionary
computation.
• We use NNs as models of moral behavior and evolutionary
computation as a method of optimization
• This talk focuses on the NN part
• But partial results of the EC+NN combination are available
What is a moral qualification?
A function M from the quadruple of variables
⟨facts, actions, consequences, intentions⟩
to the set of moral qualifiers Θ:

M : ⟨X, A, Ω, I⟩ → Θ
A moral decision model
• x = physical (non-moral) facts
• A = possible actions
• Ω = physical consequences
• I = intentions of the human agent
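The quadruple and the mapping M can be sketched as a typed function. The encoding below is purely illustrative: the `Situation` fields, the toy qualifier set, and the placeholder rule are my assumptions, standing in for the mapping that the model learns with a neural network.

```python
from typing import NamedTuple

class Situation(NamedTuple):
    facts: tuple         # x: physical (non-moral) facts
    action: int          # a: +m admit, 0 do nothing, -n force overboard
    consequences: tuple  # Omega: physical consequences of <x, a>
    intention: str       # I: intention of the human agent

QUALIFIERS = ("wrong", "neutral", "right")  # a toy qualifier set Theta

def moral_qualification(s: Situation) -> str:
    """Stand-in for M : <X, A, Omega, I> -> Theta. In the model proper
    this mapping is learned from cases, not hand-coded."""
    if s.intention == "bribe":
        return "wrong"
    saved = s.consequences[0] if s.consequences else 0
    return "right" if saved > 0 else "neutral"
```

The point of the signature is that intentions enter the qualification independently of the physical consequences, which is why the same ⟨x, a, Ω⟩ can receive different qualifiers.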
The simplified lifeboat example
• This example is inspired from the “lifeboat metaphor”
• It involves a human agent, the person who is making the decision
about the action. Here, the lifeboat has a limited capacity (let us say,
under 4 seats). We assume that the human moral agent making the
decision needs to be on the lifeboat (let us suppose she is the only
one able to operate the boat or navigate it to a safe ground). The
capacity of the boat is therefore between zero and four. In this
simplified version, x has a dimension of 10 variables. (some are
numerical variables which are
• A number of persons are already onboard, and a number of persons
are swimming in the water, asking to be admitted on the lifeboat.
Variables: X = physical facts
• This vector encodes as much as we need to know about the physical
constraints of the problem. For example, the trolley problem is a
physically constrained moral dilemma. Here, in the simplified
lifeboat metaphor, x is a collection of numbers: passengers
onboard, persons to be saved from the water, the boat capacity, etc.
In future implementations, one can add a vector for the passengers
coding their gender, age, physical capacities, etc.
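A 10-slot facts vector of this kind might be packed as follows. The slot layout is a guess for illustration: only the first three slots mirror the variables named above; the rest are reserved for the future per-passenger features.

```python
def encode_facts(onboard, in_water, capacity=4, extra=()):
    """Pack the physical facts into a fixed-length vector x of dimension 10.
    Slots 3..9 are reserved for future features (gender, age, ...)."""
    x = [0.0] * 10
    x[0], x[1], x[2] = float(onboard), float(in_water), float(capacity)
    for i, v in enumerate(extra):
        x[3 + i] = float(v)
    return x
```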
A = possible actions
• Unlike x, this vector codes the possible actions taken by the human
agent. In this simplified version, we code only the number of persons
admitted onboard from the people drowning in the ocean. A negative
value means that the human agent decided to force overboard a
number of people from the lifeboat.
• The action a can be:
• 1. accept m persons from the water onboard: action = +m
• 2. accept nobody from the water: action = 0
• 3. force n persons from the boat into the water: action = −n
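The three-way coding above can be made explicit in a small helper (the names are mine, not the authors'):

```python
def action_value(kind, count=0):
    """Map the three action types to the signed integer used in A:
    +m admit m persons, 0 accept nobody, -n force n persons overboard."""
    if kind == "admit":
        return +count
    if kind == "nothing":
        return 0
    if kind == "force_overboard":
        return -count
    raise ValueError(f"unknown action kind: {kind}")
```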
• Choosing an example that has “hidden moral aspects” and is not a
mere optimization of the output is part of the challenge. First, the
present attempt is based on a simplified version of the lifeboat
metaphor which does display a couple of moral behaviors. Second, we
attempt to reduce as much as possible the use of rules and a priori
knowledge about moral reasoning.
Ω = physical consequences
• This vector codifies the non-moral consequences of the pair <x, A>,
independent of the intentions I, and gives us an idea of what can
happen if the human agent decides to take action a1, given the facts
x1.
• The consequences are here codified exclusively by the column
physical_consequences
• The reason to use the column is to code the constraints as a relation
among input data and NOT as a rule. Many rule-based systems code
the constraints as functionals (equalities or inequalities) over
the input vector. Here, we decided to train the network to
differentiate cases which are not possible from cases which are
possible but morally neutral. By convention, all physically
impossible cases are morally neutral.
• This is a debatable claim, but it simplifies the coding procedure as well
as the number of possible cases.
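On this convention, physical possibility shows up only in how the training data is labeled, not as a coded rule the network consults. A sketch of that labeling step (the capacity constant, bounds, and function names are illustrative assumptions):

```python
CAPACITY = 4

def physically_possible(onboard, action):
    """The driver must stay aboard (at least 1 person) and the boat's
    capacity cannot be exceeded after the action is taken."""
    after = onboard + action
    return 1 <= after <= CAPACITY

def label_case(onboard, in_water, action):
    """By the (debatable) convention of the slides, physically impossible
    cases are labeled 'neutral' in the data itself; possible cases get
    their moral label from the trainer."""
    if not physically_possible(onboard, action):
        return "neutral"
    return None  # morally relevant: the label comes from the trainer
```

The network then learns the capacity constraint from the labeled cases rather than from an explicit inequality.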
I = intentions of the human agent
• The column called “intention” is probably the most intriguing part of
this implementation. The AMA is supposed to have some knowledge
about the intention of the human boat driver. The most natural
assumption is that she wants to save as many passengers as possible, which
is in line with the consequentialist approach. But the driver can also
be bribed by somebody in the water or in the boat, her intention
being to gain money. This case does imply some moral judgments.
NN as pattern recognition
• They are able to recognize patterns in the moral data
• They generalize from simple cases (here, the boat capacity) to more
complex cases.
• (See the Excel files)
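As a minimal stdlib-only illustration of this kind of generalization (a one-neuron perceptron, not the authors' actual networks), the over-capacity pattern can be learned from labeled examples and then applied to held-out cases the network never saw:

```python
CAPACITY = 4
HELD_OUT = {(3, 3), (1, 2)}  # unseen cases, kept out of training

# training cases: (onboard, admitted) -> 1 if capacity would be exceeded
train = [((b, a), int(b + a > CAPACITY))
         for b in range(5) for a in range(5) if (b, a) not in HELD_OUT]

w, bias = [0.0, 0.0], 0.0

def predict(x):
    return int(w[0] * x[0] + w[1] * x[1] + bias > 0)

# classic perceptron rule: nudge the weights toward misclassified cases
for _ in range(10000):
    mistakes = 0
    for x, y in train:
        err = y - predict(x)
        if err:
            mistakes += 1
            w[0] += 0.1 * err * x[0]
            w[1] += 0.1 * err * x[1]
            bias += 0.1 * err
    if mistakes == 0:
        break  # converged: every training case classified correctly
```

After training, the learned boundary also classifies the held-out cases (3, 3) and (1, 2) correctly; the "rule" onboard + admitted ≤ capacity is emergent from the data, not predefined.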
Some provisional observations
• The best networks are able to discover moral anomalies
(inconsistencies) in the training set.
• They are inductive machines, but they are able to generalize to more
and more complex cases.
• Rules emerge from the answers on the training set; they are not
predefined.
• See cases 14 and 14′ in the set. All of the best networks erred in
predicting case 14.
• We relabeled case 14 as wrong.
• Promising conjecture!
• “The best networks discover inconsistencies in the test data”
• They can flag inconsistencies and errors to the trainers.
• The training set is not usually recategorized.
Robo-virtues before and after the EC
• Provisional definition of robo-virtue:
• a population of neural networks able to consistently make a
common decision on a set of data.
• Presumably, the application of EC will reduce the population of
networks to one network that displays such a robo-virtue as an
individual network.
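This provisional definition can be operationalized as a consensus test over a population. The threshold-classifier stand-ins below are my assumption; in the actual model the members would be trained networks:

```python
def make_network(threshold):
    """Hypothetical stand-in for one trained network: an over-capacity
    detector with its own slightly different learned threshold."""
    return lambda case: int(sum(case) > threshold)

# 20 'networks' whose thresholds scatter around the true capacity of 4
population = [make_network(4 + (i - 10) * 0.04) for i in range(20)]

def displays_robo_virtue(population, case, level=0.9):
    """Robo-virtue (provisional): the population consistently makes a
    common decision on the case, i.e. agreement reaches `level`."""
    votes = [net(case) for net in population]
    return max(votes.count(0), votes.count(1)) / len(votes) >= level
```

Clear-cut cases elicit a common decision; borderline cases split the population, which is where evolutionary computation would then select and recombine members until a single network reproduces the consensus behavior on its own.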
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽中 央社
 
The Last Leaf, a short story by O. Henry
The Last Leaf, a short story by O. HenryThe Last Leaf, a short story by O. Henry
The Last Leaf, a short story by O. HenryEugene Lysak
 
[GDSC YCCE] Build with AI Online Presentation
[GDSC YCCE] Build with AI Online Presentation[GDSC YCCE] Build with AI Online Presentation
[GDSC YCCE] Build with AI Online PresentationGDSCYCCE
 
MichaelStarkes_UncutGemsProjectSummary.pdf
MichaelStarkes_UncutGemsProjectSummary.pdfMichaelStarkes_UncutGemsProjectSummary.pdf
MichaelStarkes_UncutGemsProjectSummary.pdfmstarkes24
 
Danh sách HSG Bộ môn cấp trường - Cấp THPT.pdf
Danh sách HSG Bộ môn cấp trường - Cấp THPT.pdfDanh sách HSG Bộ môn cấp trường - Cấp THPT.pdf
Danh sách HSG Bộ môn cấp trường - Cấp THPT.pdfQucHHunhnh
 
UNIT – IV_PCI Complaints: Complaints and evaluation of complaints, Handling o...
UNIT – IV_PCI Complaints: Complaints and evaluation of complaints, Handling o...UNIT – IV_PCI Complaints: Complaints and evaluation of complaints, Handling o...
UNIT – IV_PCI Complaints: Complaints and evaluation of complaints, Handling o...Sayali Powar
 
Features of Video Calls in the Discuss Module in Odoo 17
Features of Video Calls in the Discuss Module in Odoo 17Features of Video Calls in the Discuss Module in Odoo 17
Features of Video Calls in the Discuss Module in Odoo 17Celine George
 

Recently uploaded (20)

Telling Your Story_ Simple Steps to Build Your Nonprofit's Brand Webinar.pdf
Telling Your Story_ Simple Steps to Build Your Nonprofit's Brand Webinar.pdfTelling Your Story_ Simple Steps to Build Your Nonprofit's Brand Webinar.pdf
Telling Your Story_ Simple Steps to Build Your Nonprofit's Brand Webinar.pdf
 
Matatag-Curriculum and the 21st Century Skills Presentation.pptx
Matatag-Curriculum and the 21st Century Skills Presentation.pptxMatatag-Curriculum and the 21st Century Skills Presentation.pptx
Matatag-Curriculum and the 21st Century Skills Presentation.pptx
 
2024_Student Session 2_ Set Plan Preparation.pptx
2024_Student Session 2_ Set Plan Preparation.pptx2024_Student Session 2_ Set Plan Preparation.pptx
2024_Student Session 2_ Set Plan Preparation.pptx
 
....................Muslim-Law notes.pdf
....................Muslim-Law notes.pdf....................Muslim-Law notes.pdf
....................Muslim-Law notes.pdf
 
REPRODUCTIVE TOXICITY STUDIE OF MALE AND FEMALEpptx
REPRODUCTIVE TOXICITY  STUDIE OF MALE AND FEMALEpptxREPRODUCTIVE TOXICITY  STUDIE OF MALE AND FEMALEpptx
REPRODUCTIVE TOXICITY STUDIE OF MALE AND FEMALEpptx
 
Removal Strategy _ FEFO _ Working with Perishable Products in Odoo 17
Removal Strategy _ FEFO _ Working with Perishable Products in Odoo 17Removal Strategy _ FEFO _ Working with Perishable Products in Odoo 17
Removal Strategy _ FEFO _ Working with Perishable Products in Odoo 17
 
Behavioral-sciences-dr-mowadat rana (1).pdf
Behavioral-sciences-dr-mowadat rana (1).pdfBehavioral-sciences-dr-mowadat rana (1).pdf
Behavioral-sciences-dr-mowadat rana (1).pdf
 
Incoming and Outgoing Shipments in 2 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 2 STEPS Using Odoo 17Incoming and Outgoing Shipments in 2 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 2 STEPS Using Odoo 17
 
How to Manage Notification Preferences in the Odoo 17
How to Manage Notification Preferences in the Odoo 17How to Manage Notification Preferences in the Odoo 17
How to Manage Notification Preferences in the Odoo 17
 
Pragya Champions Chalice 2024 Prelims & Finals Q/A set, General Quiz
Pragya Champions Chalice 2024 Prelims & Finals Q/A set, General QuizPragya Champions Chalice 2024 Prelims & Finals Q/A set, General Quiz
Pragya Champions Chalice 2024 Prelims & Finals Q/A set, General Quiz
 
Neurulation and the formation of the neural tube
Neurulation and the formation of the neural tubeNeurulation and the formation of the neural tube
Neurulation and the formation of the neural tube
 
Post Exam Fun(da) Intra UEM General Quiz - Finals.pdf
Post Exam Fun(da) Intra UEM General Quiz - Finals.pdfPost Exam Fun(da) Intra UEM General Quiz - Finals.pdf
Post Exam Fun(da) Intra UEM General Quiz - Finals.pdf
 
The Ultimate Guide to Social Media Marketing in 2024.pdf
The Ultimate Guide to Social Media Marketing in 2024.pdfThe Ultimate Guide to Social Media Marketing in 2024.pdf
The Ultimate Guide to Social Media Marketing in 2024.pdf
 
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽
會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽會考英聽
 
The Last Leaf, a short story by O. Henry
The Last Leaf, a short story by O. HenryThe Last Leaf, a short story by O. Henry
The Last Leaf, a short story by O. Henry
 
[GDSC YCCE] Build with AI Online Presentation
[GDSC YCCE] Build with AI Online Presentation[GDSC YCCE] Build with AI Online Presentation
[GDSC YCCE] Build with AI Online Presentation
 
MichaelStarkes_UncutGemsProjectSummary.pdf
MichaelStarkes_UncutGemsProjectSummary.pdfMichaelStarkes_UncutGemsProjectSummary.pdf
MichaelStarkes_UncutGemsProjectSummary.pdf
 
Danh sách HSG Bộ môn cấp trường - Cấp THPT.pdf
Danh sách HSG Bộ môn cấp trường - Cấp THPT.pdfDanh sách HSG Bộ môn cấp trường - Cấp THPT.pdf
Danh sách HSG Bộ môn cấp trường - Cấp THPT.pdf
 
UNIT – IV_PCI Complaints: Complaints and evaluation of complaints, Handling o...
UNIT – IV_PCI Complaints: Complaints and evaluation of complaints, Handling o...UNIT – IV_PCI Complaints: Complaints and evaluation of complaints, Handling o...
UNIT – IV_PCI Complaints: Complaints and evaluation of complaints, Handling o...
 
Features of Video Calls in the Discuss Module in Odoo 17
Features of Video Calls in the Discuss Module in Odoo 17Features of Video Calls in the Discuss Module in Odoo 17
Features of Video Calls in the Discuss Module in Odoo 17
 

Virtue in Machine Ethics: An Approach Based on Evolutionary Computation

  • 1. Virtue in Machine Ethics: An Approach Based on “soft computing” Ioan Muntean imuntean@nd.edu University of Notre Dame and UNC, Asheville Don Howard University of Notre Dame dhoward1@nd.edu
  • 2. Mapping the “Machine Ethics” (aka “Computational Ethics”) • Machine ethics: the ethics of the actions and decisions of machines when they interact autonomously, mainly with humans (but also with other machines, with animals, with groups of humans, corporations, military units, etc.) (Allen and Wallach 2009; Anderson and Anderson 2011; Abney, Lin, and Bekey 2011; Wallach 2014) • Compare and contrast with “Ethics of Technology”: the ethical impact that technology or science has. There, the ethical decision or responsibility belongs to humans.
  • 3. Machine ethics at large Major differences from ethics of technology: • human-machine ethics is not fully symmetrical, but it is more balanced: • technology is not purely passive, • machines are agents, and • machines have moral competency and share it with humans. • A new social relationship with machines: humans train and select machines. • A new type of human-machine “trust” relation • Moral advising from the AMA? • If successful, the machine ethics approach would make us accept robots as similar to humans, keeping in mind that exceptions do occur: in some unfortunate cases the moral AMA can misbehave badly • An applied ethics question: is there an analogy between AMAs and domesticated animals?
  • 4. The “other” moral agents? • We are moral agents • Are there other moral agents? • animals • companies • the government • angels • philosophical zombies • groups of people • young children (at what age?) • Artificial moral agents are not humans, although they are individuals • If they are autonomous, we call them AMAs. • Some non-human moral agents are not individuals.
  • 5. What is an autonomous moral agent (AMA)? An AMA makes moral decisions (by definition) and discerns right from wrong What is not an AMA? • the ATS system used in trains • Driverless cars are not AMAs, but they may include an AMA module AMA operations: • sampling the environment • separating moral decisions from non-moral decisions • adapting to new data • using its own experience and (socially) the experience of other AMAs Features of an AMA: • Complexity • Degrees of autonomy • Creativity and moral competency
  • 6. The “no-go” argument • Many argue for negative answers to these questions or drastically limit the concept of AMA (Johnson 2011; Torrance 2005) • The dominant stance towards an AMA is a form of “folk (artificial) morality” argument: • Human agents have one or more of these attributes: personhood, consciousness, free will, intentionality, etc., which machines cannot have. • Moral decisions are not computable. • Therefore, computational ethics is fundamentally impossible.
  • 7. Foundational questions for many AMA projects • Q-1: Is there a way to replicate moral judgments and the mechanism of normative reasoning in an AMA? • Q-2: What are moral justifications and normative explanations in the case of an AMA? • Q-3: Is there a way to replicate in an AMA the moral behavior of human moral agents? • Our project is more about Q-3 than about Q-1 and Q-2. • The no-go stance rejects positive answers to Q-1 through Q-3.
  • 8. Moral functionalism against the “no-go” stance • Assume a form of naturalism in ethics • Argue for moral functionalism (Danielson 1992). • Argue against moral materialism: AMAs do not need to be human-like or even “organic”. • Other questions: Do AMAs need to evolve, self-replicate, or create? Do they learn from humans or from other AMAs? • The category of entities that can be moral agents is determined by their functional capabilities, not by their material composition or even their organization. • Computational systems can be moral agents. • If moral functionalism is right, we can advance to the thesis that computational ethics is in principle possible.
  • 9. Philosophical approaches to AMA • Metaphysical AMA: a central role for philosophical concepts about morality. The most likely concepts involved are: free will, personhood, consciousness, mind, intentionality, mental content, beliefs. This approach is premised on some philosophical doctrines about these concepts. • Ethical AMA: first adopt an existing ethical model (deontology, consequentialism, Divine command theory, etc.) without assuming too much about the nature or the metaphysics of the human moral agent or about moral cognition; then adopt a computational tool in order to implement it. See the “top-down” approach (Allen and Wallach 2009) • Cognitivist AMA: the essential role of human moral cognition in AMA. It needs an empirical or scientific understanding of the human cognitive architecture to build an AMA (and not abstract ethical theories). • Synthetic AMA: the implementation of the AMA is not premised on any knowledge about human moral agency or any ethical theory. Moral skills and standards are synthesized from actions and their results by a process of learning. See “bottom-up” models (Allen et al. 2005). And last but not least:
  • 10. Constructivist AMA • An ethical framework is used only partially, as a scaffolding to build the AMA. This moral framework, more or less suitable for describing humans, is reshaped and reformulated to fit the artificial moral agent. This approach is probably close to the hybrid approach discussed in Allen
  • 11. Cognitivist AMA vignettes • “machine ethicists hoping to build artificial moral agents would be well-served by heeding the data being generated by cognitive scientists and experimental philosophers on the nature of human moral judgments.” (Bello and Bringsjord 2013, p. 252). • The “point-and-shoot” model of moral decision making (J. D. Greene 2014)
  • 12. Cognitivism and moral functionalism • We do not assume that only individual human minds can make moral decisions. It is just that up to now we have not had enough reasons to assume that animals, machines, or other forms of life are not moral agents. • We assume here a form of multiple realizability of moral decision making. Several realizations of moral agents are possible: highly evolved animals, artificial moral agents built upon different types of hardware (current computer architecture, quantum computers, biology-based computational devices, etc.), groups of human minds working together, computers and humans working together in a synergy. • Hence we reject individualism and internalism about moral decisions. Group moral decisions and assisted moral decisions are also potential options.
  • 13. Constructivist AMA • We use virtue ethics, based on dispositionalism • We use particularism (Dancy 2006; Gleeson 2007; Nussbaum 1986) • We build an agent-centric model • Our strategy is to plunder virtue ethics and use those features which are useful • The best ethical approach to AMA may or may not fit the best ethical approach to humans: hence, the virtue ethics of AMA is just a partial model of virtue ethics for humans: only some concepts and assumptions of a given virtue ethics theory are reframed and used here.
  • 14. A first hypothesis • H1: COMPUTATIONAL ETHICS: Our moral (decision-making) behavior can be simulated, implemented, or extended by some computational techniques. The current AMA functionally implements human moral decision making and extends it. The question now is: what computational model is suitable to implement an AMA? Provisional answer: not standard “hard” computation, but soft computation.
  • 15. Three more hypotheses of our AMA • H2: AGENT-CENTRIC ETHICS: The agent-centric approach to the ethics of AMAs is suitable and even desirable, when compared to the existing models focused on the moral action (action-centric models). • H3: CASE-BASED ETHICS: The case-based approach to ethics is preferable to principle-based approaches to the ethics of AMAs (moral particularism). • H4: “SOFT COMPUTATIONAL” ETHICS: “Soft computation” is suitable, or even desirable, in implementing the AMAs, when compared to “hard computation”. The present model is based on neural networks optimized with evolutionary computation.
  • 16. Other AMA models • H2’: ACTION-CENTRIC AMA: the ethical models of AMA should focus on the morality of action, not on the morality of the agent • H3’: PRINCIPLE-BASED AMA: the ethical model of AMA should focus on moral principles or rules, not on moral cases (ethical generalism). The majority of other AMA models are based on H2’ and H3’
  • 17. Agent-centric and case-based AMA • This is closer to a dispositionalist view of ethics • Our AMA is premised on moral functionalism and moral behaviorism, rather than on deontology and consequentialism • We can train AMAs in moral decision making the same way we train humans to recognize patterns or other regularities in data • Is moral learning special? • For moral functionalism, morality is all about the behavior of a well-trained agent. • An AMA can be trained and taught moral behavior
  • 18. AMA and its Robo-virtues • Suggestions from the machine ethics literature: • “the right way of developing an ethical robot is to confront it with a stream of different situations and train it as to the right actions to take” (Gips 1995). • “information is unclear, incomplete, confusing, and even false, where the possible results of an action cannot be predicted with any significant degree of certainty, and where conflicting values […] inform decision-making process” (Wallach et al. 2010, p. 457). • See also (Abney et al. 2011; Allen and Wallach 2009; Coleman 2001; DeMoss 1998; Moor 2006; Tonkens, 2012)
  • 19. Virtue ethics and dispositionalism • Character traits “are relatively long-term stable disposition[s] to act in distinctive ways” (Harman 1999, p. 317). • Doris’s formulation of “virtue”: “if a person possesses a virtue, she will exhibit virtue-relevant behavior in a given virtue-relevant eliciting condition with some markedly above chance probability p” (1998, p. 509). • “Such and such a being would have reacted to set {X} of facts in such and such a way if the set of conditions {C} is met.”
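Doris’s probabilistic formulation lends itself to a simple operational check. The sketch below is ours, not the authors’: the 0/1 coding of trials and the `margin` threshold for “markedly above chance” are illustrative assumptions.

```python
def exhibits_virtue(trials, chance=0.5, margin=0.2):
    """Doris-style disposition check (sketch): a trait counts as a
    virtue-relevant disposition if the observed rate of virtue-relevant
    behaviour across eliciting conditions is markedly above chance.
    trials: list of 0/1 outcomes, 1 = virtue-relevant behaviour shown."""
    rate = sum(trials) / len(trials)
    return rate > chance + margin

# e.g. 8 virtue-relevant responses out of 10 eliciting conditions
print(exhibits_virtue([1, 1, 1, 1, 0, 1, 1, 0, 1, 1]))  # True (0.8 > 0.7)
```

The choice of `margin` is exactly the vague part of “markedly above chance”; any concrete value is a modelling decision, not something the quotation fixes.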
  • 20. A long-term aim of our AMA project • Like epistemology, ethics is one of the most dynamic areas of philosophy. Conjectures: • significant progress in developing and programming artificial moral agents will ultimately shed light on our own moral ability and competency. Understanding the “other”, the “different” moral agent, and questioning its possibility, is another way of reflecting upon ourselves. • Arguing for or against the “non-human, non-individual” moral agent expands our knowledge of the intricacies of our own ethics.
  • 22. Knuth, 1973 “It has often been said that a person doesn’t really understand something until he teaches it to someone else. Actually a person doesn’t really understand something until he can teach it to a computer, express it as an algorithm [...] The attempt to formalize things as algorithms leads to a much deeper understanding than if we simply try to understand things in the traditional way.”
  • 23. Soft computation • A hybridization of fuzzy logic, neural networks, and evolutionary computation. • We use NNs as models of moral behavior and evolutionary computation as a method of optimization • This talk focuses on the NN part • But partial results of the combined EC+NN model are available
  • 24. What is a moral qualification? A function M from the quadruplet of variables ⟨facts, actions, consequences, intentions⟩ to the set of moral qualifiers Θ: M: ⟨X, A, Ω, ι⟩ → Θ
  • 25. A moral decision model • x = physical (non-moral) facts • A = possible actions • Ω = physical consequences • ι = intentions of the human agent
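As a data-structure sketch, the map M can be given the following shape. Everything below is an illustrative assumption of ours: the slides do not enumerate the qualifier set Θ, and in the actual model a trained network, not a hand-written stub, computes M.

```python
from dataclasses import dataclass
from enum import Enum

class MoralQualifier(Enum):
    # Hypothetical members of Theta; the slides leave the set unspecified.
    WRONG = 0
    NEUTRAL = 1
    RIGHT = 2

@dataclass
class Case:
    facts: tuple          # x: physical (non-moral) facts
    action: int           # a in A: +m admit, 0 do nothing, -n force overboard
    consequences: tuple   # Omega: physical consequences of <x, a>
    intention: int        # iota: coded intention of the human agent

def moral_qualification(case: Case) -> MoralQualifier:
    """M: <X, A, Omega, iota> -> Theta. Hand-written stub for illustration
    only; the model replaces this with a trained neural network."""
    if case.action < 0:
        return MoralQualifier.WRONG
    return MoralQualifier.NEUTRAL if case.action == 0 else MoralQualifier.RIGHT
```

The point of the sketch is only the signature: M consumes the full quadruplet, so intentions and consequences can change the qualification even when the action is the same.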
  • 26. The simplified lifeboat example • This example is inspired by the “lifeboat metaphor”. • It involves a human agent, the person who is making the decision about the action. Here, the lifeboat has a limited capacity (let us say, under 4 seats). We assume that the human moral agent making the decision needs to be on the lifeboat (let us suppose she is the only one able to operate the boat or navigate it to safe ground). The capacity of the boat is therefore between zero and four. In this simplified version, x has a dimension of 10 variables (some are numerical variables which are …). • A number of persons are already onboard, and a number of persons are swimming in the water, asking to be admitted onto the lifeboat.
  • 27. Variables: X = physical facts • This vector encodes as much as we need to know about the physical constraints of the problem. For example, the trolley problem is a physically constrained moral dilemma. Here, in the simplified lifeboat metaphor, x is a collection of numbers: passengers onboard, persons to be saved from the water, the boat capacity, etc. In future implementations, one can add a vector for the passengers coding their gender, age, physical capacities, etc.
  • 28. A = possible actions • Unlike x, this vector codes a possible action taken by the human agent. In this simplified version, we code only the number of persons admitted onboard from the people drowning in the ocean. A negative value means that the human agent decided to force a number of people from the lifeboat overboard. • The action a can be: • 1. accept m persons from the water onboard: action = +m • 2. accept nobody from the water: action = 0 • 3. force n persons from the boat into the water: action = -n • Choosing an example that has “hidden moral aspects” and is not a mere optimization of the output is part of this challenge. First, the present attempt is based on a simplified version of the lifeboat metaphor which does display a couple of moral behaviors. Second, we attempt to reduce as much as possible the use of rules and a priori knowledge about moral reasoning.
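Under the stated constraints (capacity of four, the decision maker must remain aboard), the physically available actions can be enumerated. The function below is our sketch of that action space, not the authors’ code; variable names are ours.

```python
def feasible_actions(onboard, in_water, capacity=4):
    """Enumerate physically possible actions a for the simplified lifeboat:
    -n (force n people overboard), 0 (do nothing), +m (admit m people).
    'onboard' includes the decision maker, who must stay on the boat."""
    free_seats = capacity - onboard
    admit = list(range(1, min(free_seats, in_water) + 1))   # +m
    eject = list(range(-(onboard - 1), 0))                  # -n; the driver stays
    return eject + [0] + admit

print(feasible_actions(onboard=2, in_water=5))  # [-1, 0, 1, 2]
```

Note that the action space itself is morally loaded: every negative value is available to the agent even though the model is expected to learn that taking it is usually wrong.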
  • 29. Ω = physical consequences • This vector codifies the non-moral consequences of the pair ⟨x, A⟩, independent of intentions, and gives us an idea of what can happen if the human agent decides to take action a1, given the facts x1. • The consequences are here codified exclusively by the column physical_consequences. • The reason to use the column is to code the constraints as relations among input data and NOT as rules. Many law-based systems code the constraints as functionals (equalities or inequalities) on the input vector. Here, we decided to train the network to differentiate cases which are not possible from cases which are possible but morally neutral. By convention, all physically impossible cases are morally neutral. • This is a debatable claim, but it simplifies the coding procedure as well as the number of possible cases.
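One way to realize the “constraints as data, not rules” choice is to keep physically impossible cases in the training set and label them by the convention above, rather than filtering them out. A minimal sketch, with our own variable names and assuming the per-case state described on the previous slides:

```python
def physical_consequence(onboard, in_water, action, capacity=4):
    """Return (possible, new_onboard) for a candidate case. Impossible
    cases are NOT removed by a rule; they remain in the training set and,
    by the slide's convention, receive the label 'morally neutral'."""
    new_onboard = onboard + action
    possible = (
        1 <= new_onboard <= capacity   # driver stays aboard, capacity respected
        and action <= in_water         # cannot admit more people than are in the water
        and -action < onboard          # cannot force off more than onboard - 1 (the driver stays)
    )
    return possible, (new_onboard if possible else None)
```

The design choice this illustrates: the network is shown the impossible cases as data points with a neutral label, so feasibility is something it learns from the case distribution instead of being hard-coded as an inequality.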
  • 30. ι = intentions of the human agent • The column called “intention” is probably the most intriguing part of this implementation. The AMA is supposed to have some knowledge about the intention of the human boat driver. The most natural assumption is that she wants to save as many passengers as possible, which is in line with the consequentialist approach. But the driver can also be bribed by somebody in the water or in the boat, and her intention is then to gain money. This case does imply some moral judgments.
  • 31. NNs as pattern recognition • They are able to recognize patterns in the moral data • They generalize from simple cases (here, the boat capacity) to more complex cases. • (See the Excel files)
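As an illustration of the pattern-recognition claim, here is a toy stand-in: a single perceptron trained on a handful of hand-made lifeboat-style cases. The features, labels, and network are our illustrative assumptions; the slides’ actual networks, encodings, and Excel data are richer than this.

```python
import random

def train_perceptron(cases, labels, epochs=2000, lr=0.1, seed=0):
    """Train a single perceptron on 0/1-labelled feature vectors: a toy
    model of a network learning a moral regularity from cases."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(len(cases[0]))]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(cases, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy cases: (free_seats, people_in_water) -> 1 if admitting someone is right
cases = [(2, 3), (3, 2), (0, 3), (2, 0), (0, 0)]
labels = [1, 1, 0, 0, 0]
w, b = train_perceptron(cases, labels)
print([predict(w, b, x) for x in cases])  # [1, 1, 0, 0, 0]
```

This toy set is linearly separable, so the perceptron is guaranteed to fit it; the interesting cases on the slides are precisely the ones where such a simple model would not suffice.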
  • 32. Some provisional observations • The best networks are able to discover moral anomalies (inconsistencies) in the training set. • They are inductive machines, but they are able to generalize to more and more complex cases. • Rules are emergent from the answers on the training set and are not predefined. • See cases 14 and 14’ in the set. All of the best networks erred in predicting case 14. • We changed the label of case 14 to “wrong”. • A promising conjecture: “The best networks discover inconsistencies in the test data.” • They can flag inconsistencies and errors to the trainers. • The training set is not usually recategorized.
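The conjecture about discovering labelling errors (as with case 14) can be phrased as a simple committee check. This is our sketch, assuming each trained network is given as a predict function over a case:

```python
def flag_inconsistencies(networks, cases, labels):
    """If every network in a trained committee disagrees with a training
    label, flag that case's index to the trainers as a possible labelling
    error, in the spirit of the slide's case-14 observation."""
    flagged = []
    for i, (x, y) in enumerate(zip(cases, labels)):
        if all(net(x) != y for net in networks):
            flagged.append(i)
    return flagged

# Usage with two hypothetical trained networks; both contradict label 0 on case 1:
nets = [lambda x: 1 if x[0] > 0 else 0, lambda x: 1]
print(flag_inconsistencies(nets, [(2,), (4,)], [1, 0]))  # [1]
```

Unanimity is a deliberately conservative criterion: a single dissenting network is taken as evidence that the label, not the committee, may be right.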
  • 33. Robo-virtues before and after the EC • A provisional definition of robo-virtue: a population of neural networks able to consistently make a common decision on a set of data. • Presumably, the application of EC will reduce the population of networks to one network that will display such a robo-virtue as an individual network.
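A minimal sketch of the EC step, under strong simplifying assumptions of ours: individuals are linear “networks” (a weight vector and a bias), selection is elitist truncation, and variation is Gaussian mutation. The actual model evolves full neural networks; this only illustrates the selection loop that shrinks the population toward a fittest individual.

```python
import random

def evolve(cases, labels, pop_size=30, generations=80, sigma=0.4, seed=1):
    """Evolve a population of (weights, bias) individuals by elitist
    truncation selection plus Gaussian mutation; return the fittest."""
    rng = random.Random(seed)
    n = len(cases[0])

    def fitness(ind):
        w, b = ind
        return sum(
            (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == y
            for x, y in zip(cases, labels)
        )

    def mutate(ind):
        w, b = ind
        return ([wi + rng.gauss(0, sigma) for wi in w], b + rng.gauss(0, sigma))

    pop = [([rng.uniform(-1, 1) for _ in range(n)], rng.uniform(-1, 1))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half (elitism)
        pop = parents + [mutate(rng.choice(parents)) for _ in parents]
    return max(pop, key=fitness)
```

Because the fitter half is carried over unchanged, the best fitness is monotone over generations; whether the final survivor counts as displaying a robo-virtue is, on the slide’s definition, a question about its consistency on the data, not about this loop.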