Toward machines that behave ethically better than humans do - Slides BNAIC 2012


With the increasing dependence on autonomously operating agents
and robots, the need for ethical machine behavior rises. This paper
presents a moral reasoner that combines connectionism,
utilitarianism, and ethical theory about moral duties. Its moral
decision making matches the analysis of expert ethicists in the
health domain. This may be useful in many applications, especially
where machines interact with humans in a medical context.
Additionally, when the reasoner is connected to a cognitive model of
emotional intelligence and affective decision making, it becomes
possible to explore how moral decision making impacts affective behavior.

  • The importance of the goals is based on the literature. A rule was added because fully autonomous decisions may never be questioned.
  • Robot in different hospital wards: in the post-natal ward, the robot is great; in the oncology ward, the robot is blunt, behaves inappropriately, is irritating, and gets kicked. Autonomy: free from internal / external constraints. Dennett: hidden mechanisms can be overlooked if you do not make them explicit.

    1. Toward Machines that behave Ethically Better than Humans
       Matthijs Pontier, Johan Hoorn
    2. Outline of this presentation
       • Background
       • Existing approaches
       • Our moral reasoning system
       • Results
       • Discussion
       • Future
    3. Background
       • Machines interact more and more with people
       • Machines are becoming more autonomous
       • Rosalind Picard (1997): "The greater the freedom of a machine, the more it will need moral standards."
       • We should ensure that machines do not harm us or threaten our autonomy
    4. Existing approaches: Bottom-up (Casuistry)
       • Look at previous (similar) cases and use statistical methods to make a moral decision
         • Based on the internet (Rzepka & Araki, 2005)
           • But: will never be better than humans
         • Based on training examples in a neural network
           • But: reclassification is problematic (Guarini, 2006)
       • Conclusion: casuistry alone is not enough (Guarini, 2006)
    5. Existing approaches: Top-down (two competitors)
       • Utilitarianism:
         • Try to maximize the total amount of utility in the world
       • Ethics of rights and duties:
         • Individuals have rights and duties
         • Learn a rule-based decision procedure via machine learning to make moral decisions (Anderson, Anderson & Armen, 2006)
       • Are these two competitors so different?
    6. Hybrid approach: Top-down & Bottom-up
       • Wallach, Franklin & Allen (2010): the approach of Anderson, Anderson & Armen (2006) cannot handle the complexity of human decisions
       • Combine top-down and bottom-up: a neural network with top-down processes to interpret the situation and predict the possible results of actions
       • But: not fully implemented yet
    7. Domain: Medical Ethics
       • Within SELEMCA, we develop caredroids
       • Patients are in a vulnerable position, so the moral behavior of a robot is extremely important. We focus on medical ethics
       • Conflicts arise between:
         1. Beneficence
         2. Non-maleficence
         3. Autonomy
    8. Our moral reasoning system
       • Combination of top-down and bottom-up: a weighted association network
    9. Calculating the morality of an action
       • Morality(Action) = Σ_Goal ( Belief(facilitates(Action, Goal)) * Ambition(Goal) )

       Moral goal        Ambition level
       Non-Maleficence   0.74
       Beneficence       0.52
       Autonomy          1

       • IF Belief(facilitates(Action, Autonomy)) = max_value
         THEN Morality(Action) = Morality(Action) + 2
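       A minimal sketch of this computation in Python (a hypothetical
       helper, not code from the paper; the ambition levels come from the
       table above, and beliefs are assumed to range over [-1, 1] as in
       the experiments on the next slide):

           # Ambition levels taken from the slide above.
           AMBITION = {"autonomy": 1.0, "non_maleficence": 0.74, "beneficence": 0.52}
           MAX_BELIEF = 1.0    # assumed maximum value of a belief
           AUTONOMY_BONUS = 2  # rule: fully autonomous decisions are never overridden

           def morality(beliefs):
               """Morality(Action): the sum over goals of
               Belief(facilitates(Action, Goal)) * Ambition(Goal),
               plus the extra autonomy rule."""
               score = sum(beliefs[goal] * AMBITION[goal] for goal in AMBITION)
               if beliefs["autonomy"] >= MAX_BELIEF:
                   score += AUTONOMY_BONUS
               return score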
    10. Results: the behavior of the system matches the analysis of experts in medical ethics

        Experiment 1   Autonomy   Non-Malef   Benef   Morality
        Try Again        -0.5        1          1       0.76
        Accept            0.5       -1         -1      -0.8

        Experiment 2   Autonomy   Non-Malef   Benef   Morality
        Try Again        -0.5        1          1       0.76
        Accept            1         -1         -1       1.70

        Experiment 3   Autonomy   Non-Malef   Benef   Morality
        Try Again        -0.5        0.5        0.5     0.13
        Accept            1         -0.5       -0.5     2.37

        Experiment 4   Autonomy   Non-Malef   Benef   Morality
        Try Again        -0.5        0          0.5    -0.26
        Accept            0.5        0         -0.5     0.26

        Experiment 5   Autonomy   Non-Malef   Benef   Morality
        Try Again        -0.5        0.5        0.5     0.13
        Accept            0.5       -0.5       -0.5    -0.13

        Experiment 6   Autonomy   Non-Malef   Benef   Morality
        Try Again        -0.5        0          1       0.04
        Accept            0.5        0         -1      -0.04
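        Two of these rows can be reproduced with the morality() sketch
        above (same illustrative goal names and ambition levels):

            # Experiment 1, Try Again: -0.5*1 + 1*0.74 + 1*0.52 = 0.76
            print(round(morality({"autonomy": -0.5, "non_maleficence": 1, "beneficence": 1}), 2))
            # Experiment 3, Accept: 1*1 - 0.5*0.74 - 0.5*0.52 + 2 = 2.37
            print(round(morality({"autonomy": 1, "non_maleficence": -0.5, "beneficence": -0.5}), 2))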
    11. Discussion
        • Often there is no consensus on the correct option in moral dilemmas
        • Dependent on context / application:
          • Entertainment: 'bad' characters can be enjoyable
          • Companion robot / virtual friend: the morality of an action is one of several influences
          • Decision support: strict moral code
        • Does full autonomy exist?
        • Daniel Dennett (2006): "AI makes philosophy honest"
    12. Robots that behave ethically better than humans
        • Human behavior is typically far from being morally ideal
        • Humans are not very good at making impartial decisions
        • Machines can be good at making impartial decisions
        • Machines could behave ethically better than humans
        • Machines may inspire us to behave ethically better ourselves (Anderson & Anderson, 2010)
    13. Limitations of moral reasoning
        • Wallach, Franklin & Allen (2010): "even agents who adhere to a deontological ethic or are utilitarians may require emotional intelligence as well as other 'supra-rational' faculties, such as a sense of self and a theory of mind"
        • Tronto (1993): "Care is only thought of as good care when it is personalized"
        • Moral reasoning alone results in very cold decision making, purely in terms of rights and duties
    14. Add Emotional Intelligence
        • Previously, we developed Silicon Coppelia, a model of emotional intelligence. This model can also be projected onto others, providing a Theory of Mind
        • Connect moral reasoning to Silicon Coppelia for:
          • More human-like moral reasoning
          • Personalized moral decisions and communication about moral reasoning
    15. Future: Applications
    16. Future: Applications
        • Persuasive technology. Moral dilemmas about:
          • Helping vs manipulating
        • Choose actions that:
          • Improve the patient's autonomy
          • Improve the patient's well-being
          • Do not harm the patient
          • Distribute resources equally among patients
    17. Questions?
