Moral Coppélia: Affective moral reasoning with twofold autonomy and a touch of personality - Presentation at MEMCA-14 Symposium at AISB50


We present a moral reasoner, Moral Coppélia, which combines connectionism, utilitarianism, and ethical theory about the moral duties autonomy, non-maleficence, beneficence, and justice with affective states and personality traits. Moreover, we treat human autonomy both in the sense of self-determination and in the sense of making a meaningful choice. Our system combines bottom-up with top-down approaches, calculating the effect of an act on the total moral utility in the world. Moral Coppélia can reproduce the verdicts of medical ethicists and health judges in real-life cases and can handle the emotional differences between logically identical problems such as the Trolley and Footbridge dilemmas. It also deals with properties of character and personality, such as honesty and humility, to explain why logical reasoning alone is not always descriptive of actual human moral behavior. Apart from simulating known cases, we performed a split-half experiment with the responses of 153 participants in a criminal justice experiment. After fine-tuning the parameters on the first half of the data, the encompassing version of Moral Coppélia was capable of forecasting criminal decisions, leading to a better fit with the second half of the data than either of its component parts did. In other words, we found empirical support for the integral contribution of ratio, affect, and personality to moral decision making, which, moreover, could be acceptably simulated by our extended version of the Moral Coppélia system.


Notes per slide:
  • Science + Health Care + Creative Industry
    Triangle Patient / Care-deliverer / Robot
    Robot: Repetitive tasks, so that
    Care-deliverer has time for: Medical + Social tasks
  • Functionalities can all be in the same robot
    Same functionality can be in different kind of robots (physical robot, agent, app)
  • Care-droids = care agents and care robots that assist care-deliverers and patients
  • Results: Able to match the decisions of medical ethical experts
    Behavior of the system matches experts in medical ethics
  • Negative autonomy is freedom from interference by others
    Positive autonomy is freedom to make a well-reflected choice
  • For example, when a patient goes into rehab, their freedom can be restricted for a limited period of time to achieve better cognitive functioning and self-reflection in the future.
  • Risk of aggression, fire. Assertive Outreach prevented worsening of the situation → prevented judicial coercion
    Less impact on privacy, more potential to prevent worsening.
    During detox, restore cognitive functioning and reflection
  • 1: Robot tries to convince elder to take pills
    2:
  • Wallach, Franklin & Allen: Ratio/logic alone is not enough for ethical behavior towards humans
    Silicon Coppelia → emotional intelligence, theory of mind, personalization (through adaptation / learning from interaction)
  • Moral-personal dilemmas = more emotionally engaging
  • Model from media perception, turned around to let the medium perceive the user
    Walk through the model; also explain emotion regulation
    Moral and Affective Decision
    Personalized → User-centered
  • Silicon Coppelia, based on how humans affectively perceive each other and communicate
  • More elaborate, because we measure on several appraisal dimensions
    A yes/no measure is problematic: if Tom is not human-like on even one aspect, everyone says no
  • Robot (my models) vs Human
    Multiple choice & emotions
    "What did this little guy think of you?"
    Participants see no difference
    One could speak of a passed Turing Test
  • Cognitive structure → How variables are related (as in Silicon Coppelia)
  • Communication technologies: helping autistic patients with recognizing emotions
  • The Moral Reasoning system alone could not simulate the difference between the Trolley and Footbridge dilemmas.
    The Moral Reasoning system combined with Silicon Coppélia could simulate these human moral decision-making processes.
  • 1: Robot tries to convince elder to take pills
    2: Entertainment → Bad can also be interesting.
  • Bring moral reasoning back into politics and stimulate the autonomy of citizens

Transcript:

    1. Moral Coppélia: Affective moral reasoning with twofold autonomy and a touch of personality. Matthijs Pontier, MatthijsPon@gmail.com
    2. Overview of this presentation • SELEMCA • Moral Reasoning • Silicon Coppelia: Model of Emotional Intelligence • Moral Reasoning + Silicon Coppelia = Moral Coppelia • Predicting Crime with Moral Coppelia • Conclusion • Future Work
    3. SELEMCA • Develop 'Caredroids': Robots or Computer Agents that assist Patients and Care-deliverers • Focus on patients who stay in long-term care facilities
    4. Possible functionalities • Care-broker: Find care that matches the patient's needs • Companion: Become friends with the patient to prevent loneliness and activate the patient • Coach: Assist the patient in making healthy choices: Exercising, Eating healthy, Taking medicine, etc.
    5. Applications: Care Agents
    6. Applications: Care Robots
    7. Background Machine Ethics • Machines are becoming more autonomous → Rosalind Picard (1997): "The greater the freedom of a machine, the more it will need moral standards." • Machines interact more with people → We should ensure that machines do not harm us or threaten our autonomy • Machine ethics is important to establish perceived trust in users
    8. Domain: Medical Ethics • Within SELEMCA, we develop caredroids • Patients are in a vulnerable position → Moral behavior of the robot is extremely important → We focus on Medical Ethics • Conflicts between: 1. Beneficence 2. Non-maleficence 3. Autonomy 4. Justice
    9. Moral reasoning system: We developed a rational moral reasoning system that is capable of balancing between conflicting moral goals.
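    A minimal sketch of how such balancing could work (the duty names follow the four principles on slide 8; all weights and scores below are illustrative assumptions, not values from the SELEMCA implementation): each candidate action gets a satisfaction score per duty, and the system selects the action with the highest weighted moral utility.

        # Illustrative weighted-duty moral reasoner; the duty weights and
        # action scores are invented for this sketch, not taken from the
        # actual SELEMCA/Moral Coppelia implementation.
        DUTIES = ("beneficence", "non_maleficence", "autonomy", "justice")

        # Hypothetical relative importance of each duty.
        weights = {"beneficence": 0.8, "non_maleficence": 1.0,
                   "autonomy": 0.9, "justice": 0.7}

        def moral_utility(scores, weights):
            """Weighted sum over duties; scores lie in [-1, 1], from
            full violation (-1) to full satisfaction (+1)."""
            return sum(weights[d] * scores[d] for d in DUTIES)

        def choose_action(actions, weights):
            """Pick the action whose duty profile maximizes moral utility."""
            return max(actions, key=lambda a: moral_utility(actions[a], weights))

        # Example: accept a patient's unhealthy choice vs. persuade to reconsider.
        actions = {
            "accept_choice": {"beneficence": -0.4, "non_maleficence": -0.2,
                              "autonomy": 0.8, "justice": 0.0},
            "persuade":      {"beneficence": 0.6, "non_maleficence": 0.3,
                              "autonomy": -0.3, "justice": 0.0},
        }
        print(choose_action(actions, weights))  # -> persuade (0.51 vs 0.20)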
    10. Positive vs Negative Autonomy • Negative Autonomy = Self-determination: freedom from interference by others • Autonomy is more than self-determination: being able to make a meaningful choice, to act in line with well-considered preferences • Positive Autonomy = Freedom to make a meaningful choice
    11. Typical moral dilemmas Caredroids will encounter • Positive vs Negative Autonomy: accept an unhealthy choice vs persuade the patient to reconsider • Hold the patient to a previous agreement vs give it up → Expand the moral principle of Autonomy
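    The expansion can be read as splitting the single Autonomy duty into a negative and a positive component, as in this sketch (the 50/50 weighting and the scores are assumptions for illustration, not the published formalization):

        # Twofold autonomy: negative = freedom from interference,
        # positive = capacity to make a meaningful, well-reflected choice.
        # Weights and scores are illustrative assumptions only.
        def autonomy_utility(neg, pos, w_neg=0.5, w_pos=0.5):
            """Blend negative and positive autonomy into one duty score."""
            return w_neg * neg + w_pos * pos

        # Self-binding example (cf. law case 3 on slide 14): enforcing the
        # declaration lowers self-determination now, but restores the
        # capacity for well-reflected choice later.
        do_nothing = autonomy_utility(neg=0.8, pos=-0.6)   # 0.10
        enforce    = autonomy_utility(neg=-0.5, pos=0.9)   # 0.20
        print(enforce > do_nothing)  # True: the temporary constraint wins here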
    12. Model autonomy
    13. New Moral Reasoning System
    14. Results • New moral reasoning system matches the decisions of the previous moral reasoning system • Simulation of Dutch law cases (2008-2012): • Case 1: Assertive outreach to prevent judicial coercion: patient in decline; aggression → Assertive Outreach • Case 2: Inform care deliverers, not the parents of an adult: patient in an alarming situation; no good contact with parents • Case 3: Negative autonomy constrained to enhance positive autonomy: self-binding declaration of an addict due to relapses in alcohol use; conditions were met → Judicial Coercion
    15. Conclusions Autonomy Model • We created a moral reasoning system with a twofold approach to autonomy • The system matches the decisions of medical ethical experts • The system matches the decisions in law cases • By using theories of (medical) ethics, we can build robots that stimulate autonomy
    16. Limitations of rational moral reasoning • Moral reasoning alone results in very cold decision-making, only in terms of rights and duties • Wallach, Franklin & Allen (2010): "Ethical agents require emotional intelligence as well as other 'supra-rational' faculties, such as a sense of self and a 'Theory of Mind'" • Tronto (1993): "Care is only thought of as good care when it is personalized"
    17. Problem: Not Able to Simulate the Trolley Dilemma vs the Footbridge Dilemma • Greene et al. (2001) find that moral dilemmas vary systematically in the extent to which they engage emotional processing, and that these variations in emotional engagement influence moral judgment • Their study was inspired by the difference between two variants of an ethical dilemma: the Trolley dilemma (moral-impersonal) and the Footbridge dilemma (moral-personal)
    18. Solution: Add Emotional Processing • Previously, we developed Silicon Coppelia, a model of emotional intelligence • It can be projected onto others for a Theory of Mind • It learns from experience → personalization → Connect Moral Reasoning to Silicon Coppelia • More human-like moral reasoning • Personalized moral decisions and communication about moral reasoning
    19. Silicon Coppelia
    20. Silicon Coppelia • We developed Silicon Coppelia with the goal of creating emotionally human-like robots • Simulation experiments → the system behaves consistently with theory and intuition • Compare the performance of the model with the performance of a real human in a speed-dating experiment
    21. Turing Test • The Turing Test was originally text-based • We enriched the test with affect-laden communication: facial expressions showing emotions, capable of vocal speech • Afterwards, a questionnaire: How do you think Tom perceived you? → Measure made continuous and more elaborate than a simple yes/no • Analysis: Bayesian structural equation modeling
    22. Speed-dating Experiment
    23. Results • Participants did not detect differences on single variables • Participants did not recognize significant differences in the cognitive-affective structure • A model in which the conditions (1: human, 2: robot) were assumed equal explained the data better than a model in which the conditions were assumed different
    24. Conclusions Speed-Date • We created a simulation of affect so natural that young women could not discern dating a robot from dating a man • Important for: understanding human affective communication • developing communication technologies • developing emotionally human-like robots
    25. Silicon Coppelia + Moral Reasoning: decisions based on: 1. Rational influences • Does the action help me reach my goals? 2. Affective influences • Does the action lead to desired emotions? • Does the action reflect the Involvement I feel towards the user? • Does the action reflect the Distance I feel towards the user? 3. Moral reasoning • Is this action morally good?
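    A hedged sketch of how these three influences might be blended (the linear combination, weights, and scores are assumptions for illustration; the published model is more elaborate): a strong negative affective value attached to up-close, personal harm can overturn a rational and moral preference, which is exactly what distinguishes the Footbridge from the Trolley dilemma in the table on the next slide.

        # Illustrative blend of rational, affective, and moral influences;
        # the linear form and all values are assumptions for this sketch.
        def action_value(eu, aff, mor, w_rat=0.3, w_aff=0.5, w_mor=0.2):
            """Combine expected utility, affective value, and moral utility."""
            return w_rat * eu + w_aff * aff + w_mor * mor

        # Sacrificing 1 to save 5 scores equally well rationally and morally
        # in both dilemmas; only the affective value of the act differs.
        divert  = action_value(eu=0.8, aff=-0.2, mor=0.4)   # trolley:    0.22
        push    = action_value(eu=0.8, aff=-0.9, mor=0.4)   # footbridge: -0.13
        refrain = action_value(eu=-0.4, aff=0.3, mor=-0.2)  # do nothing: -0.01

        print(divert > refrain)  # True:  kill 1 to save 5 in the trolley case
        print(push > refrain)    # False: do nothing in the footbridge case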
    26. Results Trolley & Footbridge:

                              Kill 1 to Save 5   Do Nothing
        Moral system
          Trolley                    X
          Footbridge                 X
        Moral Coppelia
          Trolley                    X
          Footbridge                                  X
    27. Background Criminology Study • Substantial evidence that emotions are fundamental in criminal decision making • But emotions rarely figure in criminal choice models → Study the relation Ratio + Emotions + Morality → Apply Moral Coppelia to criminology data → Predict the criminal decisions of participants
    28. Matching data to model • Match: Honesty/Humility to Weightmorality • Perceived Risk to Expected Utility • Negative State Affect to EESA • Parameter tuning: 1. Find optimal fits for the initial sample 2. Predict decisions for the holdout sample
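    A minimal sketch of this split-half procedure (the linear stand-in model and the synthetic data are assumptions for illustration; the real study fitted Moral Coppelia's parameters to the responses of 153 participants):

        # Split-half sketch: fit free weights on the initial half of the
        # sample, then score predictions on the holdout half with R^2.
        # The linear model and synthetic data are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 153
        honesty = rng.uniform(0, 1, n)  # Honesty/Humility -> weight of morality
        risk    = rng.uniform(0, 1, n)  # Perceived Risk   -> expected utility
        affect  = rng.uniform(0, 1, n)  # Negative State Affect
        # Synthetic "observed" criminal-choice tendency for this sketch:
        choice = 0.5 * risk + 0.3 * affect - 0.4 * honesty + rng.normal(0, 0.05, n)

        X = np.column_stack([risk, affect, honesty])
        X_fit, X_hold = X[: n // 2], X[n // 2 :]
        y_fit, y_hold = choice[: n // 2], choice[n // 2 :]

        # 1. Find optimal fits for the initial sample (least squares here).
        w, *_ = np.linalg.lstsq(X_fit, y_fit, rcond=None)

        # 2. Predict decisions for the holdout sample and compute R^2.
        pred = X_hold @ w
        r2 = 1 - np.sum((y_hold - pred) ** 2) / np.sum((y_hold - y_hold.mean()) ** 2)
        print(f"holdout R^2: {r2:.3f}")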
    29. Predicting Criminal Choices:

                     Ratio    Emo      R+E      Moral    M+R      M+E      M+R+E
        morcc        0        0        0        0.68     0.42     0.453    0.435
        wmor         0        0        0        1.00     0.96     0.97     0.87
        partrat      1        0        0.34     0        1        0        0.64
        partemo      0        1        0.66     0        0        1        0.36
        R² initial   0.7553   0.8792   0.9222   0.9336   0.9871   0.9798   0.9881
        R² holdout   0.7192   0.9060   0.9323   0.9281   0.9803   0.9778   0.9821
    30. Conclusions • We created an affective moral reasoning system • The system matches the decisions of medical ethical experts • The system matches the decisions in law cases • By using theories of (medical) ethics, we can build robots that stimulate autonomy • The system can simulate the trolley and footbridge dilemmas • The system can predict human criminal choices
    31. Discussion • The introduction of affect into rational ethics is important when robots communicate with humans • The combination Ratio + Affect + Morals is useful for applications that simulate human decision making, for example when agent systems or robots provide healthcare support, or in entertainment settings
    32. Future Work: Apply in politics • Personal freedom • Privacy • Human rights • Transparency • Citizen participation • Evidence-based policy • Science & Education • Freedom of information • Open access / Open data / Open source • Elections European Parliament: 22.05.2014
    33. Thank you! Matthijs Pontier • matthijspon@gmail.com • http://camera-vu.nl/matthijs • http://www.linkedin.com/in/matthijspontier • @Matthijs85 • http://www.piratenpartij.nl/ • @Piratenpartij
