
Robots that stimulate our autonomy


In healthcare, robots are increasingly expected to provide a high standard of care in the near future. When machines interact with humans, we need to ensure that they take patient autonomy into account. Autonomy can be divided into negative autonomy and positive autonomy. We present a moral reasoning system that incorporates this twofold approach to autonomy. In simulation experiments, the system matched the decisions of the judge in a number of law cases about medical ethical decisions. This may be useful in applications where robots need to constrain a person's negative autonomy in order to stimulate positive autonomy, for example when attempting to persuade a patient to make a healthier choice.

Autonomy is often equated with self-determination. In this view, people are autonomous when they are not influenced by others. However, autonomy is not just being free from external constraints. It can also be conceptualized as being able to make a meaningful choice, one that fits in with one's life-plan [8].

We presented a moral reasoning system that includes a twofold approach to autonomy. The system extends a previous moral reasoning system [14] in which autonomy consisted of a single variable. The behavior of the current system matches that of the previous system. Moreover, simulations of legal cases heard by courts in the Netherlands showed congruence between the verdicts of the judges and the decisions of the presented moral reasoning system with the twofold model of autonomy. Finally, the experiments showed that in some cases long-term positive autonomy was considered more important than short-term negative autonomy.

The moral reasoning system presented in this paper can be used by robots and software agents to prefer actions that prevent users from being harmed, improve users' well-being and stimulate users' autonomy. With the twofold approach added, the system can balance positive autonomy against negative autonomy.
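As a rough illustration of how such balancing could work, the Python sketch below scores candidate actions against the three moral goals named above and picks the highest-scoring one. The action names, goal weights and per-action estimates are hypothetical placeholders of our own, not values taken from the system in [14].

    # Hypothetical sketch: pick the action with the highest weighted moral
    # score. All weights and estimates are illustrative placeholders, not
    # values from the actual moral reasoning system.
    ACTIONS = {
        "accept unhealthy choice": {"non_maleficence": 0.3, "beneficence": 0.2,
                                    "pos_autonomy": 0.2, "neg_autonomy": 0.9},
        "persuade to reconsider":  {"non_maleficence": 0.7, "beneficence": 0.8,
                                    "pos_autonomy": 0.8, "neg_autonomy": 0.5},
    }
    GOAL_WEIGHTS = {"non_maleficence": 1.0, "beneficence": 1.0, "autonomy": 1.0}

    def moral_score(est):
        # Autonomy combines its two components; here their product, matching
        # the relation Autonomy = Positive * Negative given in the slides.
        autonomy = est["pos_autonomy"] * est["neg_autonomy"]
        return (GOAL_WEIGHTS["non_maleficence"] * est["non_maleficence"]
                + GOAL_WEIGHTS["beneficence"] * est["beneficence"]
                + GOAL_WEIGHTS["autonomy"] * autonomy)

    best_action = max(ACTIONS, key=lambda a: moral_score(ACTIONS[a]))
    print(best_action)  # -> "persuade to reconsider" with these numbers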
In future work, we intend to integrate the model of autonomy into Moral Coppélia [13], an integration of the previously developed moral reasoning system [14] and Silicon Coppélia, a computational model of emotional intelligence [9]. Adding the twofold model of autonomy to Moral Coppélia may be useful in many applications, especially where machines interact with humans in a medical context.
Once this is done, the level of involvement and distance will influence the way the robot tries to improve the autonomy of a patient. In long-term care, nurses tend to be relationally involved with patients, motivating them to accept care [1, 11]. A robot that is more involved with the patient will focus more on improving positive autonomy, and especially on reflection. A robot that is less involved with the patient will focus more on negative autonomy: physical and mental integrity and privacy.
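One simple way to operationalize this link between involvement and autonomy focus, offered purely as an assumption of ours rather than the paper's mechanism, is to let an involvement parameter shift the relative emphasis on the two autonomy components:

    def autonomy_emphasis(involvement):
        # Hypothetical: map the robot's involvement with the patient
        # (0.0 = distant, 1.0 = relationally involved) to the relative
        # emphasis on the two autonomy components.
        return {
            # An involved robot stresses positive autonomy (reflection).
            "positive_autonomy": involvement,
            # A distant robot stresses negative autonomy (privacy,
            # mental and physical integrity).
            "negative_autonomy": 1.0 - involvement,
        }

A linear blend is the simplest choice here; any monotone mapping would express the same qualitative claim.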


Robots that stimulate our autonomy

  1. Robots that stimulate autonomy. Matthijs Pontier (matthijspon@gmail.com), Guy Widdershoven
  2. SELEMCA • Develop 'Caredroids': robots or computer agents that assist patients and care-deliverers • Focus on patients who stay in long-term care facilities
  3. Applications: Care Agents
  4. Applications: Care Robots
  5. Possible functionalities • Care-broker: Find care that matches the needs of the patient • Companion: Become friends with the patient to prevent loneliness and activate the patient • Coach: Assist the patient in making healthy choices: exercising, eating healthy, taking medicine, etc.
  6. Background: Machine Ethics • Machines are becoming more autonomous → Rosalind Picard (1997): "The greater the freedom of a machine, the more it will need moral standards." • Machines interact more with people → We should ensure that machines do not harm us or threaten our autonomy • Machine ethics is important to establish perceived trust in users
  7. Moral reasoning system • We developed a rational moral reasoning system that is capable of balancing between conflicting moral goals.
  8. Positive vs Negative Autonomy • Negative autonomy = self-determination: freedom from interference by others • Autonomy is more than self-determination: being able to make a meaningful choice, acting in line with well-considered preferences • Positive autonomy = freedom to make a meaningful choice
  9. Typical moral dilemmas caredroids will encounter • Positive vs negative autonomy: accept an unhealthy choice vs persuade the patient to reconsider; hold the patient to a previous agreement vs give up → Expand the moral principle of autonomy
  10. Model of autonomy
  11. Relations in the model (see the code sketch after the transcript): Autonomy = Positive_Autonomy * Negative_Autonomy; Positive_Autonomy = Information * Cognitive_Functioning * Reflection; Negative_Autonomy = wp * Privacy + wm * Mental_Integrity + wph * Physical_Integrity
  12. New moral reasoning system
  13. Results • New moral reasoning system matches the decisions of the previous moral reasoning system • Simulation of Dutch law cases (2008-2012): • Case 1: Assertive outreach to prevent judicial coercion. Deteriorating patient, aggression → assertive outreach • Case 2: Inform care deliverers, not the parents of an adult patient. Patient in an alarming situation, no good contact with the parents • Case 3: Negative autonomy constrained to enhance positive autonomy. Self-binding declaration of an addict after relapses into alcohol use; conditions were met → judicial coercion
  14. Conclusions • We created a moral reasoning system including a twofold approach to autonomy • The system matches the decisions of medical ethics experts • The system matches the decisions in law cases • By using theories of (medical) ethics, we can build robots that stimulate autonomy
  15. Future Work • Integrate into Moral Coppélia, a model of affective moral decision making • Especially useful when machines communicate with humans • An involved robot will focus more on positive autonomy • A less involved robot will focus more on negative autonomy
  16. Thank you! Matthijs Pontier, m.a.pontier@vu.nl, http://camera-vu.nl/matthijs, http://www.linkedin.com/in/matthijspontier, http://crispplatform.nl/projects/selemca
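For concreteness, the relations from slide 11 can be written down directly as a small Python model. This is a minimal sketch under the assumption, ours rather than the slides', that all inputs and the weights wp, wm and wph are normalized to [0, 1]:

    # Minimal sketch of the twofold autonomy model from slide 11.
    def positive_autonomy(information, cognitive_functioning, reflection):
        # Multiplicative: if any ingredient of a meaningful choice is
        # absent (0), positive autonomy collapses to 0.
        return information * cognitive_functioning * reflection

    def negative_autonomy(privacy, mental_integrity, physical_integrity,
                          wp=1/3, wm=1/3, wph=1/3):
        # Weighted sum of the three self-determination aspects; equal
        # weights are a placeholder default, not taken from the paper.
        return wp * privacy + wm * mental_integrity + wph * physical_integrity

    def autonomy(pos, neg):
        # Overall autonomy is the product of the two components.
        return pos * neg

    # Example: a well-informed, reflective patient whose privacy is
    # somewhat constrained.
    pos = positive_autonomy(0.9, 0.8, 0.9)   # 0.648
    neg = negative_autonomy(0.5, 0.9, 0.9)   # ~0.767
    print(autonomy(pos, neg))                # ~0.497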
