Toward machines that behave ethically better than humans do - Poster

With the increasing dependence on autonomously operating agents
and robots, the need for ethical machine behavior rises. This paper
presents a moral reasoner that combines connectionism,
utilitarianism, and ethical theory about moral duties. Its moral
decision-making matches the analysis of expert ethicists in the
health domain. This may be useful in many applications, especially
where machines interact with humans in a medical context.
Additionally, when connected to a cognitive model of emotional
intelligence and affective decision-making, the reasoner lets us
explore how moral decision-making impacts affective behavior.


Document Transcript

Toward machines that behave ethically better than humans do
Matthijs Pontier 1,2, Johan F. Hoorn 1
1 VU University, Amsterdam
2 http://camera-vu.nl/matthijs/
matthijspon@gmail.com

[Figure: the moral reasoner maps moral goals (the duties Autonomy, Beneficence, and Non-maleficence) via belief strengths to output actions (Action1, Action2).]

Abstract
Increasing dependence on autonomously operating systems calls for ethical machine behavior. Our moral reasoner combines connectionism, utilitarianism, and ethical theory about moral duties. Its moral decision-making matches the analysis of expert ethicists in the health domain. This is particularly useful when machines interact with humans in a medical context. Connected to a model of emotional intelligence and affective decision-making, we can explore how moral decision-making impacts affective behavior and vice versa.

Background
Rosalind Picard (1997): "The greater the freedom of a machine, the more it will need moral standards."
Wallach, Franklin, and Allen (2010) argue that agents that adhere to a deontological ethic or that are utilitarians also require emotional intelligence, a sense of self, and a theory of mind.
We connected the moral system to Silicon Coppélia (Hoorn, Pontier, & Siddiqui, 2011), a model of emotional intelligence and affective decision-making. Silicon Coppélia contains a feedback loop that learns the preferences of an individual patient so as to personalize its behavior.

Results
Sample Exp. 5: A patient with incurable cancer refuses the chemotherapy that would let him live a few months longer, almost without pain, because he is convinced he is cancer-free. According to Buchanan and Brock (1989), the ethically preferable answer is to "try again": the patient seems less than fully autonomous, and his decision leads to harm by denying him the chance of a longer life (a violation of the duty of beneficence), which he might regret later. Our moral reasoner comes to the same conclusion as the ethical experts.

Discussion
However, even among doctors there is no consensus about the interpretation of values, or about their ranking and meaning. Van Wynsberghe (2012) found that this depends on the type of care (i.e., social vs. physical care), the task (e.g., bathing vs. lifting vs. socializing), the care-givers and their style, and the care-receivers and their specific needs.
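
The poster shows the reasoner's structure but not its equations. The minimal Python sketch below illustrates one plausible reading of that structure: each action's moral value is a weighted sum, over the three duties, of the belief strength that the action facilitates that duty. The duty weights, belief strengths, and names (moral_value, choose_action) are illustrative assumptions, not values or code from the paper.

    # Minimal sketch of a weighted-duty moral reasoner (assumed design,
    # not the paper's actual implementation or parameter values).
    DUTIES = ("autonomy", "beneficence", "non_maleficence")

    # Hypothetical relative importance of each moral duty.
    WEIGHTS = {"autonomy": 1.0, "beneficence": 1.5, "non_maleficence": 2.0}

    def moral_value(beliefs):
        # Weighted sum of belief strengths in [-1, 1]: positive if the
        # action is believed to facilitate the duty, negative if it is
        # believed to violate it.
        return sum(WEIGHTS[d] * beliefs[d] for d in DUTIES)

    def choose_action(actions):
        # Select the action whose duty profile yields the highest moral value.
        return max(actions, key=lambda name: moral_value(actions[name]))

    # Sample Exp. 5 with hypothetical belief strengths: accepting the
    # refusal weakly serves autonomy but harms the patient, while
    # "try again" mildly overrides autonomy yet protects his welfare.
    actions = {
        "accept_refusal": {"autonomy": 0.4, "beneficence": -0.8,
                           "non_maleficence": -0.6},
        "try_again": {"autonomy": -0.2, "beneficence": 0.8,
                      "non_maleficence": 0.6},
    }

    print(choose_action(actions))  # -> try_again

With these assumed numbers, the harm and beneficence terms outweigh the weakly autonomous refusal, so the sketch reproduces the experts' "try again" verdict; a real deployment would need duty weights calibrated against expert judgments.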