
Machine Intelligence and Moral Decision-Making


A presentation given at the DC Workshop of the IMLS project "Libraries Facilitating Cross-disciplinary Research," Washington D.C., May 31, 2019, by Bohyun Kim, CTO, University of Rhode Island Libraries.


  1. MACHINE INTELLIGENCE AND MORAL DECISION-MAKING
  Bohyun Kim, CTO, University of Rhode Island Libraries
  Libraries Facilitating Cross-disciplinary Research - DC Workshop, Washington D.C., May 31, 2019
  2. POTENTIAL ISSUES OF CROSS-DISCIPLINARY RESEARCH & ML
  ➤ One issue: the moral/ethical dimension of machine intelligence
  ➤ If ML techniques drive AI development to the degree that AI systems come to partly or fully automate human decision-making, what would that mean?
  ➤ An important question for any discipline or area where ML techniques are used, from the health sciences to the military.
  3. WHY DOES MACHINE MORALITY MATTER?
  ➤ The notorious trolley case
  ➤ No longer a thought experiment with a self-driving car
  ➤ UAVs (drones), UGVs, sentry robots, and other AI systems raise new and unsettling ethical questions.
  4. MORAL RISKS OF AI/ML: MILITARY ROBOTS AS AN EXAMPLE
  ➤ They make killing easier (at a greater physical and psychological distance) with fewer casualties.
  ➤ A likely increase in overall violence in the world
  ➤ The ramifications of allowing machines to harm humans or to take their lives
  ➤ Potentially surrendering our moral decisions and moral responsibility to an opaque system
  5. ASKING ETHICISTS: WHAT IS A MORAL ACTION?
  ➤ Utilitarianism: maximize the utility of the outcome
  ➤ Deontology: follow moral rules/principles
  ➤ Contractarianism: maximize the outcome for the least privileged
  ➤ Disappointing to programmers/engineers?
  6. PROGRAMMERS & ENGINEERS
  ➤ Want a working solution:
  ➤ A set of constraints and if-then statements for a machine that enables it to identify and process morally relevant considerations, then determine and execute an action that is not only rational but also ethical in the given task environment.
  7. ETHICISTS
  ➤ Seek generalizable and abstract knowledge about morality
  ➤ Define what is moral and investigate how moral reasoning works
  ➤ Study borderline cases that reveal subtle differences among moral situations
  8. WHAT WOULD PROGRAMMING MORALITY LOOK LIKE?
  ➤ General AI vs. weak/narrow AI
  ➤ Machine learning projects
    ➤ Including those related to library collections, services, or operations
  ➤ The issue of algorithmic bias that replicates/magnifies existing social prejudices due to ML's heavy reliance on data
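The algorithmic-bias issue can be made concrete with a simple audit: compare a model's favorable-decision rates across groups (the demographic-parity gap). The data, group labels, and single metric below are hypothetical simplifications; real audits use several fairness metrics and real deployment data.

```python
# Sketch of a demographic-parity audit over hypothetical decisions.
def selection_rates(decisions, groups):
    """Favorable-decision rate per group (decisions are 0/1)."""
    rates = {}
    for g in set(groups):
        picks = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(picks) / len(picks)
    return rates

def parity_gap(decisions, groups):
    """Largest difference in selection rates between any two groups."""
    r = selection_rates(decisions, groups).values()
    return max(r) - min(r)

decisions = [1, 1, 0, 1, 0, 0, 0, 1]                    # 1 = favorable outcome
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]    # hypothetical groups
print(round(parity_gap(decisions, groups), 2))  # 0.5 (A favored 75% vs. B 25%)
```

A large gap does not by itself prove unfairness, but it flags exactly the kind of data-driven disparity the slide warns about and gives reviewers a number to investigate.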
  9. LEVELS OF MACHINE MORALITY
  ➤ Operational morality
    ➤ Tools/systems low in both autonomy and ethical sensitivity
    ➤ e.g., a gun with a childproof safety mechanism
    ➤ Within the full control of the tool's designers and users
  ➤ Functional morality
    ➤ Both autonomy and ethical sensitivity somewhere in the mid-range
    ➤ Systems with significant autonomy but little ethical sensitivity
    ➤ Systems with little autonomy but high ethical sensitivity
    ➤ e.g., an aircraft with an autopilot feature; an ethical decision-support system for clinicians (MedEthEx)
  ➤ Full moral agency
    ➤ Systems with high autonomy and high ethical sensitivity
    ➤ Representation of values and capacity for moral reasoning
    ➤ Moral responsibility
  Source: Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong (Oxford: Oxford University Press, 2010), p. 26.
  10. QUESTIONS TO ASK
  ➤ What goal is the AI/ML system designed to achieve?
  ➤ What level of autonomy/ethical sensitivity comes with the system?
  ➤ What level of machine morality is feasible or appropriate?
  ➤ Moral reasoning: potential steps
    ➤ Select potential actions that achieve the goal.
    ➤ Identify morally relevant considerations under the given goal.
    ➤ Anticipate outcomes and consequences, factoring in the context (environment) and involved parties.
    ➤ Determine and execute the best action.
    ➤ Compare the actual outcome with the anticipated outcome.
    ➤ A feedback loop to improve the system based on past experience.
    ➤ Additional system-wide audits for specific biases.
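The potential moral-reasoning steps above can be sketched as one feedback loop: propose actions, filter by moral considerations, predict outcomes, act, then compare the actual outcome with the anticipated one. Every callable below is a hypothetical stand-in; in a real system each step would be a substantial subsystem.

```python
# Sketch of the moral-reasoning steps as a feedback loop.
# All function arguments are hypothetical placeholders.
def moral_reasoning_loop(goal, propose, morally_acceptable, predict, execute, learn):
    # 1. Select potential actions that achieve the goal.
    actions = propose(goal)
    # 2. Identify morally relevant considerations; keep acceptable actions.
    acceptable = [a for a in actions if morally_acceptable(a, goal)]
    if not acceptable:
        return None  # escalate to a human when nothing passes
    # 3. Anticipate outcomes, factoring in context and involved parties.
    anticipated = {a: predict(a) for a in acceptable}
    # 4. Determine and execute the best action.
    best = max(anticipated, key=lambda a: anticipated[a])
    actual = execute(best)
    # 5-6. Compare actual vs. anticipated outcome; feed back into the system.
    learn(best, anticipated[best], actual)
    return best

# Toy usage with stand-in functions:
log = []
chosen = moral_reasoning_loop(
    goal="deliver supplies",
    propose=lambda g: ["route-through-crowd", "longer-safe-route"],
    morally_acceptable=lambda a, g: a != "route-through-crowd",
    predict=lambda a: 0.7,
    execute=lambda a: 0.65,
    learn=lambda a, pred, actual: log.append((a, pred - actual)),
)
print(chosen)  # longer-safe-route
```

The `learn` hook is where the slide's feedback loop lives: recording the gap between anticipated and actual outcomes is what lets the system improve on past experience, and a separate audit pass would run over that log.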
  11. ULTIMATE CHALLENGE
  ➤ How do we code human values into an intelligent machine so that its actions not only reflect but also preserve and strengthen those values?
  12. THANK YOU! @bohyunkim