MACHINE LEARNING & MORAL DECISION-MAKING
CTO, University of Rhode Island Libraries
Libraries Facilitating Cross-disciplinary Research - DC Workshop, Washington D.C., May 31, 2019
POTENTIAL ISSUES OF CROSS-DISCIPLINARY RESEARCH & ML
➤ One Issue: Moral/Ethical Dimension of Machine Intelligence
➤ If ML techniques drive AI developments to the degree that AI
systems come to partly or fully automate human decision-making,
what would that mean?
➤ An important question that applies to any discipline or area where ML techniques are used, from the health sciences to the military.
WHY DOES MACHINE MORALITY MATTER?
➤ The notorious trolley case
➤ No longer a thought experiment with self-driving cars on the road
➤ UAVs (drones), UGVs (unmanned ground vehicles), sentry robots, and other AI systems raise new and unsettling ethical questions.
MORAL RISKS OF AI/ML: MILITARY ROBOTS AS AN EXAMPLE
➤ They make killing easier, with greater physical and psychological distance and fewer casualties.
➤ The likely increase in overall violence in the world
➤ The ramifications of allowing machines to harm humans or to take their lives
➤ Potentially surrendering our moral decisions and moral
responsibility to an opaque system
ASK ETHICISTS: WHAT IS A MORAL ACTION?
➤ Maximize the utility of the outcome (utilitarianism)
➤ Follow the moral rules/principles (deontology)
➤ Maximize the outcome for the least privileged (Rawls's difference principle)
➤ Disappointing to programmers/engineers?
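To make the contrast concrete, here is a minimal sketch (in Python; all data, names, and numbers are invented for illustration) of the three answers above rendered as decision rules over candidate actions. Everything hard, such as scoring utilities or encoding the rules, is exactly what the frameworks leave unspecified.

```python
# Hypothetical sketch: each ethical framework as a decision rule over
# candidate actions. Data structures and numbers are invented.

actions = [
    {"name": "A", "utilities": {"rich": 9, "poor": 1}, "breaks_rule": False},
    {"name": "B", "utilities": {"rich": 4, "poor": 5}, "breaks_rule": False},
    {"name": "C", "utilities": {"rich": 10, "poor": 2}, "breaks_rule": True},
]

def utilitarian(acts):
    # Maximize the total utility of the outcome.
    return max(acts, key=lambda a: sum(a["utilities"].values()))

def deontological(acts):
    # Follow the moral rules: rule-breaking actions are off the table,
    # whatever their utility; tie-break among the rest (here, by total).
    permitted = [a for a in acts if not a["breaks_rule"]]
    return max(permitted, key=lambda a: sum(a["utilities"].values()))

def rawlsian(acts):
    # Maximize the outcome for the least privileged party.
    return max(acts, key=lambda a: min(a["utilities"].values()))

print(utilitarian(actions)["name"])    # C -- highest total utility (12)
print(deontological(actions)["name"])  # A -- best action that breaks no rule
print(rawlsian(actions)["name"])       # B -- best worst-case outcome (5)
```

The same toy data yields three different "moral" choices, one per framework, which is part of what a programmer looking for a single working solution finds disappointing.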
PROGRAMMERS & ENGINEERS
➤ A working solution
➤ A set of constraints and if-then statements for a machine, which will enable it to identify and process morally relevant considerations, and then determine and execute an action that is not only rational but also ethical in the given task (a minimal sketch follows this list)
➤ Generalizable and abstract knowledge about morality
➤ Define what is moral and investigate how moral reasoning works
➤ Borderline cases that reveal subtle differences in varying contexts
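As referenced above, here is a minimal sketch of what such a "working solution" might look like: hard constraints filter out impermissible actions, then a simple if-then ranking picks among the rest. The action fields and thresholds are hypothetical stand-ins, not an established design.

```python
# Hypothetical constraint-plus-if-then filter; fields and thresholds
# are stand-ins for whatever a real system would need to represent.

CANDIDATES = [
    {"name": "proceed",  "harms_human": False, "risk": 0.2, "task_value": 0.9},
    {"name": "shortcut", "harms_human": True,  "risk": 0.1, "task_value": 1.0},
    {"name": "wait",     "harms_human": False, "risk": 0.0, "task_value": 0.3},
]

def permissible(action):
    # Hard constraints: some actions are ruled out no matter how well
    # they serve the task.
    return not action["harms_human"] and action["risk"] <= 0.5

def choose_action(candidates):
    # If-then layer: among permissible actions, pick the one that best
    # serves the task (here, task value net of risk).
    allowed = [a for a in candidates if permissible(a)]
    if not allowed:
        return None  # no ethical option; defer to a human operator
    return max(allowed, key=lambda a: a["task_value"] - a["risk"])

print(choose_action(CANDIDATES)["name"])  # proceed
```

Note that the sketch quietly assumes the machine can already label an action as "harms_human," which is where the ethicists' borderline cases come back in.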
WHAT WOULD PROGRAMMING MORALITY LOOK LIKE?
➤ General AI vs. Weak/Narrow AI
➤ Machine learning projects
➤ Including those related to library collections, services, or operations
➤ The issue of algorithmic bias that replicates/magnifies existing social prejudices due to ML's heavy reliance on existing data (see the toy illustration below)
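A toy illustration of the bias problem, with synthetic numbers: even the simplest possible model "trained" on skewed historical decisions reproduces the skew.

```python
# Hypothetical illustration: a model fit to skewed historical data
# reproduces the skew. Data and the majority-vote "model" are invented.
from collections import Counter

# Historical decisions: group A was approved 90% of the time,
# group B only 20% of the time (synthetic numbers).
history = [("A", 1)] * 90 + [("A", 0)] * 10 + [("B", 1)] * 20 + [("B", 0)] * 80

def fit_majority_model(rows):
    # "Learn" the most common outcome per group -- the simplest
    # possible model, and one that inherits whatever bias is in rows.
    votes = {}
    for group, label in rows:
        votes.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

model = fit_majority_model(history)
print(model)  # {'A': 1, 'B': 0} -- the historical prejudice, now automated
```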
LEVELS OF MACHINE MORALITY
➤ Operational morality
➤ tools/systems low in both autonomy and ethical sensitivity
➤ e.g. a gun with a childproof safety mechanism
➤ within the full control of a tool’s designers and users
➤ Functional morality
➤ both autonomy and ethical sensitivity somewhere in the mid-range
➤ systems with significant autonomy but little ethical sensitivity
➤ systems with little autonomy but high ethical sensitivity
➤ e.g. an aircraft with an auto-pilot feature; an ethical decision-support system for clinicians
➤ Full moral agency
➤ systems with high autonomy and high ethical sensitivity
➤ representation of values and capacity for moral reasoning
➤ moral responsibility
Source: Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong (Oxford: Oxford University Press, 2010), p. 26.
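Wallach and Allen's levels form a two-axis grid (autonomy x ethical sensitivity). A toy classification function, with invented score thresholds, makes the taxonomy concrete:

```python
# Hypothetical sketch of Wallach & Allen's two-axis taxonomy:
# autonomy and ethical sensitivity each scored 0.0-1.0 (scores invented).

def morality_level(autonomy: float, sensitivity: float) -> str:
    if autonomy >= 0.8 and sensitivity >= 0.8:
        return "full moral agency"      # high on both axes
    if autonomy >= 0.3 or sensitivity >= 0.3:
        return "functional morality"    # mid-range, or high on one axis only
    return "operational morality"       # low on both; designers stay in control

print(morality_level(0.1, 0.1))  # gun with childproof safety -> operational
print(morality_level(0.9, 0.2))  # aircraft autopilot -> functional
print(morality_level(0.2, 0.9))  # clinical decision support -> functional
```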
QUESTIONS TO ASK
➤ What goal is the AI/ML system designed to achieve?
➤ What level of autonomy / ethical sensitivity comes with the system?
➤ What level of machine morality is feasible or appropriate?
➤ Moral reasoning: potential steps (sketched as a loop after this list)
➤ Select potential actions that achieve the goal.
➤ Identify morally relevant considerations under the given goal.
➤ Anticipate outcomes and consequences, factoring in the context (environment) and involved parties.
➤ Determine and execute the best action.
➤ Compare the actual outcome with the anticipated outcome.
➤ A feedback loop to improve the system based upon past experience.
➤ Additional system-wide audits for specific biases.
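The steps above can be read as a control loop. Here is a hypothetical skeleton, with toy stubs standing in for each step; filling in those stubs is the open research problem, not a solved one.

```python
# Hypothetical skeleton of the moral-reasoning loop described above.
# All functions and values are toy stand-ins.

def propose_actions(goal):
    # Step 1: select potential actions that achieve the goal.
    return ["act_1", "act_2"]

def predict_outcome(action, context, parties):
    # Step 3: anticipate consequences given context and involved parties.
    return {"harm": 0.1 if action == "act_1" else 0.4}

def choose(candidates, predictions):
    # Steps 2 & 4: weigh morally relevant considerations (here reduced
    # to predicted harm) and determine the best action.
    return min(candidates, key=lambda a: predictions[a]["harm"])

def run_step(goal, context, parties, history):
    candidates = propose_actions(goal)
    predictions = {a: predict_outcome(a, context, parties) for a in candidates}
    best = choose(candidates, predictions)
    actual = {"harm": 0.15}  # stand-in for the observed outcome of execution
    # Step 5: compare the actual outcome with the anticipated outcome.
    error = actual["harm"] - predictions[best]["harm"]
    # Step 6: feedback loop -- log the gap so the system can improve;
    # system-wide bias audits would read this same history.
    history.append((best, predictions[best], actual, error))
    return best

history = []
print(run_step("deliver supplies", context={}, parties=["patient"], history=history))
```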
➤ How to code human values into an intelligent machine, so that its actions not only reflect but also preserve and strengthen those values?