PSYCHOLOGY OF
TECHNOLOGY:
COMPUTER AND ROBOT
ETHICS AND MORALITY
DAVE MILLER
ITINERARY
 ETHICS AND MORALITY
 MORAL THEORIES
 Utilitarianism
 Kantianism (The Categorical Imperative)
 THE BATTLE OF THE THEORIES
 The Trolley Problem (Foot, 1967; Thomson, 1976)
 PERSUASION ETHICS
 ROBOT ETHICS
ETHICAL THEORIES
 UTILITARIANISM: Jeremy Bentham and John Stuart Mill
 “The Greatest Good for the Greatest Number”
 Act Utilitarianism: Calculate the utility of each individual act
 Rule Utilitarianism: Follow rules whose general adoption would maximize utility
 Kant’s Categorical Imperative
 “Act only according to that maxim whereby you can at the same time will that it
should become a universal law without contradiction.”
 In short: if everybody acted on this maxim, could the practice still hold together without contradiction?
THE TROLLEY PROBLEM
(Philippa Foot, 1967)
[Image: trolley problem illustration, © Jonas Kubilius]
CHOOSE WISELY
VR FORCED-CHOICE DECISION MAKING
(Skulmowski et al., 2014)
ASIMOV’S LAWS
(Asimov, 1957, 1978, 1985)
1. A robot may not injure a human being or, through inaction, allow a human
being to come to harm
2. A robot must obey the orders given it by human beings except where
such orders would conflict with the First Law
3. A robot must protect its own existence as long as such protection does
not conflict with the First or Second Laws
FOGG’S TREE OF ETHICS
IT’S A TRAP!
QUESTIONABLY ETHICAL
PERSUASIVE TECH: MOPy FISH
[Image: MOPy Fish, via Wikimedia Commons]
AUTONOMOUS SYSTEM ETHICS
[Image source: https://static-secure.guim.co.uk/sys-images/Guardian/Pix/pictures/2014/3/20/1395318797375/The-Machine-full-of-ideas-012.jpg]
DISCUSSION
 At what point is a robot or system no longer merely a tool but an entity with
independent ethics and morals?
 At what point is a robot or system a “person” with rights of its own?
 How much independent agency should a system have?
 When should the computer listen to a human?
 When should the computer overrule a human?