Academic Course: 12 Safety and Ethics


By Alan Winfield



  1. Self-Awareness in Autonomic Systems: Safety and Ethics. Designed by Alan Winfield.
  2. Outline
     • The problem of safety in autonomic systems – and why we need a radical new approach
     • The problem of ethics in autonomic systems – using robots as an example
     • Self-awareness might provide a powerful means for building safe and ethical autonomic systems
  3. The safety problem 1
     • For any engineered system to be trusted, it must be safe
       – We already have many examples of complex engineered systems that are trusted; passenger airliners, for instance
       – These systems are trusted because they are designed, built, verified and operated to very stringent design and safety standards
       – The same will need to apply to autonomous systems
  4. The safety problem 2
     • The problem of safe autonomous systems in unstructured or unpredictable environments, i.e.
       – robots designed to share human workspaces and physically interact with humans must be safe,
       – yet guaranteeing safe behaviour is extremely difficult because the robot’s human-centred working environment is, by definition, unpredictable
       – it becomes even more difficult if the robot is also capable of learning or adaptation
  5. The ethical problem
     • Use autonomous robots as a case study
       – Four ethical problems
       – Asimov’s three laws of robotics
       – Asimov revised: 5 ethics for roboticists
       – But could robots themselves be ethical..?
  6. What is a robot?
  7. Four ethical problems
     • The problem of autonomous robots that pull the trigger
     • The problem of robots that induce an emotional reaction, or dependency
     • The problem of humanoid robots that appear to be intelligent but are not
     • The problem of who is responsible when a robot causes harm
  8. Asimov’s three laws of robotics
     1. A robot may not injure a human being or, through inaction, allow a human being to come to harm;
     2. a robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law; and
     3. a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
  9. Asimov revised: 5 ethics for roboticists
     1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
  10. Asimov revised: 5 ethics for roboticists
     2. Humans, not robots, are responsible agents. Robots should be designed & operated as far as is practicable to comply with existing laws & fundamental rights & freedoms, including privacy.
  11. Asimov revised: 5 ethics for roboticists
     3. Robots are products. They should be designed using processes which assure their safety and security.
  12. Asimov revised: 5 ethics for roboticists
     4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
  13. Asimov revised: 5 ethics for roboticists
     5. The person with legal responsibility for a robot should be attributed.
     Draft ethical principles proposed by the UK EPSRC/AHRC working group on robot ethics, September 2010: ngineering/activities/Pages/principlesofrobotics.aspx
  14. But could a robot be ethical?
     • An ethical robot would require:
       – the ability to predict the consequences of its own actions (or inaction)
       – a set of ethical rules against which to test each possible action/consequence, so it can choose the most ethical action
       – new legal status..?
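The two requirements above – a predictor of consequences plus ethical rules for ranking them – can be sketched as a simple selection loop. This is only an illustrative sketch: the function names, actions and outcome values below are all invented, and a real robot's internal model would supply the predictions.

```python
# A minimal sketch of an ethical action selector. All names and
# outcome values are hypothetical, invented purely for illustration.

def predict_consequence(action):
    """Stand-in for an internal model: maps an action to a predicted outcome."""
    outcomes = {
        "proceed": {"human_harm": 2, "robot_harm": 0},
        "stop":    {"human_harm": 0, "robot_harm": 0},
        "swerve":  {"human_harm": 0, "robot_harm": 1},
    }
    return outcomes[action]

def ethical_score(outcome):
    """Asimov-like lexicographic ordering: predicted harm to humans
    always outweighs harm to the robot itself (First Law before Third)."""
    return (outcome["human_harm"], outcome["robot_harm"])

def choose_action(actions):
    # Select the action whose predicted consequence scores lowest,
    # i.e. least human harm first, then least robot harm.
    return min(actions, key=lambda a: ethical_score(predict_consequence(a)))

print(choose_action(["proceed", "stop", "swerve"]))  # stop
```

The tuple comparison in `ethical_score` is one simple way to encode a strict priority between rules: no amount of robot self-preservation can trade off against any predicted human harm.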
  15. Using internal models
     • Internal models might provide a level of functional self-awareness
       – sufficient to allow a robot to ask what-if questions about the consequences of each of its next possible actions
       – the same internal modelling architecture could conceivably embody both safety and ethical rules
       – see slide set 12 Systems with Internal Models
  16. A thought experiment
     Consider a robot that has four possible next actions:
     1. turn left
     2. move ahead
     3. turn right
     4. stand still
     Which action would lead to the least harm to the human?
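Under the internal-model view of slide 15, the thought experiment reduces to an argmin over predicted harm. The harm estimates below are invented numbers standing in for whatever an internal simulation of the four actions might return; only the selection step is the point of the sketch.

```python
# Hypothetical harm estimates an internal model might return for each of
# the four candidate actions; the numbers are purely illustrative.
predicted_human_harm = {
    "turn left": 0.7,
    "move ahead": 0.9,
    "turn right": 0.1,
    "stand still": 0.4,
}

# The least-harm choice is simply the action with the lowest prediction.
least_harm_action = min(predicted_human_harm, key=predicted_human_harm.get)
print(least_harm_action)  # turn right
```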
  17. In conclusion
     • I strongly suspect that internal models might prove to be the only way to guarantee safety in robots, and by extension autonomous systems, in unknown and unpredictable environments
       – and just maybe provide ethical behaviours too
  18. References
     • Woodman R, Winfield AFT, Harper C and Fraser M, Building Safer Robots: Safety Driven Control, International Journal of Robotics Research, 31(13), 1603–1626, 2012.
     • Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, 2008.
     • M. Anderson and S. L. Anderson, Machine Ethics, Cambridge University Press, 2011.
     • Royal Academy of Engineering, Autonomous Systems: Social, Legal and Ethical Issues, August 2009 – ems_Report_09.pdf
     • Draft ethical principles proposed by the EPSRC/AHRC working group on robot ethics, September 2010 – Pages/principlesofrobotics.aspx