
Ethical Considerations in the Design of Artificial Intelligence


A presentation for the IEEE Ethics Symposium held in Vancouver in May 2016, featuring presentations from John C. Havens, Mike Van der Loos, John P. Sullins, and Alan Mackworth.

Published in: Technology

Ethical Considerations in the Design of Artificial Intelligence

  1. 1. Ethical Considerations in the Design of Artificial Intelligence John C. Havens * Mike Van der Loos * Alan Mackworth * John P. Sullins #AIEthics
  2. 2. The Delight In The Data Welcome and Introductions
  3. 3. Agenda • Introductions • John C. Havens • Mike Van der Loos • Alan Mackworth • John Sullins • Moderated Panelists Discussion • Audience Q&A • End #AIEthics
  4. 4. IEEE Global Ethics Initiative
  5. 5. • Launched April 5, 2016 • Executive Committee of twelve global thought leaders in AI, autonomous technology, and ethics • Eleven Committees featuring over eighty additional thought leaders from over twelve countries • IEEE Staff/Society Involvement: Representatives from SA, TA, RAS, SSIT, Computer Society, IEEE P2040* • AI Association Involvement: AAAI, EurAI, IJCAI • Policy orgs represented: WEF, UN, FCC, Future of Privacy Forum* • Companies represented include: IBM, EMC, Cisco, NXP, LucidAI, Google DeepMind* • Academic Institutions represented include: University of Texas, TU Delft, University of British Columbia, Arizona State University, University of Washington, University of Cambridge, Duke University, Harvard University, MIT, Georgia Institute of Technology* *Partial listing
  7. 7. Committees: • Executive Committee • AI Ecosystem Mapping Committee • General Principles and Guidance • Legal Issues • Affective Computing • Safety and Beneficence of AGI and ASI • Individual/Personal Data Control • Economics of Machine Automation/Humanitarian Issues • Methodologies to Guide Ethical Research, Design and Manufacturing • How to Imbue Ethics/Values into AI • Reframing Lethal Autonomous Weapons Systems (LAWS)
  8. 8. • Global Initiative invited to have satellite meeting as part of Europe’s largest AI Conference • Initiative Committees gather for first face-to-face meeting • Initiative Committees bring Charter Language (Crowdsourced Code of Conduct) to event • Committees Bring Standards Projects to Workshops (to submit to SA) • Attendees at Workshops help iterate Language • Attendees to Workshops provide feedback and vote on Projects
  9. 9. • Second face-to-face meeting at UT in March 2017 before the SXSW Conference • Attendees evolve Charter 2.0 to Charter 3.0 • Charter available via Creative Commons License for the good of the technology community at large • By March 2017, multiple Standards Projects will be recommended to SA as PARs • At UT, Global Initiative announces its formation as an Alliance, global University partnerships • Alliance iterates Charter annually via meetings around the world, creates Certifications/Workshops to implement Charter in multiple verticals, serves as an ongoing, global R&D Standards Pipeline for SA
  10. 10. Mike Van der Loos
  12. 12. HFM VAN DER LOOS, CARIS LAB, MAY 13, 2016. Collaborative Advanced Robotics and Intelligent Systems Lab. Elizabeth A. Croft, Mike Van der Loos
  13. 13. ROBOTS ARE COMING: HUMAN-ROBOT COLLABORATION. CARIS lab, UBC (2010); Baxter, Rethink Robotics (2012)
  14. 14. ROBOETHICS: ETHICS APPLIED TO ROBOTICS. Roboethics: human ethics; applied ethics adopted by designers / manufacturers / users; a code of conduct implemented in the artificial intelligence of robots; artificial ethics for robots to exhibit ethically acceptable behaviour. Robot's ethics: the morality of a hypothetical robot that is equipped with a conscience and the freedom to choose its own actions. Fiorella Operto, Ethics in Advanced Robotics, 18 IEEE ROBOT. AUTOM. MAG. 72-78 (2011)
  15. 15. PROBLEM. What is right / wrong? Fair / unfair? What should / ought a robot do? Who knows the answers? Design decisions, policy decisions, technical implementations; shaped by culture, religion, context, philosophical stance, …
  18. 18. AUTONOMOUS CARS: STUDYING WHAT PEOPLE THINK. A total of 10 polls and 766 responses on autonomous cars since April 25, 2014
  19. 19. AUTONOMOUS CARS: STUDYING WHAT PEOPLE THINK (image by Craig Berry). If you find yourself as the passenger of the tunnel problem, how should the car react? Continue straight and kill the child: 64%; swerve and kill the passenger (you): 36% (N=113, analyzed June 22, 2014). How hard was it for you to answer the tunnel problem question? Difficult: 24%; moderately difficult: 28%; easy: 48% (N=116, analyzed June 22, 2014). Who should determine how the car responds to the tunnel problem? Passenger: 44%; lawmakers: 33%; manufacturer / designer: 12%; other: 11% (N=113, analyzed June 22, 2014)
  21. 21. CONCLUSION: TAKE-HOME MESSAGES. PROBLEM: What should a robot do? Public acceptance & design decisions; a democratic approach to moral decisions; delegating decision making to atomic interactions; Human-Robot Interaction (HRI); roboethics
  22. 22. ACKNOWLEDGMENTS: CARIS Lab, ICICS, UBC Dept. of Mechanical Engineering, CFI, NSERC, Vanier Canada Graduate Scholarships. CONTACT INFORMATION: Mike Van der Loos, Ph.D., P.Eng., Assoc. Prof., Dept. of Mechanical Engineering, UBC, 6250 Applied Science Lane, Vancouver, BC V6T 1Z4 CANADA; phone: +1-604-827-4479
  23. 23. Alan Mackworth
  24. 24. Trusted Artificial Autonomous Agents, Alan Mackworth • New ontological category: Artificial Autonomous Agents (AAAs) • Q: Can we trust them? • A: No! • Q: Why not? • A: E.g. ‘Deep Learning’: opaque, with massive, inaccessible training sets • Ethical agents have to be trustworthy • Need new methods to build trusted, ethical agents • Ensure AAAs’ values are aligned with users’ and society’s values
  25. 25. Five Approaches to Building Trusted Agents 1. Formal methods for specification and verification 2. Hierarchical constraint-based modular architectures 3. Inferring human values: e.g. inverse reinforcement learning 4. Semi-autonomy, human in the loop 5. Participatory Action Design: user-centered with Wizard of Oz techniques
  26. 26. What We Need. Any ethical discussion presupposes we (and agents) can: • Model agent structure and functionality • Predict consequences of agent commands and actions • Impose constraints on agent actions such as goal reachability, safety and liveness (absence of deadlock and livelock) • Determine if an agent satisfies those constraints (almost always)
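The last two bullets above, imposing a safety constraint and determining whether the agent satisfies it, can be illustrated with a minimal reachability check over a toy finite-state agent model. The state names, the transition relation, and the unsafe set below are invented for illustration; they are not taken from the talk.

```python
# Minimal sketch: a safety constraint holds iff no unsafe state is
# reachable from the start state (checked by breadth-first search).
from collections import deque

def satisfies_safety(transitions, start, unsafe):
    """Return True if no state in `unsafe` is reachable from `start`."""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        if state in unsafe:
            return False
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True

# Toy agent model: from `move` a collision is possible.
model = {"idle": ["move"], "move": ["idle", "collide"]}
print(satisfies_safety(model, "idle", {"collide"}))  # False: collision reachable
safe_model = {"idle": ["move"], "move": ["idle"]}
print(satisfies_safety(safe_model, "idle", {"collide"}))  # True
```

Liveness properties (absence of deadlock and livelock) need richer machinery, such as the timed automata mentioned on the next slides, but the same "model, specify, check" shape applies.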
  27. 27. Formal Methods to Build Trustworthy AAAs To show that implementation satisfies specification, we need a tripartite theory: 1. Language to express agent structure and dynamics 2. Language for constraint-based specifications 3. Method to determine if an agent will (be likely to) satisfy its specifications, connecting 1 to 2
  28. 28. A Constraint-Based Agent (CBA). Diagram: CBA structure coupled with a constraint solver
  29. 29. Formal Methods for Agent Verification. The CBA framework consists of: 1. Constraint Net (CN) → system modelling 2. Timed ∀-automata → behavior specification 3. Model-checking and Liapunov methods → behavior verification (Zhang & Mackworth, 1993, …)
  30. 30. Hierarchical Modular CBA in CN. Diagram: CBA structure; control synthesis with prioritized constraints, Constraint1 > Constraint2 > Constraint3 > …
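As a rough illustration of control synthesis with prioritized constraints, the sketch below filters candidate actions through constraint predicates in priority order, relaxing a constraint only when it would rule out every remaining action. The action tuples and the two example constraints are hypothetical placeholders, not the Constraint Net formalism itself.

```python
# Sketch: pick an action consistent with as many constraints as
# possible, honoring priority order (Constraint1 > Constraint2 > ...).
def choose_action(actions, prioritized_constraints):
    candidates = list(actions)
    for constraint in prioritized_constraints:  # highest priority first
        kept = [a for a in candidates if constraint(a)]
        if kept:              # keep the filtered set; if the constraint
            candidates = kept # would eliminate everything, relax it
    return candidates[0] if candidates else None

# Toy actions: (speed, distance_to_obstacle)
actions = [(0.0, 5.0), (1.0, 0.2), (0.5, 2.0)]
safety   = lambda a: a[1] > 0.5   # Constraint1: keep clear of obstacles
progress = lambda a: a[0] > 0.0   # Constraint2: keep moving
print(choose_action(actions, [safety, progress]))  # (0.5, 2.0)
```

The priority ordering matters: swapping `safety` and `progress` could let a lower-priority goal override a safety requirement, which is exactly what the "Constraint1 > Constraint2" hierarchy is meant to prevent.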
  31. 31. Artificial Semi-autonomous Agents (ASAs) • Keep human(s) in the loop • Shared autonomy at the higher control levels • Provide ‘sliders’ for users to adjust autonomy levels • Not one size fits all • Case study: smart wheelchairs for cognitively and physically impaired older adults
  32. 32. Docking and Back-in Parking Assistance: Driving Scenario at a Long-Term Care Facility
  33. 33. Shared Autonomy Wheelchair Control Modes. Level 1: basic safety by limiting speed. Level 2: Level 1 + non-intrusive steering guidance. Level 3: Level 1 + intrusively turning away from obstacles. Level 4: completely autonomous. Systems developed using a user-centered Participatory Action Design methodology and Wizard of Oz techniques (The Wizard [Baum, 1900])
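The four control levels can be sketched as a filter between the user's joystick command and the motors. The speed cap, the steering-nudge gain, and the function signature below are invented placeholders for illustration; the actual wheelchair controllers are not described in the slides.

```python
# Hedged sketch of the four shared-autonomy levels as a command filter.
def filter_command(level, speed, steer, obstacle_ahead, suggested_steer=0.0):
    """Return the (speed, steer) actually sent to the motors."""
    if level >= 1:
        speed = min(speed, 1.0)           # Level 1: basic safety speed cap
    if level == 2 and obstacle_ahead:
        steer += 0.1 * suggested_steer    # Level 2: gentle, non-intrusive nudge
    if level >= 3 and obstacle_ahead:
        steer = suggested_steer           # Level 3: intrusive turn-away
    if level == 4:
        speed, steer = 1.0, suggested_steer  # Level 4: fully autonomous
    return speed, steer

print(filter_command(1, 2.0, 0.0, obstacle_ahead=False))        # (1.0, 0.0)
print(filter_command(3, 2.0, 0.0, True, suggested_steer=-0.5))  # (1.0, -0.5)
```

Note how each level strictly adds authority to the automation, which is the "slider" idea from the Artificial Semi-autonomous Agents slide: the user (or caregiver) chooses how much control to delegate.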
  34. 34. Closing Thoughts • More R&D on building trusted AAAs and ASAs required • Formal specification and verification of AAAs needed • Governments lack technical expertise to develop standards • Lack of effective global standards bodies with enforcement • Regulatory capture: power of corporations to fend off regulation • Poor education of AI scientists & roboticists in morals and ethics • AI singularity & superintelligence hype overshadows real concerns • See One Hundred Year Study of AI Thanks to: Y. Zhang, P. Viswanathan, A. Mihailidis, B. Adhikari, I. Mitchell, J. Little, …. Contact: @AlanMackworth URL:
  35. 35. 36 John P. Sullins
  36. 36. John P. Sullins Professor of Philosophy Sonoma State University
  37. 37. Embedded Ethics Design for AI and Robotics • Building workable solutions requires many disciplines to work together • When it is working well, philosophy is a big-picture discipline, and it has much to offer in our quest to build beneficial AI and robotics applications • Especially in the area of ethics and the design of artificial moral agents
  38. 38. Bryant Walker Smith • Lawyers and Engineers Should Speak the Same Robot Language, Bryant Walker Smith, 2015 • Each application has many uses • Actual • Legal • Reasonable • Use intended by the designer • “An open question is the extent to which product design should attempt to confine actual uses to those that are legal, reasonable, or intended.”
  39. 39. Ethical Design. I recommend we add ethical use to the list of potential uses as well: A: Actual use; B: Reasonable use; C: Intended use; D: Legal use; E: Ethical use
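The five use categories lend themselves to a simple set-membership check: a given actual use either does or does not also fall into the legal, reasonable, intended, and ethical sets. The delivery-drone uses below are hypothetical examples added only to illustrate the classification; they do not appear in the slides.

```python
# Smith's use categories, plus the proposed ethical use, as sets.
CATEGORIES = {
    "actual":     {"deliver parcels", "film neighbours"},
    "legal":      {"deliver parcels"},
    "reasonable": {"deliver parcels"},
    "intended":   {"deliver parcels"},
    "ethical":    {"deliver parcels"},
}

def classify(use):
    """Return every category this use falls into."""
    return sorted(cat for cat, uses in CATEGORIES.items() if use in uses)

print(classify("film neighbours"))  # ['actual'] -- actual, but nothing else
print(classify("deliver parcels"))  # all five categories
```

Smith's open question then becomes concrete: should the product be designed so that the "actual" set stays inside the intersection of the other four?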
  40. 40. Ethics Applied to AI and Robotics (image from: Are Deontological Moral Judgements Rationalizations?). Some problems: • Classical ethics is only concerned with human agency • What is the best ethical system to apply? • No science is ever truly finished, so the science of ethics will not result in one unified theory either
  41. 41. A Helpful Alternative • The following discussions can be distracting: egoism vs altruism; self-interest vs benevolence; free will vs determinism; responsibility • Morality has roots in evolution • Ethics is a tool or instrument that we use to design new forms of beneficial behavior. American pragmatist philosopher John Dewey (1859-1952)
  42. 42. Three Active Areas of AI Ethics Research
  43. 43. Embedded Ethics Design • "The …, carries on the great part of his work without consciously asking himself whether his work is going to benefit himself or someone else. He is interested in the work itself; such objective interest is a condition of mental and moral health.... Nevertheless, there are occasions when conscious reference to the welfare of others is imperative." (Dewey, Ethics, 1935) • We need embedded ethics professionals at the level of the design team • To meet the needs of engineers who must focus on their work • And for the organization that employs them to pay appropriate concern to the ethical impacts of their work • This can take the form of consultants, but it would be best to have some of the designers trained in value-sensitive design • Their job is to find the areas of ethical concern in a design and suggest constructive means for mitigating problems in the design stage • This prevents the approach we often see: release, disaster, beg forgiveness • Since embedded ethicists might be susceptible to something like Stockholm syndrome, we must also have ethics review boards
  44. 44. AI and Robotics Ethics Boards Short term ethical concerns are met by creating a dialog that follows these steps 1. Identify the ethical concerns raised by the new technology. a. Anticipate consequences. Create proactive ethics rather than merely reactive ones. b. Enhance the standard model IRB and replace it with one that fosters embedded ethicists in the design groups that closely work with them and help foster a community of practice around ethical deliberation. 2. Vet the overall design strategy of the organization. a. Define the ethical goals—what does the organization want to craft as its legacy? 3. Help operationalize the ethical code of the organization as it is applied to AI and robotic projects and update this code as new challenges are resolved. 4. Keep a repository of these deliberations to facilitate future discussions
  45. 45. Artificial Ethical/Moral Agents (AEA, AMA) • Artificial Practical Wisdom • Virtues for robots • Security • Integrity • Accessibility • Ethical trust • Functional moral sensibility • Accurate choice of ethical actions and goals • Context sensitive • Accurate ranking of exemplar cases and reasoning
  46. 46. For More Information Applied Professional Ethics for the Reluctant Roboticist. Open Robotics, 2015 Ethics Boards for Research in Robotics and Artificial Intelligence: Is it Too Soon to Act? Chapter 5 in Social Robots: Boundaries, Potential, Challenges, edited by Marco Nørskov, Ashgate
  47. 47. Q&A – Ethics in AI
  48. 48. John C. Havens @johnchavens; Alan Mackworth @AlanMackworth; Mike Van der Loos; John Sullins
  49. 49. Thank you.