
Technology for everyone - AI ethics and Bias



Slides from my talk at #ToonTechTalks on 27 September 2018.

We all see the great potential AI brings us. But is it really bringing it to everyone? How are we ensuring that under-represented groups are included and vulnerable people are protected? What do we do when our technology is unintentionally biased and discriminates against certain groups? And what if the data and the AI are correct, but the side effect is that some groups are put at risk? These are all questions we need to think about as we advance technology for the benefit of humanity.

Sharing what I've learned from my work in diversity and digital, and from following great minds in this field such as Joanna Bryson, Virginia Dignum, Rumman Chowdhury, Juriaan van Diggelen, Valerie Frissen, Catelijne Muller, and many more.



  1. Technology for everyone. By Marion Mulder, #ToonTechTalks, 27 September 2018. AI Bias and Ethics. Image source: https://www.silvergroup.asia/2012/10/10/new-tech-can-also-challenge-the-40-consumer/
  2. Apparently I’m a sad angry “world ethic” male… Marion Mulder
  3. We all want to make great products
  4. But sometimes along the line things don’t quite turn out right (yet)
  5. Some examples of where it did not go quite right… Sales of a US-brand helmet in Asia were low… It turns out the average head size in the database came from a sample that was not diverse enough (a toy illustration follows below). Image source: Che-Wei Wang, Mindful algorithms: the new role of the designer in generative design, TNW Conference 2018
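A minimal numeric sketch of why this happens, with made-up numbers: an average computed over a non-representative database is dominated by the over-sampled group, so a product sized to that average fits the under-sampled group poorly.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical head-circumference distributions (cm) for two populations.
pop_a = rng.normal(loc=57.0, scale=1.5, size=9_000)  # over-represented group
pop_b = rng.normal(loc=55.0, scale=1.5, size=1_000)  # under-represented group
database = np.concatenate([pop_a, pop_b])

print(f"database mean: {database.mean():.1f} cm")  # ~56.8, dominated by pop_a
print(f"group B mean:  {pop_b.mean():.1f} cm")     # ~55.0, badly served by a one-size design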
  6. Some examples of where it did not go quite right… Automatic translations are gender-biased
  7. Some examples of where it did not go quite right… Google’s image classifier wrongly labelled black people
  8. Some examples of where it did not go quite right…
  9. Or they turn out right (do they?), but…
  10. Source: Average faces of men and women around the world. Average faces of the world (really??)
  11. Source: Virginia Dignum – UMEA University
  12. Criminal face detection AI. Source: Juriaan van Diggelen - TNO
  13. Source: https://ilga.org/maps-sexual-orientation-laws
  14. People blindly trust a robot to lead them out of a building in case of a fire emergency. [Overtrust of robots in emergency evacuation scenarios, P. Robinette, W. Li, R. Allen, A. M. Howard, A. R. Wagner – Human-Robot Interaction, 2016] Source: Juriaan van Diggelen - TNO
  15. So things go ‘wrong’ because…
  16. So things go ‘wrong’ because…? • Human bias • Size of the data set • Source of the data • Completeness • How representative the data is • Intended use (was the data set originally set up for what you are now trying to do with it?) • Is the historical data accurate?* • Labelling • Ontology isn’t always binary • Complexity reduced to binary • Demographic, geographical and cultural differences in definition, law and interpretation • ..... (a representativeness check is sketched below)
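Several of these points, size, source, completeness and representativeness, can be checked mechanically. A minimal sketch, assuming a pandas DataFrame with a hypothetical "gender" column and a known reference distribution for the context of use:

import pandas as pd

def representation_gap(df: pd.DataFrame, column: str,
                       reference: dict[str, float]) -> pd.DataFrame:
    """Compare each subgroup's share of the data to a reference distribution."""
    observed = df[column].value_counts(normalize=True)
    rows = [{"group": g,
             "observed": float(observed.get(g, 0.0)),
             "reference": ref,
             "gap": float(observed.get(g, 0.0)) - ref}
            for g, ref in reference.items()]
    return pd.DataFrame(rows)

# Toy data: a 90/10 split where the real population is roughly 50/50.
df = pd.DataFrame({"gender": ["male"] * 90 + ["female"] * 10})
print(representation_gap(df, "gender", {"male": 0.5, "female": 0.5}))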
  17. Image source: Valerie Frissen – Erasmus University / SIDN fonds, at the Fraiday event (presentation on SlideShare)
  18. Let’s start with BIAS
  19. Dealing with human BIAS • Bias: expectations derived from experienced regularities in the world. Knowing what “programmer” means, including that most are male. • Stereotypes: biases based on regularities we do not wish to persist. Knowing that (most) programmers are male. • Prejudice: acting on stereotypes. Hiring only male programmers. Source: Joanna Bryson at the AI Summit – Caliskan, Bryson & Narayanan 2017. Ada Lovelace
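Caliskan, Bryson & Narayanan (2017) measured such biases in word embeddings by comparing cosine similarities between target and attribute words. A minimal sketch of that association score, with random toy vectors standing in for real embeddings (in practice you would load, say, GloVe vectors):

import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, attrs_a, attrs_b) -> float:
    """Mean similarity to attribute set A minus mean similarity to set B."""
    return (np.mean([cosine(w, a) for a in attrs_a])
            - np.mean([cosine(w, b) for b in attrs_b]))

rng = np.random.default_rng(1)
male_attrs = [rng.normal(size=50) for _ in range(5)]    # stand-ins for he, him, man, ...
female_attrs = [rng.normal(size=50) for _ in range(5)]  # stand-ins for she, her, woman, ...
programmer = rng.normal(size=50)                        # stand-in for "programmer"

# With real embeddings, a positive score means "programmer" sits closer to
# the male attribute words; with these random vectors the sign is just noise.
print(f"association: {association(programmer, male_attrs, female_attrs):+.3f}")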
  20. Source: beperktzicht.nl
  21. Source: beperktzicht.nl
  22. Source: Google search results for “start-up founder”
  23. ✅ Be aware of confirmation bias: the tendency to search for, interpret, favor, and recall information in a way that confirms one’s pre-existing beliefs or hypotheses. Source: Virginia Dignum – UMEA University
  24. Source: Virginia Dignum – UMEA University ✅ Realise: things won’t always be binary or clear
  25. Laws, regulation, self-regulation
  26. ✅ Laws, regulation, self-regulation • Values • Codes of conduct • GDPR • IEEE Ethically Aligned Design • UN Human Rights Declaration • Global sustainability goals • Product liability laws • EU-level AI initiatives • Ethical guidelines • Responsible AI frameworks
  27. IEEE Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems • IEEE P7000™ - Model Process for Addressing Ethical Concerns During System Design • IEEE P7001™ - Transparency of Autonomous Systems • IEEE P7002™ - Data Privacy Process • IEEE P7003™ - Algorithmic Bias Considerations • IEEE P7004™ - Standard on Child and Student Data Governance • IEEE P7005™ - Standard for Transparent Employer Data Governance • IEEE P7006™ - Standard for Personal Data Artificial Intelligence (AI) Agent • IEEE P7007™ - Ontological Standard for Ethically Driven Robotics and Automation Systems • IEEE P7008™ - Standard for Ethically Driven Nudging for Robotic, Intelligent, and Automation Systems • IEEE P7009™ - Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems • IEEE P7010™ - Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems • IEEE P7011 - Standard for the Process of Identifying and Rating the Trustworthiness of News Sources • IEEE P7012 - Standard for Machine Readable Personal Privacy Terms • IEEE P7013 - Inclusion and Application Standards for Automated Facial Analysis Technology. See https://ethicsinaction.ieee.org/
  28. Human rights
  29. ✅ Product liability laws • AI is a product, so product liability laws apply
  30. Source: Virginia Dignum – UMEA University
  31. Source: Virginia Dignum – UMEA University
  32. Source: Rumman Chowdhury - Accenture
  33. Source: Rumman Chowdhury - Accenture
  34. Some other initiatives worth following
  35. Source: Virginia Dignum – UMEA University
  36. Google: “Introducing the Inclusive Images Competition”, Thursday, September 6, 2018. Posted by Tulsee Doshi, Product Manager, Google AI
  37. https://futureproof.community/circle/tech-for-good/challenges/denk-mee-over-humaan-datagebruik-met-behulp-van-equality-matters
  38. Design, human in the loop
  39. ✅ Design human-in-the-loop systems • Consult non-technical experts and resources • Involve diverse groups in your design, data gathering and testing • Make your decision making and algorithms transparent (explainable) • Analyze, synthesize, evaluate, repeat (one routing pattern is sketched below)
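One common human-in-the-loop pattern is to let the model act only on high-confidence predictions and route everything else to a person. A minimal sketch; the labels, the 0.9 threshold and the review queue are hypothetical placeholders:

from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human_review"

def decide(label: str, confidence: float, review_queue: list,
           threshold: float = 0.9) -> Decision:
    """Act automatically on confident predictions; escalate the rest."""
    if confidence >= threshold:
        return Decision(label, confidence, decided_by="model")
    review_queue.append((label, confidence))  # a human looks at these
    return Decision(label, confidence, decided_by="human_review")

queue: list = []
print(decide("approve_loan", 0.97, queue))  # handled by the model
print(decide("reject_loan", 0.62, queue))   # escalated for human review
print(f"items awaiting review: {len(queue)}")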
  40. ✅ Key questions when developing or deploying an algorithmic system • Who will be affected? • What are the decision/optimisation criteria? • How are these criteria justified? • Are these justifications acceptable in the context where the system is used? • How are we training our algorithm? • Does the training data resemble the context of use? IEEE P7003 Algorithmic Bias Considerations by Ansgar Koene: https://standards.ieee.org/project/7003.html
  41. ✅ Be explicit & transparent • Question your options and choices • Motivate your choices • Document your choices and options (a sketch of such a record follows below) Source: Virginia Dignum – UMEA University
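One lightweight way to document those choices is to record the slide-40 answers as a structured artefact that ships with the system. A minimal sketch; the field names and example values are mine, not part of IEEE P7003:

from dataclasses import dataclass, asdict
import json

@dataclass
class AlgorithmRecord:
    affected_groups: list       # Who will be affected?
    optimisation_criteria: str  # What are the decision/optimisation criteria?
    justification: str          # How are these criteria justified?
    context_of_use: str         # Are the justifications acceptable here?
    training_data: str          # How are we training our algorithm?
    data_matches_context: bool  # Does training data resemble the context of use?

record = AlgorithmRecord(
    affected_groups=["loan applicants"],
    optimisation_criteria="minimise expected default rate",
    justification="risk policy, reviewed by compliance",
    context_of_use="consumer lending",
    training_data="2015-2018 internal applications",
    data_matches_context=False,  # older data, different population: flag it
)
print(json.dumps(asdict(record), indent=2))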
  42. Let’s be critical about our data
  43. ✅ Let’s check some data! (a per-group check is sketched below)
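One concrete way to "check some data" is to break evaluation results down per group instead of reporting a single overall number. A minimal sketch with made-up labels and predictions:

import pandas as pd

results = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label":     [1, 0, 1, 1, 1, 0, 1, 0],
    "predicted": [1, 0, 1, 1, 0, 1, 0, 0],
})
results["correct"] = results["label"] == results["predicted"]

# Overall accuracy (5/8 ≈ 0.62) hides the gap between groups:
print(f"overall: {results['correct'].mean():.2f}")
print(results.groupby("group")["correct"].mean())  # A: 1.00, B: 0.25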
  44. In summary: we are all biased, so let’s be aware. There are lots of laws, regulations and other sources of guidance. Let’s be critical about the data we use. And let’s design, test and implement with humans in the loop.
  45. AI Summit, 11 Oct: a full morning about AI ethics topics. Wilder will go into the data and code side next. #Aiethics #databias #responsibleAI #responsibleData
  46. Sources
  47. Reference resources • Joanna Bryson • Virginia Dignum • Rumman Chowdhury • Catelijne Muller – EU High-Level Expert Group on AI • IEEE.org • Ada-AI • WCAG accessibility guidelines (https://www.w3.org/TR/WCAG/) • Incorporating Ethical Considerations in Autonomous & Intelligent Systems • #Aiethics #databias #responsibleAI #responsibleData • See my Twitter list https://twitter.com/muldimedia/lists/ai-ethics/members • Or check my Pinterest boards https://nl.pinterest.com/muldimedia/artificial-intelligence/ or https://nl.pinterest.com/muldimedia/artificial-intelligence/ai-ethics-responsible-ai-bias-and-privacy/ • https://futurism-com.cdn.ampproject.org/c/s/futurism.com/artificial-intelligence-benefits-select-few/amp/ • http://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180ACR215 • https://ec.europa.eu/jrc/communities/community/humaint/event/humaint-winter-school-ai-ethical-social-legal-and-economic-impact • https://theintercept.com/2018/09/06/nypd-surveillance-camera-skin-tone-search/
