Slides from my talk at #ToonTechTalks on 27 September 2018
We all see the great potential AI brings us. But is it really bringing that potential to everyone? How do we ensure under-represented groups are included and vulnerable people are protected? What do we do when our technology is unintentionally biased and discriminates against certain groups? And what if the data and the AI are correct, but their side effect is that some groups are put at risk? These are all questions we need to think about as we advance technology for the benefit of humanity.
Sharing what I've learned from my work in diversity and digital, and from following great minds in this field such as Joanna Bryson, Virginia Dignum, Rumman Chowdhury, Juriaan van Diggelen, Valerie Frissen, Catelijne Muller, and many more.
1. Technology for everyone
By Marion Mulder
#ToonTechTalks 27 September 2018
AI Bias and Ethics
Image source: https://www.silvergroup.asia/2012/10/10/new-tech-can-also-challenge-the-40-consumer/
5. Some examples of where it did not go quite right….
Sales of a US-brand helmet in Asia were low…
It turns out the average head size in the database was not diverse enough.
Image source: Che-Wei Wang – mindful algorithms: the new role of the designer in generative design - TNW conference 2018
6. Some examples of where it did not go quite right….
Automatic translations are gender biased
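A toy illustration (not a real translation system) of why this happens: Turkish "o" is gender-neutral, so a statistical translator has to pick a gendered English pronoun, and it falls back on corpus co-occurrence statistics. The counts below are made up for the sketch.

```python
# Toy sketch: a gender-neutral pronoun forces a choice, and the
# choice copies whatever bias is in the corpus statistics.
corpus_counts = {  # made-up counts standing in for real corpus data
    ("doctor", "he"): 900, ("doctor", "she"): 100,
    ("nurse", "he"): 100, ("nurse", "she"): 900,
}

def translate_pronoun(profession: str) -> str:
    """Pick the pronoun most often seen with this profession."""
    return max(("he", "she"), key=lambda p: corpus_counts[(profession, p)])

print(translate_pronoun("doctor"))  # "he"  - the corpus bias is copied
print(translate_pronoun("nurse"))   # "she"
```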
7. Some examples of where it did not go quite right….
Google’s image classifier wrongly labelled photos of Black people as gorillas.
20. People blindly trust a robot to lead them out of a building in case of a fire emergency.
[Overtrust of robots in emergency evacuation scenarios, P Robinette, W Li, R Allen, AM Howard, AR Wagner - Human-Robot Interaction, 2016]
Source: Juriaan van Diggelen - TNO
22. So things go ‘wrong’ because…?
• Human bias
• Size of the data set
• Source of the data
• Completeness
• How representative it is
• Intended use (was the data set originally set up for what you are now trying to do with it?)
• Is historical data accurate?
• Labelling
• Ontology isn’t always binary
• Complexity reduced to binary
• Demographic, geographical and cultural differences in definitions, law and interpretation
• …
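Many of these data issues can be caught with quick checks before training. Below is a minimal Python sketch of auditing a data set for size, completeness and representativeness; the column names and data are made up for illustration.

```python
# Minimal sketch (hypothetical column names and data) of auditing a
# data set for size, completeness and representativeness.
import pandas as pd

def audit_dataset(df: pd.DataFrame, group_column: str) -> None:
    """Print a few quick checks before training on df."""
    # Size: small data sets amplify sampling noise.
    print(f"Rows: {len(df)}")

    # Completeness: columns with missing values.
    missing = df.isna().mean().sort_values(ascending=False)
    print("Share of missing values per column:")
    print(missing[missing > 0])

    # Representativeness: how balanced are the groups?
    counts = df[group_column].value_counts(normalize=True)
    print(f"Distribution of '{group_column}':")
    print(counts)
    if counts.min() < 0.05:
        print("Warning: at least one group is under 5% of the data.")

# Example with made-up data (cf. the helmet head-size example above):
df = pd.DataFrame({
    "head_size_cm": [56, 57, 58, 55, 59, 54],
    "region": ["US", "US", "US", "US", "US", "Asia"],
})
audit_dataset(df, group_column="region")
```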
23. Image source: Valerie Frissen – Erasmus University / SIDN fonds at Fraiday event (presentation on SlideShare)
25. Dealing with human BIAS
• Bias: expectations derived from experienced regularities in the world. Knowing what ‘programmer’ means, including that most are male.
• Stereotypes: biases based on regularities we do not wish to persist. Knowing that (most) programmers are male.
• Prejudice: action on stereotypes. Hiring only male programmers.
Source: Joanna Bryson at AI Summit – Caliskan, Bryson & Narayanan 2017
Ada Lovelace
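Caliskan, Bryson & Narayanan (2017) measured such biases in word embeddings with association tests. A toy Python sketch of that idea follows, using made-up 3-d vectors in place of real embeddings.

```python
# Toy sketch of the word-embedding association test from
# Caliskan, Bryson & Narayanan (2017); the vectors are made up.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, male_words, female_words):
    """Mean similarity to male terms minus mean similarity to female terms."""
    male = np.mean([cosine(word, m) for m in male_words])
    female = np.mean([cosine(word, f) for f in female_words])
    return male - female

# Hypothetical embeddings: 'programmer' sits closer to the male terms.
he, she = np.array([1.0, 0.1, 0.0]), np.array([0.1, 1.0, 0.0])
programmer = np.array([0.9, 0.2, 0.3])
print(association(programmer, [he], [she]))  # > 0: male-associated
```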
29. ✅ Be aware of confirmation bias
The tendency to search for, interpret, favor, and recall information in a way that confirms one's pre-existing beliefs or hypotheses.
Source: Virginia Dignum – Umeå University
32. ✅ Laws, regulation, self-regulation
• Values
• Code of conduct
• GDPR
• IEEE Ethically Aligned Design
• UN Human Rights Declaration
• Global sustainability goals
• Product liability laws
• EU-level AI initiatives
• Ethical Guidelines
• Responsible AI frameworks
33. IEEE Ethically Aligned Design
A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems
• IEEE P7000™ - Model Process for Addressing Ethical Concerns During System Design
• IEEE P7001™ - Transparency of Autonomous Systems
• IEEE P7002™ - Data Privacy Process
• IEEE P7003™ - Algorithmic Bias Considerations
• IEEE P7004™ - Standard on Child and Student Data Governance
• IEEE P7005™ - Standard for Transparent Employer Data Governance
• IEEE P7006™ - Standard for Personal Data Artificial Intelligence (AI) Agent
• IEEE P7007™ - Ontological Standard for Ethically Driven Robotics and Automation Systems
• IEEE P7008™ - Standard for Ethically Driven Nudging for Robotic, Intelligent, and Automation Systems
• IEEE P7009™ - Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems
• IEEE P7010™ - Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems
• IEEE P7011 - Standard for the Process of Identifying and Rating the Trustworthiness of News Sources
• IEEE P7012 - Standard for Machine Readable Personal Privacy Terms
• IEEE P7013 - Inclusion and Application Standards for Automated Facial Analysis Technology
See https://ethicsinaction.ieee.org/
48. ✅ Design Human-in-the-Loop systems
• Consult non-technical experts and resources
• Involve diverse groups in your design, data gathering and testing
• Be transparent (explainable) about your decision making and algorithms
• Analyze – synthesize – evaluate – repeat
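One common human-in-the-loop pattern is to let the model act only above a confidence threshold and route everything else to a person. A minimal Python sketch of that idea; the threshold, function names and stub model are illustrative assumptions, not from the talk.

```python
# Minimal sketch of a human-in-the-loop decision step: the model
# decides only when it is confident; otherwise a person reviews.
# Threshold and function names are illustrative.
from typing import Callable, Tuple

def decide(features: dict,
           model_predict: Callable[[dict], Tuple[str, float]],
           ask_human: Callable[[dict], str],
           threshold: float = 0.9) -> str:
    label, confidence = model_predict(features)
    if confidence >= threshold:
        # Log the automated decision so it stays explainable later.
        print(f"auto: {label} ({confidence:.2f})")
        return label
    # Low confidence: route to a human reviewer instead of guessing.
    return ask_human(features)

# Example with stub implementations:
result = decide(
    {"age": 34},
    model_predict=lambda f: ("approve", 0.72),
    ask_human=lambda f: "needs manual review",
)
print(result)
```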
49. ✅ Key questions when developing or deploying an algorithmic system
• Who will be affected?
• What are the decisions/optimisation criteria?
• How are these criteria justified?
• Are these justifications acceptable in the context where the system is
used?
• How are we training our algorithm?
• Does training data resemble the context of use?
Source: Ansgar Koene – IEEE P7003 Algorithmic Bias Considerations (https://standards.ieee.org/project/7003.html)
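One quick, illustrative way to approach the last question, "does training data resemble the context of use?", is to compare category frequencies between the training set and a deployment sample. A minimal Python sketch with made-up data:

```python
# Sketch: largest gap in category share between training data and a
# sample from deployment. The data is made up for illustration.
from collections import Counter

def frequency_gap(train_values, deployment_values):
    """Largest absolute difference in category share between two samples."""
    train = Counter(train_values)
    deploy = Counter(deployment_values)
    n_train, n_deploy = len(train_values), len(deployment_values)
    categories = set(train) | set(deploy)
    return max(abs(train[c] / n_train - deploy[c] / n_deploy)
               for c in categories)

train_regions = ["US"] * 90 + ["Asia"] * 10
deploy_regions = ["US"] * 40 + ["Asia"] * 60
print(frequency_gap(train_regions, deploy_regions))  # 0.5: large mismatch
```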
50. ✅ Be explicit & transparent
• Question your options and choices
• Motivate your choices
• Document your choices and options
Source: Virginia Dignum – Umeå University
53. In summary
We are all biased, so let's be aware
There are lots of laws, regulations and other sources for guidance
And let's be critical about the data we use
Let's design, test and implement with humans in the loop
54. AI Summit, 11 Oct
A full morning about AI ethics topics
Wilder will go into the data and code side next
#Aiethics #databias #responsibleAI #responsibleData
56. Reference Resources
• Joanna Bryson
• Virginia Dignum
• Rumman Chowdhury
• Catelijne Muller - EU High Level Expert Group on AI
• IEEE.org
• Ada-AI
• WCAG Accessibility guidelines (https://www.w3.org/TR/WCAG/)
• Incorporating Ethical Considerations in Autonomous & Intelligent Systems
• #Aiethics #databias #responsibleAI #responsibleData
• See my Twitter list https://twitter.com/muldimedia/lists/ai-ethics/members
• Or check my Pinterest boards https://nl.pinterest.com/muldimedia/artificial-intelligence/ or
https://nl.pinterest.com/muldimedia/artificial-intelligence/ai-ethics-responsible-ai-bias-and-privacy/
• https://futurism-com.cdn.ampproject.org/c/s/futurism.com/artificial-intelligence-benefits-select-few/amp/
• http://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180ACR215
• https://ec.europa.eu/jrc/communities/community/humaint/event/humaint-winter-school-ai-ethical-social-legal-and-economic-impact
• https://theintercept.com/2018/09/06/nypd-surveillance-camera-skin-tone-search/
Editor's Notes
In my day job I steer between needs and wants (business, users) and those who make it (happen)
For over 10 years co-founder and board member of WPP - a foundation dedicated to improving the lives of LGBTI people in workplaces all over the world.
Racial profiling?
Predictive Policing?
AI programs can distinguish criminal from non-criminal faces with nearly 90% accuracy. [Wu and Zhang, “Automated Inference on Criminality using Face Images”, arXiv]
http://callingbullshit.org/case_studies/case_study_criminal_machine_learning.html
1800 photos:
1100 of these were photos of non-criminals scraped from a variety of sources on the World Wide Web using a web spider (e.g. LinkedIn).
700 of the photos were pictures of criminals, provided by police departments.
Bias in:
• Clothing (did we build a tie classifier?)
• Facial expression (micro and macro)
• Convicted criminals (i.e. the judge’s bias is copied)
Source: Valerie Frissen – Erasmus University /SIDN fonds
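The "tie classifier" worry can be shown in a few lines: if a spurious attribute separates the two photo sets, the model learns that attribute, not criminality. A toy Python sketch with made-up features and labels:

```python
# Toy sketch: when wearing a tie perfectly separates the two photo
# sets, the classifier learns the tie. Data and features are made up.
from sklearn.tree import DecisionTreeClassifier

# Features: [wears_tie, smiles]; label: 1 = "criminal" photo set.
X = [[1, 1], [1, 0], [1, 1], [0, 0], [0, 1], [0, 0]]
y = [0, 0, 0, 1, 1, 1]  # non-criminal photos happen to all wear ties

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.feature_importances_)  # all weight on the tie feature
```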
Ada Lovelace
She is now regarded as the designer of the first computer program, because she wrote "programs" to manipulate symbols according to fixed rules for a machine that Babbage had yet to build at the time.
Advancing technology for humanity – or just getting rich?