Technology for everyone
By Marion Mulder
#ToonTechTalks 27 September 2018
AI Bias and Ethics
Image source: https://www.silvergroup.asia/2012/10/10/new-tech-can-also-challenge-the-40-consumer/
Apparently I’m a sad angry
“world ethic” male…
Marion Mulder
We all want to make great products
But sometimes along the line things don’t quite turn out right (yet)
Some examples of where it did not go quite right….
Sales in Asia of a US-brand helmet were low…
Turns out the average head size in the database was not diverse enough
Image source: Che-Wei Wang – mindful algorithms: the new role of the designer in generative design - TNW conference 2018
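As a back-of-the-envelope illustration of how this happens (all numbers below are made up, not from the actual helmet case): if the measurement database over-represents one region, a size range tuned to its percentiles can quietly miss a large share of the real user population.

```python
# Illustrative sketch (hypothetical numbers): a sizing target computed from a
# non-representative sample can miss much of the real user population.
import numpy as np

rng = np.random.default_rng(42)

# Pretend head circumferences (cm) come from two regions with different
# means; the database only sampled region A.
region_a = rng.normal(loc=57.0, scale=1.8, size=8_000)   # over-represented in the data
region_b = rng.normal(loc=55.0, scale=1.7, size=8_000)   # missing from the data
population = np.concatenate([region_a, region_b])

database_only = region_a                                  # the non-diverse data set

# Design the size range around the database percentiles...
low, high = np.percentile(database_only, [5, 95])

# ...then check how many people in the full population it actually fits.
fits = np.mean((population >= low) & (population <= high))
print(f"size range from biased data: {low:.1f}-{high:.1f} cm")
print(f"share of full population covered: {fits:.0%}")
```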
Some examples of where it did not go quite right….
Automatic translations are gender biased
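A widely reported illustration: Turkish uses the gender-neutral pronoun “o”, yet around 2017–2018 machine translations of “o bir doktor” / “o bir hemşire” typically came back as “he is a doctor” / “she is a nurse”. A minimal audit-harness sketch (the translate() stub below is a placeholder, not a real translation API):

```python
# Minimal bias-audit sketch. translate() is a stub standing in for whatever
# translation system you use; the example outputs mirror behaviour widely
# reported around 2017-2018 and are for illustration only.
def translate(text: str, src: str = "tr", dst: str = "en") -> str:
    stub = {
        "o bir doktor": "he is a doctor",
        "o bir hemşire": "she is a nurse",
    }
    return stub.get(text, text)

# Turkish "o" is gender-neutral, so any he/she in the English output is
# injected by the system, not by the source sentence.
for sentence in ["o bir doktor", "o bir hemşire"]:
    english = translate(sentence)
    pronoun = "he" if english.startswith("he ") else "she" if english.startswith("she ") else "-"
    print(f"{sentence!r:20} -> {english!r:22} pronoun injected: {pronoun}")
```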
Some examples of where it did not go quite right….
Google’s image classifier wrongly labelled photos of Black people as gorillas
Some examples of where it did not go quite right….
Or they turn out right (do they?), but…
Source: Average faces of men and women around the world.
Average faces of the world
(really??)
Source: Virginia Dignum – UMEA University
Criminal face detection AI
Source: Juriaan van Diggelen - TNO
Source: https://ilga.org/maps-sexual-orientation-laws
People blindly trust a robot to lead them out of a building in case of a fire emergency.
[Overtrust of robots in emergency evacuation scenarios, P Robinette, W Li, R Allen, AM Howard, AR Wagner - Human-Robot Interaction, 2016]
Source: Juriaan van Diggelen - TNO
So things go ‘wrong’ because…
So things go ‘wrong’ because…?
• Human bias
• Size of data set
• Source of data
• Completeness
• How representative is it (see the sketch after this list)
• Intended use (was the data set originally set up for what you are now trying to do with it?)
• Is the historical data accurate?*
• Labelling
• Ontology isn’t always binary
• Complexity reduced to binary
• Demographic, geographical and cultural differences in definitions, law and interpretation
• .....
Image Source: Valerie Frissen – Erasmus University /SIDN fonds at Fraiday event (presentation on slideshare)
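For the “how representative is it” question in the list above, a minimal first check is to compare category shares in the data set against a reference population; the groups and numbers below are purely illustrative.

```python
# Minimal representativeness check: compare category shares in the data set
# against a reference population. All names and numbers are illustrative.
from collections import Counter

dataset_regions = ["EU", "EU", "US", "US", "US", "US", "ASIA", "EU", "US", "US"]
reference_shares = {"EU": 0.25, "US": 0.30, "ASIA": 0.45}  # e.g. your target market

counts = Counter(dataset_regions)
total = sum(counts.values())

for group, expected in reference_shares.items():
    observed = counts.get(group, 0) / total
    flag = "  <-- under-represented" if observed < 0.5 * expected else ""
    print(f"{group:5s} observed {observed:5.0%}  expected {expected:5.0%}{flag}")
```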
Let’s start with BIAS
Dealing with human BIAS
• Bias: expectations derived from experienced regularities in the world. Knowing what programmer means, including that most are male.
• Stereotypes: biases based on regularities we do not wish to persist. Knowing that (most) programmers are male.
• Prejudice: action on stereotypes. Hiring only male programmers.
Source: Joanna Bryson at AI Summit - Caliskan, Bryson &Narayanan 2017
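The Caliskan, Bryson & Narayanan (2017) result cited above measures such biases as association tests on word embeddings. A toy sketch in that spirit (the tiny vectors are made up; a real test would use pretrained embeddings):

```python
# Toy sketch in the spirit of the association tests in Caliskan, Bryson &
# Narayanan (2017): how much closer is a target word to "male" attribute
# words than to "female" ones? The 3-d vectors below are made up.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

embeddings = {
    "programmer": np.array([0.9, 0.2, 0.1]),
    "he":         np.array([0.8, 0.1, 0.0]),
    "him":        np.array([0.7, 0.2, 0.1]),
    "she":        np.array([0.1, 0.9, 0.2]),
    "her":        np.array([0.2, 0.8, 0.1]),
}

male_attrs, female_attrs = ["he", "him"], ["she", "her"]
target = "programmer"

bias = (np.mean([cosine(embeddings[target], embeddings[w]) for w in male_attrs])
        - np.mean([cosine(embeddings[target], embeddings[w]) for w in female_attrs]))
print(f"association bias for '{target}': {bias:+.2f}  (positive = closer to male terms)")
```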
Ada Lovelace
Source: beperktzicht.nl
Source: beperktzicht.nl
Source: Google search result for “start-up founder”
✅ Be aware of confirmation bias
The tendency to search for, interpret, favor, and recall information in a way that confirms one’s pre-existing beliefs or hypotheses.
Source: Virginia Dignum – UMEA University
Source: Virginia Dignum – UMEA University
✅ Realise: things won’t always be binary or clear
Laws, regulation, self regulation
✅ Laws, regulation, self regulation
• Values
• Code of conduct
• GDPR
• IEEE Ethically Aligned Design
• UN Human Rights Declaration
• Global sustainability goals
• Product liability laws
• EU-level AI initiatives
• Ethical Guidelines
• Responsible AI frameworks
IEEE Ethically Aligned Design
A Vision for Prioritizing Human Well-being with Autonomous and
Intelligent Systems
• IEEE P7000™ - Model Process for Addressing Ethical Concerns During System
Design
• IEEE P7001™ - Transparency of Autonomous Systems
• IEEE P7002™ - Data Privacy Process
• IEEE P7003™ - Algorithmic Bias Considerations
• IEEE P7004™ - Standard on Child and Student Data Governance
• IEEE P7005™ - Standard for Transparent Employer Data Governance
• IEEE P7006™ - Standard for Personal Data Artificial Intelligence (AI) Agent
• IEEE P7007™ - Ontological Standard for Ethically Driven Robotics and
Automation Systems
• IEEE P7008™ - Standard for Ethically Driven Nudging for Robotic,
Intelligent, and Automation Systems
• IEEE P7009™ - Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems
• IEEE P7010™ - Wellbeing Metrics Standard for Ethical Artificial Intelligence
and Autonomous Systems
• IEEE P7011 - Standard for the Process of Identifying and Rating the
Trustworthiness of News Sources
• IEEE P7012 - Standard for Machine Readable Personal Privacy Terms
• IEEE P7013 - Inclusion and Application Standards for Automated Facial
Analysis Technology
See https://ethicsinaction.ieee.org/
Human rights
✅ Product liability laws
• AI is a product, so product liability laws apply
Source: Virginia Dignum – UMEA University
Source: Virginia Dignum – UMEA University
Source: Rumman Chowdhury - Accenture
Source: Rumman Chowdhury - Accenture
Some other initiatives worth following
Source: Virginia Dignum – UMEA University
Google: Introducing the Inclusive Images Competition
Thursday, September 6, 2018
Posted by Tulsee Doshi, Product Manager, Google AI
https://futureproof.community/circle/tech-for-good/challenges/denk-mee-over-humaan-datagebruik-met-behulp-van-equality-matters
Design, human in the loop
✅ Design Human in the Loop systems
• Consult non-technical experts and resources
• Involve diverse groups in your design, data gathering, testing
• Transparency (explainability) of your decision making and algorithms (see the sketch after this list)
• Analyze, synthesize, evaluate, repeat
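One minimal way to make the transparency point concrete is to start from an interpretable model and report what drives its decisions. A sketch, assuming scikit-learn and made-up feature names:

```python
# Minimal transparency sketch: an interpretable model whose decision weights
# can be reported to the people affected. Feature names and data are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "years_employed", "open_debts"]

X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 2] + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# Report which features drive the decision, and in which direction.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name:15s} weight {weight:+.2f}")
```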
✅ Key questions when developing or deploying an algorithmic system
• Who will be affected?
• What are the decisions/optimisation criteria?
• How are these criteria justified?
• Are these justifications acceptable in the context where the system is used?
• How are we training our algorithm?
• Does training data resemble the context of use? (see the sketch after this list)
IEEE P7003 Algorithmic Bias Considerations by Ansgar Koene
IEEE standard Algorithmic bias https://standards.ieee.org/project/7003.html
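For “does training data resemble the context of use?”, a quick sanity check is to compare feature distributions between the training data and a sample from the deployment context. A rough sketch (data and threshold are illustrative):

```python
# Quick drift check: compare per-feature distributions between training data
# and a sample from the deployment context. Data and threshold are made up.
import numpy as np

rng = np.random.default_rng(1)
train = {"age": rng.normal(35, 8, 1000), "head_cm": rng.normal(57, 2, 1000)}
deploy = {"age": rng.normal(35, 8, 300), "head_cm": rng.normal(55, 2, 300)}

for feature in train:
    t, d = train[feature], deploy[feature]
    # standardized difference in means ("effect size")
    gap = abs(t.mean() - d.mean()) / t.std()
    flag = "  <-- training data may not match context of use" if gap > 0.25 else ""
    print(f"{feature:8s} shift {gap:.2f}{flag}")
```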
✅ Be explicit & transparent
• Question your options and choices
• Motivate your choices
• Document your choices and options (see the sketch below)
Source: Virginia Dignum – UMEA University
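One lightweight way to document choices is a machine-readable decision log kept next to the model, in the spirit of model cards and datasheets for data sets. A minimal sketch (all field names and content are illustrative):

```python
# Lightweight "decision log" kept next to the model, in the spirit of model
# cards / datasheets for datasets. Field names and content are illustrative.
import json

decision_log = {
    "model": "helmet-size-recommender v0.3",
    "intended_use": "suggest helmet size from head measurements",
    "not_intended_for": ["medical decisions"],
    "training_data": {
        "source": "in-store measurements 2016-2018",
        "known_gaps": ["few measurements from Asian markets"],
    },
    "choices": [
        {"choice": "dropped records with missing age", "why": "under 1% of data"},
    ],
    "evaluated_on": ["per-region fit coverage"],
}

with open("decision_log.json", "w") as f:
    json.dump(decision_log, f, indent=2)
```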
Let’s be critical about our data
✅ Let’s check some data!
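A minimal sketch of what “checking some data” can look like: compare positive-outcome rates per group in a labelled data set (the records below are made up).

```python
# Minimal fairness check: compare positive-outcome rates per group.
# The records below are made up for illustration.
from collections import defaultdict

records = [
    {"group": "A", "positive": True},  {"group": "A", "positive": True},
    {"group": "A", "positive": False}, {"group": "B", "positive": True},
    {"group": "B", "positive": False}, {"group": "B", "positive": False},
]

totals, positives = defaultdict(int), defaultdict(int)
for r in records:
    totals[r["group"]] += 1
    positives[r["group"]] += r["positive"]

rates = {g: positives[g] / totals[g] for g in totals}
print(rates)  # large gaps between groups are a prompt to investigate, not a verdict
```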
In summary
We are all biased, so let’s be aware.
Lots of laws, regulations and other sources offer guidance.
And let’s be critical about the data we use.
Let’s design, test and implement with humans in the loop.
AI Summit 11 Oct: a full morning about AI ethics topics.
Wilder will go into the data and code side next.
#Aiethics #databias #responsibleAI #responsibleData
Sources
Reference Resources
• Joanna Bryson
• Virginia Dignum
• Rumman Chowdhury
• Catelijne Muller - EU High Level Expert Group on AI
• IEEE.org
• Ada-AI
• WCAG Accessibility guidelines (https://www.w3.org/TR/WCAG/)
• Incorporating Ethical Considerations in Autonomous & Intelligent Systems
• #Aiethics #databias #responsibleAI #responsibleData
• See my twitter list https://twitter.com/muldimedia/lists/ai-ethics/members
• Or check my Pinterest boards https://nl.pinterest.com/muldimedia/artificial-intelligence/ or
https://nl.pinterest.com/muldimedia/artificial-intelligence/ai-ethics-responsible-ai-bias-and-privacy/
• https://futurism-com.cdn.ampproject.org/c/s/futurism.com/artificial-intelligence-benefits-select-few/amp/
• http://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180ACR215
• https://ec.europa.eu/jrc/communities/community/humaint/event/humaint-winter-school-ai-ethical-social-legal-and-economic-impact
• https://theintercept.com/2018/09/06/nypd-surveillance-camera-skin-tone-search/

Editor's Notes

  • #3 In my day job I steer between needs and wants (business, users) and those who make it (happen). For over 10 years co-founder and board member of WPP, a foundation dedicated to improving the lives of LGBTI people in workplaces all over the world.
  • #12 Racial profiling? Predictive Policing?
  • #16 AI programs can distinguish criminal from non-criminal faces with nearly 90% accuracy. [Wu and Zhang, “Automated Inference on Criminality using Face Images”, arXiv] http://callingbullshit.org/case_studies/case_study_criminal_machine_learning.html 1800 photos: 1100 of these were photos of non-criminals scraped from a variety of sources on the World Wide Web using a web spider (e.g. LinkedIn); 700 of the photos were pictures of criminals, provided by police departments. Possible sources of bias: clothing (did we build a tie classifier?), facial expression (micro and macro), convicted criminals only (i.e. the judge’s bias is copied).
  • #24 Source: Valerie Frissen – Erasmus University /SIDN fonds
  • #26 Ada Lovelace: she is now regarded as the designer of the first computer program, because she wrote “programs” to manipulate symbols according to fixed rules on a machine that Babbage, at that time, had yet to build.
  • #36 Advancing technology for humanity – or just get rich?
  • #39 https://goo.gl/ca9YQV