
Ethical Dilemmas in AI/ML-based systems


I gave this presentation at Deutsche Telekom AG's Digital Ethics Conference in Bonn on March 13, 2019. It provides background on how biases may occur in machine learning systems and what may go wrong if they are not corrected (or at least minimized).


  1. 1. Ethical Dilemmas. 13th of June, 2019, Bonn, Germany. Dr. Kim Kyllesbech Larsen, CTIO, T-Mobile Netherlands. Behind every AI going wrong is a (hu)man.
  2. 2. Dr. Kim K. Larsen / Big Data @ NT 2
  3. 3. Bias is so much more than mathematics! See also Kate Crawford's NIPS (NeurIPS) 2017 keynote "The Trouble with Bias": https://m.youtube.com/watch?v=fMym_BKWQzk&feature=youtu.be
  4. 4. Dr. Kim K. Larsen / How do we humans feel about AI?
  5. 5. PEOPLE LIE, DATA DON'T ... or?
  6. 6. Liar, liar – who's fake & who's not? http://www.whichfaceisreal.com/ ; see also https://skymind.ai/wiki/generative-adversarial-network-gan for a great intro to GANs.
  7. 7. Selection bias & data gaps. Representing (hu)man. White male. 25 - 30 years old. 178 cm. 70 kg. Student.
  8. 8. Structure of the data we choose and use. Each record carries a label or outcome (e.g., Approved vs Rejected), features or attributes defining the policy (gender, education, income, age, relationship status, BMI, debt, ...), class tags (e.g., female vs male) that might not be wise to consider in the policy unless the policy requires class differentiation, and possibly "hidden" attributes. A discrete feature follows a discrete distribution (e.g., age range, marital status, education, etc.), while a continuous feature distribution can be identified, for example, by its mean and variance (or standard deviation). Records belong to classes (e.g., a blue class vs a magenta class). Sources: a non-mathematical representation at https://www.linkedin.com/pulse/machine-why-aint-thee-fair-dr-kim-kyllesbech-larsen-/ and the mathematics at https://www.linkedin.com/pulse/mathematical-formulation-fairness-ali-bahramisharif/
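The data structure sketched on slide 8 can be made concrete with a small, hypothetical example; the column names and values below are illustrative assumptions, not the author's data.

```python
# Minimal sketch of the slide-8 data structure: feature columns, a class tag
# (protected attribute) and a label/outcome. All names and values are made up.
import pandas as pd

applications = pd.DataFrame({
    "gender":    ["F", "M", "F", "M"],              # class tag (e.g., female vs male)
    "education": ["BSc", "MSc", "PhD", "BSc"],      # discrete feature
    "income":    [42_000, 55_000, 61_000, 38_000],  # continuous feature
    "age":       [29, 41, 35, 23],
    "approved":  [1, 1, 0, 0],                      # label / outcome (Approved vs Rejected)
})

label = applications["approved"]
protected = applications["gender"]                  # often kept out of the policy features
features = applications.drop(columns=["approved", "gender"])

# A continuous feature can be summarised by mean and standard deviation,
# a discrete one by its value counts, as sketched on the slide.
print(features["income"].mean(), features["income"].std())
print(features["education"].value_counts())
```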
  9. 9. True or False? A confusion matrix of actual vs predicted happiness when our AI/ML model is applied to test data: TRUE POSITIVE (happy customer predicted happy), TRUE NEGATIVE (unhappy customer predicted unhappy), FALSE POSITIVE (our model predicts that an unhappy customer is happy) and FALSE NEGATIVE (our model predicts that a happy customer is unhappy). See also https://aistrategyblog.com/2018/10/11/machine-why-aint-thee-fair/
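A quick sketch of how the four cells of such a matrix are typically computed; the actual/predicted labels below are invented purely for illustration.

```python
# Hedged example of slide 9's happy/unhappy confusion matrix; data is made up.
from sklearn.metrics import confusion_matrix

actual    = ["happy", "happy", "unhappy", "unhappy", "happy", "unhappy"]
predicted = ["happy", "unhappy", "happy", "unhappy", "happy", "unhappy"]

# With labels ordered [unhappy, happy]:
#   TN = unhappy predicted unhappy, FP = unhappy predicted happy,
#   FN = happy predicted unhappy,   TP = happy predicted happy.
tn, fp, fn, tp = confusion_matrix(actual, predicted, labels=["unhappy", "happy"]).ravel()
print(f"TP={tp} TN={tn} FP={fp} FN={fn}")
```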
  10. 10. Behind every AI going wrong is a (hu)man. IT STARTS HERE: LOTS OF DATA! Data reflects society; if society is biased, so will the data be. NEED TO "CLEAN" DATA! With data cleaning & selection it is very easy to introduce bias. QUALITY GOALS! Your problem context may inherently be unfair/biased. Train / Test & COMPUTING POWER: limited computing resources may result in a higher bias. ML MODEL / ARCHITECTURE: models can (& will) amplify biases in the training data. Do you have ethical measures in place to identify biases? (See the sketch below.)
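One way to see the slide's point that seemingly neutral cleaning can introduce bias is to compare group shares before and after a cleaning step; the toy data, the dropna rule and the "gender" column are assumptions for illustration only.

```python
# Toy check: does a "neutral" cleaning rule change group representation?
import pandas as pd

raw = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "income": [None, 40_000, None, 52_000, 48_000, 51_000, None, 55_000],
    "label":  [1, 0, 1, 1, 0, 1, 0, 1],
})

cleaned = raw.dropna(subset=["income"])   # drop records with missing income

print("before:", raw["gender"].value_counts(normalize=True).to_dict())
print("after: ", cleaned["gender"].value_counts(normalize=True).to_dict())
# If missingness correlates with a group (here more missing incomes for "F"),
# the cleaned training set under-represents that group and the model inherits the skew.
```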
  11. 11. Allocation / classification bias (& unfairness). AI assessment of the risk of re-offending. FALSE POSITIVE: stole a kid's bike ($80 of value), prior misdemeanor as a juvenile, yet scored high risk. FALSE NEGATIVE: shoplifting ($86 of value), prior armed robbery & misdemeanors as a juvenile, yet scored low risk. Do we treat our customers the same irrespective of race, gender, age, location, etc.? Data source: https://github.com/propublica/compas-analysis ; see also https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
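A ProPublica-style disparity check can be sketched against the linked compas-analysis data; the file name and the "race", "score_text" and "two_year_recid" columns below are taken from that repository, but treat them as assumptions to verify before running.

```python
# Hedged sketch: false positive / false negative rates of the COMPAS risk score per group.
import pandas as pd

df = pd.read_csv("compas-scores-two-years.csv")          # from propublica/compas-analysis
df["predicted_high_risk"] = df["score_text"].isin(["Medium", "High"])

for race, grp in df.groupby("race"):
    reoffended = grp["two_year_recid"] == 1
    flagged = grp["predicted_high_risk"]
    fpr = (flagged & ~reoffended).sum() / max((~reoffended).sum(), 1)
    fnr = (~flagged & reoffended).sum() / max(reoffended.sum(), 1)
    print(f"{race:25s} FPR={fpr:.2f} FNR={fnr:.2f}")
# Similar overall accuracy can still hide very different FPR/FNR across groups.
```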
  12. 12. Allocation bias (& unfairness). Minority neighborhoods underserved Source: https://www.bloomberg.com/graphics/2016-amazon-same-day/ Issues  Credit / loans.  Insurances.  Policing.  Shops.  Permits.  Cleaning.  Schools.  Public transport.  Public areas.  Public safety.  Telco services.
  13. 13. Classification / allocation bias (& unfairness). AI-based recruitment. Amazon's AI was trained on resumes submitted to the company over a 10-year period. https://uk.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women
  14. 14. Gender (classification) biases: Father is to doctor as mother is to nurse. Man is to telecom engineer as woman is to homemaker. Boy is to gun as girl is to doll. Man is to manager as woman is to assistant. https://developers.google.com/machine-learning/fairness-overview/
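Analogy probes of this kind can be reproduced with a pretrained word embedding; the slide does not say which embedding was used, so the word2vec model loaded via gensim below is an assumption.

```python
# Hedged sketch: gendered analogies in a pretrained word2vec embedding.
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")   # large download on first use

# "man is to doctor as woman is to ?"
print(wv.most_similar(positive=["woman", "doctor"], negative=["man"], topn=3))
# "he is to engineer as she is to ?"
print(wv.most_similar(positive=["she", "engineer"], negative=["he"], topn=3))
# Completions such as "nurse" or "homemaker" reflect associations in the training
# text, not any ground truth about the groups involved.
```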
  15. 15. "Gay faces tended to be gender atypical," the researchers said. "Gay men had narrower jaws and longer noses, while lesbians had larger jaws." Wang, Y., & Kosinski, M. (in press). Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology DEEP NEURAL NETWORKS CAN DETECT SEXUAL ORIENTATION FROM YOUR FACE. (ALLEGEDLY!) Classification bias (& unfairness). Quotes;
  16. 16. Classification bias (& unfairness). DEEP NEURAL NETWORKS CAN DETECT CRIMINALITY FROM YOUR FACE (ALLEGEDLY!). Quotes: "the faces of law-abiding public have a greater degree of resemblance compared with the faces of criminals" and "we have discovered a law of normality for faces of non-criminals." Xiaolin Wu & Xi Zhang, "Automated Inference on Criminality using Face Images" (2016); https://arxiv.org/pdf/1611.04135v1.pdf
  17. 17. What's your gender, nationality & employer? How much does your face tell about you? 64x64x3 input images, DNN architecture e.g. 128/64/32/1 (4 layers), trained on 6,992+ LinkedIn pictures. TRUE POSITIVES: female, German, Telekom. FALSE NEGATIVES: male, Polish, Vodafone.
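The stated architecture can be sketched as a small dense network; the framework (tf.keras), the activations and the single binary gender head below are assumptions, and the LinkedIn training data is of course not reproduced here.

```python
# Minimal sketch of the slide's 128/64/32/1 dense network on 64x64x3 face crops,
# shown here with one binary output (e.g., gender). Purely illustrative.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Flatten(),                       # 64 * 64 * 3 = 12,288 inputs
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # P(class), e.g. female vs male
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```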
  18. 18. What’s your Gender? 45% 32% 16% 7%
  19. 19. Confirmation bias ... Are autonomous cars racially biased? Thesis of Wilson, Hoffman & Morgenstern (confirmation bias?): object detection systems of autonomous cars are better at detecting humans with light skin than dark skin. The anti-thesis, which may seem more plausible: object detection systems are better at detecting humans wearing light clothes than dark clothes at night (& vice versa during the day). Source: Benjamin Wilson et al., "Predictive Inequity in Object Detection", https://arxiv.org/pdf/1902.11097.pdf and https://twitter.com/katecrawford/status/1100958020203409408 ; see also Marc Green's https://www.visualexpert.com/Resources/pedestrian.html
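The disparity Wilson et al. measure is essentially detection recall split by annotated skin-tone group; the toy records below are invented, only the grouping logic is illustrated.

```python
# Toy sketch: pedestrian detection recall split by annotated group (data is made up).
import pandas as pd

detections = pd.DataFrame({
    "skin_tone": ["light", "light", "light", "dark", "dark", "dark", "dark"],
    "detected":  [1, 1, 0, 1, 0, 0, 1],   # 1 = ground-truth pedestrian found by detector
})

recall = detections.groupby("skin_tone")["detected"].mean()
print(recall)
print("recall gap:", recall.max() - recall.min())
# A systematic gap, after controlling for confounders such as clothing and lighting,
# is what the paper calls predictive inequity.
```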
  20. 20. THANK YOU! Acknowledgement: thanks to the many colleagues who have contributed with valuable insights, discussions & comments throughout this work. I would also like to thank my wife Eva Varadi for her patience during this work. Contact: Email: kim.larsen@t-mobile.nl LinkedIn: www.linkedin.com/in/kimklarsen Blogs: www.aistrategyblog.com & www.techneconomyblog.com Twitter: @KimKLarsen Recommendation: https://www.amazon.com/Invisible-Women-Exposing-World-Designed-ebook/dp/B07CQ2NZG6/
