
Ethical Algorithms: Bias in Machine Learning for NextAI


  1. Ethical Algorithms: Bias and Explainability in Machine Learning. Kathryn Hume (@humekathryn), VP Product & Strategy @integrateai; Venture Partner @ffvc; quamproxime.com
  2. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  3. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  4. Algorithms are evil?
  5. Algorithms are convex mirrors that refract human biases (Parmigianino, 1524; Ashbery, 1975)
  6. Why?
  7. The Raw Ingredients: deep understanding of a business problem; data, data, data; algorithmic capability
  8. Data Product Lifecycle. Design: What problem are we solving? Data Exploration: What does the data look like? Data Processing: Is our data ready for use? Model Prototyping: Will this work? Should we pivot? Data Engineering/Production: Can we harden and scale the model? Maintenance: How do we update as data changes?
  9. Supervised and Unsupervised Learning
  10. Unsupervised Learning
  11. Clustering households based on TV-viewing habits
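
A minimal sketch of what this kind of unsupervised step can look like in practice, using scikit-learn's k-means on synthetic data (the genre features and cluster count are illustrative assumptions, not from the talk):

```python
# A minimal sketch: cluster households by TV-viewing habits with k-means.
# The features (weekly hours per genre) are illustrative, not from the talk.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic data: rows are households, columns are weekly viewing hours
# for four illustrative genres (news, sports, drama, kids).
viewing_hours = rng.gamma(shape=2.0, scale=3.0, size=(500, 4))

X = StandardScaler().fit_transform(viewing_hours)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Each household gets a cluster label without any human-provided target.
print(kmeans.labels_[:10])
```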
  12. 70% of machine learning products use supervised learning https://www.sas.com/en_ca/insights/analytics/machine-learning.html
  13. Supervised Learning: find a proxy (P) for something hard to know (C); find a function that captures the correlation between P and C; use this function to make guesses about C
  14. Use square footage (P) to predict housing prices (C)
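
A minimal sketch of this proxy-to-target setup; the numbers are synthetic stand-ins, not real housing data:

```python
# A minimal sketch: learn a function from square footage (P) to price (C),
# then use it to guess C for an unseen P. Numbers are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

square_feet = np.array([[800], [1200], [1500], [2000], [2600]])   # P
prices = np.array([160_000, 235_000, 290_000, 380_000, 505_000])  # C

model = LinearRegression().fit(square_feet, prices)  # learn f: P -> C
print(model.predict([[1800]]))                       # guess C for an unseen P
```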
  15. Use “Nigerian Prince” (P) to predict whether emails are spam (C)
  16. What (P) should we pick to decide if it’s a cat or a dog?
  17. Deep Learning • Use layers to transform complex input into mathematical expressions • Remove the need for a human to select which features matter
  18. Universal Approximation Theorem: neural networks can approximate arbitrary functions
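
A minimal sketch of this idea: a small multi-layer network fit to a nonlinear function that no single linear model could capture. The architecture and the target function (sin) are illustrative choices, not from the talk:

```python
# A minimal sketch of the universal-approximation idea: a small neural
# network fit to an arbitrary nonlinear function (sin here).
import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.linspace(-3, 3, 400).reshape(-1, 1)
y = np.sin(X).ravel()

net = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                   max_iter=5000, random_state=0).fit(X, y)
print(net.score(X, y))  # R^2 close to 1.0 on the training range
```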
  19. The art lies in designing your model, not feature engineering
  20. Where does bias creep in?
  21. Classical Statistics & ML: higher unconscious bias in feature selection, higher explainability in the model. Deep Learning: lower unconscious bias in feature selection, lower explainability in the model.
  22. Introduce Blind Fairness?
  23. Redundant Encodings https://visual.ly/community/infographic/human-rights/taxonomy-transitions
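
The danger of redundant encodings is that dropping a protected attribute does not make a model blind to it if other features encode the same information. A minimal sketch of one way to audit for this, on synthetic data where zip code is an assumed proxy for group membership: if a classifier can recover the protected attribute from the "blind" features, the encoding is redundant.

```python
# A minimal sketch: audit for a redundant encoding by trying to predict the
# protected attribute from the "blind" features. High accuracy means the
# attribute is still implicitly present. Data is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, size=n)                 # protected attribute (held out)
zip_code = group * 10 + rng.integers(0, 3, size=n) # proxy correlated with group
income = rng.normal(50 + 5 * group, 10, size=n)    # weakly correlated feature
X_blind = np.column_stack([zip_code, income])      # the model never sees `group`

auditor = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(auditor, X_blind, group, cv=5).mean()
print(f"protected attribute recoverable with accuracy {accuracy:.2f}")
```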
  24. Fairness does not align with accuracy http://blog.mrtz.org/2016/09/06/approaching-fairness.html
  25. The controversial world of sentiment analysis
  26. • Systems use algorithms to identify negative sentiment • They perform better with strident, unambiguous expressions of emotion • Men are more likely to use those expressions • Men therefore attract disproportionate attention from brands https://blog.dominodatalab.com/video-how-machine-learning-amplifies-societal-privilege/
  27. Bluntness and bias • Precision trades off against recall • Marketing wants high precision • High precision implies low recall • We’re better at identifying extremes • Extreme expressions tend to come from a particular group
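
A minimal sketch of the precision/recall trade-off behind these bullets, on synthetic labels and scores: raising the decision threshold buys precision at the cost of recall.

```python
# A minimal sketch of the precision/recall trade-off: raising the decision
# threshold increases precision and lowers recall. Data is synthetic.
import numpy as np
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
scores = y_true * 0.4 + rng.normal(0.3, 0.25, size=1000)  # noisy scores

for threshold in (0.3, 0.5, 0.7):
    y_pred = (scores >= threshold).astype(int)
    print(f"threshold={threshold}: "
          f"precision={precision_score(y_true, y_pred):.2f} "
          f"recall={recall_score(y_true, y_pred):.2f}")
```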
  28. https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59fdd6a
  29. Developments in Language Processing: traditional NLP → n-grams → word embeddings
  30. Inherent Bias in Word Embeddings (Bolukbasi, Chang, Zou, Saligrama, Kalai, 2016). Man : King :: Woman : Queen. Man : Computer Programmer :: Woman : Homemaker. Black Male : Assaulted :: White Male : Entitled To
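
A minimal sketch of the analogy arithmetic behind these findings, using gensim with a small pretrained GloVe model as a lighter stand-in for the word2vec Google News vectors the paper used:

```python
# A minimal sketch of embedding analogies via vector arithmetic, using a
# small pretrained GloVe model (a stand-in for the paper's word2vec vectors).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # downloads the model on first use

# "man is to king as woman is to ?"  ==  king - man + woman
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# The same arithmetic can surface learned social associations around occupations.
print(vectors.most_similar(positive=["programmer", "woman"], negative=["man"], topn=3))
```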
  31. http://www.andrew.cmu.edu/user/danupam/dtd-pets15.pdf
  32. What can we do about it?
  33. Regulate to require explainability
  34. Machine Talk vs. People Talk (h/t Hilary Mason of Fast Forward Labs)
  35. These systems affect important human rights: education, housing, health, work, justice, finance/credit
  36. Lack of understanding stymies adoption
  37. A deterministic framework imposed on a probabilistic tool
  38. Refine our conceptual framework
  39. Inflated media rhetoric
  40. Observations, not explanations https://medium.com/inventing-intelligent-machines/machine-learning-alien-knowledge-and-other-ufos-1a44c66508d1
  41. Comfort with competence without comprehension
  42. How come? What for? Can I intervene?
  43. Get into the guts of the technology
  44. Remove bias without compromising utility (Bolukbasi, Chang, Zou, Saligrama, Kalai, 2016)
  45. FairML: measure dependence on inputs by changing them http://blog.fastforwardlabs.com/2017/03/09/fairml-auditing-black-box-predictive-models.html
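
FairML's core move is a perturbation audit: change an input and watch the output. A minimal from-scratch sketch of that idea (illustrative only, not the fairml package's actual API):

```python
# A minimal from-scratch sketch of a FairML-style perturbation audit:
# shuffle one feature at a time and measure how far predictions move.
# (Illustrative; not the fairml package's API.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                 # features f0, f1, f2
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)  # f2 is irrelevant by design
model = LogisticRegression(max_iter=1000).fit(X, y)

baseline = model.predict_proba(X)[:, 1]
for j, name in enumerate(["f0", "f1", "f2"]):
    X_perturbed = X.copy()
    X_perturbed[:, j] = rng.permutation(X_perturbed[:, j])  # break the link
    shifted = model.predict_proba(X_perturbed)[:, 1]
    print(f"{name}: mean |prediction shift| = {np.abs(shifted - baseline).mean():.3f}")
```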
  46. LIME: As if linear functions https://homes.cs.washington.edu/~marcotcr/blog/lime/
  47. LIME: As if linear functions https://homes.cs.washington.edu/~marcotcr/blog/lime/
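
LIME explains a single prediction by fitting a simple linear model that is only trusted near that one point. A minimal from-scratch sketch of the idea (the lime package is the real implementation; everything below is illustrative):

```python
# A minimal from-scratch sketch of LIME's core idea: sample points near one
# instance, weight them by proximity, and fit a weighted linear model that
# locally approximates the black box. (The lime package does this properly.)
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(int)        # nonlinear black-box target
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                      # the single prediction to explain
neighbors = x0 + rng.normal(scale=0.5, size=(500, 4))
probs = black_box.predict_proba(neighbors)[:, 1]
weights = np.exp(-np.linalg.norm(neighbors - x0, axis=1) ** 2)

local_model = Ridge().fit(neighbors, probs, sample_weight=weights)
print("local 'as if' linear weights:", np.round(local_model.coef_, 3))
```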
  48. Fair representations: treat similar individuals similarly http://proceedings.mlr.press/v28/zemel13.html
  49. Fair representations: treat similar individuals similarly http://proceedings.mlr.press/v28/zemel13.html “We formulate fairness as an optimization problem of finding an intermediate representation of the data that best encodes the data (i.e., preserving as much information about the individual’s attributes as possible), while simultaneously obfuscating aspects of it, removing any information about membership with respect to the protected subgroup.”
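
A minimal sketch of the three competing terms in that objective, in the spirit of the quoted formulation (the paper's actual method uses prototype-based soft clustering; the random linear encoder/decoder and equal weights below are illustrative placeholders):

```python
# A minimal, illustrative sketch of the competing terms in a fair
# representations objective: encode the data well, keep the label
# predictable, and obfuscate protected group membership.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 4, 3
X = rng.normal(size=(n, d))                    # individual attributes
group = rng.integers(0, 2, size=n)             # protected subgroup membership
y = (X[:, 0] > 0).astype(float)                # target label

W_enc = rng.normal(size=(d, k))                # encoder (optimized in the real method)
W_dec = rng.normal(size=(k, d))                # decoder
w_pred = rng.normal(size=k)                    # label predictor

Z = np.tanh(X @ W_enc)                         # intermediate representation

reconstruction = np.mean((Z @ W_dec - X) ** 2)  # preserve the data
prediction = np.mean((Z @ w_pred - y) ** 2)     # preserve the label
# Obfuscate group membership: group-wise means of Z should match.
parity = np.abs(Z[group == 0].mean(axis=0) - Z[group == 1].mean(axis=0)).sum()

objective = reconstruction + prediction + parity
print(f"objective to minimize over the encoder: {objective:.3f}")
```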
  50. Equal Opportunity
  51. Equal Opportunity
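
"Equal opportunity" here presumably refers to Hardt, Price, and Srebro's 2016 criterion: qualified individuals should be approved at the same rate in every group, i.e., true positive rates should match. A minimal sketch of checking that condition on synthetic data:

```python
# A minimal sketch of checking equal opportunity: true positive rates
# should match across groups. All data is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)             # who is actually qualified
group = rng.integers(0, 2, size=1000)              # protected group membership
scores = y_true * 0.4 + group * 0.1 + rng.normal(0.3, 0.25, size=1000)
y_pred = (scores >= 0.5).astype(int)               # a single shared threshold

for g in (0, 1):
    qualified = (group == g) & (y_true == 1)
    print(f"group {g}: TPR = {y_pred[qualified].mean():.2f}")
# Equal opportunity holds when the TPRs match; Hardt et al. adjust
# per-group thresholds to equalize them.
```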
  52. LOVE PEOPLE: find opportunities to maximize mutual lifetime value; respect the principles of contextual integrity; protect individual and corporate data using differential privacy; consider the goals of the people affected by the systems we build
  53. Questions? @humekathryn · @integrateai · quamproxime.com · @ffvc
