
Designing ethical artificial intelligence


Technology should be a beneficial force in our lives, taking the world in exciting new directions and making us better humans. To ensure this, we need to facilitate a conversation between data technology and the human experience. Keeping social responsibility and ethical behavior in mind when designing AI systems enables us to put the right systems in place to contribute to the society we want, fostering higher levels of cognitive and emotional skills.

Jivan Virdee and Hollie Lubbock explore how to address fairness, accountability, and the long-term effects on our society when designing with data, focusing on four key areas for consideration in this space:

— Responsibility and accountability for machine learning systems
— Fair and transparent data science
— Trust and human-machine collaboration
— Automation and changes in the way we work

Along the way, they cover key issues in creating ethical AI systems, detail how we might go about tackling them, and outline further questions that will need to be addressed going forward.

Big thanks to @fjord and @accenturedock for their help and support

Talk by:
https://www.linkedin.com/in/hollie-lubbock-703b77b/
https://www.linkedin.com/in/jivanvirdee/

Published in: Data & Analytics


  1. Designing ethical artificial intelligence @jivanvirdee & @hollielubbock Fjord May 2018, #StrataData
  2. Hello, @jivanvirdee @hollielubbock
  3. Artificial intelligence in the news (Google Trends): chart showing rising interest, 2008–2018
  4. The Backlash (Guardian article)
  5. Fitter, Happier, More Productive
  6. Lonelier, Less satisfied, Less productive
  7. Predictive & proactive
  8. The issue of Dark Patterns https://darkpatterns.org/hall-of-shame
  9. Amplified at scale
  10. AI will save us / kill us. https://twitter.com/Mark__Zukerberg | https://twitter.com/elonmusk "I think you can build things and the world gets better, with AI especially; I'm really optimistic." "Until people see robots going down the street killing people, they don't know how to react." "Mark's understanding of the subject is limited."
  11. “We are morphing so fast that our ability to invent new things outpaces the rate we can civilise them.” ― Kevin Kelly, The Inevitable
  12. So how do we shift the trajectory?
  13. Human centred: created from an understanding of human behaviour, motivations, and needs. Is it the best way to solve that problem? How can we make their lives better, easier, and more fulfilling? Are the user needs put before the business needs? Humanity centred: considering the effect on society as a whole. What if everyone used your product or service? What's the worst thing that could happen to society because of it? Does it intrinsically favour one group of people over another?
  14. Human first, 3 key areas: fair & transparent data science; responsibility & accountability; trust & human-machine collaboration
  15. Fair & transparent data science
  16. Explainable artificial intelligence
  17. Accuracy vs. explainability: deep learning, ensemble methods, SVMs, graphical models, decision trees (from highest accuracy / lowest explainability to lowest accuracy / highest explainability) https://arxiv.org/pdf/1802.00603.pdf
  18. Why is explainability important? http://www.jstor.org/stable/41238108
  19. Methods for explainability. Interpretable models: regression, decision trees, classification rules. Models for interpretability: LIME, BETA, LRP
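The "models for interpretability" on slide 19 (LIME and friends) share one core idea: fit a simple, interpretable model to the black box's behaviour in a small neighbourhood of one input. Below is a minimal pure-Python sketch of that idea in one dimension; the function name, the Gaussian kernel width, and the 1-D setting are illustrative assumptions, not the actual LIME library API.

```python
import math
import random

def local_surrogate(f, x0, n=500, width=0.1, seed=0):
    """LIME-style local explanation for a 1-D black-box model f:
    sample points near x0, weight them by proximity to x0, and fit a
    weighted least-squares line y = a + b*x. The slope b is the local
    'explanation' of how f responds around x0."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, width) for _ in range(n)]
    ys = [f(x) for x in xs]
    # Gaussian proximity kernel: far-away samples barely count.
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    b = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
         / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
    a = my - b * mx
    return a, b

# Explain a nonlinear model f(x) = x^2 around x0 = 2: locally it
# behaves like a line with slope close to f'(2) = 4.
a, b = local_surrogate(lambda x: x * x, x0=2.0)
print(a, b)
```

The point of the sketch is the trade-off on slide 17 in miniature: the surrogate line is far less accurate globally than the black box, but its slope is something a human can read.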
  20. "Any sufficiently advanced technology is indistinguishable from magic." ― Arthur C. Clarke
  21. Increasing levels of context
  22. Humble machines
  23. Transparency is key to trust
  24. “Transparency does not happen on its own: it has to be consciously audited, understood, designed and implemented to take root” ― Andy Polaine
  25. Trust and human-machine collaboration
  26. 800 million jobs lost to automation by 2030 (Source)
  27. There's even an algorithm to check if you'll lose your job to automation https://www.fastcompany.com/3047269/this-calculator-will-tell-you-if-a-robot-is-coming-for-your-job
  28. AI replaces workforces
  29. AI replaces workforces → AI can enable superhuman powers
  30. Human Machine Together 96% 99.5% 92% Human & Machine (Source)
  31. Enhancing human potential
  32. Trust = (credibility + reliability + authenticity) / self-interest
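The trust equation on slide 32 is a ratio: perceived self-interest sits in the denominator and drags trust down no matter how competent the system is. A tiny sketch, assuming an illustrative 0–10 scale for each input (the scale and the function name are assumptions, not from the talk):

```python
def trust(credibility: float, reliability: float,
          authenticity: float, self_interest: float) -> float:
    """Trust = (credibility + reliability + authenticity) / self-interest.

    Each input is scored on an assumed 0-10 scale; higher perceived
    self-interest lowers the result."""
    if self_interest <= 0:
        raise ValueError("self_interest must be positive")
    return (credibility + reliability + authenticity) / self_interest

# The same competent system, seen as self-serving vs. as acting in
# the user's interest:
print(trust(8, 9, 7, 6))   # 4.0
print(trust(8, 9, 7, 2))   # 12.0
```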
  33. Define the relationship: human ↔ intelligent agent, augmentation ↔ automation
  34. Problem complexity: bounded ↔ open-ended. Consequence of failure: negligible ↔ critical. Responsibility: machine ↔ human. Autonomy: independent ↔ collaborative. Supervision: management by exception ↔ continuous engagement
  35. Content moderation, plotted on the same five dimensions
  36. Movie recommendation service, plotted on the same five dimensions
  37. Three Mile Island nuclear plant, plotted on the same five dimensions
  38. One of the team: play to your strengths http://humanrobotinteraction.org/journal/index.php/HRI/article/view/173
  39. "Creativity is going to be far more important in a future where software can code better than we can." Tom Hulme
  40. Automation could make us more human
  41. “This is not a race against the machines. If we race against them, we lose. This is a race with the machines.” ― Kevin Kelly, The Inevitable
  42. Responsibility and accountability
  43. It takes momentum to get noticed
  44. Ethical standards in science
  45. An issue of scale
  46. Mind the bias
  47. You're only as good as your data. So how do we make it better?
  48. From unknown to known
  49. Codes of ethics: do you have one?
  50. Data for Democracy: It's my job to understand, mitigate, and communicate the presence of bias in algorithms. Be responsible for maximizing social benefit and minimizing harm. Practice humility and openness. I will know my data and help future users know it as well. Make reasonable efforts to know and document its origins and document its transformations. Bias will exist: measure it, plan for it. Thou shalt document transparently, accessibly, responsibly, reproducibly, and communicate. Engage the whole community: do you have all relevant individuals engaged? People before data: data scientists should use a question-driven approach rather than a data-driven or methods-driven approach. Consider personal safety and treat others the way they want to be treated. Exercise ethical imagination. Open by default: use of data should be transparent and fair. I will not over- or under-represent findings. You are part of an ecosystem: understand context and provenance. Respect human dignity. Respect their data even more than your own. Understand where it is sourced and think about the consequences of your actions. Protect individual and institutional privacy. Diversity for inclusivity. Attention to bias. Respect for others. Be intentional as you work to create value. https://github.com/Data4Democracy/ethics-resources
  51. So what do we do with them?
  52. Ethics overview: informed consent, data ownership, privacy, anonymity, data validity, algorithmic fairness, societal consequences https://www.edx.org/course/mind-of-the-universe-robots-in-society-blessing-or-curse https://www.coursera.org/learn/data-science-ethics
  53. Designing AI with human values in mind
  54. Video link
  55. Human first, 3 key areas: fair & transparent data science; responsibility & accountability; trust & human-machine collaboration
  56. No one's coming. It's up to us. — Dan Hon @jivanvirdee @hollielubbock
