Bias in algorithmic decision-making: Standards, Algorithmic Literacy and Governance


  1. Bias in algorithmic decision-making: Standards, Algorithmic Literacy and Governance ANSGAR KOENE, HORIZON DIGITAL ECONOMY RESEARCH INSTITUTE, UNIVERSITY OF NOTTINGHAM, 5TH SEPTEMBER 2018
  2. Projects UnBias – EPSRC funded “Digital Economy” project ◦ Horizon Digital Economy Research Institute, University of Nottingham ◦ Human Centred Computing group, University of Oxford ◦ Centre for Intelligent Systems and their Application, University of Edinburgh IEEE P7003 Standard for Algorithmic Bias Considerations ◦ Multi-stakeholder working group with 70+ participants from academia, civil society and industry A governance framework for algorithmic accountability and transparency – EP Science and Technology Options Assessment report ◦ UnBias; AI Now; Purdue University; EMLS RI Age Appropriate Design ◦ UnBias; 5Rights
  3. UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy – Standards and policy, Stakeholder workshops, Youth Juries
  4. Algorithms in the news
  5. (image-only slide)
  6. Theme 1: The Use of Algorithms Introduces the concept of algorithms Activities include: ◦ Mapping your online world – discusses the range of online services that use algorithms ◦ What’s in your personal filter bubble? – highlights that not everyone gets the same results online
  7. Theme 1: The Use of Algorithms Activities include: ◦ What kinds of data do algorithms use? – discusses the range of data collected and inferred by algorithms, and what happens to it ◦ How much is your data worth? – from the perspective of you (the user) and of the companies that buy/sell it
  8. Theme 2: Regulation of Algorithms Uses real-life scenarios to highlight issues surrounding the use of algorithms, and asks: who is responsible when things go wrong? Participants debate both sides of a case and develop their critical thinking skills.
  9. Theme 3: Algorithm Transparency The algorithm as a ‘black box’ Discusses the concept of meaningful transparency and the sort of information that young people would like to have when they are online ◦ What data is being collected about me? ◦ Why? ◦ Where does it go?
  10. Fairness Toolkit http://unbias.wp.horizon.ac.uk https://oer.horizon.ac.uk/5rights-youth-juries
  11. (image-only slide)
  12. IEEE P7000: Model Process for Addressing Ethical Concerns During System Design IEEE P7001: Transparency of Autonomous Systems IEEE P7002: Data Privacy Process IEEE P7003: Algorithmic Bias Considerations IEEE P7004: Child and Student Data Governance IEEE P7005: Employer Data Governance IEEE P7006: Personal Data AI Agent Working Group IEEE P7007: Ontological Standard for Ethically Driven Robotics and Automation Systems IEEE P7008: Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems IEEE P7009: Fail-Safe Design of Autonomous and Semi-Autonomous Systems IEEE P7010: Wellbeing Metrics Standard for Ethical AI and Autonomous Systems IEEE P7011: Process of Identifying and Rating the Trustworthiness of News Sources IEEE P7012: Standard for Machine Readable Personal Privacy Terms
  13. Algorithmic systems are socio-technical Algorithmic systems do not exist in a vacuum They are built, deployed and used: ◦ by people, ◦ within organizations, ◦ within a social, political, legal and cultural context. The outcomes of algorithmic decisions can have significant impacts on real, and possibly vulnerable, people.
  14. P7003 – Algorithmic Bias Considerations All non-trivial* decisions are biased We seek to minimize bias that is: ◦ Unintended ◦ Unjustified ◦ Unacceptable as defined by the context where the system is used. *Non-trivial means the decision space has more than one possible outcome and the choice is not uniformly random.
  15. Causes of algorithmic bias Insufficient understanding of the context of use. Failure to rigorously map decision criteria. Failure to have explicit justifications for the chosen criteria.
  16. Algorithmic Discrimination
  17. Complex individuals reduced to simplistic binary stereotypes
  18. Key questions when developing or deploying an algorithmic system:  Who will be affected?  What are the decision/optimization criteria?  How are these criteria justified?  Are these justifications acceptable in the context where the system is used?
  19. P7003 foundational sections  Taxonomy of Algorithmic Bias  Legal frameworks related to Bias  Psychology of Bias  Cultural aspects P7003 algorithm development sections  Algorithmic system design stages  Person categorization and identifying affected population groups  Assurance of representativeness of testing/training/validation data  Evaluation of system outcomes  Evaluation of algorithmic processing  Assessment of resilience against external manipulation to Bias  Documentation of criteria, scope and justifications of choices
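
  As an illustration (not part of the P7003 text): the “assurance of representativeness of testing/training/validation data” item above could in practice start with a simple comparison of subgroup shares in the training data against a reference population. A minimal sketch, assuming a pandas DataFrame with a hypothetical "group" column; the reference shares and 0.05 tolerance are illustrative assumptions, not values prescribed by the standard:

```python
# Sketch only: flag groups whose share of the training data deviates from a
# reference population by more than an (assumed) absolute tolerance.
import pandas as pd

def representativeness_report(train: pd.DataFrame,
                              reference_shares: dict,
                              group_col: str = "group",
                              tolerance: float = 0.05) -> pd.DataFrame:
    """Compare observed subgroup shares in `train` against `reference_shares`."""
    observed = train[group_col].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "expected_share": expected,
            "observed_share": round(share, 3),
            "flagged": abs(share - expected) > tolerance,
        })
    return pd.DataFrame(rows)

# Example with made-up data: only group C deviates by more than the tolerance
# from the (hypothetical) reference population, so only it is flagged.
train = pd.DataFrame({"group": ["A"] * 640 + ["B"] * 330 + ["C"] * 30})
print(representativeness_report(train, {"A": 0.6, "B": 0.3, "C": 0.1}))
```
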
  20. Related AI standards activities British Standards Institute (BSI) – BS 8611 Ethical design and application of robots ISO/IEC JTC 1/SC 42 Artificial Intelligence ◦ SG 1 Computational approaches and characteristics of AI systems ◦ SG 2 Trustworthiness ◦ SG 3 Use cases and applications ◦ WG 1 Foundational standards In January 2018 China published its “Artificial Intelligence Standardization White Paper”.
  21. A governance framework for algorithmic accountability and transparency EPRS/2018/STOA/SER/18/002 ANSGAR KOENE, UNIVERSITY OF NOTTINGHAM RASHIDA RICHARDSON & DILLON REISMAN, AI NOW INSTITUTE YOHKO HATADA, EMLS RI HELENA WEBB, M. PATEL, J. LAVIOLETTE, C. MACHADO, UNIVERSITY OF OXFORD CHRIS CLIFTON, PURDUE UNIVERSITY 25TH OCTOBER 2018
  22. Awareness raising: education, watchdogs and whistleblowers  “Algorithmic literacy” – teaching core concepts: computational thinking, the role of data and the importance of optimisation criteria.  Standardised notification to communicate the type and degree of algorithmic processing in decisions.  Provision of computational infrastructure and access to technical experts to support data analysis etc. for “algorithmic accountability journalism”.  Whistleblower protection and protection against prosecution on grounds of breaching copyright or Terms of Service when doing so serves the public interest.
  23. Accountability in public sector use of algorithmic decision-making Adoption of Algorithmic Impact Assessment (AIA) for algorithmic systems used in public services 1. Public disclosure of purpose, scope, intended use and associated policies, self-assessment process and potential implementation timeline. 2. Performance and publication of a self-assessment of the system, with a focus on inaccuracies, bias, harms to affected communities, and mitigation plans for potential impacts. 3. Publication of a plan for meaningful, ongoing access for external researchers to review the system. 4. Public participation period. 5. Publication of the final AIA, once issues raised in the public participation period have been addressed. 6. Renewal of AIAs on a regular timeline. 7. Opportunity for the public to challenge failure to mitigate issues raised in the public participation period or foreseeable outcomes.
  24. Regulatory oversight and legal liability  Regulatory body for algorithms:  Risk assessment  Investigating algorithmic systems suspected of infringing human rights  Advising other regulatory agencies regarding algorithmic systems  Algorithmic Impact Assessment requirement for systems classified as causing potentially severe non-reversible impact  Strict tort liability for algorithmic systems with medium-severity non-reversible impacts  Reduced liability for algorithmic systems certified as compliant with best-practice standards.
  25. Global coordination for algorithmic governance  Establishment of a permanent global Algorithm Governance Forum (AGF)  Multi-stakeholder dialogue and policy expertise related to algorithmic systems  Based on the principles of Responsible Research and Innovation  Provide a forum for coordination and exchange of governance best practices  Strong positions in trade negotiations to protect the regulatory ability to investigate algorithmic systems and hold parties accountable for violations of European laws and human rights.
  26. Age Appropriate Design
  27. What is the Age-Appropriate Design Code? The Age-Appropriate Design Code sits at section 123 of the UK Data Protection Act 2018 (“DPA”) and will set out the standards of data protection that Information Society Services (“ISS”, i.e. online services) must offer children. It was brought into UK legislation by Crossbench Peer Baroness Kidron; Parliamentary Under-Secretary at the Department for Digital, Culture, Media and Sport, Lord Ashton of Hyde; Opposition Spokesperson Lord Stevenson of Balmacara; Conservative Peer Baroness Harding of Winscombe; and Liberal Democrat Spokesperson Lord Clement-Jones of Clapham.
  28. (image-only slide)
  29. ICO to draft code by 25 October 2019
  30. Thank you http://unbias.wp.horizon.ac.uk
  31. Biography • Dr. Koene is a Senior Research Fellow at the Horizon Digital Economy Research Institute of the University of Nottingham, where he conducts research on the societal impact of digital technology. • He chairs the IEEE P7003™ Standard for Algorithmic Bias Considerations working group and leads the policy impact activities of the Horizon institute. • He is co-investigator on the UnBias project to develop regulation-, design- and education-recommendations for minimizing unintended, unjustified and inappropriate bias in algorithmic systems. • He has over 15 years of experience researching and publishing on topics ranging from robotics, AI and computational neuroscience to human behaviour studies and technology policy recommendations. • He received his M.Eng. and Ph.D. in Electrical Engineering and Neuroscience, respectively, from Delft University of Technology and Utrecht University. • He is a trustee of 5Rights, a UK-based charity enabling children and young people to access the digital world creatively, knowledgeably and fearlessly. Dr. Ansgar Koene ansgar.koene@nottingham.ac.uk https://www.linkedin.com/in/akoene/ https://www.nottingham.ac.uk/computerscience/people/ansgar.koene http://unbias.wp.horizon.ac.uk/

Editor's Notes

  • Automated decisions are not defined by algorithms alone. Rather, they emerge from automated systems that mix human judgment, conventional software, and statistical models, all designed to serve human goals and purposes. Discerning and debating the social impact of these systems requires a holistic approach that considers:
    Computational and statistical aspects of the algorithmic processing;
    Power dynamics between the service provider and the customer;
    The social-political-legal-cultural context within which the system is used.
  • All non-trivial decisions are biased. For example, good results from a search engine should be biased to match the interests of the user as expressed by the search term, and possibly refined based on personalization data.

    When we say we want ‘no Bias’ we mean we want to minimize unintended, unjustified and unacceptable bias, as defined by the context within which the algorithmic system is being used.
  • In the absence of malicious intent, bias in algorithmic systems is generally caused by:

    Insufficient understanding of the context that the system is part of. This includes a lack of understanding of who will be affected by the algorithmic decision outcomes, resulting in a failure to test how the system performs for specific groups, who are often minorities (a minimal per-group check is sketched below). Diversity in the development team can partially help to address this.
    Failure to rigorously map decision criteria. When people think of algorithmic decisions as being more ‘objectively trustworthy’ than human decisions, more often than not they are referring to the idea that algorithmic systems follow a clearly defined set of criteria with no ‘hidden agenda’. The complexity of system development challenges, however, can easily introduce ‘hidden decision criteria’, added as a quick fix during debugging or embedded within machine learning training data.
    Failure to explicitly define and examine the justifications for the decision criteria. Given the context within which the system is used, are these justifications acceptable? For example, in a given context is it OK to treat high correlation as evidence of causation?
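
    A minimal sketch of the per-group check referred to above, assuming a pandas DataFrame with hypothetical "group", "label" and "prediction" columns holding binary 0/1 decisions; the column names and metrics are illustrative and not prescribed by UnBias or P7003:

```python
# Sketch only: decision rate and error rate broken down by group, so that
# differences affecting (often small, minority) groups become visible.
import pandas as pd

def per_group_rates(df: pd.DataFrame) -> pd.DataFrame:
    """Summarise group size, positive-decision rate and error rate per group."""
    out = df.assign(correct=(df["prediction"] == df["label"]))
    return out.groupby("group").agg(
        n=("prediction", "size"),                            # group size
        positive_rate=("prediction", "mean"),                # share of positive decisions
        error_rate=("correct", lambda s: 1.0 - s.mean()),    # share of wrong decisions
    )

# Example with made-up decisions for two groups:
df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "label":      [1,   0,   1,   1,   0,   0],
    "prediction": [1,   0,   1,   0,   0,   1],
})
print(per_group_rates(df))
```
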
  • Ansgar Koene is a Senior Research Fellow at the Horizon Digital Economy Research Institute, University of Nottingham and chairs the IEEE P7003 Standard for Algorithmic Bias Considerations working group. As part of his work at Horizon, Ansgar is the lead researcher in charge of Policy Impact; leads the stakeholder engagement activities of the EPSRC (UK research council) funded UnBias project to develop regulation-, design- and education-recommendations for minimizing unintended, unjustified and inappropriate bias in algorithmic systems; and frequently contributes evidence to UK parliamentary inquiries related to ICT and digital technologies.
    Ansgar has a multi-disciplinary research background, having previously worked and published on topics ranging from bio-inspired Robotics, AI and Computational Neuroscience to experimental Human Behaviour/Perception studies.
