Automated decisions are not defined by algorithms alone. Rather, they emerge from automated systems that mix human judgment, conventional software, and statistical models, all designed to serve human goals and purposes. Discerning and debating the social impact of these systems requires a holistic approach that considers: the computational and statistical aspects of the algorithmic processing; the power dynamics between the service provider and the customer; and the social-political-legal-cultural context within which the system is used.
All non-trivial decisions are biased. For example, good results from a search engine should be biased to match the interests of the user as expressed by the search term, and possibly refined based on personalization data.
When we say we want ‘no bias’, we mean we want to minimize unintended, unjustified and unacceptable bias, as defined by the context within which the algorithmic system is being used.
In the absence of malicious intent, bias in an algorithmic system is generally caused by:
◦ Insufficient understanding of the context that the system is part of. This includes a lack of understanding of who will be affected by the algorithmic decision outcomes, resulting in a failure to test how the system performs for specific groups, who are often minorities. Diversity in the development team can partially help to address this.
◦ Failure to rigorously map decision criteria. When people think of algorithmic decisions as more ‘objectively trustworthy’ than human decisions, more often than not they are referring to the idea that algorithmic systems follow a clearly defined set of criteria with no ‘hidden agenda’. The complexity of system development, however, can easily give rise to ‘hidden decision criteria’, introduced as a quick fix during debugging or embedded within machine learning training data.
◦ Failure to explicitly define and examine the justifications for the decision criteria. Given the context within which the system is used, are these justifications acceptable? For example, in a given context is it OK to treat high correlation as evidence of causation?
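The first cause above, failing to test how the system performs for specific groups, lends itself to a concrete check. The sketch below is purely illustrative (the records, group labels and function name are invented, not taken from P7003): it disaggregates a classifier's error rate by demographic group.

```python
# Illustrative sketch: disaggregating a classifier's error rate by group
# to surface unintended bias. All data and group names are hypothetical.
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical predictions for two groups of four people each.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
rates = error_rate_by_group(records)
print(rates)  # group_a: 1 error in 4; group_b: 2 errors in 4
```

An aggregate error rate (3 errors in 8 cases) would hide the fact that group_b is misclassified twice as often as group_a; testing per group makes the disparity visible before deployment.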
Ansgar Koene is a Senior Research Fellow at the Horizon Digital Economy Research Institute, University of Nottingham, and chairs the IEEE P7003 Standard for Algorithmic Bias Considerations working group. As part of his work at Horizon, Ansgar is the lead researcher in charge of policy impact; leads the stakeholder engagement activities of the EPSRC (UK research council) funded UnBias project to develop regulation-, design- and education-recommendations for minimizing unintended, unjustified and inappropriate bias in algorithmic systems; and frequently contributes evidence to UK parliamentary inquiries related to ICT and digital technologies. Ansgar has a multi-disciplinary research background, having previously worked and published on topics ranging from bio-inspired robotics, AI and computational neuroscience to experimental human behaviour/perception studies.
Bias in algorithmic decision-making: Standards, Algorithmic Literacy and Governance
HORIZON DIGITAL ECONOMY RESEARCH INSTITUTE, UNIVERSITY OF NOTTINGHAM
5TH SEPTEMBER 2018
UnBias – EPSRC funded “Digital Economy” project
◦ Horizon Digital Economy research institute, University of Nottingham
◦ Human Centred Computing group, University of Oxford
◦ Centre for Intelligent Systems and their Application, University of Edinburgh
IEEE P7003 Standard for Algorithmic Bias Considerations
◦ Multi-stakeholder working group with 70+ participants from Academia, Civil-society and Industry
A governance framework for algorithmic accountability and transparency – EP Science and Technology Options Assessment (STOA) report
◦ UnBias; AI Now; Purdue University; EMLS RI
Age Appropriate Design
◦ UnBias; 5Rights
UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy
Standards and policy
Theme 1: The Use of Algorithms
Introduces the concept of algorithms
◦ Mapping your online world
◦ Discusses the range of online services that use algorithms
◦ What’s in your personal filter bubble?
◦ Highlights that not everyone gets the same results online
Theme 1: The Use of Algorithms
◦ What kinds of data do algorithms use?
◦ Discusses the range of data collected and inferred by algorithms, and what happens to it
◦ How much is your data worth?
◦ From the perspective of you (the user) and the companies that buy/sell it
Theme 2: Regulation of Algorithms
Uses real-life scenarios to highlight issues surrounding the use of algorithms, and asks: who is responsible when things go wrong?
Participants debate both sides of a case and develop their critical thinking skills
Theme 3: Algorithm Transparency
The algorithm as a ‘black box’
Discusses the concept of meaningful transparency and the sort of information that young people would like to have when they are online:
◦ What data is being collected about me?
◦ Where does it go?
IEEE P7000: Model Process for Addressing Ethical Concerns During System Design
IEEE P7001: Transparency of Autonomous Systems
IEEE P7002: Data Privacy Process
IEEE P7003: Algorithmic Bias Considerations
IEEE P7004: Child and Student Data Governance
IEEE P7005: Employer Data Governance
IEEE P7006: Personal Data AI Agent Working Group
IEEE P7007: Ontological Standard for Ethically Driven Robotics and Automation Systems
IEEE P7008: Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
IEEE P7009: Fail-Safe Design of Autonomous and Semi-Autonomous Systems
IEEE P7010: Wellbeing Metrics Standard for Ethical AI and Autonomous Systems
IEEE P7011: Process of Identifying and Rating the Trustworthiness of News Sources
IEEE P7012: Standard for Machine Readable Personal Privacy Terms
Algorithmic systems are socio-technical
Algorithmic systems do not exist in a vacuum
They are built, deployed and used:
◦ by people,
◦ within organizations,
◦ within a social, political, legal and cultural context.
The outcomes of algorithmic decisions can have significant impacts on real, and possibly vulnerable, people.
P7003 - Algorithmic Bias Considerations
All non-trivial* decisions are biased
We seek to minimize bias that is unintended, unjustified and unacceptable, as defined by the context where the system is used.
*Non-trivial means the decision space has more than one possible outcome and the choice is not
Causes of algorithmic bias
Insufficient understanding of the context of use.
Failure to rigorously map decision criteria.
Failure to have explicit justifications for the chosen criteria.
Complex individuals reduced to simplistic binary stereotypes
Key questions when developing or deploying an algorithmic system
Who will be affected?
What are the decision/optimization criteria?
How are these criteria justified?
Are these justifications acceptable in the context where the system is used?
P7003 foundational sections
Taxonomy of Algorithmic Bias
Legal frameworks related to Bias
Psychology of Bias
P7003 algorithm development sections
Algorithmic system design stages
Person categorization and identifying affected population groups
Assurance of representativeness of testing/training/validation data
Evaluation of system outcomes
Evaluation of algorithmic processing
Assessment of resilience against external manipulation to Bias
Documentation of criteria, scope and justifications of choices
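The assurance-of-representativeness stage listed above can be illustrated with a simple proportion check. This is a sketch under invented numbers (the group names, counts, population shares and 5% tolerance are all assumptions for the example, not requirements of P7003):

```python
# Illustrative sketch: flag groups whose share of the training data
# deviates from their share of the affected population by more than
# a chosen tolerance. All figures below are hypothetical.
def representativeness_gaps(sample_counts, population_shares, tolerance=0.05):
    """Return {group: (observed_share, expected_share)} for groups outside tolerance."""
    total = sum(sample_counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = sample_counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = (observed, expected)
    return gaps

sample_counts = {"group_a": 700, "group_b": 220, "group_c": 80}
population_shares = {"group_a": 0.60, "group_b": 0.30, "group_c": 0.10}
print(representativeness_gaps(sample_counts, population_shares))
# group_a is over-represented (0.70 vs 0.60), group_b under-represented
# (0.22 vs 0.30); group_c falls within the tolerance.
```

In practice the reference shares would come from the identified affected population, and the tolerance from the documented decision criteria; the point of the check is simply to make under- and over-representation explicit rather than discovered after deployment.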
Related AI standards activities
British Standards Institution (BSI) – BS 8611 Guide to the ethical design and application of robots and robotic systems
ISO/IEC JTC 1/SC 42 Artificial Intelligence
◦ SG 1 Computational approaches and characteristics of AI systems
◦ SG 2 Trustworthiness
◦ SG 3 Use cases and applications
◦ WG 1 Foundational standards
In January 2018, China published its “Artificial Intelligence Standardization White Paper.”
A governance framework for algorithmic accountability and transparency
ANSGAR KOENE, UNIVERSITY OF NOTTINGHAM
RASHIDA RICHARDSON & DILLON REISMAN, AI NOW INSTITUTE
YOHKO HATADA, EMLS RI
HELENA WEBB, M. PATEL, J. LAVIOLETTE, C. MACHADO, UNIVERSITY OF OXFORD
CHRIS CLIFTON, PURDUE UNIVERSITY
25TH OCTOBER 2018
Awareness raising: education, watchdogs and whistleblowers
“Algorithmic literacy” - teaching core concepts: computational thinking, the role of data and the importance of optimisation criteria.
Standardised notification to communicate type and degree of algorithmic processing in
Provision of computational infrastructure and access to technical experts to support data analysis etc. for “algorithmic accountability journalism”.
Whistleblower protection, and protection against prosecution on grounds of breaching copyright or Terms of Service when doing so serves the public interest.
Accountability in public sector use of algorithmic decision-making
Adoption of Algorithmic Impact Assessments (AIA) for algorithmic systems used for public services:
1. Public disclosure of purpose, scope, intended use and associated policies, self-assessment process and potential implementation timeline.
2. Performing and publishing of a self-assessment of the system, with focus on inaccuracies, bias, harms to affected communities, and mitigation plans for potential impacts.
3. Publication of a plan for meaningful, ongoing access for external researchers to review the system.
4. Public participation period.
5. Publication of the final AIA, once issues raised in public participation have been addressed.
6. Renewal of AIAs on a regular timeline.
7. Opportunity for the public to challenge failure to mitigate issues raised in the public participation period, or foreseeable outcomes.
Regulatory oversight and Legal liability
Regulatory body for algorithms:
Investigating algorithmic systems suspected of infringing human rights.
Advising other regulatory agencies regarding algorithmic systems
Algorithmic Impact Assessment requirement for systems classified as causing potentially severe non-reversible impact.
Strict tort liability for algorithmic systems with medium severity non-reversible impacts
Reduced liability for algorithmic systems certified as compliant with best-practice standards.
Global coordination for algorithmic governance
Establishment of a permanent global Algorithm Governance Forum (AGF):
Multi-stakeholder dialog and policy expertise related to algorithmic systems
Based on the principles of Responsible Research and Innovation
Provide a forum for coordination and exchanging of governance best-practices
Strong positions in trade negotiations to protect regulatory ability to
systems and hold parties accountable for violations of European laws and
What is the Age-Appropriate Design Code?
The Age-Appropriate Design Code sits at section 123 of the UK Data Protection Act 2018 (“DPA”) and will set out the standards of data protection that Information Society Services (“ISS”, known as online services) must offer children. It was brought into UK legislation by Crossbench Peer Baroness Kidron; Parliamentary Under-Secretary at the Department for Digital, Culture, Media and Sport, Lord Ashton of Hyde; Opposition Spokesperson Lord Stevenson of Balmacara; Conservative Peer Baroness Harding of Winscombe; and Liberal Democrat Spokesperson Lord Clement-Jones of Clapham.
• Dr. Koene is a Senior Research Fellow at the Horizon Digital Economy Research Institute of the University of Nottingham, where he conducts research on the societal impact of digital technology.
• Chairs the IEEE P7003TM Standard for Algorithmic Bias Considerations working group, and leads the policy impact activities of the Horizon institute.
• He is co-investigator on the UnBias project to develop regulation-, design- and
education-recommendations for minimizing unintended, unjustified and
inappropriate bias in algorithmic systems.
• Over 15 years of experience researching and publishing on topics ranging from
Robotics, AI and Computational Neuroscience to Human Behaviour studies and
Tech. Policy recommendations.
• He received his M.Eng. and Ph.D. in Electrical Engineering & Neuroscience, respectively, from Delft University of Technology and Utrecht University.
• Trustee of 5Rights, a UK-based charity enabling children and young people to access the digital world creatively, knowledgeably and fearlessly.
Dr. Ansgar Koene