A governance framework for
algorithmic accountability and
transparency
EPRS/2018/STOA/SER/18/002
ANSGAR KOENE, UNIVERSITY OF NOTTINGHAM
RASHIDA RICHARDSON & DILLON REISMAN, AI NOW INSTITUTE
YOHKO HATADA, EMLS RI
HELENA WEBB, M. PATEL, J. LAVIOLETTE, C. MACHADO, UNIVERSITY OF OXFORD
CHRIS CLIFTON, PURDUE UNIVERSITY
25TH OCTOBER 2018
Items covered in this presentation
Stocktaking: regulatory successes/failures
Policy recommendations
Demand/supply market mechanisms
• Digital literacy / algorithmic awareness – information asymmetry
• Market concentration & network effects
√ Demand side pressure from large business/institutional customers
√ Research on Fairness, Accountability and Transparency responding to investigative journalism
√ Algorithmic fairness / auditing tools & services in anticipation of state intervention (see the sketch after this slide)
Stocktaking
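To make the auditing item above concrete, the following minimal Python sketch shows the kind of check an algorithmic fairness auditing tool might run: the disparate impact ratio, often summarised as the “80% rule”. The example data, group labels and the 0.8 threshold are assumptions for illustration only and are not taken from the study.

```python
# Illustrative sketch only: the disparate impact ratio ("80% rule"), one common
# screening check used by algorithmic fairness auditing tools. Group labels,
# example data and the 0.8 threshold are assumptions for illustration.
from collections import Counter

def disparate_impact_ratio(decisions, groups, positive="approved",
                           protected="group_b", reference="group_a"):
    """Ratio of favourable-outcome rates between a protected and a reference group."""
    totals = Counter(groups)
    favourable = Counter(g for d, g in zip(decisions, groups) if d == positive)

    def rate(group):
        return favourable[group] / totals[group] if totals[group] else 0.0

    ref_rate = rate(reference)
    return rate(protected) / ref_rate if ref_rate else float("inf")

if __name__ == "__main__":
    decisions = ["approved", "denied", "approved", "approved", "denied", "denied"]
    groups    = ["group_a", "group_a", "group_a", "group_b", "group_b", "group_b"]
    ratio = disparate_impact_ratio(decisions, groups)
    print(f"Disparate impact ratio: {ratio:.2f}")
    print("Flag for review" if ratio < 0.8 else "Within the 80% rule of thumb")
```

In practice an audit would combine several metrics (e.g. statistical parity, equalised odds, calibration) with qualitative review; a single ratio is only a screening signal.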
Self-regulation / co-regulation
• Corporate social responsibility (CSR) responses to avoid reputational damage
• Codes of ethics & ethics boards
√ Employee activism
√ Industry Standards of best practice
√ Third-party certification (with regulated licensing of certifiers)
√ Impact assessments
Stocktaking
State intervention 1/2
Information measures
• Algorithmic literacy for citizens and professionals
• Disclosure labels
Financial incentives
• Tax incentives for ethical algorithms (analogous to eco-friendly technology)
• Public procurement
• Targeted public investment (e.g. Explainable AI research)
• “Bug bounties” for revealing algorithmic bias
• Investment in algorithmic accountability journalism
Stocktaking
State intervention 2/2
Legislative measures
• Innovation lock-in (focus on “functional requirements”) / legislative steering of innovation
• GDPR
• Requirements for systems used for public/administrative purposes (e.g. Digital Republic Act)
• Mandated transparency of goals and outcomes
• Optimisation criteria defined by a body representing the public interest
• No-fault / strict / shared liability
Regulatory body
• Standards-setting body
• Openness / disclosure / transparency requirements
• Pre-market approval
Stocktaking
Policy recommendations
Awareness raising: education,
watchdogs and whistleblowers
• “Algorithmic literacy”: teaching core concepts such as computational thinking, the role of data and the importance of optimisation criteria.
• Standardised notification to communicate the type and degree of algorithmic processing in decisions (a hypothetical example follows this slide).
• Provision of computational infrastructure and access to technical experts to support data analysis etc. for “algorithmic accountability journalism”.
• Whistleblower protection, and protection against prosecution on grounds of breaching copyright or Terms of Service when doing so serves the public interest.
Recommendations
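As an illustration of what such a standardised notification might contain, the Python sketch below encodes a hypothetical machine-readable disclosure record. The field names, categories and example values are assumptions made for illustration; the study does not prescribe a specific schema.

```python
# Hypothetical sketch of a machine-readable "standardised notification" of
# algorithmic processing. Field names, categories and example values are
# illustrative assumptions, not a proposed standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class AlgorithmicProcessingNotice:
    decision_domain: str         # e.g. "loan application screening"
    degree_of_automation: str    # e.g. "fully automated", "human-in-the-loop", "advisory"
    data_categories: list        # broad categories of input data used
    optimisation_criterion: str  # what the system is tuned to optimise
    contact_for_review: str      # where an affected person can request human review

notice = AlgorithmicProcessingNotice(
    decision_domain="loan application screening",
    degree_of_automation="human-in-the-loop",
    data_categories=["income history", "credit history"],
    optimisation_criterion="predicted repayment probability",
    contact_for_review="review@example.org",
)

print(json.dumps(asdict(notice), indent=2))
```

For notices to be comparable across providers, a standards body would need to fix the vocabularies for the degree of automation and the data categories.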
Accountability in public sector use
of algorithmic decision-making
Adoption of Algorithmic Impact Assessments (AIAs) for algorithmic systems used in public services
1. Public disclosure of purpose, scope, intended use and associated policies, the self-assessment process and a potential implementation timeline.
2. Performance and publication of a self-assessment of the system, with a focus on inaccuracies, bias, harms to affected communities, and mitigation plans for potential impacts.
3. Publication of a plan for meaningful, ongoing access for external researchers to review the system.
4. Public participation period.
5. Publication of the final AIA, once issues raised in the public participation period have been addressed.
6. Renewal of AIAs on a regular timeline.
7. Opportunity for the public to challenge a failure to mitigate issues raised in the public participation period, or foreseeable outcomes.
Recommendations
Regulatory oversight and Legal liability
• Regulatory body for algorithms:
  • Risk assessment
  • Investigating algorithmic systems suspected of infringing human rights
  • Advising other regulatory agencies regarding algorithmic systems
• Algorithmic Impact Assessment requirement for systems classified as causing potentially severe, non-reversible impacts
• Strict tort liability for algorithmic systems with medium-severity, non-reversible impacts
• Reduced liability for algorithmic systems certified as compliant with best-practice standards (an illustrative sketch of this tiering follows this slide)
Recommendations
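The tiering above can be read as a simple decision rule mapping a system's impact profile to an oversight and liability regime. The Python sketch below is a hypothetical illustration of that rule; the category names, the order of the checks, and the assumption that certification matters only for lower-impact systems are illustrative choices, not a legal specification from the study.

```python
# Hypothetical sketch of the oversight/liability tiering described above.
# Severity labels, the order of checks, and the treatment of certification
# are illustrative assumptions, not a legal specification.
def oversight_tier(severity: str, reversible: bool, certified: bool) -> str:
    """Map an algorithmic system's impact profile to an oversight/liability tier."""
    if not reversible and severity == "severe":
        return "Algorithmic Impact Assessment required"
    if not reversible and severity == "medium":
        return "strict tort liability"
    if certified:
        return "reduced liability (certified to best-practice standards)"
    return "ordinary liability rules"

if __name__ == "__main__":
    print(oversight_tier("severe", reversible=False, certified=False))
    print(oversight_tier("medium", reversible=False, certified=True))
    print(oversight_tier("low", reversible=True, certified=True))
```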
Global coordination for algorithmic governance
• Establishment of a permanent global Algorithm Governance Forum (AGF)
  • Multi-stakeholder dialogue and policy expertise related to algorithmic systems
  • Based on the principles of Responsible Research and Innovation
  • Providing a forum for coordination and the exchange of governance best practices
• Strong positions in trade negotiations to protect the regulatory ability to investigate algorithmic systems and hold parties accountable for violations of European laws and human rights.
Recommendations
