No part of this document may be quoted or
reproduced without prior written approval from IRGC
November 2018
Outcome from an IRGC workshop, July 2018
Governing risk from
decision-making learning algorithms
(DMLAs)
https://irgc.epfl.ch
Algorithms can now learn, self-evolve and decide autonomously
Decision-making learning algorithms (DMLAs) can be understood as information systems
that use data and advanced computational techniques, including machine learning or deep
learning (neural networks), to issue guidance or recommend a course of action for human
actors and/or produce specific commands for automated systems (see the minimal sketch below).
• For the time being, implementation of DMLAs remains limited to a handful of
industry innovators and dominant players (e.g. tech giants and certain governments).
Most organisations are still exploring what is possible, including the full potential
of algorithms that learn, self-evolve and can make decisions autonomously.
• The potential of DMLAs is recognized in key sectors – notably in healthcare and for
automated driving. More broadly, this huge technological revolution can also involve a
profound transformation of society and the economy.
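To make the definition above concrete, here is a minimal, purely illustrative sketch of the two output modes it names: a recommendation for a human actor, or a command for an automated system. The data, features, model choice and confidence threshold are all invented for illustration; they are not part of the IRGC material.

```python
# Minimal sketch of a DMLA in the sense defined above: a model learned from
# data that either recommends an action to a human or commands an automated
# system. All data and thresholds here are invented.
from sklearn.linear_model import LogisticRegression

# Toy training data: two features per case, binary outcome.
X_train = [[0.2, 1.0], [0.4, 0.8], [0.9, 0.1], [0.8, 0.3], [0.1, 0.9], [0.7, 0.2]]
y_train = [0, 0, 1, 1, 0, 1]

model = LogisticRegression().fit(X_train, y_train)

def decide(case, automate_above=0.95):
    """Command an automated system when confidence is very high;
    otherwise issue a recommendation for a human decision-maker."""
    p = model.predict_proba([case])[0][1]   # probability of the 'positive' action
    if p >= automate_above or p <= 1 - automate_above:
        return {"mode": "command", "action": int(p >= 0.5), "confidence": round(p, 3)}
    return {"mode": "recommendation", "suggested": int(p >= 0.5), "confidence": round(p, 3)}

print(decide([0.85, 0.2]))
```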
Applications
Societies are becoming increasingly dependent on digital technologies, including
machine learning applied across a broad spectrum of areas such as:
o Transportation (e.g. autonomous driving)
o Health (e.g. diagnostics and prognostics, data-driven precision / genomic medicine)
o Administration (e.g. predictive policing, criminal risk assessment)
o Surveillance (e.g. citizen scoring schemes, counter-terrorism)
o Insurance (e.g. insurance underwriting, claim processing, insurance fraud detection, etc.)
o News and social media
o Advertising
Evaluating risks and opportunities from DMLAs
• Policymakers face a difficult balancing act: allowing and incentivising the
beneficial uses of DMLAs while guarding against adverse ones. The risk of wrong or
unfair outcomes, including possible discrimination, must be carefully evaluated in
light of expected benefits in efficiency and accuracy.
• The more automated or ‘independently’ deciding algorithms are, the more they
need to be scrutinised. DMLAs remain particularly challenging for decision-making
when the stakes are high and human judgment matters to concerns such as
privacy, non-discrimination and confidentiality, especially when there is a risk of
irreversible damage.
• Technical and governance issues are tightly interconnected. There are
opportunities and risks at both levels.
Examples of the potential risks of relying on DMLAs and the expected benefits of using them:
• Insurance contracts. Risk: incorrect actuarial analysis misprices risk or introduces
unfair discrimination in prices. Benefit: more efficient allocation of risk, e.g.
through better actuarial analysis and fraud detection.
• Medical diagnostics & prognostics. Risk: wrong medical diagnosis, prognosis or
treatment decision. Benefit: improving the capacity to diagnose, prevent or treat
life-threatening diseases.
• Automated driving. Risk: wrong assessment of the car's environment (car-to-car and
car-to-infrastructure) leading to an accident. Benefit: autonomous (connected)
guiding of vehicles, with increased traffic efficiency and fewer accidents; comfort
and convenience.
• Predictive policy:
o Criminal justice. Risk: incorrect prediction of recidivism, potential unfair
discrimination. Benefit: ability to enforce rules a priori by embedding them into code.
o Public services / social benefits. Risk: incorrect, potentially unfairly
discriminative distribution of social benefits. Benefit: embedding into code the
rules for loan or social-benefit attribution.
o Face recognition (ID). Risk: undue or illegal citizen surveillance. Benefit:
reducing eyewitness misidentification (a leading cause of wrongful convictions).
DMLAs can bring many benefits to society
• Analytic prowess: analysing large volumes and flows of data, from multiple
sources, in ways not possible for humans
• Efficiency gains: generating outcomes more promptly and at lower cost than
could be done by human processors
• Scalability: drawing linkages, finding patterns and yielding outcomes
across domains
• Consistency: processing information more consistently and systematically
than humans
• Adaptability: processing and learning with dynamic data, and adapting fast to
changing inputs or variables
• Convenience: performing fastidious or time-consuming tasks so as to free
up human time for other meaningful pursuits
DMLAs can cause new risks or amplify existing risks (1)
• Erroneous or inaccurate outcomes: difficulty in identifying or correcting errors or
inaccuracies, due to intrinsic biases in input data and a lack of transparency about
the provenance of decisions, and difficulty in testing DMLAs
• Recurring problem of software correctness: DMLAs are embedded in software, and we
lack sufficient knowledge of how to produce software that is always correct
• Threats to data protection and privacy: tension between privacy-protecting rights
such as ‘the right to be forgotten’ and the need for more complete and unbiased
datasets for DMLAs to live up to their potential
• Social discrimination and unfairness: notably through the reproduction of certain
undue biases around race, gender, ethnicity, age, income, etc.
• Loss of accountability: some DMLAs resemble ‘black boxes’, such that decision-making
is difficult to understand and/or explain, and attribution of responsibility or
liability may therefore be difficult
DMLAs can cause new risks or amplify existing risks (2)
• Loss of human oversight: DMLAs are increasingly deployed in domains (e.g. medicine,
criminal justice) where human judgment and oversight matter
• Excessive surveillance and social control: DMLAs are deployed by powerful actors, be
they governments, businesses or other non-state actors, to surveil citizens or
unduly influence their behaviour
• Manipulation or malignant use: such as for criminal purposes, interference with
democratic politics, or human rights breaches (e.g. as part of indiscriminate warfare)
DMLAs: 10 key themes
#1 - Technology and governance are tightly connected
• The governance of DMLAs entails both technical and non-technical aspects, and
the challenge is to relate them well.
• An important part of the governance of DMLAs will be to define desired policy,
research and business goals in a way that allows machine-learning and data
scientists and developers to embed the appropriate governance rules, norms or
regulations into the very functioning of the algorithms.
• It is further valuable to include a mechanism for auditing and quality control, to
check adherence to these rules or norms (see the sketch below).
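As a hypothetical illustration of embedding governance rules, and an audit mechanism to check adherence to them, into an algorithm's functioning (a sketch under assumed rules, not an IRGC specification), the snippet below encodes rules as named, testable predicates applied to every decision and keeps an audit log a quality-control process can replay. The rule names and record fields are invented.

```python
import json
from datetime import datetime, timezone

# Governance rules encoded as named, testable predicates over a decision record.
# Both rules here are invented examples.
GOVERNANCE_RULES = [
    ("no_decision_without_confidence", lambda d: d.get("confidence") is not None),
    ("high_stakes_require_human_review", lambda d: not d.get("high_stakes") or d.get("human_review")),
]

AUDIT_LOG = []  # retained so auditors can later check adherence to the rules

def governed_decision(decision):
    """Release a decision only if every governance rule holds; log either way."""
    violations = [name for name, rule in GOVERNANCE_RULES if not rule(decision)]
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "violations": violations,
        "released": not violations,
    })
    if violations:
        raise ValueError(f"blocked by governance rules: {violations}")
    return decision

governed_decision({"confidence": 0.92, "high_stakes": True, "human_review": True})
print(json.dumps(AUDIT_LOG, indent=2))
```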
#2 - What is new: algorithms 'learn' and self-evolve
• Among the different types of algorithms used for machine learning, those that
learn and self-evolve warrant particular attention.
• In deep learning (e.g. with neural networks), algorithms are no longer
“programmed” but increasingly “learned” and adaptive, giving them the ability to
perform tasks that were previously done by humans trained or entrusted for that
purpose.
#3 – Evaluating risk, across domains and applications
• The evaluation of distinct and shared risks requires careful assessment of
o undue biases in input data
o methodological inadequacies or shortcuts caused by low-quality input data or
inappropriate learning environment
o wrong outcome, e.g. possibly resulting in social discrimination or unfairness
o loss of accountability and of human oversight
o inappropriate or illegal surveillance, and malignant manipulation
#4 – Governing risk, considering existing benchmarks and regulations
• When DMLAs are deployed in specialised domains – like medicine, insurance or
public administration – they do not develop in a contextual vacuum: certain
decision-making practices, analytical thresholds, and prescriptive or historical
norms are already in place, and these matter for calibrating and evaluating the
performance of DMLAs vis-à-vis alternatives.
• Existing regulatory frameworks vary by domain; specific applications of DMLAs
therefore require spelling out the relevant benchmarks against which their
performance must be evaluated and calibrated (see the sketch below).
• An overarching question is how to evaluate decisions by DMLAs in contrast to
decisions by humans, which are not error- or bias-free. Where benchmarks are
lacking, how should they be defined?
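A toy illustration of the benchmark point, with entirely hypothetical numbers: before deploying a DMLA in a domain, spell out the incumbent benchmark (here, an assumed historical error rate of human decision-makers) and compare the system against it rather than evaluating it in a vacuum.

```python
# Entirely hypothetical numbers, for illustration only.
human_error_rate = 0.12             # assumed historical benchmark for the domain
dmla_errors, dmla_cases = 80, 1000  # assumed evaluation results for the DMLA
dmla_error_rate = dmla_errors / dmla_cases

print(f"DMLA error rate: {dmla_error_rate:.1%} vs. human benchmark: {human_error_rate:.1%}")
print("meets benchmark" if dmla_error_rate < human_error_rate else "fails benchmark")
```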
#5 - Accuracy of outcome: critically important, not yet guaranteed
• The accuracy (or correctness) of DMLAs’ outputs needs to be established, especially when the
decision-making process is not transparent or is difficult to explain, and/or the outcomes are difficult to
interpret or explain.
• Greater attention is needed to ensure more robust methodological practices (particularly as regards the
quality and appropriate use of input data and the learning context) and to probe the computational dynamics
and learning at play.
• Of particular concern is that we do not quite know how to test machine-learned and adaptive algorithms.
• DMLAs may have the potential to produce fewer errors in aggregate, but those errors may be qualitatively
worse when judged for a specific individual, or in comparison to those expected from equivalent human
decision-making (see the worked example below).
• It is thus increasingly necessary to decide, and perhaps even regulate, how we determine accuracy for
DMLA systems.
o Explainability of the outcome will probably be a key component of trustworthiness
o DMLAs should be able to detect when an outcome is inaccurate.
• In any case, when individuals or organisations are affected by a decision which they believe is wrong or unfair,
they should be given a right to receive an explanation, and a right to recourse.
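The worked example below, with invented error counts, shows how a system can make fewer errors in aggregate than an equivalent human process while making markedly worse errors for one subgroup, which is why an aggregate accuracy figure alone is an insufficient measure.

```python
# Invented error counts over 1,000 cases, split evenly between two groups.
cases_per_group = 500
human_errors = {"group_a": 60, "group_b": 60}   # 120/1000 = 12% overall
model_errors = {"group_a": 10, "group_b": 90}   # 100/1000 = 10% overall

for name, errors in [("human", human_errors), ("model", model_errors)]:
    overall = sum(errors.values()) / (2 * cases_per_group)
    per_group = {g: n / cases_per_group for g, n in errors.items()}
    print(f"{name}: overall {overall:.0%}, per group {per_group}")

# The model 'wins' in aggregate (10% vs 12%) yet is clearly worse than the
# human process for group_b (18% vs 12%) -- the aggregate figure hides this.
```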
#6 – Biases are a key challenge
• Algorithmic bias – at the level of data inputs, learning context or outputs – can yield
discriminatory treatment along sensitive or legally protected attributes such as race, gender,
ethnicity, age and income. Algorithmic bias remains tricky to address, particularly when it
manifests through proxies.
• De-biasing techniques exist, but entail a trade-off: in order to evaluate whether
undesired proxy measures creep into an algorithm, some very sensitive categories of
information, such as race, gender, age or ethnicity, may have to be included, to know whether
the biases are sufficiently minimised (see the audit sketch below).
• Obtaining more complete and unbiased data remains a pervasive and significant
challenge.
• Examples: errors in insurance contract proposals; unfairness or undue discrimination in
predictive policing
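The sketch below illustrates the trade-off just described: the sensitive attribute (here, a generic 'group') is recorded solely to audit outcomes for proxy discrimination, not used as a model input. The data is invented, and the 0.8 threshold is the common 'four-fifths' convention from US employment practice, used only as an example, not an IRGC recommendation.

```python
# Invented decision records; 'group' is kept only for auditing, never as a feature.
decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": True},  {"group": "A", "approved": False},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rate(group):
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

# Disparate-impact ratio: approval rate of the disadvantaged group vs the other.
ratio = approval_rate("B") / approval_rate("A")
print(f"approval A: {approval_rate('A'):.2f}, B: {approval_rate('B'):.2f}, ratio: {ratio:.2f}")
if ratio < 0.8:  # illustrative 'four-fifths' threshold
    print("possible proxy discrimination: inspect features correlated with group")
```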
#7 - Humans in control: under which circumstances and how?
• A key question, when evaluating whether we make the right choice in relying on decision-
making learning algorithms for specific applications, is to ask whether humans are
o in the loop (actively in control)
o on the loop (i.e. in alert mode, able to take control if need be) or
o off the loop (unable to take control back); see the sketch below.
• While ‘on the loop’ may appear the most balanced approach, in that it suggests an ideal
level of control, it carries the risk that humans may struggle to ‘jump in’ when
handed control if they lack the relevant context, practice, attention and time to make a
critical decision.
• Examples: the role of the driver in automated driving, of human judgement in criminal
justice, and of the medical doctor in medical diagnosis
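A hypothetical sketch of how the three control modes might be operationalised, routing each case by stakes and model confidence; the routing policy and threshold values are invented for illustration.

```python
# Route a case to one of the three human-control modes named above.
# Policy and thresholds are illustrative assumptions, not a standard.
def control_mode(confidence, high_stakes):
    if high_stakes:
        return "in_the_loop"    # human actively decides; the model only advises
    if confidence < 0.99:
        return "on_the_loop"    # automated, but a human is alerted and can take over
    return "off_the_loop"       # fully automated; no human able to intervene in time

for conf, stakes in [(0.999, False), (0.90, False), (0.999, True)]:
    print(f"confidence={conf}, high_stakes={stakes} -> {control_mode(conf, stakes)}")
```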
#8 - Need to develop standards, principles and good practices
• Standards, principles and good practices can help developers, industry, and other
organisations embed best practices ‘by design’.
• The IEEE Ethically Aligned Design principles and standards (from the Global Initiative for
Ethical Considerations in the Design of Autonomous Systems), the Asilomar AI
Principles, and other initiatives by international organisations such as the OECD
show that the development of governance arrangements for the programming,
implementation and use of DMLAs is a shared concern.
#9 - Defining accountability, responsibility and liability is central
• It becomes ever more important to determine the concrete ways to demand and deliver
accountability, who assumes what responsibility in DMLAs’ development and use, and who is
liable in case of erroneous or wrongful applications. Better defining the legal uses of DMLAs can also
help different organisations determine whether to enable, accelerate or restrict their adoption.
• Regulated sectors might be able to include the use of DMLAs in their existing regulatory frameworks.
• Liability attribution is particularly challenging given the ‘many hands’ partaking in the design and
deployment of DMLAs, and the lack of clarity about software liability.
• In Europe, the EU General Data Protection Regulation (GDPR) makes, in theory, some provision for
the right not to be subject to automated decision-making and the right to an explanation (art.
22.1). But this provision may not apply in many cases (as defined in art. 22.2), which leaves much
ambiguity about which aspects of the GDPR apply to DMLAs. There remains a general prohibition
on making such decisions using personal data, but interpretation of the law and compliance with
it may vary by country and/or domain.
#10 - Engineering digital and social trust is a critical challenge
and increasingly relevant
• Digital trust can be supported by techniques such as accountable computing, blockchains, smart
contracts, software verification, cryptography, and trusted hardware technologies, which can, for
example, enable distributed or decentralised enforcement of accountability and transparency.
Developing provable guarantees that algorithms do what they are supposed to do is both
possible and important: such guarantees help set certain critical ‘guard-rails’ or insure against
classes of ‘bad decision events’ (see the sketch below).
• While it is possible to look for ways to mandate or improve digital trustworthiness, the challenge is
also about trusting the broader ecosystem in and around DMLAs, for which a governance
structure might help. Who benefits? Whom or what to trust?
• Informational asymmetries – between data subjects, brokers, companies and the various platforms
where data is gathered – may affect public perception of whether DMLAs are being put to good
general use.
• Especially when the stakes are high, some actors might try to ‘game the system’ to their advantage,
including for adversarial purposes. An important general consideration for the international
governance of DMLAs thus revolves around incentives for, and vulnerability to, abuse.
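As one simplified, illustrative instance of these digital-trust techniques, the sketch below hash-chains decision records (the core idea behind blockchain-style tamper evidence) so that any later alteration of the audit trail is detectable. It is a teaching sketch with invented record fields, not a production design.

```python
# Tamper-evident audit trail for algorithmic decisions: each entry commits to
# the previous entry's hash, so editing any record breaks verification.
import hashlib, json

def append_record(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    chain.append({"prev": prev_hash, "record": record,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    for i, entry in enumerate(chain):
        prev_hash = chain[i - 1]["hash"] if i else "0" * 64
        body = json.dumps({"prev": prev_hash, "record": entry["record"]}, sort_keys=True)
        if entry["prev"] != prev_hash or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
    return True

chain = []
append_record(chain, {"model": "v1", "input_id": 42, "decision": "approve"})
append_record(chain, {"model": "v1", "input_id": 43, "decision": "deny"})
print(verify(chain))                       # True: trail is intact
chain[0]["record"]["decision"] = "deny"    # tampering with a past decision...
print(verify(chain))                       # ...is detected: False
```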
Conclusion
• The development of DMLAs provides many opportunities, but also a series of
technical and governance challenges around the accuracy, explainability and fairness
of outcomes, and the transparency of methodology and process.
• While more sharing of higher-quality data is needed, data privacy must also be
ensured. Social norms are changing and principles are being redefined.
• Incentivising the production of accurate, fair and socially acceptable outcomes
from DMLAs will be key to the development of the technology. This will require a
dialogue between scientists and society, facilitated by trusted bodies.
• Assigning accountability and legal responsibility should help ensure that DMLAs are
developed for good.
https://irgc.epfl.ch
• This presentation summarises some of the findings and recommendations
presented in a report prepared by IRGC after a multi-stakeholder and multi-
disciplinary workshop on governing decision-making algorithms, held on 9-10 July
2018 at the Swiss Re Institute in Zurich.
• Authorisation to reproduce is granted under the condition of full acknowledgement
of IRGC as the source:
EPFL IRGC (2018) The Governance of Decision-Making Algorithms. Lausanne: EPFL
International Risk Governance Center
• Available from
o https://irgc.epfl.ch/issues/projects-cybersecurity/the-governance-of-decision-making-algorithms/
o https://infoscience.epfl.ch/record/261264?ln=fr&p=irgc
21Governing Risks from Decision-Making Learning Algorithms

More Related Content

What's hot

Cmgt 430 cmgt430 cmgt 430 education for service uopstudy.com
Cmgt 430 cmgt430 cmgt 430 education for service   uopstudy.comCmgt 430 cmgt430 cmgt 430 education for service   uopstudy.com
Cmgt 430 cmgt430 cmgt 430 education for service uopstudy.comUOPCourseHelp
 
Blue team pp_(final_4-12-11)[1]
Blue team pp_(final_4-12-11)[1]Blue team pp_(final_4-12-11)[1]
Blue team pp_(final_4-12-11)[1]Jamie Jackson
 
Enterprise Information Technology Risk Assessment Form
Enterprise Information Technology Risk Assessment FormEnterprise Information Technology Risk Assessment Form
Enterprise Information Technology Risk Assessment FormGoutama Bachtiar
 
Data processing sunum-lesson 4-mis-dss
Data processing sunum-lesson 4-mis-dssData processing sunum-lesson 4-mis-dss
Data processing sunum-lesson 4-mis-dssUfuk Cebeci
 
Outsourcing security survey0706 (1)
Outsourcing security survey0706 (1)Outsourcing security survey0706 (1)
Outsourcing security survey0706 (1)brijesh singh
 
The Significance of IT Security Management & Risk Assessment
The Significance of IT Security Management & Risk AssessmentThe Significance of IT Security Management & Risk Assessment
The Significance of IT Security Management & Risk AssessmentBradley Susser
 
Artificial intelligence bsc - iso 27001 information security
Artificial intelligence   bsc - iso 27001 information securityArtificial intelligence   bsc - iso 27001 information security
Artificial intelligence bsc - iso 27001 information securityUfuk Cebeci
 
Generic_Sample_INFOSECPolicy_and_Procedures
Generic_Sample_INFOSECPolicy_and_ProceduresGeneric_Sample_INFOSECPolicy_and_Procedures
Generic_Sample_INFOSECPolicy_and_ProceduresSamuel Loomis
 
20CS024 Ethics in Information Technology
20CS024 Ethics in Information Technology20CS024 Ethics in Information Technology
20CS024 Ethics in Information TechnologyKathirvel Ayyaswamy
 
3.3 managing ict 3
3.3 managing ict 33.3 managing ict 3
3.3 managing ict 3mrmwood
 
Data processing in Industrial systems Course Notes 1- 3 weeks
Data processing in Industrial systems Course Notes 1- 3 weeksData processing in Industrial systems Course Notes 1- 3 weeks
Data processing in Industrial systems Course Notes 1- 3 weeksUfuk Cebeci
 
Air force institute final
Air force institute finalAir force institute final
Air force institute finalzoomdust
 
Risk assessment of information production using extended risk matrix approach
Risk assessment of information production using extended risk matrix approachRisk assessment of information production using extended risk matrix approach
Risk assessment of information production using extended risk matrix approachTELKOMNIKA JOURNAL
 

What's hot (20)

Ir s 1_2_1_kovacs
Ir s 1_2_1_kovacsIr s 1_2_1_kovacs
Ir s 1_2_1_kovacs
 
Cmgt 430 cmgt430 cmgt 430 education for service uopstudy.com
Cmgt 430 cmgt430 cmgt 430 education for service   uopstudy.comCmgt 430 cmgt430 cmgt 430 education for service   uopstudy.com
Cmgt 430 cmgt430 cmgt 430 education for service uopstudy.com
 
Blue team pp_(final_4-12-11)[1]
Blue team pp_(final_4-12-11)[1]Blue team pp_(final_4-12-11)[1]
Blue team pp_(final_4-12-11)[1]
 
PASCUA(REPORT.CMSC411)
PASCUA(REPORT.CMSC411)PASCUA(REPORT.CMSC411)
PASCUA(REPORT.CMSC411)
 
Enterprise Information Technology Risk Assessment Form
Enterprise Information Technology Risk Assessment FormEnterprise Information Technology Risk Assessment Form
Enterprise Information Technology Risk Assessment Form
 
Data processing sunum-lesson 4-mis-dss
Data processing sunum-lesson 4-mis-dssData processing sunum-lesson 4-mis-dss
Data processing sunum-lesson 4-mis-dss
 
Outsourcing security survey0706 (1)
Outsourcing security survey0706 (1)Outsourcing security survey0706 (1)
Outsourcing security survey0706 (1)
 
The Significance of IT Security Management & Risk Assessment
The Significance of IT Security Management & Risk AssessmentThe Significance of IT Security Management & Risk Assessment
The Significance of IT Security Management & Risk Assessment
 
Artificial intelligence bsc - iso 27001 information security
Artificial intelligence   bsc - iso 27001 information securityArtificial intelligence   bsc - iso 27001 information security
Artificial intelligence bsc - iso 27001 information security
 
10420140501001
1042014050100110420140501001
10420140501001
 
Generic_Sample_INFOSECPolicy_and_Procedures
Generic_Sample_INFOSECPolicy_and_ProceduresGeneric_Sample_INFOSECPolicy_and_Procedures
Generic_Sample_INFOSECPolicy_and_Procedures
 
Gtag 1 information risk and control
Gtag 1 information risk and controlGtag 1 information risk and control
Gtag 1 information risk and control
 
Risk - IT Services
Risk - IT ServicesRisk - IT Services
Risk - IT Services
 
20CS024 Ethics in Information Technology
20CS024 Ethics in Information Technology20CS024 Ethics in Information Technology
20CS024 Ethics in Information Technology
 
3.3 managing ict 3
3.3 managing ict 33.3 managing ict 3
3.3 managing ict 3
 
Aligning IT and Business for Better Results
Aligning IT and Business for Better ResultsAligning IT and Business for Better Results
Aligning IT and Business for Better Results
 
Root cause analysis questionnaire
Root cause analysis questionnaireRoot cause analysis questionnaire
Root cause analysis questionnaire
 
Data processing in Industrial systems Course Notes 1- 3 weeks
Data processing in Industrial systems Course Notes 1- 3 weeksData processing in Industrial systems Course Notes 1- 3 weeks
Data processing in Industrial systems Course Notes 1- 3 weeks
 
Air force institute final
Air force institute finalAir force institute final
Air force institute final
 
Risk assessment of information production using extended risk matrix approach
Risk assessment of information production using extended risk matrix approachRisk assessment of information production using extended risk matrix approach
Risk assessment of information production using extended risk matrix approach
 

Similar to Governing risk from decision-making learning algorithms (DMLAs)

ADS System (VAC).pptx
ADS System (VAC).pptxADS System (VAC).pptx
ADS System (VAC).pptxtechnospotxyz
 
Overcoming Hidden Risks in a Shared Security Model
Overcoming Hidden Risks in a Shared Security ModelOvercoming Hidden Risks in a Shared Security Model
Overcoming Hidden Risks in a Shared Security ModelOnRamp
 
Transformation of legacy landscape in the insurance world
Transformation of legacy landscape in the insurance worldTransformation of legacy landscape in the insurance world
Transformation of legacy landscape in the insurance worldNIIT Technologies
 
PACE-IT, Security+ 2.2: Integrating Data and Systems with 3rd Parties
PACE-IT, Security+ 2.2: Integrating Data and Systems with 3rd PartiesPACE-IT, Security+ 2.2: Integrating Data and Systems with 3rd Parties
PACE-IT, Security+ 2.2: Integrating Data and Systems with 3rd PartiesPace IT at Edmonds Community College
 
Managing-the-Risks-of-LLMs-in-FS-Industry-Roundtable-TruEra-QuantU.pdf
Managing-the-Risks-of-LLMs-in-FS-Industry-Roundtable-TruEra-QuantU.pdfManaging-the-Risks-of-LLMs-in-FS-Industry-Roundtable-TruEra-QuantU.pdf
Managing-the-Risks-of-LLMs-in-FS-Industry-Roundtable-TruEra-QuantU.pdfQuantUniversity
 
Univ. of IL Physicians Q Study
Univ. of IL Physicians Q StudyUniv. of IL Physicians Q Study
Univ. of IL Physicians Q StudyDavid Donohue
 
Challenges in adapting predictive analytics
Challenges  in  adapting  predictive  analyticsChallenges  in  adapting  predictive  analytics
Challenges in adapting predictive analyticsPrasad Narasimhan
 
Ethical Considerations in Data Analysis_ Balancing Power, Privacy, and Respon...
Ethical Considerations in Data Analysis_ Balancing Power, Privacy, and Respon...Ethical Considerations in Data Analysis_ Balancing Power, Privacy, and Respon...
Ethical Considerations in Data Analysis_ Balancing Power, Privacy, and Respon...Soumodeep Nanee Kundu
 
What regulation for Artificial Intelligence?
What regulation for Artificial Intelligence?What regulation for Artificial Intelligence?
What regulation for Artificial Intelligence?Nozha Boujemaa
 
Information Technology Risk Management
Information Technology Risk ManagementInformation Technology Risk Management
Information Technology Risk ManagementGlen Alleman
 
Chapter -6- Ethics and Professionalism of ET (2).pptx
Chapter -6- Ethics and Professionalism of ET (2).pptxChapter -6- Ethics and Professionalism of ET (2).pptx
Chapter -6- Ethics and Professionalism of ET (2).pptxbalewayalew
 
2014 Secure Mobility Survey Report
2014 Secure Mobility Survey Report2014 Secure Mobility Survey Report
2014 Secure Mobility Survey ReportDImension Data
 
GE_: eingetragene meine Kommentaeren zu pdf_files des EurOCouncil....
GE_: eingetragene meine Kommentaeren zu pdf_files des EurOCouncil....GE_: eingetragene meine Kommentaeren zu pdf_files des EurOCouncil....
GE_: eingetragene meine Kommentaeren zu pdf_files des EurOCouncil....giovanni Colombo
 
E_ find_in msi-aut(2018)06rev1
E_ find_in msi-aut(2018)06rev1E_ find_in msi-aut(2018)06rev1
E_ find_in msi-aut(2018)06rev1giovanni Colombo
 
Putting data science into perspective
Putting data science into perspectivePutting data science into perspective
Putting data science into perspectiveSravan Ankaraju
 
Enterprise Risk Management for the Digital Transformation Age
Enterprise Risk Management for the Digital Transformation AgeEnterprise Risk Management for the Digital Transformation Age
Enterprise Risk Management for the Digital Transformation AgeCareer Communications Group
 

Similar to Governing risk from decision-making learning algorithms (DMLAs) (20)

ADS System (VAC).pptx
ADS System (VAC).pptxADS System (VAC).pptx
ADS System (VAC).pptx
 
Overcoming Hidden Risks in a Shared Security Model
Overcoming Hidden Risks in a Shared Security ModelOvercoming Hidden Risks in a Shared Security Model
Overcoming Hidden Risks in a Shared Security Model
 
Transformation of legacy landscape in the insurance world
Transformation of legacy landscape in the insurance worldTransformation of legacy landscape in the insurance world
Transformation of legacy landscape in the insurance world
 
PACE-IT, Security+ 2.2: Integrating Data and Systems with 3rd Parties
PACE-IT, Security+ 2.2: Integrating Data and Systems with 3rd PartiesPACE-IT, Security+ 2.2: Integrating Data and Systems with 3rd Parties
PACE-IT, Security+ 2.2: Integrating Data and Systems with 3rd Parties
 
PACE-IT, Security+ 2.1: Risk Related Concepts (part 1)
PACE-IT, Security+ 2.1: Risk Related Concepts (part 1)PACE-IT, Security+ 2.1: Risk Related Concepts (part 1)
PACE-IT, Security+ 2.1: Risk Related Concepts (part 1)
 
Managing-the-Risks-of-LLMs-in-FS-Industry-Roundtable-TruEra-QuantU.pdf
Managing-the-Risks-of-LLMs-in-FS-Industry-Roundtable-TruEra-QuantU.pdfManaging-the-Risks-of-LLMs-in-FS-Industry-Roundtable-TruEra-QuantU.pdf
Managing-the-Risks-of-LLMs-in-FS-Industry-Roundtable-TruEra-QuantU.pdf
 
Univ. of IL Physicians Q Study
Univ. of IL Physicians Q StudyUniv. of IL Physicians Q Study
Univ. of IL Physicians Q Study
 
Challenges in adapting predictive analytics
Challenges  in  adapting  predictive  analyticsChallenges  in  adapting  predictive  analytics
Challenges in adapting predictive analytics
 
Ethical Considerations in Data Analysis_ Balancing Power, Privacy, and Respon...
Ethical Considerations in Data Analysis_ Balancing Power, Privacy, and Respon...Ethical Considerations in Data Analysis_ Balancing Power, Privacy, and Respon...
Ethical Considerations in Data Analysis_ Balancing Power, Privacy, and Respon...
 
What regulation for Artificial Intelligence?
What regulation for Artificial Intelligence?What regulation for Artificial Intelligence?
What regulation for Artificial Intelligence?
 
Information Technology Risk Management
Information Technology Risk ManagementInformation Technology Risk Management
Information Technology Risk Management
 
Chapter -6- Ethics and Professionalism of ET (2).pptx
Chapter -6- Ethics and Professionalism of ET (2).pptxChapter -6- Ethics and Professionalism of ET (2).pptx
Chapter -6- Ethics and Professionalism of ET (2).pptx
 
Managing Cyber and Five Other Technology Risks
Managing Cyber and Five Other Technology RisksManaging Cyber and Five Other Technology Risks
Managing Cyber and Five Other Technology Risks
 
2014 Secure Mobility Survey Report
2014 Secure Mobility Survey Report2014 Secure Mobility Survey Report
2014 Secure Mobility Survey Report
 
Article-V16
Article-V16Article-V16
Article-V16
 
GE_: eingetragene meine Kommentaeren zu pdf_files des EurOCouncil....
GE_: eingetragene meine Kommentaeren zu pdf_files des EurOCouncil....GE_: eingetragene meine Kommentaeren zu pdf_files des EurOCouncil....
GE_: eingetragene meine Kommentaeren zu pdf_files des EurOCouncil....
 
E_ find_in msi-aut(2018)06rev1
E_ find_in msi-aut(2018)06rev1E_ find_in msi-aut(2018)06rev1
E_ find_in msi-aut(2018)06rev1
 
Putting data science into perspective
Putting data science into perspectivePutting data science into perspective
Putting data science into perspective
 
Enterprise Risk Management for the Digital Transformation Age
Enterprise Risk Management for the Digital Transformation AgeEnterprise Risk Management for the Digital Transformation Age
Enterprise Risk Management for the Digital Transformation Age
 
To connect, or not to connect?
To connect, or not to connect?To connect, or not to connect?
To connect, or not to connect?
 

Recently uploaded

Scaling up coastal adaptation in Maldives through the NAP process
Scaling up coastal adaptation in Maldives through the NAP processScaling up coastal adaptation in Maldives through the NAP process
Scaling up coastal adaptation in Maldives through the NAP processNAP Global Network
 
The U.S. Budget and Economic Outlook (Presentation)
The U.S. Budget and Economic Outlook (Presentation)The U.S. Budget and Economic Outlook (Presentation)
The U.S. Budget and Economic Outlook (Presentation)Congressional Budget Office
 
VIP Model Call Girls Shikrapur ( Pune ) Call ON 8005736733 Starting From 5K t...
VIP Model Call Girls Shikrapur ( Pune ) Call ON 8005736733 Starting From 5K t...VIP Model Call Girls Shikrapur ( Pune ) Call ON 8005736733 Starting From 5K t...
VIP Model Call Girls Shikrapur ( Pune ) Call ON 8005736733 Starting From 5K t...SUHANI PANDEY
 
Nanded City ? Russian Call Girls Pune - 450+ Call Girl Cash Payment 800573673...
Nanded City ? Russian Call Girls Pune - 450+ Call Girl Cash Payment 800573673...Nanded City ? Russian Call Girls Pune - 450+ Call Girl Cash Payment 800573673...
Nanded City ? Russian Call Girls Pune - 450+ Call Girl Cash Payment 800573673...SUHANI PANDEY
 
Coastal Protection Measures in Hulhumale'
Coastal Protection Measures in Hulhumale'Coastal Protection Measures in Hulhumale'
Coastal Protection Measures in Hulhumale'NAP Global Network
 
Finance strategies for adaptation. Presentation for CANCC
Finance strategies for adaptation. Presentation for CANCCFinance strategies for adaptation. Presentation for CANCC
Finance strategies for adaptation. Presentation for CANCCNAP Global Network
 
Call On 6297143586 Viman Nagar Call Girls In All Pune 24/7 Provide Call With...
Call On 6297143586  Viman Nagar Call Girls In All Pune 24/7 Provide Call With...Call On 6297143586  Viman Nagar Call Girls In All Pune 24/7 Provide Call With...
Call On 6297143586 Viman Nagar Call Girls In All Pune 24/7 Provide Call With...tanu pandey
 
VIP Model Call Girls Baramati ( Pune ) Call ON 8005736733 Starting From 5K to...
VIP Model Call Girls Baramati ( Pune ) Call ON 8005736733 Starting From 5K to...VIP Model Call Girls Baramati ( Pune ) Call ON 8005736733 Starting From 5K to...
VIP Model Call Girls Baramati ( Pune ) Call ON 8005736733 Starting From 5K to...SUHANI PANDEY
 
celebrity 💋 Patna Escorts Just Dail 8250092165 service available anytime 24 hour
celebrity 💋 Patna Escorts Just Dail 8250092165 service available anytime 24 hourcelebrity 💋 Patna Escorts Just Dail 8250092165 service available anytime 24 hour
celebrity 💋 Patna Escorts Just Dail 8250092165 service available anytime 24 hourCall Girls in Nagpur High Profile
 
2024: The FAR, Federal Acquisition Regulations, Part 30
2024: The FAR, Federal Acquisition Regulations, Part 302024: The FAR, Federal Acquisition Regulations, Part 30
2024: The FAR, Federal Acquisition Regulations, Part 30JSchaus & Associates
 
2024: The FAR, Federal Acquisition Regulations, Part 31
2024: The FAR, Federal Acquisition Regulations, Part 312024: The FAR, Federal Acquisition Regulations, Part 31
2024: The FAR, Federal Acquisition Regulations, Part 31JSchaus & Associates
 
1935 CONSTITUTION REPORT IN RIPH FINALLS
1935 CONSTITUTION REPORT IN RIPH FINALLS1935 CONSTITUTION REPORT IN RIPH FINALLS
1935 CONSTITUTION REPORT IN RIPH FINALLSarandianics
 
The Economic and Organised Crime Office (EOCO) has been advised by the Office...
The Economic and Organised Crime Office (EOCO) has been advised by the Office...The Economic and Organised Crime Office (EOCO) has been advised by the Office...
The Economic and Organised Crime Office (EOCO) has been advised by the Office...nservice241
 
Call Girls Nanded City Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Nanded City Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Nanded City Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Nanded City Call Me 7737669865 Budget Friendly No Advance Bookingroncy bisnoi
 
Call Girls Chakan Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Chakan Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Chakan Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Chakan Call Me 7737669865 Budget Friendly No Advance Bookingroncy bisnoi
 
An Atoll Futures Research Institute? Presentation for CANCC
An Atoll Futures Research Institute? Presentation for CANCCAn Atoll Futures Research Institute? Presentation for CANCC
An Atoll Futures Research Institute? Presentation for CANCCNAP Global Network
 
Chakan ( Call Girls ) Pune 6297143586 Hot Model With Sexy Bhabi Ready For S...
Chakan ( Call Girls ) Pune  6297143586  Hot Model With Sexy Bhabi Ready For S...Chakan ( Call Girls ) Pune  6297143586  Hot Model With Sexy Bhabi Ready For S...
Chakan ( Call Girls ) Pune 6297143586 Hot Model With Sexy Bhabi Ready For S...tanu pandey
 
Hinjewadi * VIP Call Girls Pune | Whatsapp No 8005736733 VIP Escorts Service ...
Hinjewadi * VIP Call Girls Pune | Whatsapp No 8005736733 VIP Escorts Service ...Hinjewadi * VIP Call Girls Pune | Whatsapp No 8005736733 VIP Escorts Service ...
Hinjewadi * VIP Call Girls Pune | Whatsapp No 8005736733 VIP Escorts Service ...SUHANI PANDEY
 
Call On 6297143586 Yerwada Call Girls In All Pune 24/7 Provide Call With Bes...
Call On 6297143586  Yerwada Call Girls In All Pune 24/7 Provide Call With Bes...Call On 6297143586  Yerwada Call Girls In All Pune 24/7 Provide Call With Bes...
Call On 6297143586 Yerwada Call Girls In All Pune 24/7 Provide Call With Bes...tanu pandey
 
VIP Model Call Girls Kiwale ( Pune ) Call ON 8005736733 Starting From 5K to 2...
VIP Model Call Girls Kiwale ( Pune ) Call ON 8005736733 Starting From 5K to 2...VIP Model Call Girls Kiwale ( Pune ) Call ON 8005736733 Starting From 5K to 2...
VIP Model Call Girls Kiwale ( Pune ) Call ON 8005736733 Starting From 5K to 2...SUHANI PANDEY
 

Recently uploaded (20)

Scaling up coastal adaptation in Maldives through the NAP process
Scaling up coastal adaptation in Maldives through the NAP processScaling up coastal adaptation in Maldives through the NAP process
Scaling up coastal adaptation in Maldives through the NAP process
 
The U.S. Budget and Economic Outlook (Presentation)
The U.S. Budget and Economic Outlook (Presentation)The U.S. Budget and Economic Outlook (Presentation)
The U.S. Budget and Economic Outlook (Presentation)
 
VIP Model Call Girls Shikrapur ( Pune ) Call ON 8005736733 Starting From 5K t...
VIP Model Call Girls Shikrapur ( Pune ) Call ON 8005736733 Starting From 5K t...VIP Model Call Girls Shikrapur ( Pune ) Call ON 8005736733 Starting From 5K t...
VIP Model Call Girls Shikrapur ( Pune ) Call ON 8005736733 Starting From 5K t...
 
Nanded City ? Russian Call Girls Pune - 450+ Call Girl Cash Payment 800573673...
Nanded City ? Russian Call Girls Pune - 450+ Call Girl Cash Payment 800573673...Nanded City ? Russian Call Girls Pune - 450+ Call Girl Cash Payment 800573673...
Nanded City ? Russian Call Girls Pune - 450+ Call Girl Cash Payment 800573673...
 
Coastal Protection Measures in Hulhumale'
Coastal Protection Measures in Hulhumale'Coastal Protection Measures in Hulhumale'
Coastal Protection Measures in Hulhumale'
 
Finance strategies for adaptation. Presentation for CANCC
Finance strategies for adaptation. Presentation for CANCCFinance strategies for adaptation. Presentation for CANCC
Finance strategies for adaptation. Presentation for CANCC
 
Call On 6297143586 Viman Nagar Call Girls In All Pune 24/7 Provide Call With...
Call On 6297143586  Viman Nagar Call Girls In All Pune 24/7 Provide Call With...Call On 6297143586  Viman Nagar Call Girls In All Pune 24/7 Provide Call With...
Call On 6297143586 Viman Nagar Call Girls In All Pune 24/7 Provide Call With...
 
VIP Model Call Girls Baramati ( Pune ) Call ON 8005736733 Starting From 5K to...
VIP Model Call Girls Baramati ( Pune ) Call ON 8005736733 Starting From 5K to...VIP Model Call Girls Baramati ( Pune ) Call ON 8005736733 Starting From 5K to...
VIP Model Call Girls Baramati ( Pune ) Call ON 8005736733 Starting From 5K to...
 
celebrity 💋 Patna Escorts Just Dail 8250092165 service available anytime 24 hour
celebrity 💋 Patna Escorts Just Dail 8250092165 service available anytime 24 hourcelebrity 💋 Patna Escorts Just Dail 8250092165 service available anytime 24 hour
celebrity 💋 Patna Escorts Just Dail 8250092165 service available anytime 24 hour
 
2024: The FAR, Federal Acquisition Regulations, Part 30
2024: The FAR, Federal Acquisition Regulations, Part 302024: The FAR, Federal Acquisition Regulations, Part 30
2024: The FAR, Federal Acquisition Regulations, Part 30
 
2024: The FAR, Federal Acquisition Regulations, Part 31
2024: The FAR, Federal Acquisition Regulations, Part 312024: The FAR, Federal Acquisition Regulations, Part 31
2024: The FAR, Federal Acquisition Regulations, Part 31
 
1935 CONSTITUTION REPORT IN RIPH FINALLS
1935 CONSTITUTION REPORT IN RIPH FINALLS1935 CONSTITUTION REPORT IN RIPH FINALLS
1935 CONSTITUTION REPORT IN RIPH FINALLS
 
The Economic and Organised Crime Office (EOCO) has been advised by the Office...
The Economic and Organised Crime Office (EOCO) has been advised by the Office...The Economic and Organised Crime Office (EOCO) has been advised by the Office...
The Economic and Organised Crime Office (EOCO) has been advised by the Office...
 
Call Girls Nanded City Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Nanded City Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Nanded City Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Nanded City Call Me 7737669865 Budget Friendly No Advance Booking
 
Call Girls Chakan Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Chakan Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Chakan Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Chakan Call Me 7737669865 Budget Friendly No Advance Booking
 
An Atoll Futures Research Institute? Presentation for CANCC
An Atoll Futures Research Institute? Presentation for CANCCAn Atoll Futures Research Institute? Presentation for CANCC
An Atoll Futures Research Institute? Presentation for CANCC
 
Chakan ( Call Girls ) Pune 6297143586 Hot Model With Sexy Bhabi Ready For S...
Chakan ( Call Girls ) Pune  6297143586  Hot Model With Sexy Bhabi Ready For S...Chakan ( Call Girls ) Pune  6297143586  Hot Model With Sexy Bhabi Ready For S...
Chakan ( Call Girls ) Pune 6297143586 Hot Model With Sexy Bhabi Ready For S...
 
Hinjewadi * VIP Call Girls Pune | Whatsapp No 8005736733 VIP Escorts Service ...
Hinjewadi * VIP Call Girls Pune | Whatsapp No 8005736733 VIP Escorts Service ...Hinjewadi * VIP Call Girls Pune | Whatsapp No 8005736733 VIP Escorts Service ...
Hinjewadi * VIP Call Girls Pune | Whatsapp No 8005736733 VIP Escorts Service ...
 
Call On 6297143586 Yerwada Call Girls In All Pune 24/7 Provide Call With Bes...
Call On 6297143586  Yerwada Call Girls In All Pune 24/7 Provide Call With Bes...Call On 6297143586  Yerwada Call Girls In All Pune 24/7 Provide Call With Bes...
Call On 6297143586 Yerwada Call Girls In All Pune 24/7 Provide Call With Bes...
 
VIP Model Call Girls Kiwale ( Pune ) Call ON 8005736733 Starting From 5K to 2...
VIP Model Call Girls Kiwale ( Pune ) Call ON 8005736733 Starting From 5K to 2...VIP Model Call Girls Kiwale ( Pune ) Call ON 8005736733 Starting From 5K to 2...
VIP Model Call Girls Kiwale ( Pune ) Call ON 8005736733 Starting From 5K to 2...
 

Governing risk from decision-making learning algorithms (DMLAs)

  • 1. No part of this document may be quoted or reproduced without prior written approval from IRGC November 2018 Outcome from an IRGC workshop, July 2018 Governing risk from decision-making learning algorithms (DMLAs) https://irgc.epfl.ch
  • 2. Algorithms can now learn, self-evolve and decide autonomously Decision-making learning algorithms (DMLAs) can be understood as information systems that use data and advanced computational techniques, including machine learning or deep learning (neural networks), to issue guidance or recommend a course of action for human actors and/or produce specific commands for automated systems • For the time being, there is limited implementation of DMLAs, besides a handful of industry innovators and dominant players (e.g. tech giants and certain governments). Most organisations are still exploring what is possible, to the extent that they are exploring the full potential that such algorithms learn, self-evolve and can make decisions autonomously. • The potential of DMLAs is recognized in key sectors – notably in healthcare and for automated driving. More broadly, this huge technological revolution can also involve a profound transformation of society and the economy. 2Governing Risks from Decision-Making Learning Algorithms
  • 3. Applications Societies are becoming increasingly dependent on digital technologies, including machine learning applied across a broad spectrum of areas such as: o Transportation (e.g. autonomous driving) o Health (e.g. diagnostics and prognostics, data-driven precision / genomic medicine) o Administration (e.g. predictive policing, criminal risk assessment) o Surveillance (e.g. citizen scoring schemes, counter-terrorism) o Insurance (e.g. insurance underwriting, claim processing, insurance fraud detection, etc.) o News and social media o Advertising 3Governing Risks from Decision-Making Learning Algorithms
  • 4. Evaluating risks and opportunities from DMLAs • Policymakers face a difficult balancing act between allowing and incentivising the meaningful uses of DMLAs from the adverse ones. Risk of wrong or unfair outcome, including possible discrimination, must be carefully evaluated in light of expected benefits in efficiency and accuracy. • The more automated or ‘independently’ deciding algorithms are, the more they need to be scrutinized. DMLAs remain particularly challenging to decision-making when the stakes are high, when human judgment matters to concerns such as privacy, non-discrimination and confidentiality, especially when there is a risk of irreversible damage. • Technical and governance issues are tightly interconnected. There are opportunities and risks at both levels. 4Governing Risks from Decision-Making Learning Algorithms
  • 5. 5Governing Risks from Decision-Making Learning Algorithms Examples Potential risk of relying on DMLAs Expected benefit of using DMLAs Insurance contracts Incorrect actuarial analysis misprices risk or introduces unfair discrimination in prices More efficient allocation of risk, e.g. through better actuarial analysis and fraud detection Medical diagnostics & prognostics Wrong medical diagnosis, prognostic or treatment decision Improving the capacity to diagnose, prevent or treat life-threatening diseases Automated driving Wrong assessment of a car environment (car-to-car and car-to- infrastructure) leading to an accident Benefits of autonomous (connected) guiding of vehicles, such as increased traffic efficiency and fewer accidents; Comfort and convenience Predictive policy - Criminal justice Incorrect prediction of recidivism, potential unfair discrimination Ability to enforce rules a priori by embedding them into code - Public services / social benefits Incorrect, potentially unfair discriminative distribution of social benefits Embedding into code rules for a loan or social benefit attribution - Face recognition (ID) Undue or illegal citizen surveillance Reducing eyewitness misidentification (a lead cause in wrongful convictions)
  • 6. DMLAs can bring many benefits to society • analysing large volumes and flows of data, from multiple sources, in ways not possible for humans Analytic prowess • generating outcomes more promptly and less costly than could be done by human processors Efficiency gains • drawing linkages, finding patterns and yielding outcomes across domainsScalability • processing information more consistently and systematically than humansConsistency • processing and learning with dynamic data and adapting to changing inputs or variables fastAdaptability • performing fastidious or time-consuming tasks so as to free up human time for other meaningful pursuitsConvenience 6Governing Risks from Decision-Making Learning Algorithms
  • 7. DMLAs can cause new risks or amplify existing risks (1) 7Governing Risks from Decision-Making Learning Algorithms • difficulty to identify or correct errors or inaccuracy due to the intrinsic biases in input data and lack of transparency on the provenance of decisions, and difficulty to test DMLAs Erroneous or inaccurate outcomes • DMLAs are embedded in software and we lack sufficient knowledge on how to produce software that is always correct Recurring problem of software correctness • tension between privacy protecting rights such as ‘the right to be forgotten’ and the need for more complete and unbiased datasets for DMLAs to live up to their potential Threats to data protection and privacy • notably through the reproduction of certain undue biases around race, gender, ethnicity, age, income, etc. Social discrimination and unfairness • some DMLAs resemble ‘black boxes’ such that decision- making is difficult to understand and/or explain and thus attribution of responsibility or liability may be difficult Loss of accountability
  • 8. DMLAs can cause new risks or amplify existing risks (2) 8Governing Risks from Decision-Making Learning Algorithms • DMLAs are increasingly deployed in domains (e.g. of medicine, criminal justice, etc.) where human judgment and oversight matter Loss of human oversight • DMLAs are deployed by powerful actors, be they governments, businesses or other non-state actors to survey citizens or unduly influence their behaviour Excessive surveillance and social control • such as for criminal purposes, interference with democratic politics, or in human rights breaches (e.g. as part of indiscriminate warfare) Manipulation or malignant use
  • 9. DMLAs 10 key themes 9Governing Risks from Decision-Making Learning Algorithms
  • 10. #1 - Technology and governance are tightly connected • The governance of DMLAs entails both technical and non-technical aspects, and the challenge is to relate them well. • An important part of governance by DMLAs will be to define desired policy, research and business goals in a way that allows machine learning and data scientists and developers to embed the appropriate governance rules, norms or regulations into the very functioning of the algorithms. • It is further valuable to include a mechanism of auditing and quality control, to check adherence to these rules or norms. 10Governing Risks from Decision-Making Learning Algorithms
  • 11. #2 - What is new: algorithms 'learn' and self-evolve • Amidst different types of algorithms used for machine learning, algorithms that learn and self-evolve warrant particular attention. • In deep learning (with e.g. neural networks) algorithms are no longer “programmed” but increasingly “learned” and adaptive, giving them an ability to perform tasks that were previously done by humans trained or entrusted for such purpose. 11Governing Risks from Decision-Making Learning Algorithms
  • 12. #3 – Evaluating risk, across domains and applications • The evaluation of distinct and shared risks requires careful assessment of oundue biases in input data omethodological inadequacies or shortcuts caused by low-quality input data or inappropriate learning environment owrong outcome, e.g. possibly resulting in social discrimination or unfairness oloss of accountability and of human oversight oinappropriate or illegal surveillance, and malignant manipulation 12Governing Risks from Decision-Making Learning Algorithms
  • 13. #4 – Governing risk, considering existing benchmarks and regulations • When DMLAs are deployed in specialised domains –like medicine, insurance, public administration– they do not develop in a contextual vacuum: there already are certain decision-making practices, analytical thresholds, prescriptive or historical norms in place, which matter for calibrating and evaluating the performance of DMLAs vis-à-vis alternatives. • Existing regulatory frameworks vary by domain, therefore specific applications of DMLAs require spelling out the relevant benchmarks against which their performance must be evaluated and calibrated. • An overarching question is how to evaluate decisions by DMLAs in contrast to decisions by humans, which are not error or bias-free. When the benchmarks are lacking, how to define them? 13Governing Risks from Decision-Making Learning Algorithms
  • 14. #5 - Accuracy of outcome: critically important, not yet granted • The accuracy (or correctness) of DMLAs’ outputs is what needs to be established, especially when the decision-making process is not transparent or is difficult to explain, and/or the outcomes are difficult to interpret or explain. • Greater attention is needed to assure more robust methodological practices (particularly as regards the quality and appropriate use of input data and learning context) and to probe the computational dynamics and learning at play. • Of particular concern is that we do not quite know how to test machine learned and adaptive algorithms. • DMLAs may have the potential to optimise fewer errors in aggregate, but those errors may be qualitatively worse when judged against those made for a specific individual, or in comparison to those expected from equivalent human decision-making. • It is thus increasingly necessary to decide, perhaps even regulate for, how we determine accuracy for DMLA systems. o Explainability of the outcome will probably be a key component of trustworthiness o DMLAs should be able to detect when the outcome is inaccurate. • Anyway, when individuals or organisations are affected by a decision which they believe is wrong or unfair, they should be given a right to receive an explanation, and a right to recourse. 14Governing Risks from Decision-Making Learning Algorithms
  • 15. #6 – Biases are a key challenge • Algorithmic bias – at the level of data inputs, learning context or outputs – can yield discriminatory treatment along sensitive or legally protected attributes of race, gender, ethnicity, age, income, etc. Algorithmic bias remains tricky to address, particularly when manifesting through proxies. • De-biasing techniques exist, but entail a trade-off: in order to evaluate whether undesired proxy measures creep into an algorithm, some very sensitive categories of information, such as race, gender, age, ethnicity, etc. may have to be included, to know if the biases are sufficiently minimised. • Obtaining more complete and unbiased data remains a pervasive and significant challenge. • Examples: errors in insurance contract proposal, unfairness or undue discrimination in predictive policing 15Governing Risks from Decision-Making Learning Algorithms
  • 16. #7 - Humans in control: under which circumstances and how? • A key question when evaluating whether we make the right choice in relying on decision- making learning algorithms for specific applications is to ask if humans are o in the loop (actively in control) o on the loop (i.e. in alert mode, able to take control if need be) or o off the loop (unable to take control back). • While ‘on’ the loop may strike as the most balanced approach in that it suggests an ideal level of control, it comes with some risk that humans may struggle to ‘jump in’ when handed control if lacking the relevant context, practice, attention and time for making a critical decision. • Examples: role of the driver in automated driving, role of human judgement in criminal justice, role of medical doctor in medical diagnostic 16Governing Risks from Decision-Making Learning Algorithms
  • 17. #8 - Need to: develop standards, principles and good practices • Standards, principles and good practices can help developers, industry, and other organisations embed best practices ‘by design’. • The IEEE Ethically Aligned Design Principles and standards (Global Initiative for Ethical Considerations in the Design of Autonomous Systems), the Asilomar principles for Responsible AI, or other initiatives by international organisations like the OECD, show that the development of governance arrangements for the programming, implementation and use of DMLAs is a shared concern. 17Governing Risks from Decision-Making Learning Algorithms
  • 18. #9 - Defining accountability, responsibility and liability is central • It becomes ever more important to determine what are the concrete ways to demand and deliver accountability, who assumes what responsibility in DMLA’s development and use, and who is liable in case of erroneous or wrongful applications. Better defining legal uses of DMLAs can also help different organisations determine whether to enable, accelerate or restrict their adoption. • Regulated sectors might be able to include the use of DMLAs in their existing regulatory frames. • Liability attribution is particularly challenging given the ‘many hands’ partaking in the design and deployment of DMLAs, and the lack of clarity about software liability. • In Europe, the EU Global Data Protection Regulation (GDPR) makes in theory some provision for the right not to be subject to automated decision-making and the right to an explanation (art. 22.1). But this provision may not apply in many cases (as defined in art. 22.2), which leaves much ambiguity about which aspects of the GDPR apply to DMLAs. There remains a general prohibition on making such decisions using personal data, but interpretation of the law and compliance with it may vary by countries and/or domains. 18Governing Risks from Decision-Making Learning Algorithms
  • 19. #10 - Engineering digital and social trust is a critical challenge and increasingly relevant • Digital trust can be ensured by techniques such as accountable computing, blockchains, smart contracts, software verification, cryptography, and trusted hardware technologies that can, for example, enable distributed or decentralised enforcement of accountability and transparency. Developing provable theorems that algorithms do what they are supposed to do are both possible and important: they help set certain critical ‘guard-rails’ or ensure against classes of ‘bad decision events’. • While possible to look for ways to mandate or improve digital trustworthiness, the challenge is also about trusting the broader ecosystem in and around DMLAs, for which a governance structure might help. Who benefits? Whom or what to trust? • Informational asymmetries –between data subjects, brokers, companies or various platforms where data is gathered– may affect public perception as to whether DMLAs are being put to good general use. • Especially when the stakes are high, some actors might try to ‘game it’ to their advantage, including for adversarial purposes. Thus, an important general consideration for international governance of DMLAs revolves around incentives for and vulnerability to abuse. 19Governing Risks from Decision-Making Learning Algorithms
Conclusion
• The development of DMLAs offers many opportunities, but also a series of technical and governance challenges around the accuracy, explainability and fairness of outcomes, and the transparency of methodology and process.
• While more sharing of higher-quality data is needed, data privacy must also be ensured. Social norms are changing and principles are being redefined.
• Incentivizing the production of accurate, fair and socially acceptable outcomes from DMLAs will be key to the development of the technology. This will require a dialogue between scientists and society, facilitated by trusted bodies.
• Assigning accountability and legal responsibility should ensure that DMLAs are developed for good.
20Governing Risks from Decision-Making Learning Algorithms
https://irgc.epfl.ch
• This presentation summarises some of the findings and recommendations presented in a report prepared by IRGC after a multi-stakeholder and multi-disciplinary workshop on governing decision-making algorithms, held on 9-10 July 2018 at the Swiss Re Institute in Zurich.
• Authorisation to reproduce is granted under the condition of full acknowledgement of IRGC as the source: EPFL IRGC (2018) The Governance of Decision-Making Algorithms. Lausanne: EPFL International Risk Governance Center
• Available from
o https://irgc.epfl.ch/issues/projects-cybersecurity/the-governance-of-decision-making-algorithms/
o https://infoscience.epfl.ch/record/261264?ln=fr&p=irgc
21Governing Risks from Decision-Making Learning Algorithms

Editor's Notes

1. In medicine, algorithmic decision-making can help with diagnostics and prognostics, especially around ‘omics’ (using genetic information for precision medicine) or with analysing medical reports. In transport, algorithmic decision-making helps automate driving and could, with due care for reliability, improve driving convenience and safety. In space research, it can help analyse publicly available satellite data and improve Earth observation. In administration, it can help automate repetitive and fatiguing tasks (e.g. processing claims, documents, etc.) as well as fight fraud, financial crime and identity theft.
2. In medicine, algorithms can become highly proficient at learning different relationships or patterns, but they often cannot explain how or why these exist (Greenwald & Oertel, 2017). For example, a machine learning model trained to predict the risk of death in pneumonia patients may also learn that patients with asthma have a better probability of survival, but it may struggle to explain this somewhat counterintuitive finding (e.g. that asthma was predictive of survival because those patients had been treated more aggressively) (Lipton, 2016).
In justice, algorithmic decision-making can be deployed in ways that discriminate against citizens on the basis of sensitive attributes such as race, gender, health or age. As ProPublica's investigation of the COMPAS software showed, software that predicts recidivism in criminal justice can perpetuate historically problematic racial biases (Angwin, 2016); see also the reactions to that investigation (Corbett-Davies, Pierson, Feller & Goel, 2016). Unfair and inaccurate outcomes can also arise if learning algorithms work with incomplete data or are never exposed to the outcomes for one large group (e.g. whether people who were denied bail would in fact have committed a crime). Algorithmic predictive policing can also unduly target a ‘type’ of person or community (e.g. of a particular ethnicity, race or immigration status) as more ‘crime-prone’, bypassing or ignoring important socio-historic dynamics and biases at play.
In public administration, algorithmic decision-making can link biometric and other personal data not only to the provision of concrete public services (e.g. healthcare, education, etc.) but also to scoring and ranking citizens in ways that control their behaviour. China's experiment in ‘social ranking’ raises serious questions about how governments mobilise the technology to improve the provision of public services while at the same time exercising social and political control (Hvistendahl, 2017).
In national security, small algorithmic errors can prove big. For example, in the case of SKYNET (a system proposed to the NSA to help identify terrorist couriers based on smartphone data), even a small false-alarm rate (0.008%) can matter greatly, affecting some 4,400 individuals when the algorithm is applied to a very large pool of data (circa 55 million records) (Zweig, Wenzelburger & Krafft, 2018).
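The scale effect in the SKYNET example is simple arithmetic, using the figures quoted above:

```python
# False-alarm rate of 0.008% applied to circa 55 million records
# (figures as quoted from Zweig, Wenzelburger & Krafft, 2018).
false_alarm_rate = 0.008 / 100        # 0.008% expressed as a fraction
records = 55_000_000
print(false_alarm_rate * records)     # -> 4400.0 individuals wrongly flagged
```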
4. Technology and governance are tightly connected. What is new is that algorithms can ‘learn’ and self-evolve. Risk evaluation and governance must be done for each domain and application (e.g. healthcare, automated driving, predictive policing, insurance, etc.). Governance of DMLAs must consider existing regulations and key benchmarks against which DMLAs' performance must be calibrated. It is critically important to improve the accuracy of outcomes compared to human decisions. The problem of algorithmic bias is a key challenge, in particular when it leads to outcomes with unfair social consequences. Under some circumstances humans should remain in control; it is thus important to differentiate if and when humans are or must be in control, and when they are unable to take control back. There is a need to develop standards, principles and governance rules, and to embed them into the very design and functioning of DMLAs. Defining accountability, responsibility and liability remains central. Engineering digital trust and developing social trustworthiness is a critical challenge and increasingly relevant.
5. >> In medicine: is, for example, the relevant benchmark or ‘gold standard’ in clinical research and diagnostics the ‘truly positive case’? If so, how is this possible for DMLA-enabled diagnostics? Should the latter's performance equal or surpass the current standard in order to be authorised?
>> In transport: is the ‘fail-safe’ option the standard that matters most, and if so, can we mandate it for automated vehicles?
>> In aviation: should DMLAs be as reliable and trustworthy as current avionics systems?
>> In criminal justice: is ‘due process’ critical for practising modern criminal justice, and if so, can algorithmic decision-making embed it when attributing criminality or determining bail conditions?
>> In predictive policy-making: is ‘neutrality’ with respect to gender, race, ethnicity, etc. the relevant norm for non-discrimination in society, and if so, can and should we de-bias DMLAs by using these explicit attributes?
6. In the loop: humans are fully in control, in the sense that, at some stage in the decision-making process, the algorithm stops and hands the decision over to a human, who then instructs the algorithm to proceed in a certain way.
On the loop: humans can take control if need be, in the sense that the algorithm informs a human supervisor who can intervene to modify the decision or to take control (get back in the loop).
Off the loop: humans cannot take control, either because there is no supervision by design (as in hypothetical fully automated self-driving cars) or because decisions are irreversible (launching nuclear missiles).
In practice, many ‘on the loop’ systems put the human ‘off the loop’ through the sheer volume of decisions and data produced by the algorithm, making human supervision all but impossible (for example in high-speed trading, where an algorithm can order complex trades worth millions in a fraction of a second). Thus, while being ‘on’ the loop may seem like the most balanced, if not ideal, ‘default’ option, it comes with some risk that humans will struggle to ‘jump in’ when handed control if they lack the relevant context, practice, attention and time to make a critical decision.
The discussion around being in/on/off the loop is more metaphorical than a precise typology (see Box 5 for levels of automation in driving), but it is a useful general heuristic to gauge i) when, how and with what latitude we can reasonably allow algorithms to take on decision-making attributes, and ii) correspondingly, who assumes responsibility (moral, but especially legal) in case of decisions that lead to harm.
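The ‘sheer volume’ point can be made with back-of-the-envelope arithmetic. The decision rate and review time below are hypothetical numbers chosen purely for illustration, not figures from the report:

```python
# Back-of-the-envelope illustration (hypothetical numbers) of why 'on the
# loop' can collapse into 'off the loop': human review capacity vs. the
# algorithm's decision rate.
decisions_per_second = 1_000   # e.g. a high-frequency trading engine
seconds_per_human_review = 5   # time a supervisor needs per decision
review_capacity = 1 / seconds_per_human_review  # decisions a human can vet per second

fraction_reviewable = review_capacity / decisions_per_second
print(f"A supervisor can meaningfully review {fraction_reviewable:.4%} of decisions")
# -> 0.0200%: supervision in name only; the human is effectively off the loop.
```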
7. Possible or emerging tools and techniques to embed trustworthiness in DMLAs.
It is worthwhile recalling certain technical developments pertinent to the governance of algorithms. Of particular interest are accountable computing, blockchains, smart contracts, software verification, cryptography, and more trusted hardware technologies that can, for example, enable distributed or decentralised enforcement of accountability and transparency. Proving that algorithms do what they are supposed to do is both possible and important: such guarantees help set certain critical ‘guard-rails’ and insure against classes of ‘bad decision events’.
Open-source processes – at the level of software and/or datasets – are important, but no panacea: open-source datasets require major privacy caveats, and open-source software could be gamed, especially when the stakes or incentives are high. These reservations notwithstanding, open-source software remains important for developing trust in the underlying technology. It is more difficult to know whether and when closed software or unverifiable handling of data can be trusted.
While anonymisation, differential privacy, and privacy-preserving aggregation and analysis technologies can help produce datasets that offer more transparency even when they contain sensitive data, the development of formal verification technologies for algorithms, stress tests and/or independent audits of software is crucial for assuring confidence in engineering best practices. The conversation around the governance of DMLAs sometimes bifurcates on whether to regulate ‘data’ or ‘code’, but it is also possible to develop data-analysis platforms that manage datasets together with code in a reproducible, accountable, privacy- and access-controlled fashion.
At a time when the technologies underlying trustworthy DMLAs remain cutting-edge, and when the skilled workforce to develop them is extremely scarce, access to them may remain limited, for now, to well-resourced companies or organisations. This may enable these organisations to shape decision-making across the board without proper governance. Even so, technical solutions alone will not solve the problems of algorithmic bias, privacy vs. transparency, reliability/accuracy, accountability, etc. They are, however, fundamental tools to aid the governance of DMLAs, if they can be used in pursuit of specific and well-defined goals. An important part of governance by technology will thus be to define desired policy goals and general good principles in a form that can be applied to specific technologies.
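As one concrete instance of the privacy-preserving analysis techniques mentioned above, here is a minimal sketch of the Laplace mechanism, a standard way to realise differential privacy for a numeric query. The epsilon value, the patient data and the `private_count` helper are illustrative assumptions, not anything taken from the IRGC report:

```python
# Minimal sketch of the Laplace mechanism for differential privacy
# (standard technique; epsilon and the query below are illustrative
# assumptions, not figures from the IRGC report).
import numpy as np

def private_count(data: list, predicate, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices.
    """
    true_count = sum(1 for row in data if predicate(row))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical usage: how many patients in a dataset have a given
# diagnosis, released without exposing any individual record.
patients = [{"diagnosis": "asthma"}, {"diagnosis": "pneumonia"}] * 50
print(private_count(patients, lambda p: p["diagnosis"] == "asthma", epsilon=0.5))
```

Smaller epsilon means more noise and stronger privacy; the governance question is who sets that trade-off, which is precisely the kind of policy goal the note above argues must be defined before technology can enforce it.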