The management of AI systems is a shared responsibility. By implementing the ISO 31000 framework and complying with emerging regulations such as the EU AI Act, we can jointly create a more reliable, secure, and trustworthy AI ecosystem.
Among other topics, the webinar covers:
• Understanding AI and the regulatory landscape
• AI and the threat landscape
• A risk-driven approach to AI assurance, based on ISO 31000 principles
• Stress testing to evaluate risk exposure
Presenters:
Chris Jefferson
Chris is the Co-Founder and CTO at Advai, where he works on applying defensive techniques to help protect AI and machine learning applications from being exploited. This involves work in DevOps and MLOps to create robust and consistent products that support multiple platforms, such as cloud, local, and edge.
Nick Frost
Nick Frost is Co-founder and Lead Consultant at CRMG. Nick’s career in cyber security spans nearly 20 years. Most recently, Nick has held leadership roles at PwC as Group Head of Information Risk and at the Information Security Forum (ISF) as Principal Consultant. As Group Head of Information Risk at PwC, he designed and implemented best-practice solutions that made good business sense, prioritized key risks to the organisation, and helped minimize disruption to ongoing operations. Whilst at the ISF, Nick led their information risk projects and delivered many consultancy engagements that helped organisations implement leading thinking in information risk management.
Nick’s combined experience as a cyber risk researcher and practitioner, designing and implementing risk-based solutions, places him as a leading cyber risk expert. Prior to cyber security, and after graduating from UCNW and Oxford Brookes, Nick was a geophysicist in the oil and gas industry.
Date: August 24, 2023
-------------------------------------------------------------------------------
Find out more about ISO training and certification services
Training: https://pecb.com/en/education-and-certification-for-individuals/iso-31000
Webinars: https://pecb.com/webinars
Article: https://pecb.com/article
Whitepaper: https://pecb.com/whitepaper
-------------------------------------------------------------------------------
For more information about PECB:
Website: https://pecb.com/
LinkedIn: https://www.linkedin.com/company/pecb/
Facebook: https://www.facebook.com/PECBInternational/
Slideshare: http://www.slideshare.net/PECBCERTIFICATION
YouTube video: https://youtu.be/MXnHC6AvjXc
Managing the ISO 31000 Framework in AI Systems - The EU AI Act and other regulations
2. PECB Next events
1. Don’t forget to purchase your ticket for PECB’s conference: https://bit.ly/3Sq4nTO
▪ 4-5 October – In-person
2. Don’t miss the launch of the Chief Information Security Officer and NIS Directive 2.0 training courses, which will be held online, as well as in-person at the PECB Insights Conference 2023 in Paris, France!
▪ 18-19 September – Online
▪ 2-3 October – In-person
Purchase your ticket here: https://bit.ly/3JouNDd
3. Cyber security – right first time. Assuring artificial intelligence.
Nick Frost – Co-Founder at Cyber Risk Management Group
Chris Jefferson – Co-Founder & CTO at Advai
linkedin.com/in/nickfrost | linkedin.com/in/chris-jefferson-3b43291a
5. Agenda
Nick Frost – Co-Founder at Cyber Risk Management Group
Chris Jefferson – Co-Founder & CTO at Advai
Section one: UNDERSTANDING AI AND THE REGULATORY LANDSCAPE
Section two: AI AND THE THREAT LANDSCAPE
Section three: A RISK-DRIVEN APPROACH TO AI ASSURANCE – BASED ON ISO 31000 PRINCIPLES
Section four: STRESS TESTING TO EVALUATE RISK EXPOSURE
Section five: NEXT STEPS
6. Poll #1
To what extent is your organisation adopting AI today?
• Our organisation is not discussing AI at all.
• Our organisation has probably adopted AI without knowing.
• Our organisation has knowingly adopted AI but has not assessed the risks.
• Our organisation has knowingly adopted AI and has assessed the risks.
8. An AI system is a machine-based system capable of influencing the environment by producing an output for a given set of objectives.
9. AI Systems and Models
AI system outputs can be recommendations, predictions, or decisions, and are designed to operate with varying levels of autonomy.
An AI system uses machine- and/or human-based inputs/data to:
1. perceive environments;
2. abstract these perceptions into models; and
3. use the models to formulate options for outcomes.
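The three-step loop above can be sketched as a toy system (a minimal illustration only; every class and method name here is ours, not from any standard or library):

```python
# Minimal sketch of the perceive -> abstract -> formulate loop described above.
# All names are illustrative, not from any standard or library.

class SimpleAISystem:
    def __init__(self):
        self.model = {}  # abstraction built up from observations

    def perceive(self, environment: dict) -> dict:
        """Step 1: collect inputs/observations from the environment."""
        return {k: v for k, v in environment.items() if v is not None}

    def abstract(self, observations: dict) -> None:
        """Step 2: fold observations into an internal model (running values)."""
        for key, value in observations.items():
            self.model.setdefault(key, []).append(value)

    def formulate_options(self) -> list:
        """Step 3: use the model to produce candidate outputs
        (recommendations, predictions, or decisions)."""
        return [f"act on {k} (avg={sum(v) / len(v):.1f})"
                for k, v in self.model.items()]

system = SimpleAISystem()
obs = system.perceive({"temperature": 21.0, "humidity": None, "load": 0.8})
system.abstract(obs)
options = system.formulate_options()
```

The point of the sketch is the shape of the loop, not the logic: a real system would replace the running averages with a trained model and the string options with actual recommendations, predictions, or decisions.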
10. AI introduces risks that current frameworks are ill-equipped to manage.
A new age of threats and bad outcomes
Data poisoning
Overreliance
Evasion Attacks
Data Leakage
Unpredictability
Discrimination
Data protection
11. Organisational Readiness combats these
new threats
#principles
Use Risk Management to identify
Value and aid decision making.
CLAUSE 3. PRINCIPLES
#framework
Incorporate ISO 31000 principles
into the AI Use case definitions.
CLAUSE 4. FRAMEWORK
#process
Ethics, data quality,
representativeness, understanding
context.
CLAUSE 5. PROCESS
Organisational Readiness supports the incorporation
of ISO 31000 to use Risk Management to identify
Risk and Value from AI projects.
12. Useful terms
Here is a quick cheat sheet:
MLOps & Lifecycle – Machine Learning Operations is the lifecycle for building, optimizing, deploying, and retraining AI models.
Generative AI – A model that produces “new” content by predicting the next likely word, pixel, etc. Examples: GPT-4, LLaMA, DALL-E.
Foundation Models – A subset of AI models that have been trained on vast quantities of data, to enable “generalisable” models. Usually based on deep learning and neural networks.
Inference AI – An AI model used to classify, predict, or infer meaning based on identifying patterns in the data it has been trained on. Examples: BERT, YOLO.
13. Incoming Global Regulation
What regulations can we expect? Where are these regulations? When do they become law?
• Australia – AI Action Plan – 2024/25+
• EU – EU AI Act – 2024+
• UK – UK AI Whitepaper – 2024/25+
• US – AI Bill of Rights – 2024/25+
• Japan – Agile AI Governance – 2023/24+
• China – AI Regulations (Interim AI Regulation) – 2023/24+
14. Incoming Global Regulation
What are the common principles across these regulations?
01 Transparency & explainability
02 Fairness & non-discrimination
03 Security & robustness
04 Safety & accountability
05 Data governance & privacy
06 Human-centrism
07 Societal and environmental well-being
15. Case study: When businesses fail to align with regulatory standards
ChatGPT was initially blocked in Italy and came under investigation across Europe. AI systems that fail to be aligned are halted in their tracks and pose significant risk. The allegations included:
01 Allowing ChatGPT to provide inaccurate or misleading information.
02 Failing to notify users of its data collection practices.
03 Failing to meet any legal justification for processing personal data.
04 Failing to adequately prevent children under 13 years old from using the service.
It forced OpenAI to change the product (improving security and privacy) to win back user trust in the system. These improvements mean there is a lower risk of regulatory and public backlash in the future or in other locations.
16. Perceived ‘harms’ – EU AI Act
Security threats • Bias and discrimination • Loss of privacy • Safety concerns • Transparency and explainability
17. With such an increase in global regulatory efforts, what can we expect ahead?
01 Focus on registering and regulating “high-risk” AI systems.
02 A focus on discrimination, bias, and black-box systems.
03 Extended data regulations; legal challenges over data, copyright, and privacy.
04 Principles to standards.
18. The Emergence of AI Standards and Hubs
The Machine Learning field
is moving from Principles
to Standards.
Emerging Standards: ISO
ISO/IEC TR 24027:2021
Information technology — Artificial intelligence
(AI) — Bias in AI systems and AI aided decision
making
ISO/IEC 25059:2023
Software engineering — Systems and software
Quality Requirements and Evaluation (SQuaRE) —
Quality model for AI systems
ISO/IEC FDIS 5338 (Under Development)
Information technology — Artificial intelligence —
AI system life cycle processes
19. Advai’s Alignment framework for developing AI that meets these standards:
• Capture the relevant risk and connect relevant stakeholders throughout the AI lifecycle.
• Identify KPIs and metrics for audit and approval.
• Enables successful deployment.
21. The sophistication of cyber attacks has grown in step with technological evolution, culminating in the incredible complexity we see in AI.
22. Examples of intentional AI attacks
According to the Federal Trade Commission, impostor scams accounted for the second-highest reported losses in 2022, amounting to US$2.6 billion.
• UAE-based bank duped; a branch manager was tricked into transferring $35M.
• Microsoft’s Tay chatbot, trained via reinforcement learning, was poisoned to respond as a racist.
• Tesla stop-sign attack makes the AI think it is a 30 mph sign.
• Clearview AI: access misconfiguration led to training data theft.
• McAfee Advanced Threat Research: data poisoning.
23. Examples of unintended AI failures
Even when not attacked, AI is bound by its own bias and the assumptions of its engineers, and struggles with the complexity of the real world.
• Boeing automated flight control system released prematurely, leading to fatal crashes.
• Tesla cars crash due to the Autopilot feature not identifying obstacles correctly.
• False facial recognition match leads to a Black man’s arrest by Detroit Police.
• AI stock trading software caused a trillion-dollar flash crash.
• Amazon’s AI recruiting tool showed bias against women.
24. A cyber threat taxonomy
1. Unintentional misconfiguration of software or hardware
2. Unintentional corruption of business information
3. Accidental physical damage
4. Unforeseen effects of change
5. User error
6. Cross-site scripting
7. Modifying privileged access
8. Brute force / dictionary attacks
9. Denial of service attack
10. Malicious corruption of business information
11. Malicious interference with communications
12. Flood
13. Fire
14. Failure of HVAC system
15. Pandemic
16. Theft of software code
17. Theft of personally identifiable information (PII)
18. Theft of sensitive business information (incl. IP)
19. Theft of hardware
20. Data leakage
21. Unauthorised access to systems or networks
22. Unauthorised modification of information
23. Misuse of corporate systems
24. Phishing
25. Vishing
26. Spear phishing
27. Social engineering
28. Communications eavesdropping
29. Session hijacking
30. Shoulder surfing
31. Malfunction of software
32. Malfunction of hardware
33. Malicious physical damage
34. Unauthorised network scanning
35. Unintentional surge in network traffic
36. Power disruption
37. Malicious code
38. Malware
39. Botnet
40. Ransomware
41. Changes to risk governance
42. Out of date strategy for risk management
43. Lack of or poorly executed decisions related to risk
44. Rapid changes in regulatory landscape
45. Lack of risk management skills
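A taxonomy like this is easiest to work with as data, so entries can be filtered during an assessment. A minimal sketch, using a hypothetical subset of the entries above; the category labels are our illustrative convention, not part of the taxonomy:

```python
# Sketch: a slice of the threat taxonomy held as data so entries can be
# filtered programmatically. IDs and names follow the taxonomy above;
# the "accidental"/"malicious"/"governance" labels are illustrative.

THREATS = {
    1: ("Unintentional misconfiguration of software or hardware", "accidental"),
    9: ("Denial of service attack", "malicious"),
    17: ("Theft of personally identifiable information (PII)", "malicious"),
    20: ("Data leakage", "malicious"),
    31: ("Malfunction of software", "accidental"),
    42: ("Out of date strategy for risk management", "governance"),
}

def by_category(category: str) -> list:
    """Return the taxonomy IDs matching one category, sorted."""
    return sorted(tid for tid, (_, cat) in THREATS.items() if cat == category)

malicious = by_category("malicious")
```

Tagging each entry this way supports the point made later in the deck: many of these threats apply to AI systems as-is, and a structured library makes it easy to see which ones need AI-specific treatment.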
25. Examples of specific AI threats
Examples of cyber security threats to AI systems:
• Adversarial data poisoning
• Unauthorized access to AI/ML model source code
• Input data manipulation
• Introduction of selection bias
• Overloading machine learning models
• Theft of personally identifiable information (PII)
• Theft of sensitive business information
• Data scarcity
• Label manipulation and inaccuracy
26. Prioritising critical controls for AI systems
Control library scored against threats specific to AI systems (4 = vital control, an effective control against the AI threat):

Control | Threat x | AI voice cloning | Threat y
Threat intel. | 2 | 2 | 3
Multi-factor authentication | 1 | 4 | 0
Anti-malware | 2 | 4 | 1
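The matrix above can be held as a small data structure so the vital controls for any threat fall out programmatically. A minimal sketch: the scores mirror the slide, while the `VITAL` threshold and function name are our assumed convention:

```python
# Sketch of the threat-vs-control matrix above. Each control is scored
# per threat (0 = no effect, 4 = vital control); values are from the slide.

SCORES = {
    "Threat intel.":               {"Threat x": 2, "AI voice cloning": 2, "Threat y": 3},
    "Multi-factor authentication": {"Threat x": 1, "AI voice cloning": 4, "Threat y": 0},
    "Anti-malware":                {"Threat x": 2, "AI voice cloning": 4, "Threat y": 1},
}

VITAL = 4  # assumed convention: a score of 4 marks a vital control

def vital_controls(threat: str) -> list:
    """Controls that must be in place (and later stress-tested) for a threat."""
    return [c for c, row in SCORES.items() if row.get(threat, 0) >= VITAL]

voice_cloning_controls = vital_controls("AI voice cloning")
```

This mirrors the workflow described in the speaker notes: identify the vital controls per threat first, because those are the controls that must then undergo stress testing.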
27. A RISK-DRIVEN APPROACH TO AI ASSURANCE – BASED ON ISO 31000 PRINCIPLES
Section three
28. Utilising an existing enterprise risk architecture
• Board / Exec Committee: receives the aggregated AI risk profile; AI risk orientation for boards / exec committees.
• Aggregation of AI risk information.
• AI risk architecture / risk management approach implementation, supported by an AI assurance framework.
• Per-system risk profiles (AI 1, AI 2, AI 3): AI risk assessment, including stress testing, for specific instances of AI.
29. Components for a risk architecture for AI
Core programme components (cyber risk management):
• Governance: cyber risk function and structure; training and education for cyber risk; RACI model for cyber risk management; cyber risk appetite.
• Assessment: determining asset criticality; profiling cyber threats; evaluating control strength; determining cyber risk; responding to cyber risk.
• Reporting: cyber risk reporting; cyber risk and ERM; cyber risk remediation plan; remediation approval via risk committees.
• Monitoring: cyber risk monitoring process; cyber risk related triggers; updates to cyber risk assessment; triggers to the cyber risk landscape.
Data sets and supporting components:
• A glossary of key terms; asset criticality table; cyber risk appetite template; mapping of cyber risks to other risk disciplines; risk reporting template; cyber threat templates (for AI systems); standard for establishing a cyber risk appetite; communications plan for cyber risk reporting; education packs to understand cyber risk; risk management software platform; standard for integrating cyber risk with core risk processes; risk exception process.
• CMMI Capability Maturity Model for cyber risk management covering Governance, Assessment, Reporting, and Monitoring.
30. Risk assessment for AI
Follows the ISO 31000 risk process. Whilst the fundamentals of risk assessment won't change, there are specific aspects of the assessment process and data sets that will.
31. Communicating business criticality
A high-risk system (as described in the EU AI Act) may mean something different for your organisation.
Business objective: Reduce cost for customer call centres
Business process: Online ordering for customers
Applications that support the business process: online portal, AI, ERP system
Step 1. Order made by customer online.
Risks 1: business email compromise; compromise of customer accounts.
Step 2. Receipt of products confirmed by supplier; entered into the ERP system.
Risks 2: data entry error; theft of data; loss of power.
Step 3. Access provided to third parties that provide delivery services.
Risks 3: ransomware; unauthorised deletion; theft of data.
32. Measuring business criticality
Articulate potential impact in business terms:

Impact area | LOW | MODERATE | HIGH | VERY HIGH
FINANCIAL | <£100,000 | £100,001 – £500,000 | £500,001 – £1.5 million | >£1.5 million
REPUTATIONAL | No or low media coverage | Moderate adverse coverage (e.g. story runs over 1-2 days) | Significant adverse coverage (>2 days, focus of attention) | Adverse coverage sustained over more than 1 week
REGULATORY* | No increased regulatory focus | Slight increase in regulatory focus / impact | Significant attention from regulator / notified single breach | Multiple breaches / licence withdrawn
HEALTH / SAFETY | Very minor injury / no ongoing effect | Non-critical injury requiring medical intervention / no prolonged effect | Critical injury requiring hospitalisation / medium-term effect | Death / long-term debilitation

*Would apply to current regulations, and future concern.
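As an illustration, the FINANCIAL row of the table above can be turned into a simple banding function (thresholds in GBP, taken from the table; the function and constant names are ours):

```python
# Sketch: map a projected financial impact onto the criticality bands
# in the table above. Thresholds (GBP) are from the slide; names are ours.

BANDS = [
    (100_000, "LOW"),        # < £100,000
    (500_000, "MODERATE"),   # £100,001 - £500,000
    (1_500_000, "HIGH"),     # £500,001 - £1.5 million
]

def financial_criticality(impact_gbp: float) -> str:
    """Return the criticality band for a projected financial impact."""
    for upper, label in BANDS:
        if impact_gbp <= upper:
            return label
    return "VERY HIGH"       # > £1.5 million

band = financial_criticality(750_000)
```

A fuller implementation would score the reputational, regulatory, and health/safety rows the same way and take the worst band across all four as the system's overall criticality.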
34. AI Stress Testing Helps You Understand Your Exposure To Risk
Adversarial AI as an approach to robustness. Only by understanding where AI models fail can you truly understand your exposure to risk.
Normally, AI researchers try to maximise accuracy, focusing on where models work. At Advai, we research where models go wrong: we purposely look for failure in AI systems.
35. Measure Performance
Uses held-back training data:
• In-sample data
• Tests basic expectations
• Used for finding the most accurate model
This is a standard test performed to benchmark a model or multiple models, usually as part of the training process.
36. Measure Robustness
Uses adversarial data:
• Data with tuned perturbations
• Exploits model optimisations
• Finds the fastest path to failure
This can be a general or targeted test that stresses the model's input data to identify vulnerabilities and weak classes.
37. Measure Reliability
Uses engineered and high-volume inputs:
• Tests timeliness of response
• Tests structure of response
• Tests range of responses
This is a wider test of the “AI system” that looks to identify the reliability of model responses within a pipeline, e.g. with guardrails.
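The three measurements above (performance, robustness, reliability) can be sketched against a stand-in model. Everything here is illustrative: the toy classifier, the random-noise perturbation (a stand-in for the tuned adversarial perturbations the slide describes), and the checks would all be replaced by your real system and test harness:

```python
import random
import statistics
import time

random.seed(0)  # reproducible sketch

def model(x: float) -> int:
    """Stand-in classifier: positive inputs -> 1, otherwise 0."""
    return 1 if x > 0 else 0

held_back = [(-2.0, 0), (-0.5, 0), (0.5, 1), (2.0, 1)]

# 1. Performance: accuracy on held-back, in-sample data.
accuracy = sum(model(x) == y for x, y in held_back) / len(held_back)

# 2. Robustness: accuracy under perturbed inputs. Random noise here;
# an adversarial test would tune perturbations toward failure instead.
def perturbed_accuracy(noise: float) -> float:
    hits = [model(x + random.uniform(-noise, noise)) == y for x, y in held_back]
    return sum(hits) / len(hits)

robust_acc = perturbed_accuracy(noise=1.0)

# 3. Reliability: timeliness and structure of responses under volume.
latencies = []
for _ in range(1000):
    t0 = time.perf_counter()
    out = model(random.uniform(-3, 3))
    latencies.append(time.perf_counter() - t0)
    assert out in (0, 1)  # response stays within the expected range

p50_latency = statistics.median(latencies)
```

The design point is that the three tests answer different questions: performance asks "does it work on data like the training data", robustness asks "how easily can it be pushed into failure", and reliability asks "does the whole system respond correctly and on time under load".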
38. Summary of key points
1. Regulation is imminent and will impact where and how AI can be used legally.
2. Pay attention to new threats, but focus on all components of risk management.
3. AI exposes businesses to new risks, which need to be benchmarked and mitigated.
39. Poll #2
What is the main concern with AI in the workplace?
• Lack of understanding amongst decision makers, so unable to make a risk-based decision.
• Uncontrolled adoption of AI – from the testing lab to running critical processes.
• AI-enabled cyber-attacks – self-learning AI attacks that overwhelm defences.
• Stringent regulation that clashes with innovation – resulting in potential loss of competition.
41. How do we proceed?
1. Understand how AI is (or will be) used today.
2. Determine the extent to which AI risks are understood.
3. Leverage existing risk architecture to incorporate AI.
4. Undertake risk assessments and clearly communicate the possible impact.
5. Continue to monitor AI risks and stress test key controls.
42. Any questions?
Cyber security – right first time. Assuring artificial intelligence.
Nick Frost – Co-Founder at Cyber Risk Management Group
Chris Jefferson – Co-Founder & CTO at Advai
linkedin.com/in/nickfrost | linkedin.com/in/chris-jefferson-3b43291a
2023 has felt like the year in which AI has really been launched. Whilst AI has been around for a number of years and we're familiar with AI in the form of Alexa and Siri, this year has seen a significant amount of development – and visibility – from the likes of OpenAI and ChatGPT, governments announcing new regulations and frameworks, and, let's be honest, every vendor now saying their product is AI-enabled. But we are seeing a technology shakeup. From a business's point of view, AI will be seen as an enabler (cost savings, increasing market penetration), and the risks of rapid adoption of AI must be managed, which is what we're going to cover.
OK, so we have a lot to get through today.
We're going to cover the really important features and facts about AI, including the regulatory landscape.
Then we're going to look at the AI threat landscape: what's different here and what we're seeing today in terms of AI attacks – keep in mind that not all AI attacks are intentional.
Then we will look at how a risk-based approach – think ISO 31000 – can help drive AI assurance.
And then we will cover something that I would say is unique to AI, and that is stress testing as part of risk evaluation.
Ok first up is a quick poll
Understanding your organisation’s AI Readiness is a key aspect to AI Adoption.
Technical Readiness
Organisational Readiness
So, our aim for this talk is as follows:
- provide background to AI technology
- understand the AI threat landscape – what's really different when it comes to cyber risk management and AI
- plug a risk-based approach into an existing risk architecture, supported by an assurance framework, that covers cyber security, data privacy and ethics
- integrate AI stress testing to ensure AI systems have undergone a rigorous set of tests so that we can all gain the benefits of AI safely, securely and predictably
China:
China’s AI Regulations and How They Get Made - Carnegie Endowment for International Peace
Text - S.3572 - 117th Congress (2021-2022): Algorithmic Accountability Act of 2022 | Congress.gov | Library of Congress
Bill C-27: An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts (justice.gc.ca)
Highlight AI principles
https://www.linkedin.com/posts/dan-whitehead-76620027_global-ai-governance-principles-activity-7095069619593334784-iye8?utm_source=share&utm_medium=member_android
Biometric identification and categorisation of natural persons:
(a) AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons;
2. Management and operation of critical infrastructure:
(a) AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity.
3. Education and vocational training:
(a) AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions;
(b) AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions.
4. Employment, workers management and access to self-employment:
AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;
(b) AI intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behavior of persons in such relationships.
5. Access to and enjoyment of essential private services and public services and benefits:
(a) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use;
(c) AI systems intended to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by firefighters and medical aid.
6. Law enforcement:
(a) AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;
(b) AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person; EN 5 EN
(c) AI systems intended to be used by law enforcement authorities to detect deep fakes as referred to in article 52(3);
(d) AI systems intended to be used by law enforcement authorities for evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences;
(e) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups;
(f) AI systems intended to be used by law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;
(g) AI systems intended to be used for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data.
7. Migration, asylum and border control management:
AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
(b) AI systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
(c) AI systems intended to be used by competent public authorities for the verification of the authenticity of travel documents and supporting documentation of natural persons and detect non-authentic documents by checking their security features;
(d) AI systems intended to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
8. Administration of justice and democratic processes:
(a) AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.
ISO 25059:2023
ISO/IEC TR 24027:2021
ISO/IEC DIS 5259-(1 to 6): Artificial intelligence — Data quality for analytics and machine learning (ML) — Part 1: Overview, terminology, and examples
ISO/IEC CD TR 5469: Artificial intelligence — Functional safety and AI systems
ISO/IEC 24029-2 : Artificial intelligence (AI) — Assessment of the robustness of neural networks — Part 2: Methodology for the use of formal methods
ISO/IEC FDIS 42001: Information technology — Artificial intelligence — Management system
ISO/IEC FDIS 5338: Information technology — Artificial intelligence — AI system life cycle processes
ISO/IEC JTC 1/SC 42: Artificial intelligence
ISO/IEC 23894:2023: Information technology — Artificial intelligence — Guidance on risk management > BUY
ISO/IEC TR 24029-1:2021: Artificial Intelligence (AI) — Assessment of the robustness of neural networks — Part 1: Overview
ISO/IEC 24668:2022: Information technology — Artificial intelligence — Process management framework for big data analytics
ISO/IEC 23053:2022: Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)
Seed that we’ve developed a Cyber risk management framework to capture intentional and accidental attacks.
Voices Are Among the Easiest of Biometrics to Clone
Fraudsters tend to target a single victim, obtain a voice sample and convert it to a natural-sounding message – or use a real-time cloning app.
iRobot
https://red-goat.com/voice-cloning-heist/
https://www.ftc.gov/news-events/news/press-releases/2023/02/new-ftc-data-show-consumers-reported-losing-nearly-88-billion-scams-2022
https://www.zdnet.com/article/hacking-ai-how-googles-ai-red-team-is-fighting-security-attacks/
In April of this year, an Arizona woman heard that her 15-year-old daughter had been kidnapped and was threatened with violence unless $1 million was paid – a demand later dropped to $50,000.
Cases where AI has produced entirely unexpected outcomes. In the most extreme cases, this has resulted in death. Two fatal Boeing 737 MAX 8 crashes in 2018 and 2019 have been attributed to ‘aggressive and riskier AI’ changes to a system innocuously named the ‘Manoeuvring Characteristics Augmentation System’.
When assessing risk to AI, think carefully about existing threats that you would profile as part of a wider risk assessment.
There is a tendency to produce a completely different approach to assessing and managing risk for AI systems.
Yes there are differences and we will highlight these, but it doesn’t warrant a completely different approach. And ISO 31000 is still very applicable, and I will cover this in more detail later.
So here is a typical threat library, and many of these – if not all – are applicable.
But let's look at a few of the newer threat types for AI.
There are a number of AI related threats here that should be considered:
Unauthorised access to AI/ML source code: Gaining unauthorized source code access to exfiltrate IP (such as model parameters) and/or identify model weaknesses e.g. dependent library vulnerabilities
Data poisoning: Corrupting a training model for machine learning to compromise model integrity e.g. by replacing a legitimate model file by a poisoned model file on a cloud-hosted filesystem
Label manipulation and inaccuracy: Mislabeling, or adversarial modifications of data labels in supervised machine learning models to generate inaccurate model results
Data scarcity: Limiting the quantity of data available for AI/ML models by targeted attacks, therefore impacting functional capabilities as AI relies on the availability of consistent and accessible data
Introduction of selection bias: Selection bias may be accidentally or purposefully introduced in raw datasets which may affect subsequent inference and overall trustworthiness of the platform
Data manipulation: Manipulating input data fed into the system to alter the output to serve attacker objectives
Overloading ML models: Maliciously adding random samples to the set of training data to deny basic model availability by preventing the model from computing any meaningful inference
Now, when we have assessed the threats that are applicable, we need to determine the most effective controls. A simple matrix such as the one you see here is an ideal tool for anyone assessing risks to AI to quickly determine whether you have the correct controls in place.
This is something we do at CRMG, but it's vital for security and risk functions to have a tool like this so you have a consistent and rapid approach to determining key controls based on the threats you are concerned with. This ties in to the stress testing that Chris will cover later: once you have identified the critical controls to have in place, these are the ones that must then undergo stress testing. For example, you don't want to stress test controls that are only partially effective.
Now, what we have covered in terms of threats is only part of the broader risk management capability. What I want to highlight now is what we must put in place – for managing any risk to any technology – to provide assurance that AI is being developed, implemented and operated in a manner that meets your organisation's risk appetite.
This draws on our experience, and the approach is consistent with ISO 31000.