Generative AI offers great opportunities for innovation in various industries. Hence, by adopting ISO/IEC 27032, you can enhance your cybersecurity resilience and efficiently address the risks associated with generative AI.
Amongst others, the webinar covers:
• AI & Privacy
• Generative AI, Models & Cybersecurity
• AI & ISO/IEC 27032
Presenters:
Christian Grafenauer
Anonymization expert, privacy engineer, data protection officer, LegalTech researcher (GDPR, Blockchain, AI) Christian Grafenauer is an accomplished privacy engineer, anonymization expert, and computer science specialist, currently serving as the project lead for anonymity assessments at techgdpr. With an extensive background as a senior architect in Blockchain for IBM and years of research in the field since 2013, Christian co-founded privacy by Blockchain design to explore the potential of Blockchain technology in revolutionizing privacy and internet infrastructure. As a dedicated advocate for integrating legal and computer science disciplines, Christian’s expertise in anonymization and GDPR compliance enables innovative AI applications, ensuring a seamless fusion of technology and governance, particularly in the realm of smart contracts. In his role at techgdpr, he supports technical compliance, Blockchain, and AI initiatives, along with anonymity assessments. Christian also represents consumer interests as a member of the national Blockchain and DTL standardization committee at din (German standardization institute) in ISO/TC 307.
Akin Johnson
Akin J. Johnson is a renowned Cybersecurity Expert, known for his expertise in protecting digital systems from potential threats. With over a decade of experience in the field, Akin has developed a deep understanding of the ever-evolving cyber landscape.
Akin is an advocate for cybersecurity awareness and frequently shares his knowledge through speaking engagements, workshops, and publications. He firmly believes in the importance of educating individuals and organizations on the best practices for safeguarding their digital assets.
Lucas Falivene
Lucas is a highly experienced cybersecurity professional with a solid base in business, information systems, information security, and cybersecurity policy-making. A former Fulbright scholar with a Master of Science degree in Information Security Policy and Management at Carnegie Mellon University (Highest distinction) and a Master's degree in Information Security at the University of Buenos Aires (Class rank 1st). Lucas has participated in several trainings conducted by the FBI, INTERPOL, OAS, and SEI/CERT as well as in the development of 4 cyber ISO national standards.
Date: July 26, 2023
YouTube Link: https://youtu.be/QPDcROniUcc
Impact of Generative AI in Cybersecurity - How can ISO/IEC 27032 help?
1.
2. Agenda
▪ AI & Privacy
▪ Generative AI, Models & Cybersecurity
▪ AI & ISO/IEC 27032
3. PECB Next events
1. Don’t forget to purchase your ticket regarding
PECB’s conference: https://bit.ly/3Sq4nTO
▪ 4-5 October – In-person
2. Don’t miss out on the launching of the Chief
Information Security Officer and NIS Directive 2.0
Training Courses, which will be held online, as well
as in-person at the PECB Insights Conference
2023, in Paris, France!
▪ 18-19 September – Online
▪ 2-3 October – In-person
Purchase your ticket here: https://bit.ly/3JouNDd
4. Presenting our speakers
Lucas is a former Fulbright scholar with a Master of
Science degree in Information Security Policy and
Management at Carnegie Mellon University (Highest
distinction) and a Master's degree in Information
Security at the University of Buenos Aires (Class rank
1st).
Lucas has participated in several trainings conducted
by the FBI, INTERPOL, OAS, and SEI/CERT as well as
in the development of 4 cyber ISO national standards.
He also represents Malta and Argentina as an expert in
ISO's Information Security, Cybersecurity, and Privacy
Protection subcommittee (ISO/IEC JTC 1/SC 27) and
as the Secretary of Argentina's ISO mirror
subcommittee.
linkedin.com/in/christian-grafenauer
8. Such high-risk AI systems would have to comply with a range of requirements
particularly on risk management, testing, technical robustness, data training and
data governance, transparency, human oversight, and (Articles 8 to 15). In this
regard, providers, importers, distributors and users of high-risk AI systems would
have to fulfil a range of obligations.
Providers from outside the EU will require an authorized representative in the EU
to (inter alia), ensure the conformity assessment, establish a post-market
monitoring system and take corrective action as needed. AI systems that
conform to the new harmonised EU standards, currently under
development, would benefit from a presumption of conformity with the draft
AI act requirements.
AI Act Quote High risk: Regulated high-risk AI systems
9. AI systems that conform to the new harmonized EU
standards, currently under development, would benefit from
a presumption of conformity with the draft AI act
requirements.
AI Act Quote High risk: Regulated high-risk AI systems
10. AI systems that conform to
the new harmonised EU standards, currently under
development, would benefit from a presumption of conformity
with the draft AI act requirements.
AI Act Quote High risk: Regulated high-risk AI systems
AI Act Standards for AI
Compliance Rules
11. Such high-risk AI systems would have to comply with a range of requirements
particularly on risk management, testing, technical robustness, data training and
data governance, transparency, human oversight, and cybersecurity (Articles 8
to 15). In this regard, providers, importers, distributors and users of high-risk AI
systems would have to fulfil a range of obligations.
Providers from outside the EU will require an authorized representative in the EU
to (inter alia), ensure the conformity assessment, establish a post-market
monitoring system and take corrective action as needed. AI systems that conform
to the new harmonized EU standards, currently under development, would
benefit from a presumption of conformity with the draft AI act requirements.
AI Act Quote High risk: Regulated high-risk AI systems
13. Anonymization and AI
AI Act
Anonymization is a great PET
to protect the rights and
freedoms of your users.
In the high risk category such
assessments are mandatory.
15. Generative A.I
● What it is
● Models & Uses
● What could go wrong
● Risk and Treatments
● Recommendations
16. Generative AI is a type of artificial
intelligence that can produce
content such as audio, text, code,
video, images, and other data.
Generative AI is a type of
machine learning, which, at its
core, works by training software
models to make predictions based
on data without the need for
explicit programming.
Generative AI: what is it?
17. Artificial intelligence has a surprisingly long history, with the concept
of thinking machines traceable back to ancient Greece. Modern AI
really kicked off in the 1950s, however, with Alan Turing’s research
on machine thinking and his creation of the eponymous Turing test.
The first neural networks (a key piece of technology underlying
generative AI) that were capable of being trained were invented in
1957 by Frank Rosenblatt, a psychologist at Cornell University.
Further development of neural networks led to their widespread use
in AI throughout the 1980s and beyond. In 2014, a type of algorithm
called a generative adversarial network (GAN) was created,
enabling generative AI applications like images, video, and audio.
Generative AI: A brief History
18. • Generative adversarial networks (GANs): best for image duplication
and synthetic data generation.
• Transformer-based models: best for text generation and content/code
completion. Common subsets of transformer-based models include
generative pre-trained transformer (GPT) and bidirectional encoder
representations from transformers (BERT) models.
• Diffusion models: best for image generation and video/image synthesis.
• Variational autoencoders (VAEs): best for image, audio, and video
content creation, especially when synthetic data needs to be
photorealistic; designed with an encoder-decoder infrastructure.
• Unimodal models: models that are set up to accept only one data input
format; most generative AI models today are unimodal models.
Generative AI: Types/Models
19. • Multimodal models: designed to accept multiple types of inputs and
prompts when generating outputs; for example, GPT-4 can accept both
text and images as inputs.
• Large language models: the most popular and well-known type of
generative AI model right now, large language models (LLMs) are
designed to generate and complete written content at scale.
• Neural radiance fields (NeRFs): emerging neural network technology
that can be used to generate 3D imagery based on 2D image inputs.
New tools bring extended capabilities, but they also introduce new
vulnerabilities.
Generative AI: Types
20. The rise of generative AI have led to a variety of security concerns.
According to research by Grammarly and Forrester, most companies
still don’t have a clear strategy to deploy generative AI within their
organizations at scale.
According to the report, generative AI is a critical or important priority
for 89% of respondents’ companies, and by 2025, nearly all (97%)
will be using the technology to support communication with hurdles
like security concerns (32%), lack of a cohesive AI strategy (30%)
and lack of internal policies to govern generative AI (27%) prevent
adoption.
This is one of the reasons to implement ISO 27032 CyberSecurity
Standards in the day to day activities of Addressing Internet Security
issues / common threats
Cyber issues with Generative AI: Industry
Readiness
21. Generative AI technology can be applied in many sectors where human
creativity would has been a requirement. There has been series of
progressive development in the following industries.
Generative AI:examples of uses
Images Videos Text Audio
Code
Generation
Data
Augmentati
on
Other Use
Cases
22. • Privacy and security:
• Undetected bias
• Model Malfunction
• Copyrights and Intellectual Property
• Hallucination - Data Inaccuracy
Cyber issues with Generative AI: what could go
wrong?
23. Cyber issues with Generative AI: what could go
wrong?
A lot can go wrong here if the
proper data protection
measures aren’t taken. A
company would need to have
the right security
infrastructure in place.
There can be machine malfunction in
training and building Generative AI in
post production if there is no close
monitoring and maintenance
architecture in place.
24. Cyber issues with Generative AI: what could go
wrong?
Who owns a Generated Content?
Who owns the output of a generative AI model—if the output
can be owned at all—might be set out by the terms of use for
the AI tool (which may be available on the website associated
with the tool), or by an implied license if there are no terms.
Generative AI won't state that it is unable to provide a correct
answer
Whenever it generate ANY answer that appears to be correct, this
is known as a “hallucination”. It is often unknown where the data
used to train generative AI has come from from various sources,
such as databases, APIs, social media, websites, etc.
25. This Ethical standards
are not in any order of
priority but are the base
guidelines for
implementing A.I. tools
without endangering the
CyberSpace.
Cyber issues with Generative AI: Ethical
Standards and Principles
Reliability
Fairness
Transparency
Responsibility
Accountability
27. Generative AI: main cyber risks
• Data Poisoning
• Misinformation
• Deep Fakes
• Hoax News
• Reconnaissance at Scale
• Prompt Injection
• A.I. Malwares - WormGPT
28. Generative AI is often patched together by a network of very different
creators which makes it hard to achieve the levels of accountability,
reliability, and security needed for ethical AI. To become truly secure,
we need a unified approach such as the ISO 27032:2023 to secure
the entire lifespan of the AI system. Security measures need to be
implemented in every step of the development cycle to ensure that
sensitive data is accurate, stored and used securely. These
measures include data encryption, locating system vulnerabilities and
defending against malicious attacks and breach(s).
Cyber issues with Generative AI:
Recommendation
29. Cyber issues with Generative AI:
Recommendation based on ISO/IEC 27032
32. Cybersecurity — Guidelines for Internet Security
Focus on
(1) Addressing Internet Security issues / common threats
(2) Preservation of CIA & other properties
Provides
(1) Controls to mitigate internet security risks
(2) Guidance for Internet Security governance
Combines several international standards
ISO/IEC 27032
33. What is Internet Security?
Cybersecurity
Safeguarding of people, society,
organizations and nations from cyber risks
Internet security
Preservation of CIA of information over the
Internet
Network security
(1) Design, implementation, operation and
improvement of networks
(2) Identification and treatment of network-
related security risks
34. Interested parties
Users Coordinator and
standardization
organisations
Government
authorities
Law enforcement
agencies
Internet service
providers
37. • Less focus on cyberspace security
• Less focus on collaboration
• Scope reduction
• Interested parties enhancement
• Improved recommended controls section
ISO/IEC 27032:2023 vs ISO/IEC 27032:2012
38. Title
- 2012 version
Information technology — Security techniques — Guidelines for cybersecurity
- 2023 version
Cybersecurity — Guidelines for Internet Security
ISO/IEC 27032:2023 version 2023 vs version 2012
39. Definition of cybersecurity
- 2012 version
Preservation of confidentiality, integrity and availability of information in the Cyberspace
- 2023 version
Safeguarding of people, society, organizations and nations from cyber risks
Managing Information Security risks when information is in DIGITAL form in computers,
storage, and networks
ISO/IEC 27032:2023 version 2023 vs version 2012
40. ISO/IEC 27032:2023 version 2023 vs version 2012
This document provides:
— An explanation of the relationship between
Internet security, web security, network security and
cybersecurity,
— An overview of Internet security,
— Identification of interested parties and a
description of their roles in Internet security,
— High level guidance for addressing common
Internet security issues.
This document does not specifically address
controls that organizations can require for systems
supporting critical infrastructure or national
security.
2012 VERSION 2023 VERSION
SCOP
E
This International Standard provides guidance for
improving the state of Cybersecurity, drawing out
the unique aspon other security domains, in
particular:
— Information security,
— Network security,
— Internet security, and
— Critical information infrastructure protection
(CIIP)
42. ISO/IEC 27032:2023 version 2023 vs version 2012
Denominated as interested parties:
— Users
— Government authorities
— Internet service providers
— Coordinator and standardization
organisations
— Law enforcement agencies.
2012 VERSION 2023 VERSION
INTERESTE
D PARTIES
Denominated as
stakeholders:
— Consumers
— Providers
43. ISO/IEC 27032:2023 version 2023 vs version 2012
17 controls:
— Preventive
— Detective
— Recover
— Respond
2012 VERSION 2023 VERSION
CONTROL
S
6 controls:
— Application level controls
— Server protection
— End-user controls
— Controls against social
engineering attacks
— Cybersecurity Readiness
— Other controls
44. GenAI: How can ISO/IEC 27032 help?
Policies for
Internet security
Access control
Security incident
management
Asset
management
Business
continuity over the
Internet
Supplier
management
Network
management
Vulnerability
management
Privacy protection
over the Internet
Protection against
malware
Change
management
Identification of applicable
legislation and
compliance requirements
Use of
cryptography
Application security
for Internet-facing
applications
Endpoint device
management
Monitoring
Education,
awareness &
training
45. GenAI: How can ISO/IEC 27032 help?
Policies for
Internet security
Access control
Security incident
management
Asset
management
Business
continuity over the
Internet
Supplier
management
Network
management
Vulnerability
management
Privacy protection
over the Internet
Protection against
malware
Change
management
Identification of applicable
legislation and
compliance requirements
Use of
cryptography
Application security
for Internet-facing
applications
Endpoint device
management
Monitoring
Education,
awareness &
training GRC
Protect /
Identify
46. GenAI: How can ISO ISO/IEC 27032 help?
Policies for
Internet security
Access control
Security incident
management
Asset
management
Business
continuity over the
Internet
Supplier
management
Network
management
Vulnerability
management
Privacy protection
over the Internet
Protection against
malware
Change
management
Identification of applicable
legislation and
compliance requirements
Use of
cryptography
Application security
for Internet-facing
applications
Endpoint device
management
Monitoring
Education,
awareness &
training GRC
Protect /
Identify
Tehnical Protect / Identify
47. GenAI: How can ISO/IEC 27032 help?
Policies for
Internet security
Access control
Security incident
management
Asset
management
Business
continuity over the
Internet
Supplier
management
Network
management
Vulnerability
management
Privacy protection
over the Internet
Protection against
malware
Change
management
Identification of applicable
legislation and
compliance requirements
Use of
cryptography
Application security
for Internet-facing
applications
Endpoint device
management
Monitoring
Education,
awareness &
training GRC
Protect /
Identify
Tehnical Protect / Identify
Detect /
Respond /
Recover
48. GenAI: How can ISO/IEC 27032 help?
To get the most from this groundbreaking technology, we need to manage its extended
landscape of risks while considering the organization / ecosystem as a whole
Take an
(1) Overarching approach
(2) Interdisciplinary approach
(3) Collaborative approach will all interested parties
Combine
(1) Several frameworks
(2) Best practices
Consider
(1) Your individual and organisational requirements / views
(2) Stakeholders requirements / views
(3) Ecosystem requirements / views
50. EU AI Act (draft)
Unacceptable Risk
(Art. 5)
High Risk
(Art. 6)
Minimal or No Risk
Limited Risk
(Art. 52)
Prohibited
within the EU
Permitted
subject to
(1) conformity
assessment
(2) market
monitoring
Permitted
Subject to
transparency
disclosures
Permitted
No restrictions