[Confidential]
Join Us:
https://www.linkedin.com/company/
application-security-virtual-meetups
QR Link:
[Confidential]
Uncover critical genAI
risks and vulnerabilities
Benji Preminger, Head of Product
[Confidential]
Prompt.Security
About Prompt Security
● Founded in August 2023
● Three commercially-available products:
○ Prompt for Employees
○ Prompt for Developers
○ Prompt for Homegrown Apps
● Customers in Fortune 1000
● Partners, resellers, MSSPs worldwide
Company Overview
3
[Confidential]
Prompt.Security 4
Agenda
4
1. GenAI - increasing adoption -> increasing risk
2. GenAI risk frameworks
3. OWASP top 10 for LLM
4. Prompt Injection
5. Information disclosure
6. Safety and content moderation
7. What can you do about it
[Confidential]
Prompt.Security
Adopting AI is
key to business
survival
5
5
5
5
[Confidential]
Prompt.Security
92%
Adoption of GenAI
75%
94%
of Fortune 500 companies use OpenAI
products (FT)
of CIOs planning to increase their AI budget in
2024 (Gartner)
of business leaders agree that AI is
critical to success over the next five years
(Deloitte)
Source: Gartner 2023
[Confidential]
Prompt.Security
Generative AI
introduces a new
array of security risks
7
[Confidential]
Prompt.Security
Insecure Agent
Prompt Injection Privilege Escalation
Sensitive Data Disclosure
Shadow AI Prompt Leak
Indirect Prompt Injection
Jailbreak
Brand Reputation Damage Denial of wallet/service
[Confidential]
Prompt.Security
42%
GenAI security concerns in 2023
89%
30%
of IT Executives in 2023 listed Data Privacy
as their main GenAI concern
of business technologists would bypass
cybersecurity guidance to meet a business
objective, leading to Shadow AI
of Enterprises deploying AI suffered a
security data breach
Source: Gartner 2023
[Confidential]
Prompt.Security
Key genAI concerns
Blocking sensitive data
exposure and leaks via
customer-facing and internal
genAI apps
Privacy
Preventing your users from
being exposed to inappropriate,
toxic or off-brand content
generated by LLMs
Safety
10
Protecting your organization
from Prompt Injection,
Jailbreaks, DDoS, RCE and
other risks
Security
[Confidential]
Prompt.Security
Types of genAI usage
11
By
employees
By
developers
By your
homegrown apps
Employees using
genAI-driven
software without
any visibility or
control
Developers
leveraging genAI
coding assistants
(e.g. GitHub Copilot)
GenAI Applications
developed by companies,
e.g. customer-facing
chatbots, internal services
[Confidential]
Prompt.Security
Useful Frameworks & Guidelines
Top 10 for LLM Applications ATLAS AI Risk Management Framework
Guidelines for secure AI system development Generative AI Policy Kit
[Confidential]
Prompt.Security
LLM01:
Prompt Injection
13
OWASP Top 10 for Large Language Model Applications
LLM02: Insecure
Output Handling
LLM03: Training
Data Poisoning
LLM04: Model
Denial of service
LLM05: Supply
Chain
Vulnerabilities
LLM06: sensitive
information
disclosure
LLM07: Insecure
plugin design
LLM08: Excessive
agency
LLM09:
Overreliance
LLM10: Model
Theft
[Confidential]
Prompt.Security
LLM01:
Prompt Injection
14
OWASP Top 10 for Large Language Model Applications
LLM02: Insecure
Output Handling
LLM03: Training
Data Poisoning
LLM04: Model
Denial of service
LLM05: Supply
Chain
Vulnerabilities
LLM06: sensitive
information
disclosure
LLM07: Insecure
plugin design
LLM08: Excessive
agency
LLM09:
Overreliance
LLM10: Model
Theft
[Confidential]
Prompt.Security
LLM01:
Prompt Injection
15
[Confidential]
Prompt.Security
➡️ System
prompt
An application’s set of guidelines/guardrails, used
to direct the AI's responses. It gives the AI
instructions on what it should and should not do,
greatly impacting the accuracy and relevance of
its outputs.
16
16
Some rules can be
bent, others broken
Example: You are a Financial chatbot. You do
not answer any topics outside the realm of
finance. You should not respond with any
profanity or improper language. Assist the user
with all financial related questions.
[Confidential]
Prompt.Security
When an attacker manipulates an LLM through carefully crafted inputs, causing it to perform
actions in line with the attacker's intentions. This could potentially lead to data leaks, legal risk,
and other security issues.
17
17
What is
Prompt Injection (LLM01)
17
17
😈 Attacker
genAI application
Injected document
😈 Attacker
genAI application
Directly Indirectly, through manipulated
external inputs
[Confidential]
Prompt.Security
Real world scenario: The $1 Chevy
● Manipulating a
customer-facing chatbot
powered by ChatGPT to
sell a car for $1
● Legal risk
● Brand damage
[Confidential]
Prompt.Security
DAN - the Do Anything Now jailbreak
● Perhaps the most well-known
jailbreak script to date
● Has had multiple versions
released since, and already
blocked in most LLMs
● Key concept: Manipulating LLM to
take on a persona and
exit/ignore their guidelines
[Confidential]
Prompt.Security
Many-shot Jailbreaking
● “Training” an LLM inside a
prompt to answer
whatever question you
send its way
● Takeaway: Exploiting LLM
capabilities (e.g. in-context
learning) as a weakness
Source:
[Confidential]
Prompt.Security
LLM06:
Sensitive Information
Disclosure
21
[Confidential]
Prompt.Security
LLM06: Sensitive Information Disclosure
22
22
LLM applications can inadvertently expose sensitive data. This can
occur when LLM users unintentionally input sensitive data into a
genAI application that could be revealed in the output.
[Confidential]
Prompt.Security
Shadow AI and uncontrolled
employee usage of genAI
apps dramatically increases
risk of sensitive data
leakage
23
23
23
23
[Confidential]
Prompt.Security
Real world scenario: Untrusted genAI app
Acme
Acme corp. Shadow AI -
False positive?
Newly detected
genAI app
[Confidential]
Prompt.Security
Real world scenario: Secrets leaked to 3rd party
GenAI coding
assistant
Exposure of secrets
(e.g. cloud tokens)
Secrets leakage to
training data
🤫
[Confidential]
Prompt.Security
Safety and AI guardrails
26
26
● AI guardrails are safeguards that prevent harmful results like offensive
content or privacy breaches.
● They ensure ethical use of AI and prevent unintended negative impacts.
● They may include anything from rules in your System Prompt, to
advanced AI-driven topic detection and limitation.
[Confidential]
Prompt.Security 27
27
27
27
[Confidential]
Prompt.Security
It's not always readily apparent
28
28
Fintech chatbot example
[Confidential]
Prompt.Security
Ok great…
but what can I do about it?
29
[Confidential]
Prompt.Security
Where do we get started?
Inventory all GenAI tools and applications
Define and socialize acceptable GenAI uses in the organization
Monitor, detect and escalate possible security policy violations
in GenAI behavior or use
Check us out at prompt.security 🙂
Understand regulatory requirements and copyright laws
EU AI Act, US Executive Order on AI.
[Confidential]
Prompt.Security
Strengthen your system prompt
31
31
[Confidential]
Prompt.Security
Questions?
32
32
benji@prompt.security
https://www.linkedin.com/in/benjipreminger/
[Confidential]
Prompt.Security
Thank you!
33
33
benji@prompt.security
https://www.linkedin.com/in/benjipreminger/
[Confidential]
GenAI Strategy
May Brooks Kempler, CISSP ISC2 BoD
Risks & Opportunities
[Confidential]
[Confidential]
[Confidential]
[Confidential]
[Confidential]
[Confidential]
[Confidential]
[Confidential]
[Confidential]
[Confidential]
[Confidential]
May Brooks Kempler, CISSP ISC2 BoD
Information Security Level 2 – Sensitive
© 2020 – Proprietary & Confidential Information of Amdocs
46
[Confidential]
Navigating GenAI Risk
from a CISO's Lens
Einat Shimoni
Director @ Anaplan
[Confidential]
Anaplan transforms the
way you see, plan, and
lead your business.
Dynamically connect your financial,
strategic, and operational plans to:
• Anticipate change
• Address complexity
• Move at market speed
[Confidential]
Einat Shimoni
• Director of IAM and Data Security at Anaplan
• Prior, Head of Cyber Security Operation Unit at Amdocs
• 25+ years of experience in Cyber Security and
Infrastructure
• CISO , DPO , CISM ,CDPSE Certifications
• BA in Computer Science
• Located in Israel, a wife and mother of three teenagers
[Confidential]
Generative AI Adoption
[Confidential]
ChatGPT
Samsung
GitHub
Why CISO’s Should Care ?
Information Security Level 2 – Sensitive
© 2020 – Proprietary & Confidential Information of Amdocs
51
[Confidential]
Key Gen AI Risks
Confidential
[Confidential]
Is GenAI Safe?
Risk
Data Loss
Risk
Adversarial Prompting
Risk
Output Risk (Hallucination, Biased or inappropriate output )
Risk
Data Poisoning
Risk
Retrieval Risk (Unauthorized retrieval)
[Confidential]
Mitigating Gen AI Risks
Confidential
[Confidential]
Building Gen AI - Cloud Security
Gen AI AUP
Data Security
Standards
Bot Detection
SaaS Checklist
SSE - Security
Service Edge
Cloud Security
Standards
Bot Detection
Cloud Security
Product
(CSPM/DSPM)
API Security
Capabilities
Application Security
Capabilities
Consumed Gen AI - Web & SaaS Security
Prerequisite
[Confidential]
Risk Consuming Gen AI Building Gen AI
Data Loss
Block the entire unapproved or unsecure GenAI
application
Ensure fine-tuning training data is properly protected
with data security measures
Use only approved and secured GenAI applications
via web or SaaS to process sensitive information.
Contractually obliged not to train or fine-tune their
large language models (LLMs) using client data.
Prevent sensitive files from being uploaded to
unapproved or unsecured GenAI applications.
Access policy to limit access to custom GPTs to the
owners or authorized users.
Access policy to limit access to custom GPTs to the
owners or authorized users.
Ability for users to purge, or disable chat history ,
enforce access control and audit logging.
Ability for users to purge, or disable chat history ,
enforce access control and audit logging.
Evaluate the cloud LLM provider’s security measures
to safeguard application logging
[Confidential]
Risk Consuming Gen AI Building Gen AI
Adversarial
Prompting
Input and output logging and monitoring. Input and output logging and monitoring.
Use system prompts to build guardrails around common
Prompt Injection techniques.
Select an LLM with strong built-in Prompt Injection
detection and prevention capabilities.
Ensure the chosen LLM hosting provider offers LLMs with
strong built-in Prompt Injection detection and prevention
capabilities.
Perform periodic manual reviews of input and output logs
to identify misuse.
Perform red team exercises to test custom-built GenAI
applications’ resilience to common Prompt Injection
attacks.
[Confidential]
Risk Consuming Gen AI Building Gen AI
Output Risk
Perform user awareness training Perform user awareness training
Check the accuracy of GenAI applications’ output Check the accuracy of GenAI applications’ output
The vendor is using techniques to reduce
hallucination, Biased, harmful or inappropriate output
Use techniques to reduce hallucination, Biased,
harmful or inappropriate output
Data Poisoning
The storage of the training data is protected
Retrieval Risk
Governance Process to Approve GenAI Apps
Authorization and entitlements to current user
content
Authorization and entitlements to current user
content
[Confidential]
Regulatory and Ethical
Considerations
Confidential
[Confidential]
Disinformation
IP Protection
Privacy Transparency
Regulations
AI Act
[Confidential]
Building Resilience
Confidential
[Confidential]
Building Resilience Against GenAI risks
Continuous monitoring and threat intelligence
Investing in AI security research and development
Enhancing organizational cybersecurity culture
Collaborating for a Secure Future
[Confidential]
Anaplan Meetup
Tuesday,
May 15th, 2024,
17:00- 20:00
Rappaport 3 Kfar
Saba, 15th floor
[Confidential]
Recommendation
Foundation
Consuming
GenAI Controls
Building
GenAI
• Build custom GenAI
applications securely
with LLM built-in
security and safety
guardrails
• Sensitive prompt filtering
• Content safety filters
• Prompt injection
detection
• Block sensitive
information from being
used in unapproved
GenAI Apps.
• Consume secured and
approved web- or SaaS-
delivered GenAI
• Cloud Security
• Data Security
• Application Security
• GenAI-specific security
control
Embrace a continuous learning mindset to stay up-to-date on the young and fast-
evolving field of GenAI Security
[Confidential]
Stay Connected
Information Security Level 2 – Sensitive
© 2020 – Proprietary & Confidential Information of Amdocs
65
[Confidential]
Q A
Confidential
Information Security Level 2 – Sensitive
© 2020 – Proprietary & Confidential Information of Amdocs
66
[Confidential]
Thank you!
[Confidential]
Reference
• https://www.gartner.com/document/4751831?ref=solrAll&refval=407663027&
• https://www.gartner.com/document/5354463?ref=solrAll&refval=408557073&
• https://cloudsecurityalliance.org/
• https://team8.vc/wp-content/uploads/2023/04/Team8-Generative-AI-and-ChatGPT-Enterprise-Risks.pdf
• https://www.cmswire.com/digital-experience/chatgpt-suffers-first-data-breach-exposes-personal-information/
• https://devclass.com/2022/10/17/github-copilot-under-fire-as-dev-claims-it-emits-large-chunks-of-my-copyrighted-code/
• https://www.darkreading.com/vulnerabilities-threats/samsung-engineers-sensitive-data-chatgpt-warnings-ai-use-
workplace
• https://mashable.com/article/samsung-chatgpt-leak-details
• https://learn.microsoft.com/en-us/legal/cognitive-services/openai/customer-copyright-
commitment?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext
[Confidential]
Thank You!
Questions?
To be continued…
https://www.linkedin.com/company/application-security-virtual-meetups

GenAI Risks & Security Meetup 01052024.pdf