Companies have AI projects. Security products use AI to keep attackers out and insiders at bay. But what is this "AI" that everyone talks about? In this talk we will explore what artificial intelligence in cyber security is, where its limitations and dangers lie, and where we should invest more in AI. We will discuss some of the recent failures of AI in security and invite a conversation about how we verify artificially intelligent systems, so that we understand how much trust we can place in them.
Alongside the AI conversation, we will discover that we need a shift in our traditional approach to cyber security. We need to augment our reactive approach of studying adversary behaviors with an understanding of the behaviors of users and machines, informing a risk-driven approach to security that can prevent even zero-day attacks.
Artificial Intelligence – Time Bomb or The Promised Land?
VP Research and Intelligence
Head of X-Labs, Forcepoint
Cyber Symposium | September 2019 | Colorado Springs
A Brief Summary
• We don't have artificial intelligence (yet)
• Algorithms can be dangerous: understand your data and your algorithms
• We need a paradigm shift in security to escape the cat-and-mouse game
• Human factors play a key role in detecting and preventing cyber attacks and insider threats
• Build systems that capture "expert knowledge" and augment human capabilities
Artificial Intelligence in Security: ML & AI

The Dangers of AI
• Facial recognition privacy
• Malware detection failures
• Pentagon AI failures
• Algorithm bias and data biases
What Makes Algorithms Dangerous?
• Algorithms make assumptions about the data.
• Algorithms are too easy to use.
• Algorithms do not take domain knowledge into account.
• History is not a predictor of the future.
Understand Your Data
Source: http://vis.pku.edu.cn/people/simingchen/docs/vastchallenge13-mc3.pdf
$1 Trillion Has Been Spent Over The Past 7 Years On Cybersecurity, With 95% Success … For The Attackers
• 46% say they can't prevent attackers from breaking into internal networks each time it is attempted.
• 100% of CIOs believe a breach will occur through a successful phishing attack in the next 12 months.
• Enterprises have seen a 26% increase in security incidents despite increasing budgets by 9% YoY.
Sources: CyberArk Global Advanced Threat Landscape Report 2018; Verizon 2018 Data Breach Investigations Report.
Escaping the Security Cat and Mouse Game
• Threat intelligence (IOCs)
• Protection and automation
• Behavior of humans and machines
Rationale for the New Paradigm
Kill chain: Recon → Weaponization → Delivery → Exploitation → Installation → Execution

Behavior-based approach:
• Understand the 'normal' behavior of humans and devices
• Try to catch the attacker as early as possible
• Move coverage to later stages
• Does not rely on knowing about types of attacks (zero-day resistant)

Threat-intelligence approach:
• Constantly changing
• Chasing zero days
• Very reactive
• Focuses on external attackers
• Does not work if there is no external attacker to study, e.g., with insiders
Where harm is caused: critical data and IP being modified, deleted, or exfiltrated.
Expanding the Security Framework
1. Understand the movement of data:
   • Attacks aim to exfiltrate, modify, or destroy critical data
   • Dwell time can be months
2. Understand human and device behavior:
   • Monitor for deviations from the norm
   • Assess against the peer group
   • Catch attacks earlier in the kill chain, before any harm is done
Understanding Humans and Data
• Learn their normal behavior
• Learn how they behave relative to their peers
• Learn how they interact with critical data and IP
• Based on deviations, compute an entity risk score
• Track and assess human factors

Shift to a risk-based approach:
• An 'event' can be either good or bad, depending on the context of the entity
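The idea behind the bullets above can be sketched as a toy entity-risk score. This is a minimal illustration, not a product implementation: the feature names, baselines, and weights are all invented, and a real system would learn them from telemetry.

```python
# Toy entity-risk score: weighted deviations from the entity's own baseline.
# All feature names, histories, and weights are hypothetical.
from statistics import mean, stdev

def zscore(value, history):
    """How far a new observation deviates from the entity's own baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return (value - mu) / sigma

def entity_risk(today, history, weights):
    """Aggregate weighted per-feature deviations into a single risk score."""
    score = 0.0
    for feature, value in today.items():
        deviation = abs(zscore(value, history[feature]))
        score += weights.get(feature, 1.0) * deviation
    return score

# Example: an entity whose download volume spikes far above its baseline
history = {"downloads": [10, 12, 9, 11, 10, 13], "offhours_logins": [0, 1, 0, 0, 1, 0]}
today = {"downloads": 90, "offhours_logins": 4}
weights = {"downloads": 2.0, "offhours_logins": 3.0}
print(entity_risk(today, history, weights))
```

The same event (a large download) produces a high score only because it deviates from this entity's learned baseline, which is the "context" point made above.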
Detecting and perceiving risk is shaped by our ability to integrate expert knowledge about risk factors and human behavior. Without context, behaviors that may seem "obviously bad" in retrospect may not be noticed.
A Framework to Understand Humans
• Med/psych conditions
• Personality & social skills
• Previous rule violations
• Social network risks
• Personal (life …)
• Professional (job loss, salary, etc.)
• Policy violations
• Mental health, addiction
• Social network, travel
Note: None of these components alone is an indicator of a crime or attack!
From Activities to Concerning Behaviors
"Detection rules" that normally generate a lot of false positives are now weighted by the risk of the entities. Activities that, out of context, would be benign now flag an attack.
"Risk-Adaptive Protection": risk-adjusted "detections".
"Am I here to work for you, or for …?"
• Seeking access or … beyond current need
• Testing security
• Multiple usernames & identities
• Social and professional network
• Unreported travel
• Low communication, lack of social connections in the office
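The risk-adaptive idea described above can be sketched in a few lines: the same detection rule fires or stays quiet depending on the entity's current risk. The rule weight, risk scale, and alert threshold below are assumptions made for illustration, not values from any real product.

```python
# Hypothetical sketch of risk-adaptive protection: a detection's score is
# scaled by the entity's risk level, so a noisy rule is suppressed for
# low-risk entities but fires for high-risk ones.

ALERT_THRESHOLD = 50.0  # invented cutoff for raising an alert

def adaptive_score(rule_weight, entity_risk):
    """Scale a rule's base weight by the entity's risk (0..100)."""
    return rule_weight * (entity_risk / 100.0)

def should_alert(rule_weight, entity_risk):
    return adaptive_score(rule_weight, entity_risk) >= ALERT_THRESHOLD

# A noisy rule (e.g. "large USB copy") stays quiet for a low-risk entity...
print(should_alert(rule_weight=60.0, entity_risk=20))
# ...but the same activity flags an attack for a high-risk entity.
print(should_alert(rule_weight=60.0, entity_risk=90))
```

This is the "activities that, out of context, would be benign now flag an attack" point: the activity is identical in both calls; only the entity's risk context differs.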
Research Areas / Where We Need AI
Problems still unsolved:
• Entity resolution
• Capturing expert knowledge (belief networks)
• Peer group analytics (CBC)
• Risk computation (risk is not linear; belief networks)
• Validation and expansion of human …
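Of the research areas above, peer group analytics is the easiest to illustrate: compare an entity against its peers, not only against its own history. A minimal sketch, assuming a simple z-score test with an invented 2-sigma cutoff; real peer-group analytics would use richer features and learned groups.

```python
# Illustrative peer-group check: flag an entity whose behavior deviates
# strongly from its peer group (e.g. same team or role). The feature and
# cutoff are invented for illustration.
from statistics import mean, stdev

def peer_outlier(entity_value, peer_values, cutoff=2.0):
    """True if the entity sits more than `cutoff` standard deviations
    from the peer group's mean for this feature."""
    mu, sigma = mean(peer_values), stdev(peer_values)
    if sigma == 0:
        return entity_value != mu
    return abs(entity_value - mu) / sigma > cutoff

# Peers in the same team touch roughly 5-15 source repos a week;
# one entity suddenly touches 80.
peers = [8, 12, 5, 10, 15, 9, 11]
print(peer_outlier(80, peers))   # deviates strongly from the peer group
print(peer_outlier(11, peers))   # within the normal peer range
```

Note the contrast with the entity's own baseline: an insider whose behavior has always been anomalous looks "normal" against their own history but stands out against peers.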
The Big AI Challenges
Verifiability and explainability:
• Doing the right thing for the 'consumer'
• Compliance with GDPR and other regulations
• How to provably show what algorithms do?
• How to compare against other solutions / algorithms?
• How to know we are protected?
• Preventing 'snake oil'
Where are the boundaries?
• Big brother? Surveillance?
• Where are the boundaries of what is okay to be collected and analyzed?
• Training data for both … and hypothesis testing
"The world's first dynamic 'non-factor' based quantum AI encryption software, utilizing multi-dimensional encryption technology, including time, music's infinite variability, artificial intelligence, and most notably mathematical constancies to generate entangled key pairs." – Snake Oil
"The way algorithms are used is often dangerous. Hire experts."
"We need a paradigm shift to escape the security cat and mouse game."
"Understanding human factors can help us get ahead of attackers."