
Artificial Intelligence – Time Bomb or The Promised Land?

Companies have AI projects. Security products use AI to keep attackers out and insiders at bay. But what is this "AI" that everyone talks about? In this talk we will explore what artificial intelligence in cyber security is, where the limitations and dangers are, and in what areas we should invest more in AI. We will talk about some of the recent failures of AI in security and invite a conversation about how we verify artificially intelligent systems to understand how much trust we can place in them.
Alongside the AI conversation, we will discover that we need to make a shift in our traditional approach to cyber security. We need to augment our reactive approaches of studying adversary behaviors to understanding behaviors of users and machines to inform a risk-driven approach to security that prevents even zero day attacks.


  1. Artificial Intelligence – Time Bomb or The Promised Land? Raffael Marty, VP Research and Intelligence, Head of X-Labs, Forcepoint. Cyber Symposium | September 2019 | Colorado Springs
  2. A Brief Summary
     • We don’t have artificial intelligence (yet).
     • Algorithms can be dangerous: understand your data and your algorithms.
     • We need a paradigm shift in security to escape the cat-and-mouse game.
     • Human factors play a key role in detecting and preventing cyber attacks and insider threats.
     • Build systems that capture “expert knowledge” and augment human capabilities.
  3. Raffael Marty. Career: Sophos, PixlCloud, Loggly, Splunk, ArcSight, IBM Research. Areas: security visualization, big data, ML & AI, SIEM, corporate strategy, leadership, Zen.
  4. Artificial Intelligence in Security. Techniques: deep learning, statistics, unsupervised machine learning, natural language processing. Applications: malware identification, phishing detection, communication analysis.
  5. The Dangers of AI. Security examples: facial recognition and privacy, a malware detection failure (blacklisting of a Windows executable), a Pentagon AI fail, algorithm bias, data biases.
  6. What Makes Algorithms Dangerous?
     • Algorithms make assumptions about the data.
     • Algorithms are too easy to use.
     • Algorithms do not take domain knowledge into account.
     • History is not a predictor of the future.
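A minimal sketch (illustrative, not from the talk; all data is invented) of the first point: a z-score anomaly detector implicitly assumes one roughly normal population, so a legitimately different mode of behavior gets flagged as anomalous:

```python
import statistics

# Hypothetical daily login counts: day-shift users log in ~5 times,
# while a night-shift team legitimately logs in ~50 times.
day_shift = [4, 5, 6, 5, 4, 6, 5, 5]
night_shift = [48, 52, 50]
logins = day_shift + night_shift

mean = statistics.mean(logins)
stdev = statistics.pstdev(logins)

# A naive z-score detector assumes one roughly normal population.
flagged = [x for x in logins if abs(x - mean) / stdev > 1.5]

# It flags the entire night-shift team as "anomalous", even though
# their behavior is perfectly normal for their peer group.
print(flagged)  # → [48, 52, 50]
```

The algorithm is not wrong; its assumption about the data is. Domain knowledge (here, the existence of shifts) has to be built into the model or the features.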
  7. Understand Your Data. A mislabeled chart (“dest port”, when the data is actually src ports) and an impossible value (port 70000?) show why data must be sanity-checked before an algorithm consumes it. Source: http://vis.pku.edu.cn/people/simingchen/docs/vastchallenge13-mc3.pdf
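The kind of sanity check the slide implies can be tiny; this sketch (hypothetical records, invented field names) rejects port values outside the 16-bit range:

```python
# Valid TCP/UDP ports fit in 16 bits (0-65535); a "port 70000" in a
# dataset signals a parsing or collection error, not real traffic.
def invalid_ports(records):
    """Return records whose port falls outside the valid range."""
    return [r for r in records if not 0 <= r["port"] <= 65535]

events = [
    {"src": "10.0.0.5", "port": 443},
    {"src": "10.0.0.9", "port": 70000},  # impossible value, as on the slide
    {"src": "10.0.0.7", "port": 53},
]
print(invalid_ports(events))  # → [{'src': '10.0.0.9', 'port': 70000}]
```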
  8. $1 trillion has been spent over the past 7 years on cybersecurity, with 95% success… for the attackers.
     • 46% say they can’t prevent attackers from breaking into internal networks each time it is attempted.
     • 100% of CIOs believe a breach will occur through a successful phishing attack in the next 12 months.
     • Enterprises have seen a 26% increase in security incidents despite increasing budgets by 9% year over year.
     Sources: CyberArk Global Advanced Threat Landscape Report 2018; Verizon 2018 Data Breach Investigations Report.
  9. Escaping the Security Cat-and-Mouse Game. Paradigm shift from reactive to proactive:
     • Reactive: detection; threat intelligence (IOCs); event-based.
     • Proactive: protection and automation; behavior of humans and machines; risk-based.
  10. Rationale for the New Paradigm (kill chain: recon, weaponization, delivery, exploitation, installation, execution).
     • Threat intelligence: constantly changing; chasing zero days; very reactive; focuses on external attackers; does not work if there is no ‘exploitation’ phase, e.g., with insiders.
     • Behavioral intelligence: understand the ‘normal’ behavior of humans and devices; try to catch attackers as early as possible; move coverage to the later stages where harm is caused (critical data and IP being modified, deleted, or exfiltrated); does not rely on knowing about types of attacks (zero-day resistant).
  11. Expanding the Security Framework (kill chain extended past execution: discover, explore, collect, exfiltrate/modify/destroy; dwell time can be months).
     1. Understand the movement of data: catch attacks in the preparation phase.
     2. Understand human and device behavior: monitor human factors; monitor for deviations from the norm; assess peer group membership.
     Flag suspicious entities before any harm is done; this includes insiders in the kill chain. [Slide shows an example entity risk score: John, 89.]
  12. Understanding Humans and Data (discover, explore, collect, exfiltrate/modify/destroy).
     • Monitor entities: learn their normal behavior; learn how they behave relative to their peers; learn how they interact with critical data and IP; based on deviations, compute an entity risk.
     • Understand humans: track and assess human factors.
     • Shift to a risk-based approach: an ‘event’ can be either good or bad, depending on the context of the entity. [Slide shows an example entity risk score: John, 89.]
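One way to read “based on deviations, compute an entity risk” is as a score combining deviation from the entity’s own baseline with deviation from its peer group. This is an illustrative sketch, not Forcepoint’s actual scoring; the function name, weights, and data are all invented:

```python
import statistics

def risk_score(today, own_history, peer_values):
    """Combine deviation from the entity's own baseline with deviation
    from its peer group into a single 0-100 risk score (illustrative)."""
    def z(value, sample):
        mu = statistics.mean(sample)
        sigma = statistics.pstdev(sample) or 1.0  # avoid division by zero
        return abs(value - mu) / sigma

    own_dev = z(today, own_history)
    peer_dev = z(today, peer_values)
    # Weight self-deviation higher than peer deviation; cap at 100.
    return min(100, round(10 * (0.6 * own_dev + 0.4 * peer_dev)))

# Hypothetical feature: megabytes downloaded per day.
history = [18, 22, 20, 19, 21]   # John's own baseline: ~20 MB/day
peers = [20, 25, 15, 22, 18]     # his peer group behaves similarly

print(risk_score(20, history, peers))   # normal day → 0
print(risk_score(800, history, peers))  # exfiltration-like spike → 100
```

The point of the risk-based framing is that the same raw event (a large download) only raises risk in the context of what is normal for that entity and its peers.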
  13. Risk factors: addiction, gambling; performance; patterns of violations; interpersonal issues; knowledge, skills, ability; financial distress. Detecting and perceiving risk is shaped by our ability to integrate expert knowledge about risk factors and human behavior. Without context, behaviors that may seem “obviously bad” in retrospect (accessing sensitive files, searching for sensitive files) may not be noticed.
  14. A Framework to Understand Humans (all components feeding into RISK):
     • Predisposition (vulnerabilities): medical/psychological conditions; personality and social skills; previous rule violations; social network risks.
     • Stressors (triggering factors): personal (life changes, health); professional (job loss, salary, etc.); financial; interpersonal.
     • Concerning behaviors: policy violations; financial; mental health, addiction; social network, travel.
     Note: none of these components alone is an indicator of a crime or attack!
  15. From Activities to Concerning Behaviors: “Risk-Adaptive Protection”. “Detection rules” that normally generate a lot of false positives are now weighted by the risk of the entities. Activities that, out of context, would be benign now flag an attack: risk-adjusted “detections”.
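A minimal sketch of that idea (invented names, scales, and threshold; not the product’s actual logic): a noisy rule’s score is scaled by the entity’s current risk, so it only alerts for high-risk entities:

```python
# A detection fires only when the rule's base score, scaled by the
# entity's current risk, crosses the alert threshold.
ALERT_THRESHOLD = 50

def should_alert(rule_score, entity_risk):
    """Both inputs on a 0-100 scale; returns True if an alert fires."""
    adjusted = rule_score * entity_risk / 100
    return adjusted >= ALERT_THRESHOLD

# "Accessed a sensitive file share" is a noisy rule on its own (score 60):
print(should_alert(60, 20))  # low-risk entity: suppressed → False
print(should_alert(60, 90))  # high-risk entity: alert → True
```

This is how the same activity can be benign or concerning depending on entity context, while cutting the false positives the slide mentions.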
  16. Am I here to work for you, or for someone else? An example mapped onto the framework (regular activities, activities, predisposition, stressors, concerning behaviors):
     • Seeking access or clearance levels beyond current need
     • Testing security boundaries
     • Multiple usernames & identities
     • Social and professional network
     • Unreported travel
     • Low communication, lack of social connections in the office
     • None
     • Communication with competitors
  17. Research Areas / Where We Need AI. Numerous foundational problems are still unsolved:
     • Taxonomies; entity resolution / identity attribution.
     • Capturing expert knowledge: explicit; reinforcement; belief networks.
     • Communication analytics: NLP; SNA; peer group analytics (CBC).
     • Risk computation (risk is not linear): belief networks.
     • Validation and expansion of the human factors framework.
  18. The Big AI Challenges
     • Verifiability and explainability.
     • Privacy: doing the right thing for the ‘consumer’; compliance with GDPR and other regulations.
     • Efficacy: how to provably show what algorithms do? How to compare against other solutions / algorithms? How to know we are protected? Preventing ‘snake oil’.
     • Socio-ethical conversation: big brother? Surveillance? Where are the boundaries of what is okay to collect and analyze?
     • Training data for both supervised algorithms and hypothesis testing.
     Snake-oil example: “The world's first dynamic 'non-factor' based quantum AI encryption software, utilizing multi-dimensional encryption technology, including time, music's infinite variability, artificial intelligence, and most notably mathematical constancies to generate entangled key pairs.”
  19. Takeaways
     • “The way algorithms are used is often dangerous. Hire experts.”
     • “We need a paradigm shift to escape the security cat-and-mouse game.”
     • “Understanding human factors can help us get ahead of attackers.”
  20. Questions? @raffaelmarty | @ForcepointSec | @ForcepointLabs | Forcepoint LLC | http://slideshare.net/zrlram
