
Jim Geovedi - Machine Learning for Cybersecurity


  1. GVDJ #IDSECCONF2016: machine learning for cybersecurity
  2. USA, SOUTH KOREA, NORTH KOREA, INDONESIA
  3. security goals
      ▸ security goals
        ▸ confidentiality of information and resources
        ▸ integrity of information and resources
        ▸ availability of information and resources
      ▸ basic definitions
        ▸ threat: potential violation of a security goal
        ▸ security: protection from intentional threats
        ▸ attack: intentional violation of a security goal
  4. security mechanisms
      ▸ security policies and mechanisms
        ▸ policy: statement of what is and what is not allowed
        ▸ mechanism: method or tool enforcing a security policy
        ▸ security is a process, not a product!
      ▸ strategies for security mechanisms
        ▸ prevention of attacks, e.g. encryption
        ▸ detection of attacks, e.g. virus scanners
        ▸ analysis of attacks, e.g. forensics
  5. prevention is a hard task
      ▸ continuous discovery of vulnerabilities
      ▸ insecure software and hardware
      ▸ developer unawareness
      ▸ examples: "goto fail; goto fail;" (February 2014), Heartbleed (April 2014), Shellshock (September 2014)
  6. attacks against services
      ▸ numerous security breaches at popular web services
      ▸ identities often include real names, addresses, emails, passwords, etc.
      ▸ ‘;--have i been pwned? statistics: 142 pwned websites, 1,444,567,928 pwned accounts, 39,842 pastes, 31,108,929 paste accounts
  7. imbalance of the security cycle
      ▸ increasing imbalance of the security cycle
        ▸ increasing number of vulnerabilities
        ▸ high number of novel attacks
        ▸ high diversity of malicious software
      ▸ bottleneck: the human analyst in the loop
        ▸ manual discovery of vulnerabilities
        ▸ manual generation of attack signatures
        ▸ manual analysis of malicious software
  8. conventional detection
      ▸ conventional attack detection using signatures
        ▸ ineffective against novel and unknown attacks
        ▸ inherent delay until new signatures become available
        ▸ analysis obstructed by polymorphism and obfuscation
      ▸ example: the Nimda worm carried in an HTTP request (GET /scripts/..%c1%9c../system32/cmd.exe), detected by matching the ..%c1%9c.. signature in the TCP payload
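To make the signature approach concrete, here is a minimal sketch of matching a payload against a known-bad pattern; the single Nimda-style rule and the sample request simply mirror the example on the slide and are not a real IDS ruleset.

```python
# A minimal sketch of signature-based detection: scan a payload against known-bad
# byte patterns. The one rule below mirrors the slide's Nimda example; real rulesets
# (e.g. Snort) are far larger and also deal with evasion and stream reassembly.
import re

SIGNATURES = {
    # Unicode directory-traversal sequence used in Nimda-style requests.
    "NIMDA_TRAVERSAL": re.compile(rb"\.\.%c1%9c\.\.", re.IGNORECASE),
}

def match_signatures(payload: bytes) -> list[str]:
    """Return the names of every signature that matches the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(payload)]

request = b"GET /scripts/..%c1%9c../system32/cmd.exe HTTP/1.0\r\n\r\n"
print(match_signatures(request))   # ['NIMDA_TRAVERSAL']
```

Anything such a pattern does not cover, a novel or obfuscated attack, passes through silently, which is exactly the limitation the following slides address.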
  9. intelligent defence
      ▸ construction of intelligent security systems
        ▸ combining computer security and machine learning
        ▸ minimal human intervention in prevention, detection, and analysis
      ▸ challenges in practice
        ▸ effectiveness, efficiency, and robustness
        ▸ transparency and controllability
  10. machine learning for cybersecurity
  11. MACHINE LEARNING + HUMAN INTUITION → PREDICTION PLATFORM
  12. attack mitigation issues
      ▸ supervised
        ▸ rules driven (limited by experience and expertise)
        ▸ high rates of undetected attacks (false negatives)
        ▸ delayed response (between detection and prevention)
      ▸ unsupervised
        ▸ statistically driven (improved detection of new attacks)
        ▸ substantial investigative effort (false positives)
        ▸ alarm fatigue and distrust (reversion to supervised methods)
  13. implementation challenges
      ▸ lack of data: limited or no history of previous attacks (required by supervised learning models)
      ▸ evolving attacks: attackers constantly change their behaviours, making current models obsolete
      ▸ limited resources: relying on security analysts to investigate the attacks can be costly and time consuming
  14. components (diagram): a threat prediction platform linking raw data, feature events, modelling, contextual modelling, the model, predictions, analysts, and actions
  15. components
      ▸ big data processing system: quantifying features from raw data
      ▸ outlier detection system: learning a descriptive model from those features through an unsupervised learning process
      ▸ feedback mechanism and continuous learning: incorporating analyst input
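Read as an architecture, the three components compose into a simple pipeline. The skeleton below is a hypothetical sketch of that composition (all class, field, and method names are invented for illustration), not the interface of any particular product.

```python
# Hypothetical skeleton of the three components and how they hand data to each other.
from dataclasses import dataclass, field
from typing import Callable, Sequence

@dataclass
class ThreatPredictionPlatform:
    # 1. big data processing system: raw events -> per-entity feature vectors
    extract_features: Callable[[Sequence[dict]], Sequence[Sequence[float]]]
    # 2. outlier detection system: feature vectors -> outlier scores
    score_outliers: Callable[[Sequence[Sequence[float]]], Sequence[float]]
    # 3. feedback mechanism: analyst verdicts accumulated for continuous learning
    analyst_feedback: list = field(default_factory=list)

    def run(self, raw_events: Sequence[dict], top_k: int = 10) -> list[int]:
        features = self.extract_features(raw_events)
        scores = self.score_outliers(features)
        ranked = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
        return ranked[:top_k]        # indices handed to analysts for investigation

    def record_verdict(self, index: int, is_malicious: bool) -> None:
        self.analyst_feedback.append((index, is_malicious))
```

The slides that follow fill in the individual callables: feature extraction from aggregated activity, unsupervised outlier scoring, and retraining from analyst verdicts.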
  16. data characteristics
  17. data characteristics 0.1: data sources
      ▸ common sources: logs from networking devices and applications
        ▸ routers, switches, firewalls, IDS, IPS, and load balancers
        ▸ web, database, and micro services
        ▸ frontend and backend applications
      ▸ delivered in realtime from widely distributed systems
  18. data characteristics 0.2: data dimensions and unique entities
      ▸ volume of raw data: measured in GB/TB or in number of lines (≥ tens of millions on a daily basis)
      ▸ unique entities specific to behavioural analytics: IP addresses, users, sessions, etc.
  19. data characteristics 0.3: malicious activity prevalence
      ▸ under normal circumstances, malicious activity is extremely rare (generally ≤ 0.1%)
      ▸ the result is extreme class imbalance for supervised learning
      ▸ increasing the difficulty of the detection process
      ▸ unknown and/or unreported attacks introduce noise into the data
      ▸ attack vectors can take a wide variety of shapes
  20. big data analytics (diagram): raw data for an entity (e.g. user JIM) aggregated over daily, weekly, and monthly windows into features such as ISNEWUSER?, LASTCHANGEDPASSWORD, LASTIPADDRESS, LASTSESSIONLENGTH, and NUMBEROFFAILEDLOGIN
  21. big data analytics 0.1: behavioural signatures
      ▸ quantifying signatures (often comprising a series of attack steps) from raw data
      ▸ the quantitative values can be defined by security analysts
      ▸ extracting features on a per-entity and per-time-segment basis
  22. big data analytics 0.2: design requirements
      ▸ capable of analysing ≥ 10 million entities on a daily basis
      ▸ capable of updating and retrieving signatures of active entities, on demand and/or in realtime
  23. big data analytics 0.3.1 process: activity tracking
      ▸ absorbing the log stream: identifying entities and updating the corresponding records
      ▸ within short temporal windows: 30 minutes, 1 hour, 12 hours, or 24 hours
      ▸ focus on efficient retrieval for feature computation
  24. big data analytics 0.3.2 process: activity aggregation
      ▸ computing behavioural features over an interval of time
      ▸ retrieving all activity records within the given interval
      ▸ aggregating smaller time units (minutes, hours, days, weeks) as the feature demands
  25. algorithm selection
  26. algorithm selection
  27. outlier detection (illustration)
  28. outlier detection
      ▸ matrix decomposition-based outlier analysis
      ▸ replicator neural networks
      ▸ density-based outlier analysis
      ▸ score interpretation
        ▸ transforming scores into probabilities
      ▸ detection ensembles
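As one concrete reading of the matrix-decomposition bullet, the sketch below scores rows by their PCA reconstruction error and turns the scores into pseudo-probabilities with a rank transform; the synthetic data and the choice of two components are assumptions for illustration, not the detectors used in the talk.

```python
# Matrix-decomposition-based outlier scoring: rows that a low-rank PCA model cannot
# reconstruct well get high scores; a rank transform maps scores into [0, 1].
import numpy as np

def pca_outlier_scores(X: np.ndarray, n_components: int = 2) -> np.ndarray:
    """Score each row in [0, 1] by how badly a low-rank PCA model reconstructs it."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # principal directions in Vt
    V = Vt[:n_components].T
    residual = Xc - (Xc @ V) @ V.T                      # the part PCA cannot explain
    errors = np.linalg.norm(residual, axis=1)
    ranks = errors.argsort().argsort()                  # empirical-CDF style transform
    return (ranks + 1) / len(errors)

# Hypothetical data: normal rows lie near a 2-D subspace, five rows do not.
rng = np.random.default_rng(0)
normal = (rng.normal(scale=5.0, size=(500, 2)) @ rng.normal(size=(2, 10))
          + rng.normal(scale=0.1, size=(500, 10)))
outliers = rng.normal(scale=3.0, size=(5, 10))
scores = pca_outlier_scores(np.vstack([normal, outliers]))
print("most anomalous rows:", np.argsort(scores)[-5:])  # expect indices 500..504
```

The same rank transform makes scores from a replicator network or a density-based detector comparable, which is what a detection ensemble needs before combining them.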
  29. continuous learning
      ▸ overcomes limited analyst bandwidth
      ▸ overcomes weaknesses of unsupervised learning
      ▸ actively adapts and synthesises new models
      ▸ cycle: predict → act → train
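The predict → act → train cycle can be sketched as a small active-learning loop: rank entities by suspicion, send the top of the list to an analyst, and retrain on the verdicts. Everything here, the budget, the model choice, the `analyst_label` callback, and the synthetic demo, is an assumption for illustration rather than the talk's implementation.

```python
# A minimal predict -> act -> train loop combining unsupervised scores and analyst labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def continuous_learning(X, outlier_scores, analyst_label, rounds=3, budget=10):
    labelled_idx, labels = [], []
    model = None
    for _ in range(rounds):
        # PREDICT: rank entities by the supervised model if we have one,
        # otherwise fall back to the unsupervised outlier scores.
        suspicion = model.predict_proba(X)[:, 1] if model is not None else outlier_scores
        # ACT: send the top-`budget` unlabelled entities to the analyst.
        candidates = [i for i in np.argsort(suspicion)[::-1] if i not in labelled_idx]
        for i in candidates[:budget]:
            labelled_idx.append(i)
            labels.append(analyst_label(i))
        # TRAIN: refit on everything labelled so far (needs both classes present).
        if len(set(labels)) > 1:
            model = RandomForestClassifier(n_estimators=100, random_state=0)
            model.fit(X[labelled_idx], labels)
    return model

# Hypothetical demo: the last ten rows are malicious and the analyst is an oracle.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(200, 5)), rng.normal(loc=4.0, size=(10, 5))])
truth = np.array([0] * 200 + [1] * 10)
crude_scores = np.abs(X).sum(axis=1)           # stand-in unsupervised score
model = continuous_learning(X, crude_scores, analyst_label=lambda i: truth[i])
```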
  30. example: open network insight (leveraging insights from flow and packet analysis)
  31. example: open network insight (advantages)
  32. example: open network insight (how it works)
  33. example: entrada (network data analytics platform)
  34. summary
      ▸ current problems in security
        ▸ automation of attacks
        ▸ massive amounts of novel malicious code
        ▸ defences involving manual actions (often ineffective)
      ▸ machine learning in security
        ▸ adaptive defences using learning algorithms
        ▸ automatic detection and analysis of threats
  35. QUESTIONS?
