
SensePost Threat Modelling

Presentation by Dominic White at Metricon 6 in 2011.

This presentation covers the methodology behind SensePost's corporate threat modeling (CTM) tool.


  1. SensePost Threat Modeling
     Metricon 6, a USENIX Workshop
  2. Agenda
     Introduction
     What is TM
     Why TM
     Our goals with CTM
     Methodology
     Entities & Mapping
     Modeling
     Risk Calculation & Permutations
     Analysis
     Scenario Modeling
     Brief Comparisons
  3. About Me
     Dominic White
     @singe
     http://singe.za.net/
     Work:
     Research:
     MSc in Security
     Interests in Privacy, Defensive Tech & Security Management
  4. Introduction (Part 1)
     What is TM
     Why TM
     Design Goals
  5. What is Modeling?
     A model in science is a physical, mathematical, or logical representation of a system of entities, phenomena, or processes.
     Basically, a model is a simplified, abstract view of a complex reality
     Breadth over depth
     Represents criteria specific to the analysis
  6. What is Threat Modeling?
     A threat model is:
     "A systematic, non-provable, internally consistent method of modeling a system, enumerating risks against it, and prioritising them."
     Systematic
     Non-Provable
     Internally Consistent
     System Model
     Risk Enumeration
     Prioritisation
  7. Escaping the Detail Trade-Off
  8. Why Threat Model?
     Usual drivers of controls:
     Audit reports
       Prioritises: financial systems, audit house priorities, auditor skills, rotation plan, known systems
     Vendor marketing
       Prioritises: new problems, overinflated problems, product as solution
     New attacks
       Prioritises: popular vulnerability research, complex attacks
     Individual experience
       Prioritises: past experience, new systems, individual motives
  9. Why Threat Model?
     Threat modeling provides:
     All (most) information security risks
       systematic enumeration of risks
     Prioritisation of risks
       puts known risks in their place & compares new risks
     Justification
       no appeal to expert authority
     Decision making
       scenario modeling to test decisions
     Education
       can involve the whole team
  10. SensePost CTM Design Goals
      Developed for a consultative role
        i.e. likely not the person making the changes
      Focus is on:
      Providing decision-making information
      Rapid initial model creation
      Hybrid approach
        A bit of all the others; some parts we just threw out
      Highly flexible
        Initially, due to uncertainty; increasingly less so
      Detailed & aggregated results
      Includes a test plan for verification
  11. Methodology (Part 2.1): Entities & Mappings
  12. Entity Overview
      Locations: controls
      Users: enforceable trust
      Interfaces: method of system access, asset value
      Attacks: likelihood, damage
      Tests: certainty, relevance
  13. Locations
      Represent the trust (controls) of a location
      Interfaces are exposed at locations
      Users are present at locations
      Three types:
      Physical: data centres, head office, remote sites
      Network: Internet, DMZ, server network, user network
      Logical / Functional (new): represent controls within authorisation levels, e.g. administrative, authenticated, unauthenticated access
  14. Users
      Enforceable trust of a user group
        i.e. contractual or controlled trust, not gut feel
      Users are mapped to locations
      Interfaces are exposed to users via locations
      Example general groups:
      Anonymous: unidentified or unauthenticated users
      External Users: suppliers, contractors
      Internal Employees: application users, administrators, call centre
  15. Interfaces
      Methods of interacting with a system or asset
      They are things an attacker could compromise
      Expose the value of the asset
        Interfaces to the same system have a consistent value
        Value can be set to existing system criticality ratings
      Types:
      Physical: console access, hardware
      Network: Remote Desktop, SSH, NTP
      Functional (new): represent access to data & functionality within an authorisation role, e.g. administrative access, approve transaction
  16. Mapping Users to Locations
      Users are present at certain locations
      Many-to-many mapping
      Both are a representation of controls
        "Company founder in the Mission Impossible room"
        vs. "Unknown outsider on the Internet"
      Location type mappings:
      Physical: users who can be physically present
      Network: users who can access the network
      Logical: users who have been granted, or have, authorisation
  17. Mapping Interfaces to Locations
      Interfaces are present at certain locations
      Many-to-many mapping
      Constraints:
      Physical interfaces map only to physical locations
        e.g. Physical Server in Data Centre A
      Technical interfaces map only to network locations
        e.g. Remote Desktop in Internal Network
      Functional interfaces map only to functional locations
        e.g. Execute Trade in Broker Role
  18. Attacks
      An attack is performed on an interface to expose some of its value
      Likelihood is based on factors specific to the attack
        Excludes trust of users, or controls in place
        General likelihood defined per attack, but made specific when mapped
        e.g. popularity, ease of discovery/exploitation, prevalence (DREAD)
      Initial work into using external attack metrics:
        VERIS: best mapping, sometimes non-discrete
        CWE: too detailed, vulnerability-specific, no "abuse of privilege"
        STRIDE: not specific enough
      Impact is the worst-case scenario
        Defines how much of the interface value would be affected (damage)
      Originally named "risks"
  19. Mapping Attacks to Interfaces
      Attacks are performed against interfaces
      Many-to-one mapping
      Likelihood & impact made specific per mapping
        System CIA should be considered, e.g. theft of e-mail may be more damaging to the CEO than to the gardener
        "Could this attack lead to a full compromise of the system?"
      Examples:
      Physical theft of the Physical Server
      Password brute-force against the Outlook Web Access web front-end
      Abuse of Privilege of the Administrator Role
  20. Tests
      Validate permutations of threat vector combinations
      Can be any type of test that provides more information
        Technical test, research, policy work
      Different tests provide a different level of certainty
        Proved ↔ Disproved
      Can be granularly mapped
        Against a specific entity or combination of entities
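The entities and mapping constraints above can be sketched as a small data model. This is an illustrative sketch only, not the SensePost tool's actual code; all class and field names are my own:

```python
from dataclasses import dataclass, field

# The slides constrain mappings to be type-consistent
# (physical interfaces only at physical locations, etc.).
KINDS = {"physical", "network", "functional"}

@dataclass
class Location:
    name: str
    kind: str   # one of KINDS
    trust: int  # control rating, 1 (weak) to 5 (strong)

@dataclass
class User:
    name: str
    trust: int  # enforceable trust, 1-5
    locations: list = field(default_factory=list)  # many-to-many

@dataclass
class Interface:
    name: str
    kind: str
    value: int  # asset value exposed, 1-5
    locations: list = field(default_factory=list)  # many-to-many

    def map_to(self, location: Location) -> None:
        # Enforce the slides' constraint: interface and location
        # types must match (physical <-> physical, and so on).
        if location.kind != self.kind:
            raise ValueError(
                f"cannot map {self.kind} interface to {location.kind} location")
        self.locations.append(location)

# Example: a network interface maps to a network location only.
dmz = Location("DMZ", "network", trust=3)
owa = Interface("Outlook Web Access", "network", value=4)
owa.map_to(dmz)  # allowed: kinds match
```

Attempting `owa.map_to(Location("Data Centre A", "physical", trust=5))` would raise, which is the "physical interfaces only mapped to physical locations" rule from slide 17 made mechanical.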
  21. Methodology (Part 2.2): Modeling
      How-to
      System Template
      Guidelines
  22. Modeling: A How-To
      Data gathering
        Collect as much information about the environment as you can: network diagrams, key system documentation, existing risk/criticality analyses, past audit reports
      Interview
        Ideally, find a tech generalist with a good overview, then get specific; a large company's knowledge is more distributed
        Look to validate statements across interviews
        Get multiple "views" on criticality
      Testing
        Light testing to validate claims, e.g. basic network footprinting or application use
      Passive collection
        Look for problems that should come out in the TM, e.g. if they have regular & damaging virus outbreaks and the TM disagrees …
  23. System Template: <Name> | <Description>
      AA
        Authentication Source
        Authorization
      Integration
        Source
        Destination
      Criticality
        Include overall rating, individual ratings & reasons
        Confidentiality, Integrity, Availability, Possession, Authenticity, Utility
      Locations
        Physical, Network, Functional (controls)
      Interfaces
        Include number & locations
        Physical, Network, Functional (access)
      Users
        Include number & locations
        Admins, Users, Support
  24. Process
      Identify unique entities from completed system templates and general documentation; reconcile if there are differences
      Identify shared infrastructure & create new systems for it
      Map users & interfaces to locations
      Linked systems:
        An application's functional interfaces should be mapped to its infrastructure's functional locations
        For integrated systems, map functional interfaces to locations depending on access (push vs. pull vs. full)
  25. Modeling Gotchas
      Keep everything equally specific
        Summarise when there is no meaningful security difference
        Don't create interfaces per system for shared infrastructure
      Don't represent risks more than once
        e.g. don't map two interfaces of the same system to the same location unless they are different "paths", e.g. administrator access implies "normal" access
      Avoid catch-all systems such as "users' computers"; rather model them as interfaces to the relevant areas
      2-tier vs. 3-tier applications have different access from the application to the DB & vice versa
  26. Methodology (Part 2.3): Calculations & Permutations
  27. Entity Relationship
      (diagram: relationship labels between entities — "present at", "available at", "performed at", "performed on", "performed by")
  28. Permutations
      (diagram: example permutations combining users (Administrators, Anonymous), a location (Head Office), interfaces (Exchange MAPI, Remote MAPI) and an attack (Exploit))
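Given the user-to-location, interface-to-location and attack-to-interface mappings, the tool enumerates every reachable (user, location, interface, attack) combination. A minimal sketch of that enumeration, using hypothetical mappings based on the slide's example entities:

```python
from itertools import product

# Hypothetical mappings echoing the slide's example entities.
users_at = {
    "Head Office": ["Administrators"],
    "Internet": ["Anonymous"],
}
interfaces_at = {
    "Head Office": ["Exchange MAPI"],
    "Internet": ["Remote MAPI"],
}
attacks_on = {
    "Exchange MAPI": ["Exploit"],
    "Remote MAPI": ["Exploit"],
}

def permutations():
    """Yield every (user, location, interface, attack) combination
    reachable through the location mappings."""
    for loc in users_at:
        for user, iface in product(users_at[loc], interfaces_at.get(loc, [])):
            for attack in attacks_on.get(iface, []):
                yield (user, loc, iface, attack)

for perm in permutations():
    print(perm)
```

Each yielded tuple is one threat vector to be scored by the risk equation; note how adding a user group or interface mapping multiplies the permutation count, which is the specificity bias flagged under Future Work.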
  29. Risk Equation
      Understand concepts in relation to each other
      Discrete
      Individually necessary
      Collectively sufficient
      risk = threat x vulnerability x impact
      Disclaimer: Σ, the international sign for "stop reading here"
  30. Input Values
      The tool gives us the following inputs:
      User Trust
      Location Trust (controls)
      Interface Value
      Attack Likelihood
      Attack Impact
      But complete freedom in defining how they are mashed up
  31. Likelihood
      risk = threat x vulnerability x impact
      likelihood = threat x vulnerability
      risk = likelihood x impact
  32. Risk Equation Used
      risk = applied likelihood + value at risk
      applied likelihood = attack likelihood (reduced by) user trust + location trust
      value at risk = value of asset (reduced by) amount of asset exposed by attack
  33. Risk Equation
      [6 minus]: ratings are out of 5 & denote a positive trust value; we want the "distrust" value
      [multiply by 0.2]: we want the trust & impact to moderate the likelihood & value
      [divide by 2]: we take an average of user & location trust (equally weighted)
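One way to read those three annotations together with slide 32 (my interpretation of the deck, not the tool's actual code) is:

```python
def applied_likelihood(attack_likelihood, user_trust, location_trust):
    """Attack likelihood moderated by user and location trust.

    All ratings are 1-5. (6 - trust) converts a positive trust rating
    into a "distrust" value; user and location distrust are averaged
    (equally weighted, hence the divide by 2) and scaled by 0.2 so that
    they moderate, rather than dominate, the likelihood.
    """
    distrust = ((6 - user_trust) + (6 - location_trust)) / 2
    return attack_likelihood * distrust * 0.2

def value_at_risk(interface_value, attack_impact):
    """Asset value moderated by how much of it the attack exposes."""
    return interface_value * attack_impact * 0.2

def risk(attack_likelihood, user_trust, location_trust,
         interface_value, attack_impact):
    # Per slide 32: risk = applied likelihood + value at risk.
    return (applied_likelihood(attack_likelihood, user_trust, location_trust)
            + value_at_risk(interface_value, attack_impact))

# Worst case: a likely attack by an untrusted user at an uncontrolled
# location against a critical, fully exposed asset.
worst = risk(5, 1, 1, 5, 5)
```

Under this reading scores run from near 0 up to 10, which matches the severity range shown on the risk curve slides; whether the tool weights user and location trust exactly this way is the open question flagged under Future Work.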
  34. Methodology (Part 2.4): Analysis
  35. Threat Model Dashboard
      Takes every permutation & provides analysis graphs & a risk curve
      Provides three things:
      Risk Curve: one view to rule them all
      Analysis Graphs: slice & dice
      Detailed risk searching / pivot table: zoom
  36. Risk Curve
      Challenge was to provide a management view
      A single number loses too much context
      Frequency graph of the number of "risks" per severity level
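The risk curve is just a frequency count of permutation scores per severity bucket. A minimal sketch (illustrative only; the bucket size is an assumption):

```python
from collections import Counter

def risk_curve(risk_scores, bucket_size=1.0):
    """Frequency of risks per severity bucket: the "risk curve".

    Keeps the whole distribution instead of collapsing it to a single
    number, so the shape of the risk landscape stays visible.
    """
    return Counter(int(score // bucket_size) for score in risk_scores)

# e.g. three permutations scoring 1.2, 1.8 and 3.5 out of 10
# produce two risks in bucket 1 and one in bucket 3.
curve = risk_curve([1.2, 1.8, 3.5])
```

Plotting bucket against count gives the frequency graph the slide describes; comparing two such curves before and after a proposed change is what the scenario-modeling slides later show.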
  37. Analysis Graphs
      "Digital" version of the risk curve, with the ability to show risks per entity type
      Can view per "perspective": physical, technical, functional
      Can zoom in to show only risks relating to a specific system
      Can look at "pivot" risks, i.e. attacks available to someone once they have compromised a system
  38. Analysis Graphs Example: All Perspectives, by Attack
  39. Analysis Graphs Example: All Perspectives, by Interfaces
  40. Analysis Graphs Example: All Perspectives, by Users
  41. Analysis Graphs Example: All Perspectives, by Locations
  42. Analysis Graphs Example: Technical Perspectives, by Attack
  43. Analysis Graphs Example: Functional attacks from Active Directory
  44. Methodology (Part 2.5): Scenario Modeling
  45. Area Under Review
  46. Risk Frequency
  47. Cumulative Risk per Location
  48. Cumulative Risk per Threat
  49. Suggested Change
  50. Resulting Risk Frequency
  51. Resulting Risk per Location
  52. Recommended Change
  53. Resulting Risk Frequency
  54. Future Work
      Tool re-write, aiming for:
        Cross-platform support
        Enforcing certain design constraints, e.g. physical <-> physical mappings only
        Macro'ing time-consuming tasks
      Adding population size
        Permutations favour specificity, e.g. if you define multiple user groups for one application & not another, the first app has more risks
      Refining the risk equation
        Equal consideration of user & location trust may need refining
        Normalise across physical, network & functional "views"
      Refining modeling bounding as results are tested
  55. The End
      Questions?
      http://www.sensepost.com/blog/
