SensePost Threat Modelling



Presentation by Dominic White at Metricon 6 in 2011.

This presentation is about the methodology behind SensePost's corporate threat modelling (CTM) tool.

Published in: Technology, Education
Speaker notes:
  • Can we improve on qualitative risk assessments to provide a better view of how the bad guys will attack you, and can we also extend the scope of what a pentest can achieve without extending the time? The CTM can also dictate a testing plan to make sure you test the “right” things, which improves accuracy; the testing plan should also allow a pentest to run faster, since not everything is tested. The danger is that if you’ve made a bad assumption, the “find stuff we didn’t think of” aspect of the pentest could be lost.
  • Other tools didn’t give us what we needed. They either didn’t provide the metrics we needed, required a view external consultants couldn’t easily get, took too long, or just weren’t great.
  • The trust associated with a location relates to how much control we have over the actions performed by users at that location. This can lead to scenarios that are non-obvious at first glance. For example, administrative access has fewer controls by default than normal user access, and is hence less trusted; specific technology to monitor and control admin access would increase this trust. The fact that administrators themselves are more trusted is represented on the user, not the location. Different applications that are part of the same system get different logical locations, and possibly different network interfaces, but not different functional interfaces (locations represent control; an interface represents a full compromise of a system). Make sure all unauthenticated locations have a high level of trust, since we assume you can’t do anything from them.
  • The trust afforded to a user should ideally be based on the ability to monitor their actions, employee screening, contractual remedies, etc. For example, super users are generally considered more trusted, but quite frequently only because of the position and seniority they occupy, not for solid, codified reasons.
  • Interface values must be consistent throughout, unless an interface exposes much less value than the entire system. For example, if a system is critical to the business and the web application used to access it exposes only a subset of functionality, compromising that interface could still yield the system’s full value; the reduced functionality can instead be represented in the likelihood and impact under risk. Even an NTP interface (especially if it runs as SYSTEM or an administrative user and has a history of buffer overflows) should mirror the value of the system.
  • Removed reference to automated system locations
  • System template, <Name> - <Description>:
    Locations  # Places interfaces and users exist
      Physical  # Physical places such as offices or data centers
      Network  # Network locations such as the internal net or DMZ
      Functional  # Authorization levels within a system, e.g. administrative, authenticated & unauthenticated access. This represents the controls in place for members of this role.
    Interfaces  # Means of accessing a system
      Physical  # Physical means such as hardware, or the console
      Network  # Electronic communication means such as RDP or NTP
      Functional  # Authorization levels within a system, e.g. administrative, authenticated & unauthenticated access. This represents the functionality & data members of this role have access to.
    Users  # Actors utilizing systems or relevant to the threat model
      Admins  # Administrators of the system; keep it specific to the system, not the supporting infrastructure
      Users  # Normal users of the system
      Support  # Who supports the system/application; are they different from the administrators?
    AA  # How authentication & authorization are handled
      Authentication Source  # Where usernames & passwords are stored, and what is checked when a user logs in
      Authorization  # How access within the app is managed, e.g. externally via AD groups, or internal to the app
    Integration  # How it connects to other systems; focus on attack paths
      Source  # How it receives data
      Destination  # Where it sends data to
    Criticality  # How important this system is to the business
      Confidentiality  # How critical keeping the data secret is
      Integrity  # How critical keeping the data accurate is
      Availability  # How critical the uptime is
  • Interlinking: (1) e.g. an Oracle DBA may be able to access the administrative functionality of an app hosted on that database; (2) push, pull, or full. If data is pushed from one system to another, the source will have access to the destination, and the destination’s functional interface will be exposed at the source’s functional location.
  • 2-tier vs. 3-tier: with 2-tier, if you have access to the app, you have DB access; with 3-tier, app access doesn’t imply DB access.
  • But, measuring threat and vulnerability is somewhat difficult, so we measure likelihood.
  • LaTeX for the risk equation:
    \left(AttackLikelihood \times \frac{(6-UserTrust)+(6-LocationTrust)}{2} \times 0.2\right) + \left(InterfaceValue \times Impact \times 0.2\right)
Transcript

    1. Metricon 6, a Usenix Workshop
       SensePost Threat Modeling
    2. Agenda
       Introduction: What is TM; Why TM; Our goals with CTM
       Methodology: Entities & Mapping; Modeling; Risk Calculation & Permutations; Analysis; Scenario Modeling
       Brief Comparisons
    3. About Me
       Dominic White (@singe)
       Work:
       Research:
       MSc in Security
       Interests in Privacy, Defensive Tech & Security Management
    4. Introduction (1): What is TM; Why TM; Design Goals
    5. What is Modeling?
       A model in science is a physical, mathematical, or logical representation of a system of entities, phenomena, or processes.
       Basically, a model is a simplified, abstract view of a complex reality:
       Breadth over depth
       Represents criteria specific to the analysis
    6. What is Threat Modeling?
       A threat model is "a systematic, non-provable, internally consistent method of modeling a system, enumerating risks against it, and prioritising them."
       Systematic; Non-Provable; Internally Consistent; System Model; Risk Enumeration; Prioritisation
    7. Escaping the Detail Trade-Off
    8. Why Threat Model? Usual Drivers of Controls
       Audit reports prioritise: financial systems, audit house priorities, auditor skills, rotation plan, known systems
       Vendor marketing prioritises: new problems, overinflated problems, product as solution
       New attacks prioritise: popular vulnerability research, complex attacks
       Individual experience prioritises: past experience, new systems, individual motives
    9. Why Threat Model? Threat modeling provides:
       All (most) information security risks: systematic enumeration of risks
       Prioritisation of risks: puts known risks in their place & compares new risks
       Justification: no appeal to expert authority
       Decision making: scenario modeling to test decisions
       Education: can involve the whole team
    10. SensePost CTM Design Goals
        Developed for a consultative role, i.e. likely not the person making the changes
        Focus is on providing decision-making information and rapid initial model creation
        Hybrid approach: a bit of all the others; some parts we just threw out
        Highly flexible initially, due to uncertainty; increasingly less so
        Detailed & aggregated results
        Includes a test plan for verification
    11. Methodology (2.1): Entities & Mappings
    12. Entity Overview
        Locations: controls
        Users: enforceable trust
        Interfaces: method of system access; asset value
        Attacks: likelihood; damage
        Tests: certainty; relevance
    13. Locations
        Represent the trust (controls) of a location
        Interfaces are exposed at locations; users are present at locations
        Three types:
        Physical: data centers, head office, remote sites
        Network: Internet, DMZ, server network, user network
        Logical / Functional (new): represent controls within authorisation levels, e.g. administrative, authenticated & unauthenticated access
    14. Users
        Enforceable trust of a user group, i.e. contractual or controlled trust, not gut feel
        Users are mapped to locations; interfaces are exposed to users via locations
        Example general groups:
        Anonymous: unidentified or unauthenticated users
        External users: suppliers, contractors
        Internal employees: application users, administrators, call center
    15. Interfaces
        Methods of interacting with a system or asset; they are things an attacker could compromise
        Expose the value of the asset
        Interfaces to the same system have a consistent value
        Value can be set from existing system criticality ratings
        Types:
        Physical: console access, hardware
        Network: Remote Desktop, SSH, NTP
        Functional (new): represent access to data & functionality within an authorisation role, e.g. administrative access, approve transaction
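The consistent-value rule above is mechanical enough to check in code. A minimal sketch in Python, not the SensePost tool's actual implementation; the function name and the 1-5 ratings are illustrative assumptions:

```python
# Illustrative check (not the tool's code): every interface mapped to a
# system should mirror that system's full value, since compromising any
# interface could yield the whole system. Ratings use a 1-5 scale.

def consistent_interface_values(system_value, interface_values):
    """Return True if every interface carries the system's full value."""
    return all(value == system_value for value in interface_values)

# A critical system (5) with correctly rated web, SSH and NTP interfaces:
print(consistent_interface_values(5, [5, 5, 5]))   # True
# An under-rated NTP interface would be flagged:
print(consistent_interface_values(5, [5, 5, 2]))   # False
```

Reduced functionality on an interface is then expressed through likelihood and impact, not by lowering the interface value.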
    16. Mapping Users to Locations
        Users are present at certain locations (many-to-many mapping)
        Both users and locations are representations of controls: "company founder in the Mission Impossible room" vs. "unknown outsider on the Internet"
        Location type mappings:
        Physical: users who can be physically present
        Network: users who can access the network
        Logical: users who have been granted, or have, authorisation
    17. Mapping Interfaces to Locations
        Interfaces are present at certain locations (many-to-many mapping)
        Constraints:
        Physical interfaces map only to physical locations, e.g. the physical server in Data Centre A
        Technical interfaces map only to network locations, e.g. Remote Desktop on the internal network
        Functional interfaces map only to functional locations, e.g. "execute trade" in the broker role
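The mapping constraints on this slide can be enforced with a simple type check. A sketch under the assumption that each interface and location carries a type tag (the names here are hypothetical, not the tool's schema); "technical" interfaces are tagged "network" to match their locations:

```python
# Sketch of the mapping constraint: an interface may only be exposed at a
# location of the same type (physical->physical, network->network,
# functional->functional).

VALID_KINDS = {"physical", "network", "functional"}

def valid_mapping(interface_kind, location_kind):
    """Return True only when interface and location types match."""
    if interface_kind not in VALID_KINDS or location_kind not in VALID_KINDS:
        raise ValueError("unknown entity kind")
    return interface_kind == location_kind

print(valid_mapping("physical", "physical"))   # server in a data centre: True
print(valid_mapping("network", "physical"))    # RDP "in a building": False
```

A tool re-write enforcing exactly this kind of design constraint is listed under Future Work.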
    18. Attacks
        An attack is performed on an interface to expose some of its value
        Likelihood is based on factors specific to the attack; it excludes the trust of users or the controls in place
        General likelihood is defined per attack, but made specific when mapped
        Popularity, ease of discovery/exploitation, prevalence (DREAD)
        Initial work into using external attack metrics:
        VERIS: best mapping, sometimes non-discrete
        CWE: too detailed, vulnerability-specific, no "abuse of privilege"
        STRIDE: not specific enough
        Impact is the worst-case scenario; it defines how much of the interface value would be affected (damage)
        Originally named "risks"
    19. Mapping Attacks to Interfaces
        Attacks are performed against interfaces (many-to-one mapping)
        Likelihood & impact are made specific per mapping
        System CIA should be considered, e.g. theft of e-mail may be more damaging to the CEO than the gardener
        Ask: "Could this attack lead to a full compromise of the system?"
        Examples: physical theft of the physical server; password bruteforce against the Outlook Web Access front-end; abuse of privilege of the administrator role
    20. Tests
        Validate permutations of threat vector combinations
        Can be any type of test that provides more information: technical test, research, policy work
        Different tests provide a different level of certainty (from proved to disproved)
        Can be granularly mapped, against a specific entity or combination of entities
    21. Methodology (2.2): Modeling (How-To, System Template, Guidelines)
    22. Modeling: A How-To
        Data gathering: collect as much information about the environment as you can: network diagrams, key system documentation, existing risk/criticality analyses, past audit reports
        Interview: ideally, find a tech generalist with a good overview, then get specific; a large company's knowledge is more distributed
        Look to validate statements across interviews; get multiple "views" on criticality
        Testing: light testing to validate claims, e.g. basic network footprinting or application use
        Passive collection: look for problems that should come out in the TM, e.g. if they have regular & damaging virus outbreaks and the TM disagrees …
    23. System Template: <Name> | <Description>
        AA: Authentication Source; Authorization
        Integration: Source; Destination
        Criticality (include overall rating, individual ratings & reasons): Confidentiality; Integrity; Availability; Possession; Authenticity; Utility
        Locations: Physical; Network; Functional (controls)
        Interfaces (include number & locations): Physical; Network; Functional (access)
        Users (include number & locations): Admins; Users; Support
    24. Process
        Identify unique entities from completed system templates and general documentation; reconcile any differences
        Identify shared infrastructure & create new systems for it
        Map users & interfaces to locations
        Linked systems: an application's functional interfaces should be mapped to its infrastructure's functional locations
        For integrated systems, map functional interfaces to locations depending on access (push vs. pull vs. full)
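The push/pull/full rule for integrated systems can be sketched as a small helper (hypothetical, not the tool's code): whichever side initiates the transfer gains access to the other side, so the accessed system's functional interface becomes exposed at the initiator's functional location.

```python
# Sketch of the push/pull/full integration rule. Returns (interface_owner,
# location_owner) pairs: the first system's functional interface becomes
# exposed at the second system's functional location.

def integration_exposures(direction, source, destination):
    if direction == "push":   # source pushes data, so source reaches destination
        return [(destination, source)]
    if direction == "pull":   # destination pulls data, so destination reaches source
        return [(source, destination)]
    if direction == "full":   # both sides can initiate
        return [(destination, source), (source, destination)]
    raise ValueError(f"unknown direction: {direction}")

# Hypothetical example: HR pushes records to Payroll, so Payroll's
# functional interface is exposed at HR's functional location.
print(integration_exposures("push", "HR", "Payroll"))   # [('Payroll', 'HR')]
```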
    25. Modeling Gotchas
        Keep everything equally specific; summarise when there is no meaningful security difference
        Don't create interfaces per system for shared infrastructure
        Don't represent risks more than once, e.g. don't map two interfaces of the same system to the same location unless they are different "paths" (administrator access implies "normal" access)
        Avoid catch-all systems such as "users' computers"; rather model them as interfaces to the relevant areas
        2-tier vs. 3-tier applications have different access from the application to the DB & vice versa
    26. Methodology (2.3): Calculations & Permutations
    27. Entity Relationship
        present at; available at; performed at; performed on; performed by
    28. Permutations (example entities)
        Administrators; Exchange MAPI; Head Office; Remote MAPI Exploit; Anonymous
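The permutation step can be sketched as a join over the mappings from the previous slides: every user present at a location, combined with every interface available there and every attack mapped to that interface, yields one permutation to score. The data layout below is an assumption for illustration, not the tool's schema.

```python
from itertools import product

# Sketch: enumerate (user, location, interface, attack) permutations from
# the user->location, interface->location and attack->interface mappings.

def enumerate_permutations(users_at, interfaces_at, attacks_on):
    """users_at / interfaces_at: {location: [names]}; attacks_on: {interface: [attacks]}."""
    result = []
    for location, users in users_at.items():
        for user, interface in product(users, interfaces_at.get(location, [])):
            for attack in attacks_on.get(interface, []):
                result.append((user, location, interface, attack))
    return result

# Using this slide's example entities:
perms = enumerate_permutations(
    {"Head Office": ["Administrators"], "Internet": ["Anonymous"]},
    {"Head Office": ["Exchange MAPI"], "Internet": ["Exchange MAPI"]},
    {"Exchange MAPI": ["Remote MAPI Exploit"]},
)
print(len(perms))   # 2: one permutation per user/location pair
```

This also shows why, as noted under Future Work, permutations favour specificity: defining more user groups for one application directly multiplies its permutation count.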
    29. Risk Equation
        Understand the concepts in relation to each other
        Discrete; individually necessary; collectively sufficient
        risk = threat x vulnerability x impact
        Disclaimer: Σ, the international sign for "stop reading here"
    30. Input Values
        The tool gives us the following inputs:
        User trust; location trust (controls); interface value; attack likelihood; attack impact
        But we have complete freedom in defining how they are mashed up
    31. Likelihood
        risk = threat x vulnerability x impact
        likelihood = threat x vulnerability
        risk = likelihood x impact
    32. Risk Equation Used
        risk = applied likelihood + value at risk
        applied likelihood = attack likelihood, reduced by user trust + location trust
        value at risk = value of the asset, reduced by the amount of the asset exposed by the attack
    33. Risk Equation (constants)
        [6 minus]: ratings are out of 5 & denote a positive trust value; we want the "distrust" value
        [multiply by 0.2]: we want the trust & impact to moderate the likelihood & value
        [divide by 2]: we take an average of user & location trust (equally weighted)
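Putting these constants together with the formula from the speaker notes, the whole calculation fits in a few lines. A sketch assuming all inputs are ratings out of 5, per the slides:

```python
# Sketch of the CTM risk equation:
# risk = (AttackLikelihood * ((6-UserTrust)+(6-LocationTrust))/2 * 0.2)
#      + (InterfaceValue * Impact * 0.2)

def risk(attack_likelihood, user_trust, location_trust, interface_value, impact):
    """All inputs are 1-5 ratings; trust is inverted (6 - x) to get distrust."""
    distrust = ((6 - user_trust) + (6 - location_trust)) / 2   # equally weighted average
    applied_likelihood = attack_likelihood * distrust * 0.2    # 0.2 moderates the scale
    value_at_risk = interface_value * impact * 0.2
    return applied_likelihood + value_at_risk

# Worst case: likely attack, untrusted user & location, full-value impact:
print(risk(5, 1, 1, 5, 5))   # 10.0
# Best case: unlikely attack, fully trusted user & location, low-value impact:
print(risk(1, 5, 5, 1, 1))   # 0.4
```

Note the additive structure: applied likelihood and value at risk contribute independently, so a low-likelihood attack on a high-value interface still scores.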
    34. Methodology (2.4): Analysis
    35. Threat Model Dashboard
        Takes every permutation & provides analysis graphs & a risk curve
        Provides three things:
        Risk curve: one view to rule them all
        Analysis graphs: slice & dice
        Detailed risk searching/pivot table: zoom
    36. Risk Curve
        The challenge was to provide a management view
        A single number loses too much context
        A frequency graph of the number of "risks" per severity level
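A risk curve of this kind is just a frequency count of permutation scores per severity band. A minimal sketch; the band edges below are illustrative assumptions, not values from the tool:

```python
from collections import Counter

# Sketch: bucket each permutation's risk score into severity bands and
# count the frequency per band, giving a "risk curve" instead of a single
# aggregate number.

def risk_curve(scores, band_edges=(2, 4, 6, 8, 10)):
    """Each score falls into the first band whose upper edge covers it."""
    curve = Counter()
    for score in scores:
        for edge in band_edges:
            if score <= edge:
                curve[edge] += 1
                break
    return curve

print(sorted(risk_curve([0.4, 1.5, 3.0, 9.9, 10.0]).items()))
# [(2, 2), (4, 1), (10, 2)]
```

Keeping the full frequency distribution is what preserves the context that a single aggregate number would lose.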
    37. Analysis Graphs
        A "digital" version of the risk curve, with the ability to show risks per entity type
        Can view per "perspective": physical, technical, functional
        Can zoom in to show only risks relating to a specific system
        Can look at "pivot" risks, i.e. attacks available to someone once they have compromised a system
    38. Analysis Graphs Example: All Perspectives, by Attack
    39. Analysis Graphs Example: All Perspectives, by Interfaces
    40. Analysis Graphs Example: All Perspectives, by Users
    41. Analysis Graphs Example: All Perspectives, by Locations
    42. Analysis Graphs Example: Technical Perspectives, by Attack
    43. Analysis Graphs Example: Functional Attacks from Active Directory
    44. Methodology (2.5): Scenario Modeling
    45. Area Under Review
    46. Risk Frequency
    47. Cumulative Risk per Location
    48. Cumulative Risk per Threat
    49. Suggested Change
    50. Resulting Risk Frequency
    51. Resulting Risk per Location
    52. Recommended Change
    53. Resulting Risk Frequency
    54. Future Work
        Tool re-write, aiming for:
        Cross-platform
        Enforcing certain design constraints, e.g. physical <-> physical mappings only
        Macro'ing time-consuming tasks
        Adding population size: permutations favour specificity, e.g. if you define multiple user groups for one application & not another, the first app has more risks
        Refining the risk equation: equal consideration of user & location trust may need refining; normalise across the physical, network & functional "views"
        Refining modeling bounding as results are tested
    55. Questions? The End