1. Application security metrics: from the organization on down to the vulnerabilities
   Chris Wysopal, CTO, Veracode, [email_address]
   November 13, 2009, 11:30am-12:30pm
2. Agenda
   - Why use metrics?
   - Challenges & goals for application security metrics
   - Enumerations
   - Organizational metrics
   - Testing metrics
   - Application metrics
   - WASC Web Application Security Statistics Project 2008
   - Future plans
3. "To measure is to know." (James Clerk Maxwell, 1831-1879)
   "Measurement motivates." (John Kenneth Galbraith, 1908-2006)
4. Metrics do matter
   - Metrics quantify the otherwise unquantifiable
   - Metrics can show trends, and trends matter more than individual measurements do
   - Metrics can show whether we are doing a good or bad job
   - Metrics can show if you have no idea where you are
   - Metrics establish where "you are here" really is
   - Metrics build bridges to managers
   - Metrics allow cross-sectional comparisons
   - Metrics set targets
   - Metrics let you benchmark yourself against the opposition
   - Metrics create curiosity
   Source: Andy Jaquith, Yankee Group, Metricon 2.0
5. Metrics don't matter
   - It is too easy to count things for no purpose other than to count them
   - You cannot measure security, so stop
   - The following is all that matters, and you can't map security metrics to it:
     - Maintenance of availability
     - Preservation of wealth
     - Limitation on corporate liability
     - Compliance
     - Shepherding the corporate brand
   - The cost of measurement is not worth the benefit
   Source: Mike Rothman, Security Incite, Metricon 2.0
6. Bad metrics are worse than no metrics
7. Security metrics can drive executive decision making
   - How secure am I?
   - Am I better off than this time last year?
   - Am I spending the right amount of $$?
   - How do I compare to my peers?
   - What risk transfer options do I have?
   Source: Measuring Security Tutorial, Dan Geer
8. Goals of Application Security Metrics
   - Provide quantifiable information to support enterprise risk management and risk-based decision making
   - Articulate progress towards goals and objectives
   - Provide a repeatable, quantifiable way to assess, compare, and track improvements in assurance
   - Focus activities on risk mitigation in order of priority and exploitability
   - Facilitate adoption and improvement of secure software design and development processes
   - Provide an objective means of comparing and benchmarking projects, divisions, organizations, and vendor products
   Source: Practical Measurement Framework for Software Assurance and Information Security, DHS SwA Measurement Working Group
9. Use Enumerations
   - Common Vulnerabilities and Exposures (CVE)
   - Common Weakness Enumeration (CWE)
   - Common Attack Pattern Enumeration and Classification (CAPEC)
   Enumerations help identify specific software-related items that can be counted, aggregated, and evaluated over time.
10. Organizational Metrics
   - Percentage of application inventory developed with an SDLC (which version of the SDLC?)
   - Business criticality of each application in the inventory
   - Percentage of application inventory tested for security (what level of testing?)
   - Percentage of application inventory remediated and meeting assurance requirements
   - Roll-up of testing results
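The inventory percentages above are simple roll-ups once each application carries a few status flags. A minimal sketch, assuming a hypothetical inventory record whose field names are illustrative and not from the talk:

```python
# Hypothetical sketch: rolling up the organizational metrics above from an
# application inventory. Record fields and example data are invented.
from dataclasses import dataclass

@dataclass
class AppRecord:
    name: str
    criticality: str        # business criticality, e.g. "high", "medium", "low"
    uses_sdlc: bool         # developed under a secure SDLC?
    security_tested: bool   # any security testing performed?
    meets_assurance: bool   # remediated and meeting assurance requirements?

def pct(flag, inventory):
    """Percentage of the inventory for which `flag` is true."""
    return 100.0 * sum(1 for a in inventory if flag(a)) / len(inventory)

inventory = [
    AppRecord("payroll", "high", True, True, True),
    AppRecord("intranet wiki", "low", False, False, False),
    AppRecord("storefront", "high", True, True, False),
    AppRecord("reporting", "medium", True, False, False),
]

print(f"Developed with SDLC: {pct(lambda a: a.uses_sdlc, inventory):.0f}%")      # 75%
print(f"Security tested:     {pct(lambda a: a.security_tested, inventory):.0f}%") # 50%
print(f"Meeting assurance:   {pct(lambda a: a.meets_assurance, inventory):.0f}%") # 25%
```

Tracking these percentages per quarter gives the trend lines the earlier slides argue matter more than any single measurement.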
11. Organizational Metrics
   - Cost to fix defects at different points in the software lifecycle
   - Cost of data breaches related to software vulnerabilities
12. Testing Metrics
   - Number of threats identified in the threat model
   - Size of the attack surface identified
   - Percentage code coverage (static and dynamic)
   - Coverage of defect categories (CWE)
   - Coverage of attack pattern categories (CAPEC)
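The "coverage of defect categories" metric above is a set comparison: of the CWE categories in scope, what fraction did testing actually exercise? An illustrative sketch, with made-up CWE IDs for both sets:

```python
# Illustrative: CWE category coverage of a test effort. Both sets are
# invented examples, not data from the talk.
TARGET_CWES = {79, 89, 120, 22, 352, 434}   # categories in scope for this app
TESTED_CWES = {79, 89, 22, 352}             # categories our testing covered

covered = TARGET_CWES & TESTED_CWES
coverage_pct = 100.0 * len(covered) / len(TARGET_CWES)
missing = sorted(TARGET_CWES - TESTED_CWES)  # categories still untested

print(f"CWE category coverage: {coverage_pct:.0f}%")  # 4 of 6 categories
print(f"Uncovered categories: {missing}")
```

The same shape works for CAPEC attack pattern coverage by swapping the enumeration.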
13. SANS Top 25 Mapped to Application Security Methods
   Source: Microsoft, 2009
14. Weakness Class Prevalence, based on 2008 CVE data
   4,855 total flaws tracked by CVE in 2008
15. Basic Metrics: Defect counts
   - Design and implementation defects
     - CWE identifier
     - CVSS score
     - Severity
     - Likelihood of exploit
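Once each finding carries a CWE identifier and a CVSS score, the basic counts fall out of a simple aggregation. A minimal sketch, using the common CVSS v2 severity bands (high >= 7.0, medium >= 4.0); the findings themselves are invented:

```python
# Sketch: bucketing defect counts by CWE and by CVSS-derived severity.
# Findings are invented examples; thresholds follow the usual CVSS v2 bands.
from collections import Counter

findings = [
    {"cwe": 89, "cvss": 9.0},   # SQL injection
    {"cwe": 79, "cvss": 6.8},   # cross-site scripting
    {"cwe": 79, "cvss": 4.3},   # cross-site scripting
    {"cwe": 200, "cvss": 2.6},  # information exposure
]

def severity(score):
    """Map a CVSS v2 base score to a severity band."""
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

by_cwe = Counter(f["cwe"] for f in findings)
by_severity = Counter(severity(f["cvss"]) for f in findings)

print(dict(by_cwe))       # {89: 1, 79: 2, 200: 1}
print(dict(by_severity))  # {'high': 1, 'medium': 2, 'low': 1}
```

These per-CWE and per-severity counts are exactly what rolls up into the organizational metrics earlier in the deck.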
16. Automated Code Analysis Techniques
   - Static analysis (white box testing): similar to a line-by-line code review. The benefit is complete coverage of the entire source or binary; the downside is that a perfect analysis is computationally impossible.
     - Static source: analyze the source code
     - Static binary: analyze the binary executable
     - Source vs. binary: you don't always have all the source code, and you may not want to part with your source code to get a 3rd-party analysis
   - Dynamic analysis (black box testing): run-time analysis, more like traditional testing. The benefit is perfect modeling of a particular input, so you can show exploitability; the downside is that you cannot create all inputs in reasonable time.
     - Automated dynamic testing (also known as penetration testing) using tools
     - Manual penetration testing (with or without the use of tools)
   - Both create lists of defects that can be labeled with CWE, CVSS, and exploitability
17. Manual Analysis
   - Manual penetration testing: can discover some issues that cannot be determined automatically, because a human can understand issues related to business logic or design
   - Manual code review: typically focused only on specific high-risk areas of code
   - Manual design review: can determine some vulnerabilities early in the design process, before the program is even built
   - Threat modeling
18. WASC Web Application Security Statistics Project 2008
   Purpose:
   - A collaborative, industry-wide effort to pool sanitized website vulnerability data and gain a better understanding of the web application vulnerability landscape
   - Ascertain which classes of attacks are the most prevalent, regardless of the methodology used to identify them; in effect, a MITRE CVE-style project for custom web applications
   Goals:
   - Identify the prevalence and probability of different vulnerability classes
   - Compare testing methodologies against the types of vulnerabilities they are likely to identify
19. Project Team
   Project leader:
   - Sergey Gordeychik
   Project contributors:
   - Sergey Gordeychik, Dmitry Evteev (Positive Technologies)
   - Chris Wysopal, Chris Eng (Veracode)
   - Jeremiah Grossman (WhiteHat Security)
   - Mandeep Khera (Cenzic)
   - Shreeraj Shah (Blueinfy)
   - Matt Lantinga (HP Application Security Center)
   - Lawson Lee (dns; used WebInspect)
   - Campbell Murray (Encription Limited)
20. Summary
   - 12,186 web applications with 97,554 detected vulnerabilities
   - More than 13%* of all reviewed sites can be compromised completely automatically
   - About 49% of web applications contain high-risk vulnerabilities detectable by scanning
   - Manual and automated white-box assessment detects these high-risk vulnerabilities with a probability of up to 80-96%
   - 99% of web applications are not compliant with the PCI DSS standard
   * Web applications with Brute Force Attack, Buffer Overflow, OS Commanding, Path Traversal, Remote File Inclusion, SSI Injection, Session Fixation, SQL Injection, Insufficient Authentication, or Insufficient Authorization vulnerabilities detected by automatic scanning.
21. Compared to the 2007 WASC Statistics Project
   - The number of sites with SQL Injection fell by 13%
   - The number of sites with Cross-Site Scripting fell by 20%
   - The number of sites with some form of Information Leakage rose by 24%
   - The probability of compromising a host automatically rose from 7% to 13%
22. Probability to detect a vulnerability (chart)
23. % of total vulnerabilities (chart)
24. White box vs. black box (chart)
25. Full Report
26. Future Plans
   - Veracode processes over 100 applications and 500 million lines of code per month
   - Collecting data:
     - Vulnerabilities found/fixed
     - Application metadata: industry, time in dev cycle, application type
   - Vulnerability trends
   - Industry/platform/language differences
27. Further reading on software security metrics & testing
   - NIST, Performance Measurement Guide for Information Security
   - Security Metrics: Replacing Fear, Uncertainty, and Doubt, by Andrew Jaquith
   - The Art of Software Security Testing, by Chris Wysopal, Lucas Nelson, Dino Dai Zovi, and Elfriede Dustin
28. Q&A
   [email_address]