  • Metrics have worked well for scientists and economists
  • Bad metrics drive the wrong behavior
  • Enumerations also enable the consolidation of multiple tools

Presentation Transcript

  • 1. Application security metrics from the organization on down to the vulnerabilities. Chris Wysopal, CTO, Veracode, [email_address]. November 13, 2009, 11:30am-12:30pm
  • 2. Agenda
    • Why use metrics?
    • Challenges & Goals for Application Security Metrics
    • Enumerations
    • Organizational Metrics
    • Testing Metrics
    • Application Metrics
    • WASC Web Application Security Statistics Project 2008
    • Future Plans
  • 3.
    • To measure is to know.
    • James Clerk Maxwell, 1831-1879
    • Measurement motivates.
    • John Kenneth Galbraith, 1908-2006
  • 4. Metrics do matter
    • Metrics quantify the otherwise unquantifiable
    • Metrics can show trends and trends matter more than measurements do
    • Metrics can show if we are doing a good or bad job
    • Metrics can show if you have no idea where you are
    • Metrics establish where “You are here” really is
    • Metrics build bridges to managers
    • Metrics allow cross sectional comparisons
    • Metrics set targets
    • Metrics let you benchmark yourself against the opposition
    • Metrics create curiosity
    Source: Andy Jaquith, Yankee Group, Metricon 2.0
  • 5. Metrics don’t matter
    • It is too easy to count things for no purpose other than to count them
    • You cannot measure security, so stop trying
    • The following is all that matters, and you can't map security metrics to them:
      • Maintenance of availability
      • Preservation of wealth
      • Limitation on corporate liability
      • Compliance
      • Shepherding the corporate brand
    • Cost of measurement not worth the benefit
    Source: Mike Rothman, Security Incite, Metricon 2.0
  • 6. Bad metrics are worse than no metrics
  • 7. Security metrics can drive executive decision making
    • How secure am I?
    • Am I better off than this time last year?
    • Am I spending the right amount of $$?
    • How do I compare to my peers?
    • What risk transfer options do I have?
    Source: Measuring Security Tutorial, Dan Geer
  • 8. Goals of Application Security Metrics
    • Provide quantifiable information to support enterprise risk management and risk-based decision making
    • Articulate progress towards goals and objectives
    • Provide a repeatable, quantifiable way to assess, compare, and track improvements in assurance
    • Focus activities on risk mitigation in order of priority and exploitability
    • Facilitate adoption and improvement of secure software design and development processes
    • Provide an objective means of comparing and benchmarking projects, divisions, organizations, and vendor products
    Source: Practical Measurement Framework for Software Assurance and Information Security, DHS SwA Measurement Working Group
  • 9. Use Enumerations
    • Common Vulnerabilities and Exposures
    • Common Weakness Enumeration
    • Common Attack Pattern Enumeration and Classification
    Enumerations help identify specific software-related items that can be counted, aggregated, and evaluated over time (a counting sketch follows below)
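    Not from the deck: as a minimal sketch of counting against an enumeration, the Python below tallies hypothetical findings by CWE identifier so that results from different tools can be aggregated under one taxonomy. The finding structure, field names, and CWE IDs are assumptions for illustration only.

      from collections import Counter

      # Hypothetical findings from several tools, already tagged with CWE IDs.
      # Field names are assumptions, not any real tool's output format.
      findings = [
          {"tool": "static",  "cwe": "CWE-89",  "file": "login.php"},   # SQL Injection
          {"tool": "static",  "cwe": "CWE-79",  "file": "search.php"},  # Cross-site Scripting
          {"tool": "dynamic", "cwe": "CWE-79",  "url": "/search"},      # same weakness class, other tool
          {"tool": "dynamic", "cwe": "CWE-352", "url": "/transfer"},    # Cross-Site Request Forgery
      ]

      # Because every tool reports the same CWE identifiers, counts can be
      # aggregated across tools and compared over time.
      by_cwe = Counter(f["cwe"] for f in findings)
      for cwe_id, count in by_cwe.most_common():
          print(f"{cwe_id}: {count} finding(s)")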
  • 10. Organizational Metrics
    • Percentage of application inventory developed with SDLC (which version of SDLC?)
    • Business criticality of each application in inventory
    • Percentage of application inventory tested for security (what level of testing?)
    • Percentage of application inventory remediated and meeting assurance requirements
    • Roll-up of testing results (a sketch of these inventory percentages follows below)
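    A minimal sketch of how these inventory percentages might be rolled up, assuming a hypothetical per-application inventory; the field names and data are illustrative and not from the deck.

      # Hypothetical application inventory; fields are assumptions for illustration.
      inventory = [
          {"name": "payroll",   "criticality": "high",   "sdlc": True,  "tested": True,  "remediated": True},
          {"name": "intranet",  "criticality": "medium", "sdlc": True,  "tested": True,  "remediated": False},
          {"name": "marketing", "criticality": "low",    "sdlc": False, "tested": False, "remediated": False},
      ]

      def pct(flag):
          """Percentage of the inventory where the given boolean field is set."""
          return 100.0 * sum(app[flag] for app in inventory) / len(inventory)

      print(f"Developed with the SDLC: {pct('sdlc'):.0f}%")
      print(f"Tested for security: {pct('tested'):.0f}%")
      print(f"Remediated / meeting assurance requirements: {pct('remediated'):.0f}%")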
  • 11. Organizational Metrics
    • Cost to fix defects at different points in the software lifecycle
    • Cost of data breaches related to software vulnerabilities
  • 12. Testing Metrics
    • Number of threats identified in threat model
    • Size of attack surface identified
    • Percentage code coverage (static and dynamic)
    • Coverage of defect categories (CWE) – see the coverage sketch after this list
    • Coverage of attack pattern categories (CAPEC)
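    As a hedged illustration of defect-category coverage, the sketch below compares the CWE classes a hypothetical round of testing exercised against a target list; both sets are invented for the example.

      # Hypothetical target list of weakness classes the testing program should cover
      # (for example, drawn from a CWE Top 25 style list); the selection is illustrative.
      target_cwes = {"CWE-79", "CWE-89", "CWE-120", "CWE-352", "CWE-434"}

      # Weakness classes actually exercised by this round of static + dynamic testing (assumed).
      tested_cwes = {"CWE-79", "CWE-89", "CWE-352"}

      covered = target_cwes & tested_cwes
      missed = target_cwes - tested_cwes

      print(f"Defect category coverage: {100.0 * len(covered) / len(target_cwes):.0f}%")
      print(f"Not yet covered: {sorted(missed)}")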
  • 13. SANS Top 25 Mapped to Application Security Methods (Source: Microsoft, 2009)
  • 14. Weakness Class Prevalence based on 2008 CVE data (4,855 total flaws tracked by CVE in 2008)
  • 15. Basic Metrics: Defect counts
    • Design and implementation defects, each labeled with (a record sketch follows below):
      • CWE identifier
      • CVSS score
      • Severity
      • Likelihood of exploit
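    A minimal sketch of one way to represent such a defect record, assuming CVSS v2 base scores and the standard v2 severity bands (Low 0.0-3.9, Medium 4.0-6.9, High 7.0-10.0); the class and sample defects are illustrative, not Veracode's data model.

      from dataclasses import dataclass

      @dataclass
      class Defect:
          cwe: str          # CWE identifier, e.g. "CWE-89"
          cvss: float       # CVSS v2 base score, 0.0-10.0
          likelihood: str   # likelihood of exploit, e.g. "high" / "medium" / "low"

          @property
          def severity(self) -> str:
              # Standard CVSS v2 severity bands.
              if self.cvss >= 7.0:
                  return "High"
              if self.cvss >= 4.0:
                  return "Medium"
              return "Low"

      # Illustrative defect list, not real findings.
      defects = [Defect("CWE-89", 7.5, "high"), Defect("CWE-79", 4.3, "medium")]
      for d in defects:
          print(d.cwe, d.severity, d.likelihood)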
  • 16. Automated Code Analysis Techniques
    • Static Analysis (White Box Testing): similar to a line-by-line code review. The benefit is complete coverage of the entire source or binary; the downside is that a perfect analysis is computationally impossible.
      • Static Source – analyze the source code
      • Static Binary – analyze the binary executable
      • Source vs. Binary – you don’t always have all the source code, and you may not want to part with your source code to get a third-party analysis
    • Dynamic Analysis (Black Box Testing): run-time analysis, closer to traditional testing. The benefit is perfect modeling of a particular input, so you can show exploitability; the downside is that you cannot create all inputs in a reasonable time.
      • Automated dynamic testing (also known as penetration testing) using tools
      • Manual Penetration Testing (with or without the use of tools)
    • Create lists of defects that can be labeled with CWE, CVSS, and exploitability (a sketch follows below)
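    A hedged sketch of such a labeled defect list, merging hypothetical static and dynamic results and marking dynamic findings as demonstrated exploitable (a concrete input was run) and static findings as potential; the raw record formats are assumptions, not any real tool's output.

      # Hypothetical raw results; real tools have their own formats.
      static_findings  = [{"cwe": "CWE-89", "cvss": 7.5, "file": "login.c", "line": 42}]
      dynamic_findings = [{"cwe": "CWE-89", "cvss": 7.5, "url": "/login", "payload": "' OR 1=1--"}]

      defect_list = []
      for f in static_findings:
          # Static analysis covers all of the code but cannot by itself prove exploitability.
          defect_list.append({**f, "source": "static", "exploitability": "potential"})
      for f in dynamic_findings:
          # Dynamic analysis ran a concrete input, so exploitability is demonstrated.
          defect_list.append({**f, "source": "dynamic", "exploitability": "demonstrated"})

      for d in defect_list:
          print(d["cwe"], d["source"], d["exploitability"])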
  • 17. Manual Analysis
    • Manual Penetration Testing – can discover issues that cannot be found automatically, because a human can understand problems related to business logic or design
    • Manual Code Review – typically focused only on specific high risk areas of code
    • Manual Design Review – can determine some vulnerabilities early on in the design process before the program is even built.
    • Threat Modeling
  • 18. WASC Web Application Security Statistics Project 2008
    • Purpose
      • Collaborative, industry-wide effort to pool sanitized website vulnerability data and gain a better understanding of the web application vulnerability landscape.
      • Ascertain which classes of attacks are the most prevalent, regardless of the methodology used to identify them – in effect, a MITRE CVE project for custom web applications.
    • Goals
      • Identify the prevalence and probability of different vulnerability classes.
      • Compare testing methodologies against what types of vulnerabilities they are likely to identify.
  • 19. Project Team
    • Project Leader
      • Sergey Gordeychik 
    • Project Contributors
      • Sergey Gordeychik, Dmitry Evteev ( POSITIVE TECHNOLOGIES )
      • Chris Wysopal, Chris Eng ( VERACODE )
      • Jeremiah Grossman ( WHITEHAT SECURITY )
      • Mandeep Khera ( CENZIC )
      • Shreeraj Shah ( BLUEINFY )
      • Matt Lantinga ( HP APPLICATION SECURITY CENTER )
      • Lawson Lee ( dns – used WebInspect)
      • Campbell Murray ( ENCRIPTION LIMITED )
  • 20. Summary
    • 12,186 web applications with 97,554 detected vulnerabilities
    • More than 13%* of all reviewed sites can be compromised completely automatically
    • About 49% of web applications contain high-risk vulnerabilities detected by scanning
    • Manual and automated assessment using white-box methods detects these high-risk vulnerabilities with a probability of up to 80-96%
    • 99% of web applications are not compliant with the PCI DSS standard
    • * Web applications with Brute Force Attack, Buffer Overflow, OS Commanding, Path Traversal, Remote File Inclusion, SSI Injection, Session Fixation, SQL Injection, Insufficient Authentication, or Insufficient Authorization vulnerabilities detected by automated scanning (this definition is sketched below)
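    As an illustration of how the asterisked 13% figure is defined, the sketch below flags a site as automatically compromisable when automated scanning found any of the vulnerability classes listed in the footnote; the per-site results are invented.

      # Vulnerability classes from the footnote that mark a site as compromisable
      # completely automatically when found by automated scanning.
      AUTO_COMPROMISE_CLASSES = {
          "Brute Force Attack", "Buffer Overflow", "OS Commanding", "Path Traversal",
          "Remote File Inclusion", "SSI Injection", "Session Fixation",
          "SQL Injection", "Insufficient Authentication", "Insufficient Authorization",
      }

      # Invented per-site scan results for illustration only.
      sites = {
          "site-a": {"SQL Injection", "Information Leakage"},
          "site-b": {"Cross-site Scripting"},
          "site-c": {"Path Traversal"},
      }

      compromisable = [s for s, vulns in sites.items() if vulns & AUTO_COMPROMISE_CLASSES]
      print(f"{100.0 * len(compromisable) / len(sites):.0f}% of sites compromisable automatically")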
  • 21. Compared to 2007 WASS Project
    • Number of sites with SQL Injection fell by 13%
    • Number of sites with Cross-site Scripting fell by 20%
    • Number of sites with different types of Information Leakage rose by 24%
    • The probability of compromising a host automatically rose from 7% to 13%.
  • 22. Probability of detecting a vulnerability
  • 23. % of total vulnerabilities
  • 24. White box vs. black box
  • 25. Full Report
    • http://projects.webappsec.org/Web-Application-Security-Statistics
  • 26. Future Plans
    • Veracode processes over 100 applications and 500 million lines of code per month
    • Collecting data:
      • Vulnerabilities found/fixed
      • Application metadata: industry, time in dev cycle, application type
    • Vulnerability trends
    • Industry/Platform/Language differences (a trend roll-up sketch follows below)
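    A minimal sketch of the kind of trend roll-up described, grouping hypothetical finding counts by month and language to compare found versus fixed; the record schema and numbers are assumptions, not Veracode's.

      from collections import defaultdict

      # Invented monthly finding counts for illustration only.
      records = [
          {"month": "2009-09", "language": "Java", "found": 120, "fixed": 80},
          {"month": "2009-10", "language": "Java", "found": 110, "fixed": 95},
          {"month": "2009-10", "language": "C++",  "found": 140, "fixed": 60},
      ]

      trend = defaultdict(lambda: {"found": 0, "fixed": 0})
      for r in records:
          key = (r["month"], r["language"])
          trend[key]["found"] += r["found"]
          trend[key]["fixed"] += r["fixed"]

      for (month, lang), totals in sorted(trend.items()):
          print(f"{month} {lang}: found={totals['found']} fixed={totals['fixed']}")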
  • 27. Further reading on software security metrics & testing
    • NIST, Performance Measurement Guide for Information Security http://csrc.nist.gov/publications/nistpubs/800-55-Rev1/SP800-55-rev1.pdf
    • Security Metrics: Replacing Fear, Uncertainty, and Doubt by Andrew Jaquith
    • The Art of Software Security Testing by Chris Wysopal, Lucas Nelson, Dino Dai Zovi, Elfriede Dustin
  • 28. Q&A [email_address]