Metrics can show trends, and trends matter more than individual measurements do
Metrics can show if we are doing a good or bad job
Metrics can show if you have no idea where you are
Metrics establish where “You are here” really is
Metrics build bridges to managers
Metrics allow cross-sectional comparisons
Metrics set targets
Metrics let you benchmark yourself against the opposition
Metrics create curiosity
Source: Andy Jaquith, Yankee Group, Metricon 2.0
Metrics don’t matter
It is too easy to count things for no purpose other than to count them
You cannot measure security, so stop
The following are all that matters, and you can't map security metrics to them:
Maintenance of availability
Preservation of wealth
Limitation on corporate liability
Shepherding the corporate brand
Cost of measurement not worth the benefit
Source: Mike Rothman, Security Incite, Metricon 2.0
Bad metrics are worse than no metrics
Security metrics can drive executive decision making
How secure am I?
Am I better off than this time last year?
Am I spending the right amount of $$?
How do I compare to my peers?
What risk transfer options do I have?
Source: Measuring Security Tutorial, Dan Geer
Goals of Application Security Metrics
Provide quantifiable information to support enterprise risk management and risk-based decision making
Articulate progress towards goals and objectives
Provide a repeatable, quantifiable way to assess, compare, and track improvements in assurance
Focus activities on risk mitigation in order of priority and exploitability
Facilitate adoption and improvement of secure software design and development processes
Provide an objective means of comparing and benchmarking projects, divisions, organizations, and vendor products
Source: Practical Measurement Framework for Software Assurance and Information Security, DHS SwA Measurement Working Group
Common Vulnerabilities and Exposures (CVE)
Common Weakness Enumeration (CWE)
Common Attack Pattern Enumeration and Classification (CAPEC)
Enumerations help identify specific software-related items that can be counted, aggregated, and evaluated over time
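As an illustration, here is a minimal sketch of how tagging findings with CWE identifiers makes them countable and comparable across assessments; the findings data is hypothetical, not from any particular tool:

```python
from collections import Counter

# Hypothetical scan results: each finding is tagged with a CWE identifier,
# which turns tool-specific findings into countable, comparable items.
findings_2008 = ["CWE-79", "CWE-89", "CWE-79", "CWE-119", "CWE-79"]
findings_2009 = ["CWE-79", "CWE-89", "CWE-119", "CWE-119"]

tally_2008 = Counter(findings_2008)
tally_2009 = Counter(findings_2009)

# Evaluating the same counts over time shows whether a weakness class
# is trending up or down between assessments.
for cwe in sorted(set(findings_2008) | set(findings_2009)):
    print(f"{cwe}: {tally_2008[cwe]} -> {tally_2009[cwe]}")
```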
Percentage of application inventory developed with an SDLC (which version of the SDLC?); sample calculations for several of these metrics are sketched after this list
Business criticality of each application in inventory
Percentage of application inventory tested for security (what level of testing?)
Percentage of application inventory remediated and meeting assurance requirements
Roll-up of testing results
Cost to fix defects at different points in the software lifecycle
Cost of data breaches related to software vulnerabilities
Number of threats identified in threat model
Size of attack surface identified
Percentage code coverage (static and dynamic)
Coverage of defect categories (CWE)
Coverage of attack pattern categories (CAPEC)
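A minimal sketch of how a few of the inventory metrics above might be computed; the inventory records, field names, and tracked CWE set are all hypothetical, not from any particular asset-management system:

```python
# Hypothetical application inventory records; the field names are illustrative.
inventory = [
    {"app": "payroll",    "sdlc": True,  "tested": True,  "cwes_tested": {"CWE-79", "CWE-89"}},
    {"app": "intranet",   "sdlc": True,  "tested": False, "cwes_tested": set()},
    {"app": "storefront", "sdlc": False, "tested": True,  "cwes_tested": {"CWE-79"}},
]

# Assumed set of defect categories (CWE) the program cares about.
TRACKED_CWES = {"CWE-79", "CWE-89", "CWE-119"}

pct_sdlc   = 100 * sum(a["sdlc"] for a in inventory) / len(inventory)
pct_tested = 100 * sum(a["tested"] for a in inventory) / len(inventory)

# Coverage of defect categories: which tracked CWEs has testing exercised anywhere?
covered = set().union(*(a["cwes_tested"] for a in inventory))
pct_cwe_coverage = 100 * len(covered & TRACKED_CWES) / len(TRACKED_CWES)

print(f"{pct_sdlc:.0f}% developed with an SDLC, {pct_tested:.0f}% tested, "
      f"{pct_cwe_coverage:.0f}% of tracked CWE categories covered")
```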
SANS Top 25 Mapped to Application Security Methods (Source: Microsoft, 2009)
Weakness Class Prevalence Based on 2008 CVE Data (4,855 total flaws tracked by CVE in 2008)
Basic Metrics: Defect counts
Design and implementation defects
Likelihood of exploit
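For instance, a minimal sketch of tallying defect counts by design versus implementation and by likelihood of exploit; the defect data and the low/medium/high exploit labels are assumptions for illustration:

```python
from collections import Counter

# Hypothetical defect list: "kind" separates design from implementation
# defects; "exploit" is an assumed low/medium/high likelihood-of-exploit label.
defects = [
    {"kind": "implementation", "exploit": "high"},
    {"kind": "implementation", "exploit": "medium"},
    {"kind": "design",         "exploit": "high"},
    {"kind": "design",         "exploit": "low"},
]

print(Counter(d["kind"] for d in defects))     # defect counts by class
print(Counter(d["exploit"] for d in defects))  # defect counts by exploit likelihood
```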
Automated Code Analysis Techniques
Static Analysis (White Box Testing): similar to a line-by-line code review. The benefit is complete coverage of the entire source or binary; the downside is that a perfect analysis is computationally impossible.
Static Source – analyze the source code
Static Binary – analyze the binary executable
Source vs. Binary – you don't always have all the source code, and you may not want to part with your source code to get a third-party analysis
Dynamic Analysis (Black Box Testing): run-time analysis, more like traditional testing. The benefit is exact modeling of a particular input, so you can demonstrate exploitability; the downside is that you cannot generate all possible inputs in reasonable time.
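To make the contrast concrete, here is a toy example (hypothetical code, written purely for illustration) containing a SQL injection defect: a static analyzer can flag the tainted query line on every path without running the program, while a dynamic test demonstrates exploitability for one concrete input:

```python
import sqlite3

def find_user(conn, name):
    # Static analysis (white box) can flag this line on every execution path:
    # untrusted input flows into a query string. Complete coverage, but it
    # cannot always prove a flagged path is actually exploitable.
    return conn.execute("SELECT * FROM users WHERE name = '%s'" % name).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Dynamic analysis (black box) models one concrete input exactly, proving
# exploitability for that input, but it can never try all possible inputs.
print(find_user(conn, "x' OR '1'='1"))  # returns every row: the defect is real
```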
Automated dynamic testing (also known as penetration testing) using tools
Manual Penetration Testing (with or without the use of tools)
Create lists of defects that can be labeled with CWE identifiers, CVSS scores, and exploitability
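A minimal sketch of such a defect record; the CVSS v2 exploitability subscore formula (20 × AccessVector × AccessComplexity × Authentication) and its weights are from the CVSS v2 specification, while the finding itself and the field names are hypothetical:

```python
from dataclasses import dataclass

# CVSS v2 base-metric weights (from the CVSS v2 specification).
AV = {"local": 0.395, "adjacent": 0.646, "network": 1.0}  # Access Vector
AC = {"high": 0.35, "medium": 0.61, "low": 0.71}          # Access Complexity
AU = {"multiple": 0.45, "single": 0.56, "none": 0.704}    # Authentication

@dataclass
class Defect:
    """One entry in a defect list, labeled with CWE, CVSS, and exploitability."""
    cwe: str
    cvss_base: float
    access_vector: str
    access_complexity: str
    authentication: str

    def exploitability(self) -> float:
        # CVSS v2 exploitability subscore: 20 * AV * AC * Au.
        return 20 * AV[self.access_vector] * AC[self.access_complexity] * AU[self.authentication]

# Hypothetical SQL injection finding: remotely reachable, no authentication.
d = Defect("CWE-89", cvss_base=7.5, access_vector="network",
           access_complexity="low", authentication="none")
print(f"{d.cwe}: CVSS base {d.cvss_base}, exploitability {d.exploitability():.1f}")
```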
Manual Penetration Testing – can discover some issues that cannot be determined automatically, because a human can understand issues related to business logic or design
Manual Code Review – typically focused only on specific high-risk areas of code
Manual Design Review – can identify some vulnerabilities early in the design process, before the program is even built
WASC Web Application Security Statistics Project 2008
A collaborative, industry-wide effort to pool sanitized website vulnerability data and gain a better understanding of the web application vulnerability landscape.
Ascertain which classes of attacks are the most prevalent, regardless of the methodology used to identify them; in effect, a MITRE CVE-style project for custom web applications.
Identify the prevalence and probability of different vulnerability classes.
Compare testing methodologies against the types of vulnerabilities they are likely to identify.