Security Predictions

This is a presentation I gave at a workshop that was co-located with ESSoS 2010. The presentation is about using Metrics Validation Criteria to choose a valid predictive metric for security vulnerabilities.

Speaker notes
  • You have the burden of proof. Not just that these metrics point to something, but that they are meaningful.
  • A metric is a “quantitative scale and method that can be used to determine the value a feature takes for a specific software product”.
  • The model flags components whose metric value is less than a specific value, let’s say .20 (a minimal sketch of this thresholding follows these notes).
  • Concrete evidence of proposed metrics emanates upward into increasingly abstracted analysis of the information we discovered. These sections are actually from the journal paper we are submitting to EmSE. The process was also informative backwards: sometimes we would learn something later down the line that would help us go back and do an earlier step better.
  • Google, CiteSeerX, IEEE Xplore, ACM Portal
  • So which one of these does prediction fall into?
  • Again, which one of these does prediction fall into?
  • You have the burden of proof. Not just that these metrics point to something, but that they are meaningful.
  • 47 total - 21 removed = 26 remaining; 26 / 47 ≈ 55% (the arithmetic is written out after these notes).
  • If we have to redefine a given metric so that it can be applied to the project at hand, and the redefinition is just a rephrasing of a well-known metric, that is OK as long as the newly defined metric is predictive. Increasing growth, by contrast, is not a property we necessarily want from a metric: for example, code coverage should not increase when concatenating two components together; it should be the average of the two (see the sketch after these notes).
  • Imagine a metric that is always predictive and will tell you with 100% accuracy which files are vulnerable in a system, but which takes half a year to calculate and extract. Such a metric is not usable, because by the time you obtain your much-needed values the software system has changed, and you may have had a release or a complete architectural revamping in the meantime. Alternatively, imagine a metric that costs twice the budget of the entire project to collect: such a metric, no matter how accurate, is not worth collecting. The instrument can be a collection method or something as concrete as the tool used to measure some part of the metric. For example, imagine a test coverage utility that does not accurately calculate branch coverage. This version of branch coverage is invalid, even if it is predictive, because the method to increase the value of the metric is unclear: testing more branches may decrease the value of the measurement, or increase it too much.
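
Relating to the note above about modeling for values below a threshold: a minimal sketch, in Python, of that kind of thresholding. The metric values are the ones from the motivation slide (m = .25, .95, .05, .21, .15, .01); the component file names and the 0.20 cutoff binding are otherwise illustrative assumptions, not part of the original deck.

    # Hypothetical sketch: flag components whose metric value falls below a threshold.
    # Metric values come from the motivation slide; file names are invented.

    THRESHOLD = 0.20

    component_metric = {
        "auth.c": 0.25,
        "parser.c": 0.95,
        "session.c": 0.05,
        "crypto.c": 0.21,
        "net.c": 0.15,
        "ui.c": 0.01,
    }

    def predict_vulnerable(metrics, threshold=THRESHOLD):
        """Return component names whose metric value is below the threshold."""
        return sorted(name for name, m in metrics.items() if m < threshold)

    if __name__ == "__main__":
        print(predict_vulnerable(component_metric))
        # With the values above: ['net.c', 'session.c', 'ui.c']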
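
The reduction arithmetic from the note above, written out as a fraction (LaTeX):

    \[
      \frac{47 - 21}{47} = \frac{26}{47} \approx 0.55
    \]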
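
Relating to the note above about code coverage and concatenation (the "increasing growth" property rejected on slide 18): a minimal sketch, assuming branch coverage is defined as branches exercised divided by total branches; the branch counts are made up. It shows that the coverage of two concatenated components is a weighted average of the two, so a coverage-style metric should not grow simply because components are combined.

    # Hypothetical sketch: branch coverage of a component, and of two components
    # concatenated. The branch counts below are invented for illustration.

    def branch_coverage(taken, total):
        """Branch coverage = branches exercised by tests / total branches."""
        return taken / total

    def concatenated_coverage(a_taken, a_total, b_taken, b_total):
        """Coverage of the concatenation: a weighted average of the two parts,
        never larger than the better-covered part."""
        return (a_taken + b_taken) / (a_total + b_total)

    if __name__ == "__main__":
        # Component A: 60 of 100 branches covered; component B: 20 of 50 covered.
        a = branch_coverage(60, 100)                  # 0.60
        b = branch_coverage(20, 50)                   # 0.40
        ab = concatenated_coverage(60, 100, 20, 50)   # 80 / 150 ≈ 0.53
        print(a, b, ab)
        # The combined value lies between 0.40 and 0.60; it does not increase
        # just because the two components were concatenated.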

Transcript: Security Predictions

    1. Metrics validation criteria: How do we know when a metric is worthwhile? Ben Smith, Andy Meneely, Laurie Williams
    2. Scenario: You and your team are asked to choose a set of metrics for your development company’s front-running application, iAwesome. The goal of this metrics project is to reduce post-release vulnerabilities by predicting them during the software lifecycle. How do you demonstrate to management that your metrics are meaningful and worthwhile?
    3. Metric Uses: Quality Assessment, Process Certification, Process Improvement, Task Planning, Research, and Prediction.
    4. Motivation: a software system of components with metric values m = .25, .95, .05, .21, .15, and .01; the prediction selects the components with m < .2.
    5. Well, the metric was predictive… but may not be valid! How do we know when a metric is valid?
    6. Metrics Validation Criteria: boolean statements about various aspects of the validity of a metric. Example: underlying theory validity: is there an underlying theory as to why the metric was chosen? (A sketch of treating criteria as boolean checks follows the transcript.)
    7. Agenda: motivation (what is validity?); anatomy of a systematic literature review; validating a security metric for prediction; is prediction the only answer?
    8. Objective: guide researchers in making sound contributions to the metrics field by providing a practical summary, the “superset” of all proposed metrics validation criteria.
    9. Foundation in the Literature
    10. Systematic literature review (phase: size of source list): Literature Index: 2,228; Title: 536; Cross-confirmed Title: 156; Abstract: 44; Full-text: 17; Follow-up: 20.
    11. Results of the Review: three major categories for metrics validation criteria. Internal: the metric correctly measures the attribute it purports to measure. External: the metric is related in some way with an external quality factor. Construct: the gathering of a metric’s measurements is suitable for the definition of the targeted attribute.
    12. Two Competing Philosophies: the goal-driven philosophy holds that the primary purpose of a metric is to apply it to a software process; the theory-driven philosophy holds that the primary purpose of a metric is to gain understanding of the nature of software.
    13. Agenda: motivation (what is validity?); anatomy of a systematic literature review; validating a security metric for prediction; is prediction the only answer?
    14. Scenario: You and your team are asked to choose a set of metrics for your development company’s front-running application, iAwesome. The goal of this metrics project is to reduce post-release vulnerabilities by predicting them during the software lifecycle. How do you demonstrate to management that your metrics are meaningful and worthwhile?
    15. Choosing the best criteria: to succeed with this metrics project, you should choose validation criteria that help with the accuracy of prediction, prioritize business over knowledge for the sake of knowledge, and are absolutely necessary. (A sketch of checking predictive accuracy follows the transcript.)
    16. Metrics Validation Criteria: A priori validity, Actionability, Appropriate Continuity, Appropriate Granularity, Association, Attribute validity, Causal model validity, Causal relationship validity, Content validity, Construct validity, Constructiveness, Definition validity, Discriminative power, Dimensional consistency, Economic productivity, Empirical validity, External validity, Factor independence, Improvement validity, Instrument validity, Increasing growth validity, Interaction sensitivity, Internal consistency, Internal validity, Monotonicity, Metric Reliability, Non-collinearity, Non-exploitability, Non-uniformity, Notation validity, Permutation validity, Predictability, Prediction system validity, Process or Product Relevance, Protocol validity, Rank Consistency, Renaming insensitivity, Repeatability, Representation condition, Scale validity, Stability, Theoretical validity, Trackability, Transformation invariance, Underlying theory validity, Unit validity, Usability
    17. Reduced Metrics Validation Criteria
    18. Rejected (and why): a metric has improvement validity if the metric is an improvement over existing metrics. A metric has increasing growth validity if the metric increases when concatenating two entities together.
    19. Accepted (and why): a metric has usability if it can be cost-effectively implemented in a quality assurance program. A metric has instrument validity if the underlying measurement is valid and properly calibrated.
    20. Agenda: motivation (what is validity?); anatomy of a systematic literature review; validating a security metric for prediction; is prediction the only answer?
    21. Measurement Theory: metrics can be used as the route to understanding the nature of software and the software development process. Rather than a list of components, we’d like a list of action items based on a set of theories: applied science. Reactive vs. proactive.
    22. Questions?
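
Slide 6 describes metrics validation criteria as boolean statements about a metric, and slides 16-19 narrow a list of 47 criteria down to the ones that matter for prediction. A minimal sketch of that idea, assuming a hypothetical candidate metric and treating each retained criterion as a named boolean check; the criterion names come from the slides, the candidate metric and verdicts are invented for illustration.

    # Hypothetical sketch: record each validation criterion as a boolean verdict
    # for a candidate metric, then ask whether every check passed.
    # Criterion names are taken from the slides; the verdicts are illustrative.

    from typing import Dict

    def metric_is_valid(criteria: Dict[str, bool]) -> bool:
        """A metric passes only if every validation criterion holds."""
        return all(criteria.values())

    candidate = "static-analysis warning density"   # made-up example metric
    criteria_verdicts = {
        "association": True,                 # relates to post-release vulnerabilities
        "underlying theory validity": True,  # there is a theory for why it should work
        "usability": True,                   # cheap enough to collect in a QA program
        "instrument validity": False,        # the measuring tool is not yet calibrated
    }

    print(candidate, "valid?", metric_is_valid(criteria_verdicts))  # False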
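
Slide 15 asks for criteria that help with the accuracy of prediction. One common way to check that is to compare the files a metric flags against the files later found to be vulnerable, using precision and recall. A minimal sketch under that assumption; the file names and labels are made up, and precision/recall is just one possible accuracy measure, not one the deck prescribes.

    # Hypothetical sketch: score a metric-based prediction against known outcomes.
    # File names, flags, and vulnerability labels are invented for illustration.

    def precision_recall(flagged, actually_vulnerable):
        flagged = set(flagged)
        vulnerable = set(actually_vulnerable)
        true_positives = len(flagged & vulnerable)
        precision = true_positives / len(flagged) if flagged else 0.0
        recall = true_positives / len(vulnerable) if vulnerable else 0.0
        return precision, recall

    flagged_by_metric = ["session.c", "net.c", "ui.c"]
    found_vulnerable = ["session.c", "ui.c", "parser.c"]

    p, r = precision_recall(flagged_by_metric, found_vulnerable)
    print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.67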
