DATA, METRICS, AND AUTOMATION: A STRANGE LOOP
@MROYTMAN
[Diagram: the strange loop — Data → Metrics → Automation → Data]
DAN GEER & BRUCE SCHNEIER & ANDREW JAQUITH & ALEX HUTTON & ED BELLIS
SQUAD GOALS:
WHAT IS GOOD DATA? (Bellis, Hutton)
WHAT IS A GOOD METRIC? (Jaquith, Geer)
WHAT CAN BE AUTOMATED? (Geer, Schneier)
SQUAD GOALS:
What parts of risk management should be automated? (Schneier, Bellis)
What ought to be left to the humans? (Schneier, Hutton)
What makes a good product? (Schneier)
ATTACKERS ARE BETTER AT AUTOMATION
WE ARE SLOW
ATTACKERS ARE FAST
ATTACKERS ARE BETTER AT AUTOMATION
[Chart: quarterly trend, 2014 Q1–Q4]
WE NEED BETTER AUTOMATION
CURRENT VULN MANAGEMENT:
AUTOMATED VULN DISCOVERY
MANUAL-ISH VULN SCANNING
MANUAL THREAT INTELLIGENCE
MANUAL VULN SCORING
MANUAL REMEDIATION PRIORITIZATION
(A sketch of automating these steps follows below.)
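A minimal sketch of that pipeline with each manual step turned into a function. The data shapes and the +3.0 exploit weights are my own toy inventions for illustration, not any vendor's API or the talk's actual scoring:

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve: str
    cvss_base: float
    asset: str

def discover(assets):
    # Already automated today: scanners emit vulns per asset.
    return [Vuln("CVE-2014-0160", 5.0, a) for a in assets]

def enrich(vulns, exploit_db, metasploit):
    # Threat intelligence (currently manual): join against exploit indexes.
    return [(v, v.cve in exploit_db, v.cve in metasploit) for v in vulns]

def prioritize(enriched):
    # Scoring + prioritization (currently manual); weights are illustrative.
    def score(v, in_edb, in_msp):
        return v.cvss_base + (3.0 if in_edb else 0.0) + (3.0 if in_msp else 0.0)
    return sorted(enriched, key=lambda t: score(*t), reverse=True)

edb, msf = {"CVE-2014-0160"}, set()   # assumed exploit indexes
for v, in_edb, in_msp in prioritize(enrich(discover(["web-01"]), edb, msf)):
    print(v.cve, v.asset, "edb" if in_edb else "-", "msf" if in_msp else "-")
```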
WE NEED BETTER DATA:
BETTER BASE RATES FOR EXPLOITATION
BETTER EXPLOIT AVAILABILITY
BETTER VULNERABILITY TRENDS
BETTER BREACH DATA
BETTER METRICS
SOMETIMES WE MAKE BAD DECISIONS
SOMETIMES WE HAVE BAD METRICS
METRICS ARE DECISION SUPPORT
GOOD METRICS ARE OBJECTIVE FUNCTIONS FOR AUTOMATION
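If a metric really is an objective function, automation can optimize against it. A toy sketch (my construction, not the talk's) where a remediation loop greedily picks whichever fix most improves the metric:

```python
def breach_probability(vulns):
    """The metric: fraction of open vulns with an observed breach (0..1)."""
    return sum(v["breached"] for v in vulns) / len(vulns) if vulns else 0.0

def next_fix(vulns):
    """Objective function in action: remediate the vuln whose removal
    lowers the metric the most."""
    return min(vulns, key=lambda v: breach_probability(
        [u for u in vulns if u is not v]))

open_vulns = [{"id": "A", "breached": True}, {"id": "B", "breached": False}]
print(next_fix(open_vulns)["id"])   # -> "A"
```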
WHAT MAKES A METRIC GOOD?
TWEET WITH ME NOW #WHATISAGOODMETRIC
HEARTBLEED: CVSS 5.0
SHELLSHOCK: CVSS 10.0
POODLE: CVSS 4.3
CVSS IS NOT THE PROBLEM
CVSS FOR PRIORITIZATION IS A SYSTEMIC PROBLEM
CVSS AS A BREACH VOLUME PREDICTOR: ATTACKERS CHANGE TACTICS DAILY
WHAT DEFINES A GOOD METRIC?
GOOD DATA
TWEET WITH ME NOW #WHATISGOODDATA
WHICH SYSTEM IS MORE SECURE?
ASSET 1: WORTH $1,000, PROTECTED BY CONTROL 1
ASSET 2: WORTH $1,000,000, PROTECTED BY CONTROL 1
(Worked expected-loss example below.)
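One way to make the slide's point concrete: hold the control constant and vary the asset value, and expected loss scales with the asset. The 1% failure rate is an assumed illustrative number, not from the talk:

```python
control_failure_rate = 0.01   # assumed: control 1 fails 1% of the time

for asset_value in (1_000, 1_000_000):
    expected_loss = control_failure_rate * asset_value
    print(f"asset worth ${asset_value:,}: expected loss ${expected_loss:,.0f}")
# Identical controls, identical failure rate: a 1000x difference in risk,
# which is why "secure" is context-specific.
```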
TYPES OF METRICS
TYPE 1:
- EXCLUDES THE REAL-LIFE THREAT ENVIRONMENT
- OCCURRENCE RATE CONTROLLED
- EXAMPLES: % FALLING FOR SIMULATED PHISHING EMAIL; CVSS SCORE
TYPE 2:
- INTERACTS WITH THE THREAT ENVIRONMENT
- DESCRIBES UNDESIRED EVENTS
- EXAMPLES: # INFECTED MACHINES AT AN ISP; % VULNS WITH METASPLOIT MODULE
WHAT DEFINES A GOOD METRIC?
1. BOUNDED
2. SCALED METRICALLY
3. OBJECTIVE
4. VALID
5. RELIABLE
6. CONTEXT-SPECIFIC - NO GAMING!
7. COMPUTED AUTOMATICALLY
MEAN TIME TO INCIDENT DISCOVERY?
1. BOUNDED: X
2. SCALED METRICALLY: ✓
3. OBJECTIVE: ✓
4. VALID: X
5. RELIABLE: ✓
6. CONTEXT-SPECIFIC: ✓
7. COMPUTED AUTOMATICALLY: X
VULNERABILITY SCANNING COVERAGE?
1. BOUNDED: ✓
2. SCALED METRICALLY: ✓
3. OBJECTIVE: ✓
4. VALID: ✓
5. RELIABLE: ✓
6. CONTEXT-SPECIFIC: ✓
7. COMPUTED AUTOMATICALLY: ✓
CVSS FOR REMEDIATION?
1. BOUNDED: ✓
2. SCALED METRICALLY: X
3. OBJECTIVE: X
4. VALID: X
5. RELIABLE: ✓
6. CONTEXT-SPECIFIC: X
7. COMPUTED AUTOMATICALLY: ✓
YOU NEED DATA TO MAKE DATA
METASPLOIT PRESENT ON VULN?
1. BOUNDED: ✓
2. SCALED METRICALLY: ✓
3. OBJECTIVE: ✓
4. VALID: ✓
5. RELIABLE: ✓
6. CONTEXT-SPECIFIC: ✓
7. COMPUTED AUTOMATICALLY: ✓
(A sketch of computing it follows below.)
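A minimal sketch of computing this metric automatically. The module index here is a hypothetical local mapping for illustration; in practice it would be derived from the Metasploit Framework's module metadata:

```python
metasploit_cves = {"CVE-2014-6271", "CVE-2017-0144"}   # assumed index

def has_msf_module(cve: str) -> bool:
    """Bounded (0/1), objective, reliable, computed automatically."""
    return cve in metasploit_cves

open_vulns = ["CVE-2014-6271", "CVE-2014-0160"]
pct = 100 * sum(has_msf_module(c) for c in open_vulns) / len(open_vulns)
print(f"% open vulns with a Metasploit module: {pct:.0f}%")
```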
YOU NEED DATA TO MAKE METRICS
P(you will be breached on a particular open vulnerability) =
  (# open vulnerabilities with breaches observed on them) / (total # open vulnerabilities)
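With toy counts (mine, not the talk's data), the formula is just a base rate:

```python
open_vulns_with_observed_breaches = 120     # toy number
total_open_vulns = 10_000                   # toy number

p_breach = open_vulns_with_observed_breaches / total_open_vulns
print(f"P(breach on a particular open vuln) = {p_breach:.2%}")   # -> 1.20%
```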
PROBABILITY A VULNERABILITY HAVING CVSS SCORE > X HAS OBSERVED BREACHES
[Chart: Breach Probability (%) vs. CVSS Base score, both axes 0–10]
[Chart: Breach Probability (%), scale 0–35, for vulnerabilities with CVSS 10, EDB (Exploit-DB entry), MSP (Metasploit module), and EDB+MSP]
Positive Predictive Value (the proportion of positive test results that are true positives) of remediating a vulnerability with property X:
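PPV is TP / (TP + FP). In this setting, "test positive" means the vuln has property X, and a true positive is one that also had breaches observed. A worked example with toy counts (illustrative, not the talk's data):

```python
def ppv(true_positives: int, false_positives: int) -> float:
    """Positive Predictive Value = TP / (TP + FP)."""
    return true_positives / (true_positives + false_positives)

# e.g., 300 of 1,000 vulns with property X had observed breaches:
print(f"PPV = {ppv(300, 700):.0%}")   # -> 30%
```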
AN ENGINE, NOT A CAMERA
CONNECTING THE DOTS
1. EVERYTHING THAT CAN BE AUTOMATED WILL BE AUTOMATED
2. METRICS ARE AN OBJECTIVE FUNCTION FOR AUTOMATION
3. GOOD METRICS DEFINE WHAT CAN BE AUTOMATED
Corollary 1. Criteria for good metrics define what can (and can’t) be automated.
4. AUTOMATION GENERATES TREND DATA, MAKES INFERENCE POSSIBLE
Corollary 2. The rate of data growth (availability, integrity, context-specificity) is the upper bound on the rate of automation.
ASKING THE RIGHT QUESTIONS
Question 1. What defines good data?
1a. How do we measure the rate of data growth?
1b. How do we measure data integrity?
Question 2. What defines a good metric?
Question 3. What makes a product good?
KENNASECURITY.COM
@MROYTMAN
References
Security Metrics: www.securitymetrics.org
Society of Information Risk Analysts: https://societyinforisk.org/
National Weather Service Research Forum: http://www.nws.noaa.gov/mdl/vlab/forum/VLab_forum.php
Dan Geer’s Full-Day Tutorial on Measuring Security: http://geer.tinho.net/measuringsecurity.tutorial.pdf
Yasasin, Emrah, and Guido Schryen. "Derivation of Requirements for IT Security Metrics: An Argumentation Theory Based Approach." 2015.
Savola, Reijo M. "Towards a taxonomy for information security metrics." Proceedings of the 2007 ACM Workshop on Quality of Protection. ACM, 2007.
Böhme, Rainer, et al. "4.3 Testing, Evaluation, Data, Learning (Technical Security Metrics): Working Group Report." Socio-Technical Security Metrics (2015): 20.
B. Schneier. Attack trees: Modeling security threats. Dr. Dobb’s Journal, 24(12):21–29, 1999.
T. Dimkov, W. Pieters, and P. H. Hartel. Portunes: Representing attack scenarios spanning through the physical, digital and social domain. In Proc. of the Joint Workshop on Automated Reasoning for Security Protocol Analysis and Issues in the Theory of Security (ARSPA/WITS’10), volume 6186 of LNCS, pp. 112–129. Springer, 2010.
A. Beautement, M. A. Sasse, and M. Wonham. The compliance budget: Managing security behaviour in organisations. In Proc. of the 2008 Workshop on New Security Paradigms, NSPW’08, pp. 47–58, New York, NY, USA, 2008. ACM.
B. Blakley, E. McDermott, and D. Geer. Information security is information risk management. In Proc. of the 2001 New Security Paradigms Workshop, pp. 97–104, New York, NY, USA, 2001. ACM.
A. Buldas, P. Laud, J. Priisalu, M. Saarepera, and J. Willemson. Rational choice of security measures via multi-parameter attack trees. In Critical Information Infrastructures Security, volume 4347 of LNCS, pp. 235–248. Springer, 2006.
R. Böhme. Security metrics and security investment models. In Isao Echizen, Noboru Kunihiro, and Ryoichi Sasaki, editors, Advances in Information and Computer Security, volume 6434 of LNCS, pp. 10–24. Springer, 2010.
P. Finn and M. Jakobsson. Designing ethical phishing experiments. IEEE Technology and Society Magazine, 26(1):46–58, 2007.
M. E. Johnson, E. Goetz, and S. L. Pfleeger. Security through information risk management. IEEE Security & Privacy, 7(3):45–52, May 2009.
R. Langner. Stuxnet: Dissecting a cyberwarfare weapon. IEEE Security & Privacy, 9(3):49–51, 2011.
E. LeMay, M. D. Ford, K. Keefe, W. H. Sanders, and C. Muehrcke. Model-based security metrics using adversary view security evaluation (ADVISE). In Proc. of the 8th Int’l Conf. on Quantitative Evaluation of Systems (QEST’11), pp. 191–200, 2011.
B. Littlewood, S. Brocklehurst, N. Fenton, P. Mellor, S. Page, D. Wright, J. Dobson, J. McDermid, and D. Gollmann. Towards operational measures of computer security. Journal of Computer Security, 2(2–3):211–229, 1993.
H. Molotch. Against Security: How We Go Wrong at Airports, Subways, and Other Sites of Ambiguous Danger. Princeton University Press, 2014.
S. L. Pfleeger. Security measurement steps, missteps, and next steps. IEEE Security & Privacy, 10(4):5–9, 2012.
W. Pieters. Defining “the weakest link”: Comparative security in complex systems of systems. In Proc. of the 5th IEEE Int’l Conf. on Cloud Computing Technology and Science (CloudCom’13), volume 2, pp. 39–44, Dec 2013.
W. Pieters and M. Davarynejad. Calculating adversarial risk from attack trees: Control strength and probabilistic attackers. In Proc. of the 3rd Int’l Workshop on Quantitative Aspects in Security Assurance (QASA), LNCS. Springer, 2014.
W. Pieters, S. H. G. Van der Ven, and C. W. Probst. A move in the security measurement stalemate: Elo-style ratings to quantify vulnerability. In Proc. of the 2012 New Security Paradigms Workshop, NSPW’12, pp. 1–14. ACM, 2012.
C. W. Probst and R. R. Hansen. An extensible analysable system model. Information Security Technical Report, 13(4):235–246, 2008.
M. J. G. Van Eeten, J. Bauer, H. Asghari, and S. Tabatabaie. The role of internet service providers in botnet mitigation: An empirical analysis based on spam data. OECD STI Working Paper 2010/5, Paris: OECD, 2010.
Klaus, Tim. "Security Metrics: Replacing Fear, Uncertainty, and Doubt." Journal of Information Privacy and Security 4.2 (2008): 62–63.
