Even Giants Start Small

Presented at the Metricon 7 workshop on August 7, 2012. This is a case study of a vulnerability management measurement project: the lessons learned and the real change that came out of it.

Speaker Notes
  • Show of hands: repeat Metricon attendees? Self-described quants (SIRA, FAIR, New School of Info Sec, etc.) who are actually doing this? Metrics Utopia vs. Metrics Tacoma (née Dubuque/Detroit). We need more actual stories (not just lessons) on how teams have escaped Purgatory and made it to Paradise.
  • The measurements used here – vulnerability information – are not special or fancy, but the process of developing and using them (hopefully) is.
  • The data is lightly scrubbed; the format and process are real. Learn from our mistakes.
  • I hate legal disclaimer and background slides, but they serve a function. “We believe all children have unique needs and should grow up without illness or injury. With the support of the community and through our spirit of inquiry, we will prevent, treat and eliminate pediatric disease.” A 105-year-old pediatric tertiary care institution; a $1 billion a year non-profit organization; 7,000 “users” (Hospital/Research/Foundation); clinics across Alaska, Washington, Montana, and Idaho; research facilities spanning 11 centers; Bargain Boutiques.
  • Resource constrained: a staff of two, and half-staffed for the past several months.
  • Question: is this working?! Give examples of the number and type of spreadsheets: 1,000+ line spreadsheets with various color-coded items and percentages, lots of scrolling. This is not a slam against operational/engineering teams; we have different needs.
  • Logical topology of exploitable, patchable vulnerabilities and a “risk index.” About a year of working with RedSeal, getting it configured just so and learning how to use the tool; we use it for more than just vulnerability information.
  • RedSeal topology map; Tableau vulnerability dashboard.
  • There are multiple ways to approach this flow. Here are some possible alternative tools in each of these areas.
  • I don't care about $vendor’s analytics; just give me a good engine and a clean data export, and I'll worry about the analytics.
  • Ref PRISEM

Transcript

  • 1. Metricon 7 – David F. Severski
  • 2. Paradiso / Inferno
  • 3. For beginners: addressing a very common problem; see what we did wrong; calling out the tools used; workflows used. For advanced attendees: sage head nodding; application of principles.
  • 4. Problem Identification; Descriptive Analysis; Implementing Change.
  • 5. Mandatory Background and Disclaimer Slide: (1) We cure sick children. (2) Don’t sue me.
  • 6. Framing the question
  • 7. Security strategy; incident management; audit, assessment, and compliance; risk management and monitoring; other duties as assigned…
  • 8. Board focused: qualitative rankings based on expert opinion; threat/impact/capability based; benchmarks leadership risk tolerances and current funding levels; used to identify and prioritize projects.
  • 9. The risk management process provides strategic management. Managing the tactical side (my responsibility) raises tough questions: How good are our capabilities? What is the evidence? What are our capabilities, anyway? Working in our favor: evidence-based medicine; a deep organizational commitment to Lean.
  • 10. Defined our controls; defined our threat scenarios; started exploring our data sources. Goal: understand what data we have and how it can be used.
  • 11. Patch management focused; death by spreadsheet; lots of data, little management knowledge/information; things look bad, but hard to be certain.
  • 12. “How well is our patch management program performing?” Not explicitly stated or well defined.
  • 13. Answering the question (maybe)
  • 14. Data: Nessus scan data, network configuration files, network topology. Tools: network security posture analysis, scripting, a visualization platform.
  • 15. [Diagram relating logical topology, network configurations, scan information, and network security posture to sources of “badness.”]
  • 16. Tableau: the organization-wide standard visualization tool; a fun tool for visualization (perhaps a little too fun).
  • 17. Network security posture analysis (logical network topology, network configurations, National Vulnerability Database, scan data) → scripting (export a topology-based vulnerability report, export topology-based “risk” scores) → visualization (import the CSVs into Tableau, massage into a dashboard).
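
A minimal sketch of the scripting step on slide 17, written in Python (one of the scripting options listed on slide 19). The file and column names are hypothetical; the deck does not show the actual export format of the posture-analysis tool. The idea is simply to join the topology-based vulnerability report with the per-subnet “risk” scores and emit one flat CSV that Tableau can read.

    #!/usr/bin/env python
    # Illustrative sketch only: file names and column names are assumptions,
    # not the actual export format used in the talk.
    import csv
    from collections import defaultdict

    def load_risk_scores(path):
        """Read per-subnet 'risk' scores exported from the posture-analysis tool."""
        scores = {}
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                scores[row["subnet"]] = float(row["risk_score"])
        return scores

    def build_dashboard_csv(vuln_report, risk_scores, out_path):
        """Join the topology-based vulnerability report with subnet risk scores
        and write a single flat CSV that Tableau can consume directly."""
        scores = load_risk_scores(risk_scores)
        counts = defaultdict(int)
        with open(vuln_report, newline="") as f, open(out_path, "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["subnet", "host", "cve", "cvss_base", "subnet_risk"])
            for row in csv.DictReader(f):
                subnet = row["subnet"]
                counts[subnet] += 1
                writer.writerow([subnet, row["host"], row["cve"],
                                 row["cvss_base"], scores.get(subnet, "")])
        return counts  # per-subnet finding counts, handy for a quick sanity check

    if __name__ == "__main__":
        totals = build_dashboard_csv("vuln_report.csv", "risk_scores.csv",
                                     "tableau_dashboard.csv")
        for subnet, n in sorted(totals.items()):
            print(f"{subnet}: {n} open findings")
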
  • 18. Let us beseech the demo gods.
  • 19. Alternatives: vulnerability management (Risk I/O by HoneyApps, scan vendor of choice); scripting (Perl, Python, Ruby); visualization (Excel, R & Inkscape).
  • 20. Reception and Problem Solving
  • 21. Figuring out what's broken in our process: the scan data? the patch management process? Key questions so far: Is our SLA correct? What is our SLA? Prioritized remediation efforts (have this now); prioritized assets (working on this). Who owns the process? Are there feedback loops (operational metrics) in the process?
  • 22. [No slide text.]
  • 23. Problem not well formed. Dashboard is ugly and opaque (Edward Tufte is sad). No historical trending. Scoring mechanisms not rigorous (CVSS base scores only; no temporal or environmental). Labor intensive (currently takes a couple of hours monthly to update). Fuzzy numbers (the Risk Index metric). Data quality problems (gaps in scan data). Data definitions (what is an open vulnerability?).
  • 24. The problem was not well enough formed. The dashboard has raised useful questions. Trending is on the roadmap. Scoring is consistent over time. The Risk Metric is a consistent index showing what's out there today versus yesterday. It's the data we have at hand. Push out with v1.0 metrics now; iterate over time as we gain more traction, time, and skills.
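
For context on the slide 23 critique (“CVSS base scores, no temporal or environmental”): a sketch of the CVSS v2 temporal adjustment, the CVSS version current in 2012, which discounts the base score by exploit-code maturity, remediation level, and report confidence. The multipliers are the published CVSS v2 values; applying them to this dashboard is purely an illustration of what the base-only scoring leaves out, not something the deck implements.

    # CVSS v2 temporal score: the base score discounted by exploit availability,
    # remediation status, and report confidence (multipliers from the CVSS v2 spec).
    EXPLOITABILITY = {"unproven": 0.85, "proof-of-concept": 0.90,
                      "functional": 0.95, "high": 1.00, "not-defined": 1.00}
    REMEDIATION = {"official-fix": 0.87, "temporary-fix": 0.90,
                   "workaround": 0.95, "unavailable": 1.00, "not-defined": 1.00}
    CONFIDENCE = {"unconfirmed": 0.90, "uncorroborated": 0.95,
                  "confirmed": 1.00, "not-defined": 1.00}

    def temporal_score(base, exploitability, remediation, confidence):
        """Apply the CVSS v2 temporal multipliers and round to one decimal."""
        score = (base * EXPLOITABILITY[exploitability]
                      * REMEDIATION[remediation]
                      * CONFIDENCE[confidence])
        return round(score, 1)

    # Example: a 9.3 base score with an official fix and only proof-of-concept
    # exploit code drops to roughly 7.3, a materially different priority.
    print(temporal_score(9.3, "proof-of-concept", "official-fix", "confirmed"))
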
  • 25. Automate: use PowerShell and a REST API; migrate off of CSVs to SQL. Add trending. Reframe around the GQM methodology: formalize and document.
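
Slide 25 names PowerShell and a REST API for the automation work, plus a move off CSVs to SQL. As a sketch of the same idea in Python (kept in the same language as the other sketches here), the following pulls findings from a hypothetical scanner endpoint and appends a dated snapshot to SQLite; the URL, JSON fields, and table layout are all assumptions, not any actual vendor API.

    # Sketch of the "automate, move off CSVs" step: pull findings from a
    # hypothetical scanner REST API and store them in SQLite for later trending.
    # The deck uses PowerShell for this; Python is shown here purely as an example.
    import json
    import sqlite3
    import urllib.request
    from datetime import date

    API_URL = "https://scanner.example.org/api/findings"  # hypothetical endpoint

    def fetch_findings(url=API_URL):
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)  # assumed: a list of {"host", "cve", "cvss_base"}

    def store_findings(findings, db_path="vuln_metrics.db"):
        conn = sqlite3.connect(db_path)
        conn.execute("""CREATE TABLE IF NOT EXISTS findings (
                            snapshot_date TEXT, host TEXT, cve TEXT, cvss_base REAL)""")
        today = date.today().isoformat()
        conn.executemany(
            "INSERT INTO findings VALUES (?, ?, ?, ?)",
            [(today, f["host"], f["cve"], f["cvss_base"]) for f in findings])
        conn.commit()
        conn.close()

    if __name__ == "__main__":
        store_findings(fetch_findings())

Keeping dated snapshots in one table is also what makes the historical trending called out on this slide (and flagged as missing on slide 23) straightforward to add later.
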
  • 26. Vendor support: pushing our vendors for APIs to the data. Many vendors tout their analytics (speedometers, traffic lights, 3D pie charts, and more); reference the Symbiotic Security talk from BSidesLV by Josh Sokol and Dan Cornell. Building our tactical metrics around our controls: leverage our control catalog; a GQM bottom-up approach.
  • 27. Data interchange: exchanging security data is tough, though we're trying to do this too. Focusing on building our metrics and analytics first, then sharing the tools and techniques.
  • 28. Spend time up front to frame your question (drink the GQM Kool-Aid™, top down or bottom up). Visualization is fun, but tricky to do well. Automation and repeatability are key. Time is always in short supply: find a good-enough language for your purposes. Be prepared for the work of digesting your findings. Maintain focus on your objective.
  • 29. “This could be the start of a beautiful program” Thank you!
  • 30. Supporting Slides. Twitter: @DSeverski
  • 31. Dashboarding mechanisms are uncertain; information overload. Concentrating our data targets on our LOB applications. What are the boundaries and interconnections between our apps? Where is the information?