Breadth and Depth Analysis with
Atomic Red Team™
What is ATT&CK coverage, anyway?
Former Journalist turned CTI analyst
Dabbled briefly in marketing
○ (was bad at it)
Fourth time attending; third time speaking
○ (I’m sorry)
Write, research, talk about security
Lives near ATT&CKcon
Brian Donohue
Principal Security Specialist
RED CANARY
@TheBrianDonohue
Presenter
Background in…
○ Privacy & Encryption
○ Adversary Simulation/Emulation
Soul of a Product Manager
Primary objective: be an enabler
💜’s Purple Teaming
Write enough code to be dangerous
Lives far from ATT&CKcon
Adam Mashinchi
Director, Open Source Programs
RED CANARY
@Adam_Mashinchi
Co-Author
Preface Slide
MITRE ATT&CK
A “common language” for (cyber)
security practitioners, executives,
and stakeholders.
Classification system for the
library of Atomic Red Team
examples.
Atomic Red Team
An open source library of simple,
focused tests mapped to the MITRE
ATT&CK® matrix. Each test runs in
five minutes or less, and many tests
come with easy-to-use configuration
and cleanup commands.
[Audience Interactive Slide: If you know what the “Dewey Decimal System” is, please feel old now!!!]
What is “coverage” in the context of
Atomic Red Team & MITRE ATT&CK?
Problem #1
A (naive) approach to “breadth” coverage…
Atomic test for technique ID? → “Done!”
Breakdown by “all” and each platform
But what about “depth”?
How well do a group of tests cover a technique?
How difficult is a test to execute?
What about “sub-platforms”?
○ (i.e., IaaS vs. IaaS:AWS)
Defining “coverage”
Who will help solve this problem?!?!
Problem #B
Project Maintainers & Contributors
These are amazing people!!!
○ “Thanks for everything you do!”
Great at small/iterative changes
The Atomic Tests are almost 100% community
developed & maintained
Shameless Plug:
If you contribute to Atomic Red Team you get a t-shirt!
Learn more at atomicredteam.io!
Who works on the (free) stuff?
Internships
Exclusively working on open source
SOLUTION
DO’s & DON'Ts
Internships 101
Always:
Pay your interns (Part/Full Time, Contractor, etc.)
(Offer to) work with educational institutions
Treat them like any other teammate
Pay. Your. Interns.*
Never:
Use them for menial tasks (exclusively)
Rely on “great culture” as primary benefit
* likely the most important takeaway from this talk
Gets all the credit for the work done!!!
Has worked for a few cybersecurity firms
An awesome open source contributor
Enjoys (hacking) video games
Cameron Roberts
Open Source Developer Intern
(Former)
@JrOrOneEquals1
Shout-Out: Cameron The Intern™
#1: Analysis of Atomic Red Team coverage
Create scripted means of generating report
Add visualization of data into web interface
#2: Categorization of Atomic Tests by difficulty
Define means of sorting Tests
Generate data
#3: Gap Analysis of Atomic Red Team
Review gaps in coverage
Fill coverage gaps based on effort/value
Atomic Red Team - Coverage
We’re Here
Part #1
Counting Things
AKA “Breadth Analysis”
First with Python
Then later with JavaScript
More than a FOR loop
● Platforms
● Deprecated stuff
● Sub-techniques
● Python → CSV
○ Pivot tables!
Part #1 - Counting
“Run Upon” != “Run From”
(target) != (source)
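The counting step can be sketched in Python. This is a minimal illustration under stated assumptions, not the maintainers' actual script: it assumes the technique YAML files have already been parsed (e.g., with PyYAML) into dicts using the Atomic Red Team keys `attack_technique`, `atomic_tests`, and `supported_platforms`, and it skips the deprecation and sub-technique handling the slides mention.

```python
from collections import Counter

def count_coverage(techniques):
    """Count atomic tests per (technique, platform) pair.

    `techniques` is a list of dicts shaped like parsed Atomic Red Team
    technique YAML, e.g.:
        {"attack_technique": "T9100",
         "atomic_tests": [{"supported_platforms": ["linux"]}, ...]}
    """
    counts = Counter()
    for tech in techniques:
        tid = tech["attack_technique"]
        for test in tech.get("atomic_tests", []):
            for platform in test.get("supported_platforms", []):
                counts[(tid, platform)] += 1
    return counts

def to_csv_rows(counts):
    """Flatten the counter into header + rows for csv.writer."""
    header = ["technique", "platform", "tests"]
    rows = [[t, p, n] for (t, p), n in sorted(counts.items())]
    return [header] + rows
```

Feeding `to_csv_rows` output to `csv.writer` produces the CSV that the pivot tables are built from.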
Part #1 - Visualize
After doing this in Python, doing it in JavaScript is easy
Advice: Avoid solving solved problems
○ i.e., don't get stuck writing a graphing library; use Google Charts (a JS library)
Part #2
Categorizing Things
AKA “Depth Analysis”
Finding a (practical) sorting method
We know if…
○ we have test(s), how many, by platform
But what if…
○ T9100 has 14 tests, but only 2 for Linux?
○ T9200 has 14 tests, but all are really tricky?
And to further complicate things…
○ “System Time Discovery” ← Easy?
○ “Network Sniffing” ← Less-Easy?
○ “Cloud Storage Object Discovery” ← ???
■ Prerequisites & Paid Providers
But how “well covered” is that?
Scoring Rubric
Initial Pass (v1.0)
Use only what is in the YAML
Multiple IF statements
Apply a “Difficulty Score” (0 - 9)
With a Difficulty Score
Can now auto-apply and check
Can even grade on a curve...
Part #2
Define a scale of “difficulty”
MUST BE:
○ Strictly defined
○ Human actionable
Bonus Round:
○ Machine parsable
○ Test execution (not here yet)
○ Copy/Paste Work?
○ Has Prerequisites?
○ Default Arguments?
○ Need 2nd Machine?
○ Requires Credentials?
○ Requires Elevation?
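A hypothetical pass at that rubric in Python: the attribute names and weights below are illustrative assumptions, not the project's real scoring code, but the weights are chosen so the total lands on the 0-9 "Difficulty Score" scale described above.

```python
def difficulty_score(test):
    """Score one atomic test's difficulty from YAML-derived attributes.

    Hypothetical v1.0 rubric: each attribute that makes a test harder
    to run adds weighted points.  Weights here are illustrative.
    """
    score = 0
    if not test.get("copy_paste_works", True):
        score += 2  # operator can't just paste the command
    if test.get("has_prerequisites", False):
        score += 1
    if not test.get("default_arguments", True):
        score += 1  # operator must supply their own inputs
    if test.get("needs_second_machine", False):
        score += 2
    if test.get("requires_credentials", False):
        score += 2
    if test.get("requires_elevation", False):
        score += 1
    return score  # 0 (easiest) through 9 (hardest)
```

Because the score is computed from fields already in (or derivable from) the YAML, it can be auto-applied across the whole library and re-checked on every change.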
Coverage Gaps!
T9300 doesn’t have
enough macOS tests…
T9400 is the only
technique in the tactic
with 1 Windows test…
With all the Atomic Tests Scored…
Difficulty Gaps!
T9500 only has “hard”
tests…
T9600 has Linux tests,
but none are “easy”...
Part #3
Filling The Gaps
AKA “Make this better/easier”
Part #3 - Goal
Heatmap to end all heatmaps
Be able to visually identify where we should add atomic tests
○ … at the holistic, and per-platform, level
How to prioritize where to
start?
○ “Most bang for the buck”
○ “Low-hanging fruit”
Ye olde Product Manager Trick
Effort → 1 == Low
Value → 1 == High
Score → Lower == Better
Options:
○ Fill empty Tests
○ Add more easy Tests
Defining Effort-To-Value Ratios
ITEM    EFFORT    VALUE    SCORE
Alpha   3         3        6
Beta    1         1        2
Gamma   1         2        3
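The trick reduces to one line of Python: score each item as effort plus value and sort ascending (lower is better). A sketch using the hypothetical Alpha/Beta/Gamma items from the table:

```python
def prioritize(items):
    """Sort candidate work by effort-to-value score.

    Effort: 1 == low.  Value: 1 == high.  Score = effort + value,
    so the lowest scores are the quick, high-value wins to do first.
    """
    return sorted(items, key=lambda item: item["effort"] + item["value"])

backlog = [
    {"name": "Alpha", "effort": 3, "value": 3},  # score 6
    {"name": "Beta",  "effort": 1, "value": 1},  # score 2
    {"name": "Gamma", "effort": 1, "value": 2},  # score 3
]
ordered = prioritize(backlog)  # Beta, then Gamma, then Alpha
```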
Going to launch a new cool site
Includes interactive charts/graphs
Adding more Atomic Tests
Write up the how/why in detail
So… When do we get to see this stuff?!
Pay. Your. Interns.
And also, in Atomic Red Team land…
Coverage & gap analysis is underway
New site (and tests!) are Coming Soon™
Also… you should contribute!
Last Takeaways
Q&A
Everything Atomic Red Team is at:
atomicredteam.io
The Last Slide
Thanks!