This document outlines a 5-step process for developing purpose-driven hunt hypotheses to actively search for malicious activity in an environment. The steps are: 1) identify the tactic and technique, 2) identify the procedure(s), 3) identify collection requirements, 4) identify the scope, and 5) document excluded factors. It provides a case study demonstrating how to apply the process to develop a hypothesis for detecting Golden Tickets by examining Kerberos tickets and logon sessions. The case study walks through collecting the necessary data and gives examples of what was in and out of scope.
2. @jaredcatkinson
▪ Adversary Detection Tech Lead @ SpecterOps
▪ Developer:
▫ PowerForensics
▫ Uproot
▫ ACE
▫ PSReflect-Functions
▪ Microsoft Cloud and Datacenter Management MVP
▪ Former:
▫ U.S. Air Force Hunt Team
▫ Veris Group Adaptive Threat Division
3. @robwinchester3
▪ Adversary Detection Lead @ SpecterOps
▪ Contributor:
▫ Co-author of ACE
▫ HELK
▪ Former:
▫ U.S. Air Force Red Team
▫ Veris Group Adaptive Threat Division
4. Overview
▪ What is “Hunt”?
▪ Attacker TTP
▪ Creating useful hypotheses
▪ Case Study
▫ Detecting Golden Tickets
5. What is “Hunt”?
▪ Actively searching for malicious activity in the environment that has evaded current, in-place defenses
▪ Rooted in the Assume Breach mindset
7. Problems with the “Generic Hunt Process”
▪ Gather data
▫ What data to collect?
▫ Why collect that data?
▫ Splunk is expensive
▫ ELK is technically free… but costs time
▪ Hunt for “bad”
▫ What are you looking for in the gathered data?
▫ What is “good” activity that can be ignored?
▫ How much time do you have to search through the data?
▫ Have to balance time hunting and time investigating
9. Hypothesis driven hunting benefits
▪ Focuses data collection efforts
▪ Provides a specific goal for the hunt team
▪ Helps eliminate analysis paralysis
▪ Enables tracking hypotheses over time
10. Great, so how do I make hypotheses?
▪ This is the focus of this presentation!
▪ We will walk you through how to make a hunt hypothesis that will actually result in something tangible
▪ We will give a practical demonstration of using this process to create and execute a hypothesis for Golden Tickets
11. First, what are we looking for?
▪ Assume Breach
▫ Tons of organizations have been breached
▫ It’s a matter of WHEN not IF a breach will occur
▪ Focus on post-exploitation activity
▫ Most in-place defensive tools focus on stopping/detecting the initial attack from occurring
■ Firewalls, AV, and IDS
▫ If you can stop the attack before the objective is achieved, the attack is still stopped
13. MITRE ATT&CK Framework
▪ A body of knowledge for cataloging adversary activity during the attack cycle
▫ Similar to how OWASP defines application security vulnerabilities
▫ Used as a reference for both offense and defense
▫ Covers Windows, Linux, and macOS
▪ Categories loosely correspond to the attack cycle
▫ Persistence
▫ Lateral Movement
▫ Credential Access, etc.
▪ https://attack.mitre.org/wiki/Main_Page
15. Tactics & Techniques
▪ Tactics
▫ Sorted by column headers
▪ Techniques
▫ Represented by individual entries in the ATT&CK Matrix
▪ 217 techniques currently addressed
▪ Windows, Linux, and macOS
16. Procedures
▪ The detailed information for each technique includes specific examples or threat actors, as available
▪ Not all procedures are represented; this is a large and growing data set
17. Why is this useful?
▪ Focus on detecting the behaviors, not hashes & specific tool signatures
▪ Reference throughout the hunt hypothesis process
▪ Tracking hunt history
▪ Plan/chart future hunt activity
▪ Identify areas lacking coverage
19. Our Hunt Process
A 5-step process to help create meaningful hunt hypotheses
Intended to create hunt hypotheses that can be completed in one week
1. Identify the Tactic & Technique
2. Identify the Procedure(s)
3. Identify Collection Requirements
4. Identify the Scope
5. Document Excluded Factors
20. Phase 1: Identify the Tactic & Technique
▪ The high-level description of what you are looking for
▪ Used to track hunt interests in the environment over time (tactics)
▪ Lots of attacks will utilize numerous Tactics & Techniques
▫ Specifically focus your efforts
21. Phase 2: Identify the Procedures
▪ Specific examples and implementations of the selected technique
▪ Frequently found in APT reports, threat intelligence, etc.
▪ Understand and examine the different procedures
▪ What can and cannot be easily changed across all the procedures?
▪ Perform research to understand the basic concepts of the procedures
22. Phase 3: Identify Collection Requirements
▪ The bulk of the research time
▪ Replicate malicious activity in a lab environment
▪ Identify common behaviors
▪ Identify sources of high false positives
▫ If possible, test in a small portion of the actual network
▪ Should result in a POC which gathers the required data
23. Phase 4: Identify the Scope
▪ Two factors for scope
▫ Time
■ Amount of time to perform the hunt assessment
■ We recommend a week
▫ Number of data sources to collect
■ Can be host or network information
■ How much data can be collected in the timeframe?
■ How much data can be analyzed in the timeframe?
▪ Primarily based on collection requirements
▪ Scope may be limited due to limited collection capability
24. Phase 5: Document Excluded Factors
▪ What were you unable to include in the hypothesis at each level?
▫ Which TTPs could not be researched during this hunt?
▫ Technical collection limitations?
▫ Political limitations?
▫ Scope limitations?
▪ Will feed future hunt hypotheses
▪ Informs future technology purchasing
▪ Quantifies the effects of scope limitation
25. What do we have at the end?
▪ A specific behavior being “hunted” for in the environment
▪ An understanding of the attack
▪ Knowledge of the data required to detect the activity
26. Benefit of this process
▪ Focuses hunt efforts to produce a tangible result
▪ Incremental improvement of security over time
▪ Can identify techniques in use from threat reports and know what coverage you have put in place
28. My CISO went to Black Hat
▪ Our situation:
▫ Small Security Budget
▫ No EDR (agent-based monitoring)
▫ Poor Lateral Network Visibility
▫ Lots of Local Admins
▪ Our task:
▫ Can we detect it?
▫ Can we stop it?
▫ Have we been affected by it?
▪ Our reality:
▫ We don’t want to focus on Mimikatz itself, but on its techniques.
29. Phase 1 - Tactic & Technique
▪ Mimikatz is commonly used for:
▫ Credential Access
■ Credential Dumping (Domain and Local)
■ Account Manipulation
■ Security Support Provider
▫ Lateral Movement
■ Pass the Ticket
■ Pass the Hash
▪ Tactic - Lateral Movement
▫ Mechanism to move between systems after initial access has been gained
▪ Technique - Pass the Ticket
▫ Authenticating to network resources with Kerberos tickets without the account’s password
33. Phase 2 - Identify the Procedures
▪ Technique - Pass the Ticket
▫ Golden Ticket - A forged Authentication Service (AS) ticket (TGT)
■ Request legitimate service tickets
■ Uses the krbtgt account hash
▫ Silver Ticket - Forged Ticket Granting Service (TGS) tickets
■ Access network resources
■ Uses the computer or service account hash
▪ Procedure - Golden Tickets
▫ A golden ticket allows for the creation of LEGITIMATE service tickets
▫ Common Kill Chain:
■ Privilege Escalation -> Cred Dump -> DCSync -> Golden Ticket -> Legit Service Ticket -> Lateral Movement
34. Phase 3 - Collection Requirements
▪ Interact with Mimikatz to see its effect on tickets
▪ Collect relevant data points
▫ Logon Sessions
▫ Ticket Granting Tickets
35. Ticket Lifetime
▪ Default Valid Ticket Lifetime is 10 hours
▫ Mimikatz default ticket lifetime is 10 years
▪ Default can be set for:
▫ Service Tickets (TGS): 0 - 99,999 minutes
▫ User Tickets (TGT): 0 - 99,999 hours
▪ Ticket Lifetime can be set at:
▫ Domain Security Policy -> Windows Settings -> Security Settings -> Account Policies -> Kerberos Policy
■ Maximum lifetime for user ticket
■ Maximum lifetime for service ticket
Ticket Lifetime ~10 hours
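One way to operationalize this check: compare each ticket's validity window against the domain's maximum user ticket lifetime. The sketch below is a minimal illustration in Python, assuming you have already extracted the ticket's start/end times from the local ticket cache (the function name and arguments are hypothetical, not a real API):

```python
from datetime import datetime, timedelta

# Domain Kerberos policy default: "Maximum lifetime for user ticket" = 10 hours.
MAX_TICKET_LIFETIME = timedelta(hours=10)

def exceeds_ticket_lifetime(start_time, end_time,
                            max_lifetime=MAX_TICKET_LIFETIME):
    """Flag a TGT whose validity window is longer than policy allows.
    Mimikatz forges tickets with a 10-year default lifetime, so a
    default-forged golden ticket stands out against the ~10-hour norm."""
    return (end_time - start_time) > max_lifetime

# Legitimate ticket: 10-hour window -> not flagged
print(exceeds_ticket_lifetime(datetime(2018, 5, 1, 8, 0),
                              datetime(2018, 5, 1, 18, 0)))  # False
# Mimikatz default: ~10-year window -> flagged
print(exceeds_ticket_lifetime(datetime(2018, 5, 1, 8, 0),
                              datetime(2028, 4, 28, 8, 0)))  # True
```

Note that an attacker can forge a ticket with an inconspicuous lifetime, so this heuristic only catches default-configured tools.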
36. Renewal Lifetime
▪ Default Ticket Renewal Lifetime is 7 days
▫ Mimikatz default is 10 years
▪ Can be set anywhere from 0 - 99,999 days
▪ Ticket Renewal Lifetime can be set at:
▫ Domain Security Policy -> Windows Settings -> Security Settings -> Account Policies -> Kerberos Policy
■ Maximum lifetime for user ticket renewal
Renewal Lifetime ~7 days
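The same style of check applies to the renewal window. A minimal sketch, again with hypothetical function and argument names:

```python
from datetime import datetime, timedelta

# Domain Kerberos policy default: "Maximum lifetime for user ticket renewal" = 7 days.
MAX_RENEWAL_LIFETIME = timedelta(days=7)

def exceeds_renewal_window(start_time, renew_until,
                           max_renewal=MAX_RENEWAL_LIFETIME):
    """Flag a ticket whose RenewUntil time exceeds the policy window.
    Mimikatz's 10-year default dwarfs the ~7-day norm."""
    return (renew_until - start_time) > max_renewal

print(exceeds_renewal_window(datetime(2018, 5, 1), datetime(2018, 5, 8)))  # False
print(exceeds_renewal_window(datetime(2018, 5, 1), datetime(2028, 5, 1)))  # True
```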
37. Ticket Encryption Type
▪ Portions of the Ticket are Encrypted
▪ Encryption Type used is based on two factors
▫ The Domain Functional Level
▫ The hash type used to create the ticket
▪ AES256-HMAC Encryption
▫ Uses krbtgt AES(128/256) key
▫ The norm for the majority of legitimate modern tickets
▪ RC4 Encryption
▫ Uses krbtgt NTLM hash
▫ Common with Inter-Forest/Domain tickets
Encryption aes256_cts_hmac_sha1_96
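A simple triage rule follows from this: treat a non-AES TGT as suspicious unless there is a known inter-forest/domain explanation. The sketch below is illustrative only; the encryption-type strings mimic klist-style names and are assumptions, not an exhaustive list:

```python
# Etypes expected for modern, same-forest TGTs (illustrative strings).
EXPECTED_ETYPES = {"aes256_cts_hmac_sha1_96", "aes128_cts_hmac_sha1_96"}

def suspicious_encryption(etype, inter_forest=False):
    """Flag non-AES TGTs: forging with the krbtgt NTLM hash yields an
    RC4-encrypted ticket, while legitimate modern tickets are normally
    AES. Inter-forest/domain tickets commonly use RC4, so exclude them."""
    return etype.lower() not in EXPECTED_ETYPES and not inter_forest

print(suspicious_encryption("aes256_cts_hmac_sha1_96"))        # False
print(suspicious_encryption("rc4_hmac_nt"))                    # True
print(suspicious_encryption("rc4_hmac_nt", inter_forest=True)) # False
```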
38. Logon Session vs. Ticket Client
▪ Each (Kerberos) Logon Session has a Ticket Granting Ticket
▪ Generally speaking, the TGT Client should be the same as the logon session owner
▪ Exception:
▫ New Credential (runas /netonly) Type Logons
■ LogonUser with LOGON32_LOGON_NEW_CREDENTIALS Flag
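The exception above can be encoded directly into a check: flag any TGT whose client differs from the logon session's owner, unless the session is a New Credentials (logon type 9) logon. A minimal sketch with hypothetical argument names:

```python
NEW_CREDENTIALS = 9  # LogonType for runas /netonly (LOGON32_LOGON_NEW_CREDENTIALS)

def client_session_mismatch(session_user, ticket_client, logon_type):
    """Flag a TGT whose Client differs from the logon session owner,
    which can indicate an injected (passed or forged) ticket.
    New Credentials logons legitimately produce this mismatch."""
    if logon_type == NEW_CREDENTIALS:
        return False  # expected mismatch for runas /netonly
    return session_user.lower() != ticket_client.lower()

print(client_session_mismatch("CORP\\alice", "CORP\\alice", 2))  # False
print(client_session_mismatch("CORP\\alice", "CORP\\admin", 2))  # True
print(client_session_mismatch("CORP\\alice", "CORP\\admin", 9))  # False
```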
39. Collection Requirements
▪ Enumerate Logon Sessions
▫ User Name
▫ User Principal Name (UPN)
▫ LogonId
▫ LogonType
▫ Authentication Package
▪ Enumerate the TGT in each Logon Session
▫ Ticket Service == krbtgt
▫ Ticket Client/Domain
▫ Start/End Time
▫ Renew Until Time
▫ Encryption Type
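Putting the collected fields together, each per-session TGT record can be triaged in one pass against the heuristics discussed on the preceding slides (lifetime, renewal window, encryption type, client/session mismatch). The record shape below is a hypothetical sketch loosely mirroring klist output fields, not a real collection API:

```python
from datetime import datetime, timedelta

def triage_tgt(record):
    """Return the list of reasons a collected TGT record looks anomalous."""
    reasons = []
    if record["EndTime"] - record["StartTime"] > timedelta(hours=10):
        reasons.append("lifetime exceeds 10-hour policy default")
    if record["RenewUntil"] - record["StartTime"] > timedelta(days=7):
        reasons.append("renewal window exceeds 7-day policy default")
    if not record["EncryptionType"].lower().startswith("aes"):
        reasons.append("non-AES encryption type")
    if (record["LogonType"] != 9 and  # 9 = New Credentials (runas /netonly)
            record["Client"].lower() != record["UserName"].lower()):
        reasons.append("ticket client differs from session owner")
    return reasons

# A ticket forged with Mimikatz defaults trips every check.
forged = {
    "UserName": "CORP\\victim", "LogonType": 2,
    "Client": "CORP\\Administrator",
    "StartTime": datetime(2018, 5, 1, 8, 0),
    "EndTime": datetime(2028, 4, 28, 8, 0),
    "RenewUntil": datetime(2028, 4, 28, 8, 0),
    "EncryptionType": "rc4_hmac_nt",
}
print(triage_tgt(forged))  # all four reasons fire
```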
40. Demo 1 (Thanks Sean Metcalf)
https://youtu.be/VymvF2IyTFI
41. Phase 4 - Identify the Scope
▪ Our Timeframe:
▫ One week of execution
▪ Our Environment:
▫ 3 Domains
▫ 1 Linux Computer (non-Domain joined)
▫ 12 Windows Computers
▫ No Sensitive Production Systems
▪ Our Scope:
▫ Windows systems (7)
▫ 2 domains (no creds for 3rd domain)
42. Phase 5 - Document Excluded Factors
▪ Credential Theft Attacks
▫ Credential Dumping
▫ Account Manipulation
▫ Security Support Providers
▪ Lateral Movement
▫ Golden Tickets
■ Event logs
■ Network Traffic
■ Kerberos Tickets on Linux Systems
▫ Silver Tickets
▫ Pass the Hash (Local Authentication)
▪ Scope
▫ Lacked credentials for Linux system and 3rd domain
44. Future Developments
▪ Silver Ticket Detection
▫ Encryption Type
▫ Ticket Lifetime
▫ Renewal Window
▪ Klist.exe KDC Called field
45. Parting Thoughts
▪ Don’t bite off more than you can chew
▫ “Detect adversary Golden Ticket use” vs. “find bad guys”
▪ No “Golden Bullet”
▫ Perfectly Forged Tickets
▪ Iterative Process
▫ Adjust detection as you learn
▫ Adjust detection to your environment
▪ Don’t settle for one detection technique
▫ Local Ticket Cache (this case study)
▫ Event Logs
▫ Network Requests
Speaker notes - the generic hunt process:
▪ Originated in the Department of Defense
▪ Has become increasingly popular over the last few years
▪ Gather data
▫ Collect as much host and network information as you possibly can
▫ Put that data somewhere for analysis
▪ Hunt for “bad”
▫ Look for “malicious activity” in that set of data
▫ Find anomalies and investigate
▪ Repeat
▫ Continue this process and hope you find something
Speaker notes - the hypothesis-driven hunt process:
▪ Create a hunt hypothesis
▫ Determine what you want to hunt for more specifically than “bad stuff”
▪ Gather data
▫ Collect the data required to investigate the hypothesis
▪ Hunt for evidence of activity based on the hypothesis
▫ Search through the data for evidence of the malicious activity described in the hypothesis
Speaker notes - collection considerations:
▪ High-level direction: initially, you can’t do it all
▪ How to get there, and what you need to have along the way
▪ Agent vs. agentless
▪ Currently collecting vs. a new data source
Speaker note: the “My CISO went to Black Hat” situation is notional.
Glossary:
▪ KDC - Key Distribution Center
▪ AS - Authentication Service
▪ TGS - Ticket Granting Service