2. Consider the example of the "User Behavioral Firewall" (UBF)
Product Mission:
1. Detect and Prevent Threats inside Enterprise Networks
2. Protect Active Directory Infrastructure
3. Raise Efficiency of Attack Detection Metrics (only 4% of the
16k Alerts per week can be investigated by Security Staff)
3.1. Via the Automated “Confirmed Attack” Category
5. Initial Technology Stack Overview for manual testing:
ldapsearch tool - interface to perform LDAP search operations on Windows domains
krb5 tools - dump Kerberos tickets
tcpdump or Wireshark - network protocol analyzers
smbclient - client for accessing SMB/CIFS shares on servers
python command line scripts - such as PyKEK (Python Kerberos Exploitation Kit), etc.
metasploit - remote shell for Windows workstations
powerview - PowerShell script to gain network situational awareness on Windows domains
Robomongo - MongoDB management tool
6. Approach to automation
Simulate the different scenarios covered by the security rules
Pre-conditions for testing:
● User types (human or service)
● Endpoint [EP] roles (workstation or server)
● User’s associations with EPs and Services
● Inactive User Accounts
● Stale/Shared EPs
1. Have the possibility to set pre-conditions manually from the console
2. Work with configuration files and system logs
3. Capture network packets on the host (KDC)
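Setting pre-conditions from the console could be driven by a small CLI script; a minimal sketch, assuming a hypothetical `set_precondition` helper and illustrative flag names (the real framework's interface is not shown in the slides):

```python
import argparse
import sys

# Hypothetical registry of pre-conditions the QA console can toggle,
# mirroring the bullet list on the slide.
PRECONDITIONS = {"user_type", "ep_role", "inactive_account", "stale_ep"}

def parse_args(argv):
    """Parse console arguments for setting a single pre-condition."""
    parser = argparse.ArgumentParser(description="Set a test pre-condition")
    parser.add_argument("--name", choices=sorted(PRECONDITIONS), required=True)
    parser.add_argument("--value", required=True,
                        help="e.g. 'service' for user_type, 'server' for ep_role")
    return parser.parse_args(argv)

def set_precondition(args):
    """Placeholder: a real implementation would edit config files or AD objects."""
    return {args.name: args.value}

if __name__ == "__main__":
    print(set_precondition(parse_args(sys.argv[1:])))
```

Run as, e.g., `python set_precondition.py --name user_type --value service`.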
7. Test Automation Framework
One popular implementation of sequential program composition is called a Pipeline.
We decided to use an approach similar to a Pipeline. This solution makes it
possible to run each tool step by step in the TAF, or manually by a QA engineer.
Pytest Runner => Bash script => Metasploit as Remote shell => PowerShell
1. How to deliver the test to the endpoint: upload a file or load the exploit into memory?
2. Use a porting approach (e.g. Python libs) or third-party tools for each OS?
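The Pytest => Bash => Metasploit => PowerShell chain can be sketched as a pytest test that shells out one step at a time; the `msfconsole` and PowerView commands named in the comments are illustrative assumptions (here replaced by stand-in `echo` steps), not the real framework:

```python
import subprocess

def run_step(cmd):
    """Run one pipeline step and return its stdout, failing fast on errors."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()

def test_pipeline_steps():
    # Step 1: bash wrapper (a stand-in echo; in the real pipeline this
    # would launch e.g. msfconsole -r attack.rc to open a remote shell).
    assert run_step(["bash", "-c", "echo step-1-ok"]) == "step-1-ok"
    # Step 2: inside the remote shell the framework would invoke
    # powershell -File PowerView.ps1 ...; simulated here the same way.
    assert run_step(["bash", "-c", "echo step-2-ok"]) == "step-2-ok"
```

Because each step is a plain subprocess call, a QA engineer can re-run any single command by hand with the same arguments.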
8. Challenges of Automation
1. QA Environment - endpoints with different roles* and OSes
2. Test fixtures for 2 modes: Proxy and Sniffer (communication with Linux
services and directly with MongoDB)
3. Using Kerberos (KRB) tokens of different user accounts on Windows endpoints
4. Mock Applications for Authorization in Policy Settings
5. Bootstrap script to set up a new QA Environment (Windows Active Directory)
6. Mock workstations to emulate a Daily Volume Anomaly for a user
*Endpoint Roles: managed/unmanaged, resolved/not resolved, IMPERSONATOR / not IMPERSONATOR
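The two fixture modes from challenge 2 suggest a single parametrized pytest fixture, so every test runs once per mode; the mode names come from the slide, while the connection details below are placeholders:

```python
import pytest

# Placeholder connection details for the two TAF modes.
MODE_CONFIG = {
    "proxy": {"target": "linux-service", "port": 8080},
    "sniffer": {"target": "mongodb", "port": 27017},
}

@pytest.fixture(params=sorted(MODE_CONFIG))
def capture_mode(request):
    """Run every test once per mode: Proxy (via Linux services)
    and Sniffer (reading results directly from MongoDB)."""
    mode = request.param
    yield {"mode": mode, **MODE_CONFIG[mode]}

def test_mode_has_target(capture_mode):
    assert capture_mode["target"] in {"linux-service", "mongodb"}
```

Parametrizing the fixture rather than each test keeps the Proxy/Sniffer switch in one place.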
10. Conclusion
1. If you're using third-party tools, add them to the bootstrap script
2. Use a command line script to set pre-conditions manually
3. Keep your own test log with the start arguments, e.g. sys.argv[1:]
4. Think about manual debugging, e.g. using a 'shell' option for your scripts
5. To test any date-sensitive functionality you have to "time travel" into
the future or past; ask your customer how you will do that
> e.g. for checking the policy rule for inactive users we stop the AD-scraper* service and
change lastKnownLogonTime …
6. Synchronize Domain Controllers after changes, e.g. repadmin /syncall
*AD-scraper service keeps data from Active Directory
11. Future plans
Use an asynchronous testing approach with Future[Assertion]*
● Cannot re-use a shared fixture before / after each test
● If one test aborts with any exception, we should stop all suites
● Avoid concurrent use of remote shell connections
● Future[Assertion]: e.g. wait for some data in MongoDB by re-running a query every N sec.
*e.g. using plugin pytest-asyncio