This presentation has been well received by the SANS community and the many information security teams I engage with.
It describes how integrating a full content repository into your existing security architecture can decrease incident response time and speed identification of root cause.
I also describe a new way of implementing NetFlow without sampling to provide greater visibility into your network.
Enjoy!
Boni Bruno, CISSP, CISM, CGEIT
www.bonibruno.com
The key to avoiding the repetitive events that cost money is to identify the root cause issue as early as possible in the cycle.
Having the content, not just a log, SIEM record, or report, will assist in identifying the root cause issue.
In turn, this reduces the frequency of that particular issue recurring, limits the scope of impact, and leads to faster remediation.
The objective of the security engineering team, the organization's processes, and its tools is to reduce the overall effort. If our tools can identify the root attack, we have a chance of reducing the frequency of the waves. Reduction in frequency also improves predictability in delivering IT projects and infrastructure reliability. Speed in developing a mitigation strategy reduces the total scope, or height, of the wave. Finally, by reducing the time to comprehensive permanent protection, we can reduce the width of the wave.
Organizations have invested in:
Firewalls
IDSs
IPSs
DLP
Endpoint security
Events are generated by these devices and often forwarded to a SIEM.
The SIEM aggregates events into some form of reporting so that sense can be made of the vast amount of information.
Noting the Verizon Data Breach Investigations Report:
Statistics show that many organizations take weeks to identify a breach after the fact.
Example: Target
The goal should be to minimize the time to identify the security incident; today, that time is far too long.
The analyst needs to comb through SIEM reports, and that is a laborious exercise that takes time.
We have found that if you add a packet storage solution holding the ‘Golden Data’, both search time and response time decrease.
We believe the future security architecture will look like this:
Current infrastructure of tools
SIEM
Full content repository at your fingertips – that is power
Ability to scrub the packets:
Search for the packets during the event
Weed out false positives
Send confirmed packets to 3rd party tools for deep analysis or forensics
Implement event-driven ‘snippets’, which we refer to as triggered capture
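To illustrate the "search for the packets during the event" step, here is a minimal sketch of turning a SIEM alert into packet-search parameters. The alert schema and function name are hypothetical, not any vendor's API; the point is that an alert's 5-tuple plus a padded time window is enough to scope a search of the content repository.

```python
from datetime import datetime, timedelta

def alert_to_query(alert):
    """Build packet-search parameters from a SIEM alert (hypothetical schema).

    Pads the alert time by 60 s on each side so the search window
    covers the whole exchange, not just the triggering packet.
    """
    ts = datetime.fromisoformat(alert["timestamp"])
    return {
        "start": (ts - timedelta(seconds=60)).isoformat(),
        "end": (ts + timedelta(seconds=60)).isoformat(),
        # The 5-tuple narrows the search to the suspect conversation,
        # helping weed out false positives before deep analysis.
        "src_ip": alert["src_ip"],
        "dst_ip": alert["dst_ip"],
        "dst_port": alert["dst_port"],
        "protocol": alert.get("protocol", "tcp"),
    }

query = alert_to_query({
    "timestamp": "2014-03-01T12:00:00",
    "src_ip": "10.1.2.3",
    "dst_ip": "198.51.100.7",
    "dst_port": 443,
})
```

Packets matched by such a query can then be confirmed and forwarded to 3rd party tools, which is the triggered-capture idea in miniature.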
Fusion Connector
Using our RESTful API, we can integrate with various tools to allow rapid data mining from the full content repository described earlier.
The top of the screen shows an example where search parameters have been populated by some tool.
A search is performed across the packet fabric of probes to find packets that match the parameters.
The results at the bottom show all of the possible probes and packet storage files that have content that matches the criteria as well as the combined flows across all probes.
At this point, an analyst can download one or more of these flows as either PCAP or ERF to inspect in a packet decoder or use as input to a 3rd party tool.
ERF is a format Endace created when it first engineered the DAG card, as it was necessary for nanosecond time stamping. It also carries information about the capture port.
API integration already available for:
Splunk
Sourcefire
Compuware’s DC RUM (APM) product
Any organization that wishes to use the RESTful API
Visibility & recording infrastructure
Ability to store packets on the probe or on a SAN
The acquisition by Emulex provides direct SAN support using the Emulex HBA, allowing mass storage: petabytes if so desired or required.
Vertical markets
Financial
Retail
Content Delivery
Cloud
Enterprise
Government
Service Provider
Global customer base in each of these industries
Business units most penetrated:
Network operations
Security operations
Compliance
Endace provides 100Gbps network recording in production today.
100Gbps
Endace Access
Example of a 100Gbps deployment.
Currently in production
Flow-safe load balancing is performed by the Endace Access across 12 x 10Gbps egress ports.
Endace 7000 probes are connected to the egress ports, one per port, to record packets to disk.
Using the CMS, the user views the array of probes holistically and can query them for specific flows without needing to know which probe contains the packets of interest.
Time stamping is done by the EA and the downstream probes will use this time stamp to avoid any variance due to the load balance function.
This guarantees that there are no duplicate time stamps and that packet ordering is 100% accurate.
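The flow-safe part of the load balancing can be sketched as a symmetric hash over the 5-tuple: sorting the two endpoints before hashing means both directions of a conversation land on the same egress port, and therefore on the same probe. This is a minimal illustration, not Endace's actual algorithm.

```python
import zlib

def egress_port(src_ip, dst_ip, src_port, dst_port, proto, n_ports=12):
    """Flow-safe load-balancing sketch across 12 egress ports.

    Sorting the (ip, port) endpoints makes the hash symmetric, so the
    forward and reverse directions of a flow map to the same port.
    zlib.crc32 stands in for whatever hash real hardware uses.
    """
    lo, hi = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = f"{lo}|{hi}|{proto}".encode()
    return zlib.crc32(key) % n_ports

fwd = egress_port("10.0.0.1", "10.0.0.2", 12345, 80, "tcp")
rev = egress_port("10.0.0.2", "10.0.0.1", 80, 12345, "tcp")
# fwd == rev: both directions are recorded on the same probe
```

Because each whole flow lives on one probe, a later query never has to stitch one conversation back together from multiple capture files.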
Time synchronization is of utmost importance to Endace; the company was founded on delivering the most accurate time stamping possible.
This being the case, each DAG and EA is equipped with a PPS input to allow synchronization of every port within a site or globally.
Implementation of the PPS is done by installing a TDS (Time Distribution Server), which takes an input from either a CDMA or GPS timing device.
The TDS outputs the PPS to each DAG or EA connected to it over standard Cat5 cable.
Date synchronization is done through standard NTP.
PPS offers much better time sync than NTP, with accuracy within 100ns. New DAG cards now support PTP as well!
This timing accuracy matters if you have geographically separated probes, or multiple probes whose packets you will combine and correlate in response to an incident, breach, or operational issue, to assure packet order is correct.
NetFlow in a new way!
Near real-time and unsampled NetFlow!
In environments with high-speed links, i.e. multiple 10Gbps or 100Gbps links going into a border gateway, enabling NetFlow on said gateway is pretty much useless.
Sampling rates won't be high enough to provide true visibility or an accurate picture of aggregated 10Gbps or 100Gbps links.
The Endace NGA (NetFlow Generation Appliance) allows:
Snapping the packets to the number of bytes necessary to generate a flow record.
72 bytes is all that is required.
Snapping the packets allows flow records to be generated at a significantly higher data rate.
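Why do 72 bytes suffice? Everything a flow record needs lives in the headers: 14 bytes of Ethernet, typically 20 of IPv4, and the first bytes of TCP/UDP, with headroom left for a VLAN tag or IP options. A minimal sketch of extracting a flow key from a snapped frame (IPv4/TCP-or-UDP only; ignores VLANs, IPv6, and fragments):

```python
import struct

SNAP_LEN = 72  # Ethernet(14) + IPv4(20) + L4 ports fit easily, with headroom

def flow_key(frame: bytes):
    """Extract the 5-tuple from a snapped Ethernet/IPv4 frame.

    Illustrates why a flow generator only needs the start of each
    packet: the payload is irrelevant to the flow record. Sketch only.
    """
    frame = frame[:SNAP_LEN]
    (ethertype,) = struct.unpack("!H", frame[12:14])
    if ethertype != 0x0800:           # IPv4 only in this sketch
        return None
    ihl = (frame[14] & 0x0F) * 4      # IP header length in bytes
    proto = frame[23]                 # protocol field (6 = TCP, 17 = UDP)
    src_ip, dst_ip = frame[26:30], frame[30:34]
    l4 = 14 + ihl                     # start of the TCP/UDP header
    src_port, dst_port = struct.unpack("!HH", frame[l4:l4 + 4])
    return (src_ip, dst_ip, src_port, dst_port, proto)
```

Discarding everything after the snap length is what lets the appliance keep up with aggregated multi-10Gbps input while still producing complete, unsampled flow records.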
Example of our 1U appliance:
4 x 10Gbps ingress ports
Operates at 75% sustained capacity (30Gbps)
Handles 16M+ flows/second
Outputs 600k+ flow records/second
An example deployment of capturing packets from multiple high speed links and generating unsampled NetFlow.
Tap your links
Input the monitor feeds either directly into the NGA or into an NPB, if one is desired/deployed.
If an NPB is deployed, output from the NPB to the NGA.
The NGA generates flow records and exports them to the configured collector/collectors.
The NGA offers advanced features for the export function:
Filtering based on IP tuple information, CIDR blocks, etc.
Hash Load balancing:
Necessary to allow many collector solutions to scale to significant numbers of flow records.
Controls the number of records sent to a single collection device, as many are licensed based on flow record consumption.
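The export stage described above, a CIDR filter followed by hash load balancing across collectors, can be sketched as follows. Collector names and the record schema are hypothetical; this is an illustration of the technique, not the NGA's implementation.

```python
import ipaddress
import zlib

COLLECTORS = ["collector-a:2055", "collector-b:2055", "collector-c:2055"]  # hypothetical
ALLOWED = ipaddress.ip_network("10.0.0.0/8")  # example CIDR export filter

def export_target(record):
    """Pick an export destination for one flow record.

    First drop records outside the CIDR filter, then hash the flow key
    to spread the survivors evenly, so no single (per-record licensed)
    collector is overwhelmed. Sketch only.
    """
    if ipaddress.ip_address(record["src_ip"]) not in ALLOWED:
        return None  # filtered out, never exported
    key = f"{record['src_ip']}|{record['dst_ip']}".encode()
    return COLLECTORS[zlib.crc32(key) % len(COLLECTORS)]

target = export_target({"src_ip": "10.1.1.1", "dst_ip": "10.2.2.2"})
```

Hashing on the flow key (rather than round-robin) keeps all records for one flow on one collector, which most collector solutions require to assemble accurate statistics.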
Value proposition:
Getting real-time NetFlow in a very cost-effective manner.
Combining packet storage and NetFlow gives analysts an effective toolset to identify root cause issues on the network.