© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Extraction and long-term storage of
HP ArcSight ESM® Connector statistics
Jeff Holland, Senior Network Security Engineer
SAIC
Disclosure
• This ESM connector statistics extraction method is not supported by
ArcSight® Support
• SAIC is not responsible for any damages or data loss that might be
caused by following this statistics extraction method or using any of the
commands or scripts discussed herein
• This presentation covers the techniques used to extract connector statistics
from manage.jsp, the porting of those statistics to a MySQL database, and the
analysis of those statistics via use cases
• The cybersecurity solutions portion of SAIC is to be renamed Leidos, Inc. upon
consummation of a separation transaction, if approved by the SAIC board of
directors. Leidos will deliver new perspectives on cyber challenges by fusing
deep domain expertise and advanced cyber tradecraft with network-speed
detection, processing, and analytics.
2
Trademark attributions and acronym definitions on slide 33
The Problem and a Solution
• The standard Connector Statistics dashboard provides useful statistics such as
EPS Sent to Manager, Estimated Cache, etc. However, having a history of those
statistics, along with several ways to analyze the data, would be even more useful.
• The engineered solution is as follows:
• Run a script every hour that extracts all the connector statistics from the
AgentStateTracker page of manage.jsp and saves them to a file
(aka screen scraping)
• Parse the useful information (connector stats) out of the rest of the data (HTML
tags, return characters, etc.) and save that to a file
• Run a shell script with sed commands to reformat the dates to a MySQL-friendly
format and delimit all fields with commas (a hedged sketch of this step follows)
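The exact raw timestamp layout from manage.jsp isn't shown in the deck, so the sed
below is only a minimal sketch under an assumed MM/DD/YYYY date format and an
assumed two-or-more-space field separator; the file names are hypothetical as well.
#!/bin/sh
# Sketch of the reformat step: rewrite an assumed MM/DD/YYYY date into MySQL's
# YYYY-MM-DD form, then turn runs of two or more spaces into comma delimiters.
sed -e 's#\([0-9][0-9]\)/\([0-9][0-9]\)/\([0-9]\{4\}\)#\3-\1-\2#g' \
    -e 's/[[:space:]]\{2,\}/,/g' \
    parsed_stats.txt > master_connector_stats.csv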
3
Trademark attributions and acronym definitions on slide 33
The Problem and a Solution Cont.
• Build a MySQL DB to store all these statistics (on a separate server from the
manager). A script on the MySQL DB server SCPs the statistics from the manager
every 24 hours and loads them into the DB. Then the process of collecting another
24 hours' worth of statistics starts over.
• A set of Perl scripts produces PHP files that allow you to visually analyze the data
using the PHPGraphLib®*** graphing library and a set of canned SQL queries, both
for individual connectors and for all connectors
(e.g. sum of estimated cache for all connectors)
• An HTML form-based CGI script allows for web-based SQL queries and displays the
data on a web page
• You can copy and paste the text of your query results from the web page to a
file and process it with Excel or another graphing tool should you desire
*** Free for personal and commercial use. See http://www.ebrueggeman.com/phpgraphlib
4
Trademark attributions and acronym definitions on slide 33
The Manage.jsp Page and the
AgentStateTracker Link
5
Using wget to Scrape the Connector Statistics
• A manage.jsp login script saves the connector stats to a file on the manager
#!/bin/sh
### Use wget to save a cookie from the manage.jsp page using your admin user's
### credentials.
### Use the --no-check-certificate option to ignore the cert.
### Also, it was found to be necessary to feed a fake user-agent string to manage.jsp,
### so use the -U option to do that.
wget --save-cookies ./cookies.txt --keep-session-cookies --no-check-certificate \
  --post-data 'uid=admin&pwd=abc123&dl=1&origPage=manage.jsp' \
  -U "Mozilla/5.0 (Windows NT 5.1; rv:11.0) Gecko/20100101 Firefox/11.0" \
  https://192.168.1.11:8443/arcsight/web/login.jsp

### Now use the saved cookie from the command above to log in to manage.jsp and
### save the page to a file in the local directory.
wget -U "Mozilla/5.0 (Windows NT 5.1; rv:11.0) Gecko/20100101 Firefox/11.0" \
  --no-check-certificate --load-cookies ./cookies.txt \
  "https://192.168.1.11:8443/arcsight/web/manage.jsp?filter=Arcsight%3Aservice%3DAgentStateTracker%2C*&updateinterval=120"
6
Using wget to Scrape the Connector Stats Cont.
• The manage.jsp login script produces three files each time it is run
from cron
[arcsight@ESM6 ~]$ ls -l manage.jsp* cookies.txt
-rw-rw-r--. 1 arcsight arcsight Mar 6 17:10 cookies.txt
-rw-rw-r--. 1 arcsight arcsight Mar 6 17:10 manage.jsp
-rw-rw-r--. 1 arcsight arcsight Mar 6 17:10 manage.jsp?filter=Arcsight:service=AgentStateTracker,*
• The cookies.txt and manage.jsp files are from the first wget command. We will use the
cookies.txt file to log in to the manage.jsp AgentStateTracker page.
• The manage.jsp file with the longer file name is the file that contains the connector stats
that we want, along with HTML tags, extraneous space characters, and other information
from the AgentStateTracker page.
• We'll run a Perl script to strip all unwanted HTML tags, spaces, and other information from
the data to end up with a nicely formatted file we can upload into the MySQL DB
(a rough shell equivalent is sketched below).
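The deck uses a Perl script for this cleanup; as a loose shell-only equivalent (an
assumption about the page structure, not the actual parser, and with a hypothetical
output file name), a first pass could look like:
#!/bin/sh
# Very rough sketch: strip HTML tags, leading whitespace, and empty lines from
# the scraped AgentStateTracker page. The real page needs more targeted parsing.
sed -e 's/<[^>]*>//g' -e 's/^[[:space:]]*//' -e '/^$/d' \
    'manage.jsp?filter=Arcsight:service=AgentStateTracker,*' > parsed_stats.txt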
7
Connector Stats Format
• After the raw data is parsed from the HTML page and a script with sed
commands re-formats the dates to a MySQL-friendly format, the connector
stats look as follows:
– NOTE: The field header line and the spaces after commas were added below for readability
Name, Connector-ID, Reported_Time, Agent_Time, Post-Aggregation_EPS, Estimated_Cache_Size, Sent_To_Manager_EPS
PaloAlto_1, 3QgwaKD0BABE56WiCjTdw==, 2013-03-04 08:43:51, 2013-03-04 08:43:51, 364.0, 0, 192.2
Tripwire_1, 3KgwaTD0BA9iBD5s1tCjaaw==, 2013-03-04 09:43:51, 2013-03-04 09:43:51, 222.0, 0, 221.9
Snort_IDS, 2kL3waBN0BjCDE5s1lCjTdv==, 2013-03-05 09:41:19, 2013-03-05 09:41:19, 0.0, 0, 0.0
McAfee_1, Oiumm0BABDE5s6WiO98sm==, 2013-03-05 09:43:17, 2013-03-05 09:43:17, 92.0, 0, 181.0
Bashlog_1, HgwpKD0ABDimkd8E9akiU==, 2013-03-05 09:46:18, 2013-03-05 09:46:18, 43.0, 38, 412.1
8
Loading the Data into MySQL
• The following script runs daily from cron to upload the data into the MySQL
DB after the stats have been SCP’d from the manager to the MySQL DB
server
#!/bin/sh
rm /root/master_connector_stats.csv
mv /home/hollandje/master_connector_stats.csv /root/master_connector_stats.csv
mysql -t -u root --password=abc123 << eof
use connector_stats;
LOAD DATA LOCAL INFILE '/root/master_connector_stats.csv'
INTO TABLE stats_table
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(connector_name, connector_id, reported_date, agent_date, post_agg_eps, est_cache_size,
sent_to_mgr_eps)
;
eof
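The SCP transfer referenced above isn't shown in the deck; a hypothetical version of
that step (the manager-side path is an assumption) might be:
#!/bin/sh
# Run daily from cron on the MySQL DB server, shortly before the load script above.
scp arcsight@192.168.1.11:/home/arcsight/master_connector_stats.csv \
    /home/hollandje/master_connector_stats.csv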
9
Cleaning up old Database Rows
• Only store the last year’s data.
• Delete any rows older than that by running this script daily from
cron. This is of course configurable.
#!/bin/sh
mysql -t -u root --password=abc123 << eof
use connector_stats;
DELETE FROM stats_table WHERE reported_date < TIMESTAMPADD(DAY,-365,NOW())
;
eof
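The crontab entries themselves aren't shown in the deck; on the MySQL DB server they
might look something like this (script names and times are hypothetical):
# Hypothetical crontab on the MySQL DB server
30 0 * * * /root/scripts/load_connector_stats.sh   # daily load (slide 9)
45 0 * * * /root/scripts/purge_old_stats.sh        # daily cleanup (slide 10)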
10
Updating the PHP Files
• The PHP files that render the web pages and contain the canned SQL
queries must be updated when connectors are added, deleted or have
their names modified. Use an “updater” Perl script to do the following:
– Retrieve the last 48 hours' worth of connector data from the MySQL DB after the
current day's connector stats have been loaded into the DB and store it in a file.
Run this script manually or via cron.
– Parse out the connector names from the file, run sort and uniq on the list of
connector names, and save it to another file (see the sketch after this list)
– Use this list of connector names and other Perl scripts to regenerate the PHP files that
contain the SQL queries and render the graphs using the PHPGraphLib® library
– chown and chmod the PHP files appropriately and move them to the proper directory
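The name-extraction step might look something like this minimal sketch (the
intermediate file names are assumptions):
#!/bin/sh
# Pull the connector name (first CSV field) from the 48-hour extract,
# de-duplicate it, and save the list for the PHP-regeneration scripts.
cut -d',' -f1 last_48_hours_stats.csv | sort | uniq > connector_names.txt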
11
The Connector Statistics Web Page
• Uses Apache and listens on TCP/443 using SSL
12
The Connector Statistics Web Page Cont.
• The screenshot on the previous slide was a partial one. The drop-down menus
contain a link for each connector under the column header and continue on
to the right
• The full list of drop-down menus that display data for each connector
(or a sum of data for all connectors) is as follows on this slide and
the next:
• 7-day Post Aggregation EPS by Individual Connector
• 30-day Post Aggregation EPS by Individual Connector
• 45-day Post Aggregation EPS by Individual Connector
• 7-day Post Aggregation EPS vs Sent to Manager EPS by Individual Connector
• 30-day Post Aggregation EPS vs Sent to Manager EPS by Individual Connector
• 45-day Post Aggregation EPS vs Sent to Manager EPS by Individual Connector
• 45-day Estimated Cache Size by Individual Connector
13
The Connector Statistics Web Page Cont.
(Continued from last slide)
• 30-day Sum of Post Aggregation EPS for All Connectors
• 30-day Sum of Estimated Cache Size for All Connectors in MB
• 15-day Sum of Post Aggregation EPS vs Sent to Manager EPS Line Graph
• 15-day Sum of Post Aggregation EPS vs Sent to Manager EPS Bar Graph
14
The Connector Statistics Web Page Cont.
• The web page also contains a link for the form-based SQL query
15
The Connector Statistics Web Page Cont.
• Submitting a SQL query in the form leads to the following two screenshots:
16
The Connector Statistics Web Page Cont.
• Enough already. Let’s see some graphs!
• Below is a 7-day Post Aggregation EPS by Individual Connector graph
17
The Connector Statistics Web Page Cont.
• Below is a 7-day Post Aggregation EPS vs Sent to Manager EPS by
Individual Connector
18
The Connector Statistics Web Page Cont.
• 30-day Sum of Estimated Cache Size for All Connectors in MB
19
Use Case #1
• The previous slide showed the last 30 days' worth of hourly cache statistics for
all connectors. Which connector had the largest cache over that time period,
and when?
• SELECT connector_name, reported_date, est_cache_size FROM stats_table WHERE
connector_name LIKE '%' ORDER BY est_cache_size DESC;
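If you only want the single top row, a refinement run from the shell in the same
heredoc style as the load script (the 30-day window and the LIMIT clause are
additions, not from the deck) could be:
#!/bin/sh
# Hypothetical variant of the Use Case #1 query: restrict to the last 30 days
# and return only the largest cache value. Credentials are placeholders.
mysql -t -u root --password=abc123 << eof
use connector_stats;
SELECT connector_name, reported_date, est_cache_size
FROM stats_table
WHERE reported_date >= TIMESTAMPADD(DAY,-30,NOW())
ORDER BY est_cache_size DESC
LIMIT 1;
eof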
20
Use Case #2
• Get the total hourly EPS sum for all connectors for the last 30 days
• SELECT DATE_FORMAT(reported_date,'%Y-%m-%d %H:00') AS Date_Hour,
Sum(post_agg_eps) AS Sum_Post_Agg_EPS FROM stats_table WHERE
reported_date BETWEEN CURRENT_TIMESTAMP - INTERVAL '30' DAY AND
CURRENT_TIMESTAMP GROUP BY Year(reported_date), Month(reported_date),
Day(reported_date), Hour(reported_date);
21
Use Case #3
• Get the first quarter 2013 Sent to Manager EPS statistics (hourly) for all Seville Palo
Alto firewalls (Seville being a customer name within our ESM instance)
• SELECT connector_name, reported_date, sent_to_mgr_eps FROM stats_table WHERE
connector_name LIKE 'seville_palo%' AND reported_date BETWEEN '2013-01-01
00:00:01' AND '2013-03-31 23:59:59';
22
Use Case #4
• We want to graph a large portion of the data from the database, say six months' or a
year's worth of firewall "Sent to Manager EPS" statistics. How can we do that?
• The PHPGraphLib® library can't display that much data given the library's limited
capabilities
• However, you can run a custom SQL query, display the data in the browser, and
then highlight/copy and save it to a file
• Or retrieve the file that is being displayed in the web page from the server, and
then parse out spaces, pipes, etc. until you come up with a nicely formatted
comma-delimited file (aka CSV), and then use MS Excel to graph the data
(a rough sketch follows)
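As a rough illustration of that second option (assuming the saved results use
pipe-separated columns; the file names are hypothetical), a sed pass along these
lines can do most of the cleanup:
#!/bin/sh
# Hypothetical cleanup of a saved query-results page into CSV:
# strip HTML tags, turn "|" column separators into commas,
# squeeze leftover runs of spaces, and drop blank lines.
sed -e 's/<[^>]*>//g' \
    -e 's/[[:space:]]*|[[:space:]]*/,/g' \
    -e 's/  */ /g' \
    -e '/^[[:space:]]*$/d' \
    saved_results.html > sent_to_mgr_eps.csv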
23
Use Case #4 Continued
24
Use Case #5
• When I graph 30 or 45 days' worth of data, I can't read the timestamps on the x-
axis due to there being so many data points. What can I do to fix this?
• We can solve this by modifying the .php file that generates that particular
graph to only graph every 3rd, 5th, 8th, etc. timestamp on the x-axis
• The other points on the x-axis will contain the value of the index for the array
that holds the x- and y-axis values. This is due to the way the graphing
library uses a single array to store both timestamp and data values (as
shown below):
$data_array=array();
while ($row = mysql_fetch_assoc($result))
{ $time = $row['reported_date'];
$eps = $row['post_agg_eps'];
$data_array[$time] = $eps;
}
25
Use Case #5 Continued
• When I graph 30+ days' worth of data, the x-axis looks like this:
26
Use Case #5 Continued
• We will modify the PHP code as shown below:
$cntr = 0;
while ($row1 = mysql_fetch_assoc($result2)) /* contains EPS data */
{
$row2 = mysql_fetch_assoc($result1); /* contains timestamp data */
if (($cntr % 8) == 0) /* Use the modulo function to graph every 8th timestamp */
{ $time = $row2['reported_date'];
}
else { $time = "$cntr"; /* Use the counter value for time as it's not on an 8-hour boundary */
}
$cntr = $cntr + 1;
$eps = $row1['post_agg_eps'];
$data_array[$time] = $eps;
}
27
Use Case #5 Continued
• The graph’s x-axis now looks like this:
28
Notes on the Install/Deployment of the Scripts
• To figure out the parameters for the wget commands (page 6), use
Firefox and the add-on “Live HTTP Headers”
– Note that password strings must be URL-encoded (e.g. "p@ss!" becomes "p%40ss%21")
• The PHPGraphLib library can graph up to 45 days' worth of hourly
data points. After that, the graph becomes too small to interpret,
and/or you overrun array buffers within the graphing library code and
the graph fails to render.
• The custom query CGI script does not do parameter checking.
Either educate and/or limit who uses it (to avoid someone dropping a table, etc.),
or come up with your own query-parameter filters for the CGI script.
• Software you’ll need to run the scripts on Linux/Unix
– Apache, MySQL and Perl
– Bourne or Bash shell, and the dos2unix command
– CGI.pm Perl module
– PHPGraphLib
– Specific Linux packages (next slide)
29
Notes on the Install/Deployment of
the Scripts Continued
– Install these packages (assuming RHEL 5 or 6)
• yum install mysql mysql-server httpd php php-gd php-mysql php-imap php-ldap php-mbstring php-odbc php-pear php-xml php-xmlrpc
– Useful References
• http://www.tutorialspoint.com/mysql/mysql-create-tables.htm
• http://www.cyberciti.biz/tips/how-do-i-enable-remote-access-to-mysql-database-server.html
• http://www.ebrueggeman.com/phpgraphlib
– Other PHP Graphing Library Options if you don’t care for PHPGraphLib® (would
require modifying a significant portion of the scripts if you switched to a new
graphing library)
• pChart®
• JPGraph®
• FusionCharts®
30
Notes on the Install/Deployment of
the Scripts Continued
– Creating the database and table in MySQL:
root# mysql -u root -p
mysql> CREATE DATABASE connector_stats;
mysql> use connector_stats;
mysql> CREATE TABLE stats_table(
stats_table_id INT NOT NULL AUTO_INCREMENT,
connector_name VARCHAR(100) NOT NULL,
connector_id VARCHAR(100) NOT NULL,
reported_date DATETIME,
agent_date DATETIME,
post_agg_eps DECIMAL(5,1) NOT NULL,
est_cache_size BIGINT NOT NULL,
sent_to_mgr_eps DECIMAL(5,1) NOT NULL,
PRIMARY KEY (stats_table_id)
);
31
FAQ
• Q: What versions of ESM will these scripts work with?
– A: Versions 4.5, 5.0, and 6.0c. There are slight differences in the format of the
connector stats scraped from manage.jsp between versions 4.5/5.0 and 6.0c.
• Q: Can the MySQL DB run virtually?
– A: Yes, and that is how we run it.
• Q: Are there any licensing fees associated with running the scripts?
– A: No, it is all open source. The only thing you would need is a Red Hat subscription
so you can run "yum." The graphing library is free for commercial use.
• Q: Are you releasing the scripts for anyone to use? For free?
– A: Yes, the ESM 6.0c version and an installation/configuration document.
• Q: What operating system will the code run on?
– A: It was designed for Red Hat Linux. It will run on Solaris, AIX, etc. if you port it.
Windows would require Cygwin (or something similar) and would take a little more
porting effort.
• Q: Will you be supporting the code for updates, bug fixes, etc?
– A: No. You can maintain and enhance it yourself. The code is well commented.
32
Trademark Attributions and Acronym Definitions
• ArcSight is a registered trademark of Hewlett-Packard Development Company, L.P. in the U.S. and/or other countries.
• Linux is a registered trademark of Linus Torvalds in the U.S. and/or other countries.
• Red Hat is a registered trademark of Red Hat, Inc. in the U.S. and/or other countries.
• VMware is a registered trademark of VMware, Inc. in the U.S. and/or other countries.
• DB = Database
• EPD = Events per Day
• EPS = Events per Second
• ESM = Enterprise Security Manager
• GUI = Graphical User Interface
• OS = Operating System
• IP = Internet Protocol (also used as shorthand for "Internet Protocol address")
• MSSP = Managed Security Service Provider
33
Thank You
• Thanks for Attending
• Questions?
• Contact Info: Jeff Holland, hollandje@saic.com
34
© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Security for the new reality