2. Disclosure
• This ESM connector statistics extraction method is not supported by ArcSight® Support.
• SAIC is not responsible for any damages or data loss that might be caused by following this statistics extraction method or using any of the commands or scripts discussed herein.
• This presentation covers the techniques used to extract connector statistics from manage.jsp, the porting of those statistics to a MySQL database, and the analysis of those statistics via use cases.
• The cybersecurity solutions portion of SAIC is to be renamed Leidos, Inc. upon consummation of a separation transaction, if approved by the SAIC board of directors. Leidos will deliver new perspectives on cyber challenges by fusing deep domain expertise and advanced cyber tradecraft with network-speed detection, processing, and analytics.
2
Trademark attributions and acronym definitions on slide 33
3. The Problem and a Solution
• The standard Connector Statistics dashboard has useful statistics such as EPS Sent to Manager, Estimated Cache, etc. However, having a history of those statistics would be even more useful, along with several methods of analyzing the data.
• The engineered solution is as follows:
• Run a script every hour that extracts all the connector statistics from the AgentStateTracker page of manage.jsp and saves them to a file (aka screen scraping)
• Parse the useful information (connector stats) from the rest of the data (HTML tags, return characters, etc.) and save that to a file
• Run a shell script with sed commands to reformat the date to a MySQL-friendly format and delimit all fields with commas (a sketch of this step follows this list)
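– NOTE: The actual sed script is not reproduced in this deck. The lines below are a minimal sketch, assuming the parsed stats file is tab-delimited and the scraped dates look like "Mar 04 2013 08:43:51". The file names are illustrative only.
#!/bin/sh
### Minimal sketch only - the real date layout from manage.jsp may differ.
### Rewrite dates such as "Mar 04 2013 08:43:51" as the MySQL-friendly
### "2013-03-04 08:43:51", then turn the tab delimiters into commas.
sed -e 's/Jan \([0-9][0-9]\) \([0-9][0-9][0-9][0-9]\)/\2-01-\1/g' \
    -e 's/Feb \([0-9][0-9]\) \([0-9][0-9][0-9][0-9]\)/\2-02-\1/g' \
    -e 's/Mar \([0-9][0-9]\) \([0-9][0-9][0-9][0-9]\)/\2-03-\1/g' \
    parsed_stats.txt | tr '\t' ',' > master_connector_stats.csv
### ...repeat the month substitutions for Apr through Dec.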
3
Trademark attributions and acronym definitions on slide 33
4. The Problem and a Solution Cont.
• Build a MySQL DB to store all these statistics (on a separate server from the manager). A script on the MySQL DB server SCP’s the statistics from the manager every 24 hours and loads them into the DB. Then the process of collecting another 24 hours’ worth of statistics starts over. (A sketch of the cron schedule follows this list.)
• A set of Perl scripts produces PHP files that allow you to visually analyze the data using the PHPGraphLib®*** graphing library and a set of canned SQL queries, both for individual connectors and for all connectors (e.g. the sum of estimated cache for all connectors)
• An HTML form-based CGI script allows for web-based SQL queries and displays the data on a web page
• You can copy and paste the text results from your query on the web page to a file and process them with Excel or another graphing tool should you desire
*** Free for personal and commercial use. See http://www.ebrueggeman.com/phpgraphlib
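– NOTE: The actual cron entries are not shown in the deck. The lines below are a minimal sketch of how the hourly scrape (slide 3) and the daily transfer/load could be scheduled; the script names, paths, and times are assumptions, and the scp entry assumes SSH key authentication is already set up.
### On the ESM manager: scrape connector stats from manage.jsp every hour.
0 * * * * /home/arcsight/scripts/scrape_connector_stats.sh
### On the MySQL DB server: pull the previous day's stats and load them once a day.
30 0 * * * scp arcsight@esm-manager:/home/arcsight/master_connector_stats.csv /home/hollandje/
45 0 * * * /root/scripts/load_connector_stats.sh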
4
Trademark attributions and acronym definitions on slide 33
6. Using wget to Scrape the Connector Statistics
• A manage.jsp login script saves the connector stats to a file on the manager
#!/bin/sh
### Use wget to save a cookie from the manage.jsp page using your admin user's
### credentials.
### Use the --no-check-certificate option to ignore the cert.
### Also, it was found to be necessary to feed a fake user-agent string to manage.jsp,
### so use the -U option to do that.
wget --save-cookies ./cookies.txt --keep-session-cookies --no-check-certificate \
  --post-data 'uid=admin&pwd=abc123&dl=1&origPage=manage.jsp' \
  -U "Mozilla/5.0 (Windows NT 5.1; rv:11.0) Gecko/20100101 Firefox/11.0" \
  https://192.168.1.11:8443/arcsight/web/login.jsp
### Now use the cookie saved by the command above to log in to manage.jsp and
### save the page to a file in the local directory.
wget -U "Mozilla/5.0 (Windows NT 5.1; rv:11.0) Gecko/20100101 Firefox/11.0" \
  --no-check-certificate --load-cookies ./cookies.txt \
  'https://192.168.1.11:8443/arcsight/web/manage.jsp?filter=Arcsight%3Aservice%3DAgentStateTracker%2C*&updateinterval=120'
6
7. Using wget to Scrape the Connector Stats Cont.
• The manage.jsp login script produces three files each time it is run from cron
[arcsight@ESM6 ~]$ ls -l manage.jsp* cookies.txt
-rw-rw-r--. 1 arcsight arcsight Mar 6 17:10 cookies.txt
-rw-rw-r--. 1 arcsight arcsight Mar 6 17:10 manage.jsp
-rw-rw-r--. 1 arcsight arcsight Mar 6 17:10 manage.jsp?filter=Arcsight:service=AgentStateTracker,*
• The cookies.txt and manage.jsp files are from the first wget command. We will use the cookies.txt file to log in to the manage.jsp AgentStateTracker page.
• The manage.jsp file with the longer file name is the file that contains the connector stats that we want, along with HTML tags, extraneous space characters, and other information from the AgentStateTracker page.
• We’ll run a Perl script to strip all unwanted HTML tags, spaces, and other information from the data to end up with a nicely formatted file we can upload into the MySQL DB (a sketch of such a script follows).
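– NOTE: The parsing script itself is not included in the deck. The following is a minimal sketch, assuming the statistics sit inside HTML table cells on the scraped AgentStateTracker page; the output file name is illustrative.
#!/usr/bin/perl
# Minimal sketch only - the real parser is not reproduced in the deck.
# Assumes the connector stats sit in HTML table cells on the scraped page.
use strict;
use warnings;

my $in  = 'manage.jsp?filter=Arcsight:service=AgentStateTracker,*';
my $out = 'parsed_stats.txt';

open my $fh,  '<', $in  or die "Cannot open $in: $!";
open my $ofh, '>', $out or die "Cannot open $out: $!";

while (my $line = <$fh>) {
    next unless $line =~ /<td/i;        # keep only table-cell rows
    $line =~ s/<[^>]+>/ /g;             # strip HTML tags
    $line =~ s/&nbsp;/ /g;              # drop HTML space entities
    $line =~ s/\s+/ /g;                 # squeeze runs of whitespace
    $line =~ s/^\s+|\s+$//g;            # trim leading/trailing space
    print $ofh "$line\n" if length $line;
}
close $fh;
close $ofh;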
7
8. Connector Stats Format
• The connector stats format, after the raw data is parsed from the HTML page and a script is run with sed commands to re-format the dates to a MySQL-friendly format, is as follows:
– NOTE: Field header line and spaces after commas were added below for readability
Name, Connector-ID, Reported_Time, Agent_Time, Post-Aggregation_EPS, Estimated_Cache_Size, Sent_To_Manager_EPS
PaloAlto_1, 3QgwaKD0BABE56WiCjTdw==, 2013-03-04 08:43:51, 2013-03-04 08:43:51, 364.0, 0, 192.2
Tripwire_1, 3KgwaTD0BA9iBD5s1tCjaaw==, 2013-03-04 09:43:51, 2013-03-04 09:43:51, 222.0, 0, 221.9
Snort_IDS, 2kL3waBN0BjCDE5s1lCjTdv==, 2013-03-05 09:41:19, 2013-03-05 09:41:19, 0.0, 0, 0.0
McAfee_1, Oiumm0BABDE5s6WiO98sm==, 2013-03-05 09:43:17, 2013-03-05 09:43:17, 92.0, 0, 181.0
Bashlog_1, HgwpKD0ABDimkd8E9akiU==, 2013-03-05 09:46:18, 2013-03-05 09:46:18, 43.0, 38, 412.1
8
9. Loading the Data into MySQL
• The following script runs daily from cron to upload the data into the MySQL DB after the stats have been SCP’d from the manager to the MySQL DB server
#!/bin/sh
rm /root/master_connector_stats.csv
mv /home/hollandje/master_connector_stats.csv /root/master_connector_stats.csv
mysql -t -u root --password=abc123 << eof
use connector_stats;
LOAD DATA LOCAL INFILE '/root/master_connector_stats.csv'
INTO TABLE stats_table
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(connector_name, connector_id, reported_date, agent_date, post_agg_eps, est_cache_size, sent_to_mgr_eps)
;
eof
9
10. Cleaning up old Database Rows
• Only store the last year’s data.
• Delete any rows older than that by running this script daily from cron. This is of course configurable.
#!/bin/sh
mysql -t -u root --password=abc123 << eof
use connector_stats;
DELETE FROM stats_table WHERE reported_date < TIMESTAMPADD(DAY,-365,NOW())
;
eof
10
11. Updating the PHP Files
• The PHP files that render the web pages and contain the canned SQL queries must be updated when connectors are added, deleted, or have their names modified. Use an “updater” Perl script to do the following (a sketch of the first two steps follows this list):
– Retrieve the last 48 hours’ worth of connector data from the MySQL DB after the current day’s connector stats have been loaded into the DB and store them in a file. Run this script manually or via cron.
– Parse out the connector names from the file, run sort and uniq on the list of connector names, and save to another file
– Use this list of connector names and other Perl scripts to regenerate the PHP files that contain the SQL queries and render the graphs using the PHPGraphLib® library
– chown and chmod the PHP files appropriately and move them to the proper directory
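– NOTE: The updater scripts themselves are not included in the deck. The following is a minimal sketch of the first two steps, assuming the export is written as tab-delimited batch output; the file names are assumptions.
#!/bin/sh
### Minimal sketch only; file names are assumptions.
### Step 1: export the last 48 hours of connector data (tab-delimited batch output).
mysql -u root --password=abc123 --batch --skip-column-names << eof > last_48_hours.tsv
use connector_stats;
SELECT connector_name, connector_id, reported_date, agent_date,
       post_agg_eps, est_cache_size, sent_to_mgr_eps
FROM stats_table
WHERE reported_date >= NOW() - INTERVAL 48 HOUR;
eof
### Step 2: parse out the connector names, de-duplicate, and save the list.
cut -f1 last_48_hours.tsv | sort | uniq > connector_names.txt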
11
13. The Connector Statistics Web Page Cont.
• The screenshot on the previous slide was a partial one. The drop-down menus contain a link for each connector under the column header and continue on to the right.
• The full list of drop-down menus that display data for each connector (or a sum of data for all connectors) is as follows on this slide and the next:
• 7-day Post Aggregation EPS by Individual Connector
• 30-day Post Aggregation EPS by Individual Connector
• 45-day Post Aggregation EPS by Individual Connector
• 7-day Post Aggregation EPS vs Sent to Manager EPS by Individual Connector
• 30-day Post Aggregation EPS vs Sent to Manager EPS by Individual Connector
• 45-day Post Aggregation EPS vs Sent to Manager EPS by Individual Connector
• 45-day Estimated Cache Size by Individual Connector
13
14. The Connector Statistics Web Page Cont.
(Continued from last slide)
• 30-day Sum of Post Aggregation EPS for All Connectors
• 30-day Sum of Estimated Cache Size for All Connectors in MB
• 15-day Sum of Post Aggregation EPS vs Sent to Manager EPS Line Graph
• 15-day Sum of Post Aggregation EPS vs Sent to Manager EPS Bar Graph
14
15. The Connector Statistics Web Page Cont.
• The web page also contains a link for the form-based SQL query
15
16. The Connector Statistics Web Page Cont.
• Submitting a SQL query in the form leads to the following two screenshots:
16
17. The Connector Statistics Web Page Cont.
• Enough already. Let’s see some graphs!
• Below is a 7-day Post Aggregation EPS by Individual Connector graph
17
18. The Connector Statistics Web Page Cont.
• Below is a 7-day Post Aggregation EPS vs Sent to Manager EPS by Individual Connector
18
19. The Connector Statistics Web Page Cont.
• 30-day Sum of Estimated Cache Size for All Connectors in MB
19
20. Use Case #1
• Based upon the previous slide, we saw the last 30 days’ worth of hourly cache statistics for all connectors. Which connector had the largest cache over that time period and when?
• SELECT connector_name, reported_date, est_cache_size FROM stats_table WHERE connector_name LIKE '%' ORDER BY est_cache_size DESC;
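– NOTE: The query above returns every row, ordered by cache size. A tighter variant (an assumption, not taken from the deck) limits the window to the last 30 days and returns only the single largest value:
SELECT connector_name, reported_date, est_cache_size
FROM stats_table
WHERE reported_date >= NOW() - INTERVAL 30 DAY
ORDER BY est_cache_size DESC
LIMIT 1;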
20
21. Use Case #2
• Get the total hourly EPS sum for all connectors for the last 30 days
• SELECT DATE_FORMAT(reported_date,'%Y-%m-%d %H:00') AS Date_Hour, Sum(post_agg_eps) AS Sum_Post_Agg_EPS FROM stats_table WHERE reported_date BETWEEN CURRENT_TIMESTAMP - INTERVAL '30' DAY AND CURRENT_TIMESTAMP GROUP BY Year(reported_date), Month(reported_date), Day(reported_date), Hour(reported_date);
21
22. Use Case #3
• Get the first quarter 2013 Sent to Manager EPS statistics (hourly) for all Seville Palo Alto firewalls (Seville being a customer name within our ESM instance)
• SELECT connector_name, reported_date, sent_to_mgr_eps FROM stats_table WHERE connector_name LIKE 'seville_palo%' AND reported_date BETWEEN '2013-01-01 00:00:01' AND '2013-03-31 23:59:59';
22
23. Use Case #4
• We want to graph a large portion of the data from the database, say 6 months or a year’s worth of firewall “Sent to Manager EPS” statistics. How can I do that?
• The PHPGraphLib® library can’t display that much data given the library’s limited capabilities
• However, you can run a custom SQL query, display the data in the browser, and then highlight/copy and save it to a file
• Or retrieve the file that is being displayed in the web page from the server, and then parse out spaces, pipes, etc. until you come up with a nicely formatted comma-delimited file (aka CSV), and then use MS Excel to graph the data (a sketch of such a cleanup pass follows)
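– NOTE: The cleanup pass is not spelled out in the deck. The following one-liner is a minimal sketch, assuming the saved page text separates fields with pipes and runs of spaces; the file names are illustrative.
### Sketch only - adjust the separators to match what the web page actually emits.
### Turn pipes into commas, squeeze runs of spaces, tidy spaces around commas,
### and drop blank lines.
sed -e 's/|/,/g' -e 's/  */ /g' -e 's/ *, */,/g' -e '/^ *$/d' \
    query_results.txt > firewall_eps.csv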
23
25. Use Case #5
• When I graph 30 or 45 days’ worth of data, I can’t read the timestamps on the x-axis due to there being so many data points. What can I do to fix this?
• We can solve this by modifying the .php file that generates that particular graph to only graph every 3rd, 5th, 8th, etc. timestamp on the x-axis
• The other points on the x-axis will contain the value of the index for the array that contains the x- and y-axis values. This is due to the way the graphing library uses a single array to store both timestamp and data values (as shown below):
$data_array = array();
while ($row = mysql_fetch_assoc($result)) {
    $time = $row['reported_date'];
    $eps  = $row['post_agg_eps'];
    $data_array[$time] = $eps;
}
25
26. Use Case #5 Continued
• When I graph 30+ days’ worth of data, the x-axis looks like this:
26
27. Use Case #5 Continued
• We will modify the PHP code as shown below:
$cntr = 0;
while ($row1 = mysql_fetch_assoc($result2)) {   /* contains EPS data */
    $row2 = mysql_fetch_assoc($result1);        /* contains timestamp data */
    if (($cntr % 8) == 0) {                     /* use the modulo operator to graph every 8th timestamp */
        $time = $row2['reported_date'];
    } else {
        $time = "$cntr";                        /* use the counter value for time as it’s not on an 8-hour boundary */
    }
    $cntr = $cntr + 1;
    $eps = $row1['post_agg_eps'];
    $data_array[$time] = $eps;
}
27
28. Use Case #5 Continued
• The graph’s x-axis now looks like this:
28
29. Notes on the Install/Deployment of the Scripts
• To figure out the parameters for the wget commands (slide 6), use Firefox and the add-on “Live HTTP Headers”
– Note that password strings must be URL-encoded (see the sketch at the end of this slide)
• The PHPGraphLib library can graph up to 45 days’ worth of hourly data points. After that, the graph becomes too small to interpret and/or you overrun array buffers within the graphing library code and the graph fails to render.
• The custom query CGI script does not have parameter checking. Either educate and/or limit who uses it to avoid dropping a table, etc. Or, come up with your own query parameter filters for the CGI script.
• Software you’ll need to run the scripts on Linux/Unix
– Apache, MySQL and Perl
– Bourne or Bash shell, and the dos2unix command
– CGI.pm Perl module
– PHPGraphLib
– Specific Linux packages (next slide)
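– NOTE: The deck does not show how the URL-encoding is done. If the URI::Escape Perl module is available, a one-liner such as the following works; the password shown here is made up.
### Illustration only - substitute your real admin password.
### '@' becomes %40 and '!' becomes %21, ready for the wget --post-data string.
perl -MURI::Escape -e 'print uri_escape("P@ssw0rd!"), "\n";'
### Prints: P%40ssw0rd%21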
29
30. Notes on the Install/Deployment of the Scripts Continued
– Install these packages (assuming RHEL 5 or 6)
• yum install mysql mysql-server httpd php php-gd php-mysql php-imap php-ldap php-mbstring php-odbc php-pear php-xml php-xmlrpc
– Useful References
• http://www.tutorialspoint.com/mysql/mysql-create-tables.htm
• http://www.cyberciti.biz/tips/how-do-i-enable-remote-access-to-mysql-database-server.html
• http://www.ebrueggeman.com/phpgraphlib
– Other PHP Graphing Library Options if you don’t care for PHPGraphLib® (would require modifying a significant portion of the scripts if you switched to a new graphing library)
• pChart®
• JPGraph®
• FusionCharts®
30
31. Notes on the Install/Deployment of the Scripts Continued
– Creating the database and table in MySQL:
root# mysql -u root -p
mysql> CREATE DATABASE connector_stats;
mysql> use connector_stats;
mysql> CREATE TABLE stats_table(
stats_table_id INT NOT NULL AUTO_INCREMENT,
connector_name VARCHAR(100) NOT NULL,
connector_id VARCHAR(100) NOT NULL,
reported_date DATETIME,
agent_date DATETIME,
post_agg_eps DECIMAL(5,1) NOT NULL,
est_cache_size BIGINT NOT NULL,
sent_to_mgr_eps DECIMAL(5,1) NOT NULL,
PRIMARY KEY (stats_table_id)
);
31
32. FAQ
• Q: What versions of ESM will these scripts work with?
– A: Versions 4.5, 5.0, and 6.0c. There are slight differences in the format of the connector stats scraped from manage.jsp between versions 4.5/5.0 and 6.0c.
• Q: Can the MySQL DB run virtually?
– A: Yes, and that is how we run it.
• Q: Are there any licensing fees associated with running the scripts?
– A: No, it is all open source. The only thing you would need is a Red Hat license subscription so you can run “yum.” The graphing library is free for commercial use.
• Q: Are you releasing the scripts for anyone to use? For free?
– A: Yes, the ESM 6.0c version and an installation/configuration document.
• Q: What operating system will the code run on?
– A: It was designed for Red Hat Linux. It will run on Solaris, AIX, etc. if you port it. Windows would require Cygwin (or something similar) and would be a little more effort in terms of porting.
• Q: Will you be supporting the code for updates, bug fixes, etc.?
– A: No. You can maintain and enhance it yourself. The code is well commented.
32
33. Trademark Attributions and Acronym Definitions
• ArcSight is a registered trademark of Hewlett-Packard Development Company, L.P. in the U.S. and/or other countries.
• Linux is a registered trademark of Linus Torvalds in the U.S. and/or other countries.
• Red Hat is a registered trademark of Red Hat, Inc. in the U.S. and/or other countries.
• VMWare is a registered trademark of VMWare, Inc. in the U.S. and/or other countries.
• DB = Database
• EPD = Events per Day
• EPS = Events per Second
• ESM = Enterprise Security Manager
• GUI = Graphical User Interface
• OS = Operating System
• IP = Internet Protocol (also used as shorthand for “Internet Protocol address”)
• MSSP = Managed Security Service Provider
33
34. Thank You
• Thanks for Attending
• Questions?
• Contact Info: Jeff Holland, hollandje@saic.com
34