Main Functionalities:
 Real-time, subnet-level tracking of unmanaged, networked devices
 Detailed hardware information including slot description, memory configuration and
network adaptor configuration
 Extended plug-and-play monitor data including secondary monitor information
 Detailed asset-tag and serial number information, as well as embedded pointing device,
fixed drive and CD-ROM data
 Multi-layer information model – the same equipment and connections are represented
in several layers, with technology-specific information included in the dedicated layer,
providing a consistent view of the network for the operator without information
overflow.
 The layers represent both physical and logical information of the managed network,
including: physical network resources, infrastructure, physical connections, digital
transmission layer (SDH/SONET (STM-n, VC-4, VC-12, OC-n), PDH (E1, T1)),
telephony layer, IP-related layers, GSM/CDMA/UMTS-related layers as well as ATM
and FR layers
 History tracking - inventory objects (equipment, connections, numbering resources etc.)
are stored with full history of changes which enables change tracking; a new history entry
is made in three cases: object creation (the first history entry is made); object
modification (for each modification a new entry is added); and object removal (the last
history entry is made)
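The three history cases above can be sketched in Python (a minimal illustration only; the class and field names are invented and not taken from any specific inventory product):

```python
from datetime import datetime, timezone

class InventoryObject:
    """Inventory object stored with its full history of changes."""

    def __init__(self, name, attrs):
        self.name = name
        self.history = []                       # list of (timestamp, action, snapshot)
        self._record("created", attrs)          # case 1: creation makes the first entry

    def _record(self, action, attrs):
        self.attrs = dict(attrs)
        self.history.append((datetime.now(timezone.utc), action, dict(attrs)))

    def modify(self, **changes):                # case 2: each modification adds an entry
        self._record("modified", {**self.attrs, **changes})

    def remove(self):                           # case 3: removal makes the last entry
        self._record("removed", self.attrs)

obj = InventoryObject("switch-01", {"slots": 4})
obj.modify(slots=8)
obj.remove()
```

Replaying the history list then reconstructs the object's state at any point in time, which is what enables change tracking.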
 Auto-discovery and reconciliation – keeps the stored information up-to-date
with the changes occurring in the network. The auto-discovery tool enables adding new
network elements to the inventory database, removing existing network elements from
the inventory database as well as updating the inventory database due to changed cards,
ports or interfaces
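Reconciliation is essentially a set comparison between the stored inventory and a discovery scan. A minimal sketch (the data shapes are hypothetical):

```python
def reconcile(stored, discovered):
    """Compare the stored inventory against an auto-discovery scan.
    Both arguments map element id -> configuration (cards, ports, ...)."""
    to_add    = {k: discovered[k] for k in discovered.keys() - stored.keys()}
    to_remove = sorted(stored.keys() - discovered.keys())
    to_update = {k: discovered[k] for k in stored.keys() & discovered.keys()
                 if stored[k] != discovered[k]}          # changed cards/ports/interfaces
    return to_add, to_remove, to_update

stored     = {"r1": {"ports": 24}, "r2": {"ports": 48}, "r9": {"ports": 2}}
discovered = {"r1": {"ports": 24}, "r2": {"ports": 52}, "r3": {"ports": 8}}
add, remove, update = reconcile(stored, discovered)
```

Here `r3` would be added to the database, `r9` removed, and `r2` updated because its port count changed.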
 Network planning – future object planning support (storing future changes in the
equipment, switches configuration, connections, etc.); plans are executed or applied by
the system logic – object creation / changing actually take place and planned objects
become active in the inventory system; enables visualization of the network state in the
future
 Inventory-Based Billing enables accurate calculations of customer charges for inventory
products and services (e.g. equipment, locations, connections, capacity); this module is
able to calculate charges for services leased from another operator (vendor) and resold
(with profit) to customers, and to generate invoices
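The charge calculation for resold leased services can be sketched as follows (the margin model and function names are illustrative assumptions, not the product's actual billing logic):

```python
def resale_charge(vendor_cost, margin, quantity=1):
    """Charge for a service leased from a vendor and resold with profit.
    margin is a fraction, e.g. 0.25 for a 25% markup on the vendor cost."""
    return round(vendor_cost * (1 + margin) * quantity, 2)

def invoice(lines):
    """lines: iterable of (description, vendor_cost, margin, quantity).
    Returns the invoice line items and the total amount."""
    items = [(desc, resale_charge(cost, margin, qty))
             for desc, cost, margin, qty in lines]
    return items, round(sum(amount for _, amount in items), 2)

items, total = invoice([
    ("E1 leased line", 100.0, 0.25, 2),   # resold at 25% margin
    ("Rack space",      40.0, 0.10, 1),   # resold at 10% margin
])
```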
 Inventory and Console Tools allow user-friendly management of important objects used
in the application (creating templates (Logical View, Report Management, Charts),
editing symbols and links, searching for objects, encrypting passwords and notifying
users of various actions/events)
 Wizards and templates provide flexibility but do not allow inconsistent manipulation
of data; new objects are created with an object creation wizard (a so-called template),
which enables defining all attributes and necessary referential objects (path details for
connections, detailed elements (cards, ports) for equipment etc.); the user can define
which attributes of an object should be mandatory / predefined and if they should have a
constant value
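The mandatory/predefined attribute behavior described above can be sketched like this (a simplified illustration; the real wizard covers referential objects as well):

```python
class Template:
    """Object-creation template: mandatory attributes must be supplied by
    the user, predefined attributes get a constant value the user cannot
    override, which prevents inconsistent data."""

    def __init__(self, mandatory, predefined):
        self.mandatory = set(mandatory)
        self.predefined = dict(predefined)

    def create(self, **attrs):
        missing = self.mandatory - attrs.keys()
        if missing:
            raise ValueError(f"missing mandatory attributes: {sorted(missing)}")
        overridden = attrs.keys() & self.predefined.keys()
        if overridden:
            raise ValueError(f"constant attributes cannot be set: {sorted(overridden)}")
        return {**attrs, **self.predefined}

tmpl = Template(mandatory=["name", "location"], predefined={"type": "SDH"})
obj = tmpl.create(name="mux-1", location="POP-3")
```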
 Process-driven Inventory – by introduction of automated processes, all user tasks related
to inventory data are done in the context of a process instance; changing the state of the
network (e.g. by provisioning a new service) cannot be done without updating
information in the inventory; this assures real-time accuracy of the inventory database
 Information theft – A network inventory management system not only keeps track of
your hardware but also your software. It also shows you who has access to that software.
A regular check of your system's inventory will let you know who has downloaded and
used software they may not be authorized to use.
 Equipment theft – A network management system will automatically detect every piece
of equipment and software connected to your system. And it will also let you know which
items are not working properly, which items need to be replaced, and which items have
mysteriously disappeared. Eliminate workplace theft simply by running a regularly
scheduled inventory check.
 Licensing agreements – An inventory of your software and licensing agreements will let
you know if you've got the necessary licensing agreements for all your software.
Insufficient licensing can cost you usage fees and fines and duplicating software that you
already have is an unnecessary expense.
 System Upgrades – Outdated equipment and software can cost your company time,
money, and resources. Downtime and slow response times are two of the biggest time
killers for your business. Set filters on your network inventory management system to
alert you when it's time to upgrade software or replace hardware with newer technology
to keep your system running as smoothly and efficiently as possible.
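Such an upgrade filter amounts to scanning the inventory against age and version thresholds. A small sketch (thresholds and field names are made up for illustration):

```python
from datetime import date

def upgrade_alerts(inventory, today, max_age_years=5):
    """Flag hardware older than max_age_years and software below its
    minimum supported version; both thresholds are illustrative."""
    alerts = []
    for item in inventory:
        if item["kind"] == "hardware":
            age_years = (today - item["purchased"]).days / 365.25
            if age_years > max_age_years:
                alerts.append((item["name"], "replace hardware"))
        elif item["kind"] == "software" and item["version"] < item["min_supported"]:
            alerts.append((item["name"], "upgrade software"))
    return alerts

alerts = upgrade_alerts(
    [{"kind": "hardware", "name": "pc-17", "purchased": date(2004, 1, 1)},
     {"kind": "software", "name": "officesuite",
      "version": (9, 0), "min_supported": (11, 0)}],
    today=date(2011, 6, 1))
```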
Benefits:
 End-to-end view of multi-vendor, multi-technology networks
 Reduced network operating cost
 Improved utilization of existing resources
 Quicker, more efficient change management
 Visualization and control of distributed resources
 Seamless integration within the existing environment
 Automatically discovers and diagrams network topology
 Automatic generation of network maps in Microsoft Office Visio
 Automatically detects new devices and changes to network topology
 Simplifies inventory management for hardware and software assets
 Addresses reporting needs for PCI compliance and other regulatory requirements
 Provides powerful capabilities, including:
o Inventory management for all systems
o Direct access to Windows, Macintosh and Linux devices
o Automatically save hardware and software configuration information in a SQL
database
o Generate systems continuity and backup profiler reports
o Use remote management capabilities to shut down, restart and launch
applications
Completing the gaps with scripts
Creating Device Groups (Security Level, Same
Version…)
Creating Policies
Microsoft released Security Compliance Manager along with a heap of new
security baselines for you to use to compare against your environment. In case you
are not familiar with SCM, it is a great product from Microsoft that
consolidates all the best practices for their software with in-depth explanations for
each setting.
Notably this new version has security baselines for Exchange Server 2010 and 2007. These baselines are also
customized for the specific role of the server. Also interesting is that the baseline settings not only include group policy
computer settings but also PowerShell commands to configure aspects of the product that are not as simple to make as
a registry key change.
As you can see from the image below, the PowerShell script to perform the required configuration is listed in the detail
pane.
Attachments and Guidelines
Another new feature you might notice is that there is now a section called Attachments and
Guidelines that has a lot of supporting documentation relating to the security baseline. This
section also allows you to add your own supporting documentation to your custom baseline
templates.
How to Import an existing GPO into Microsoft Security Compliance Manager v2
To start you simply need to make a backup of the existing Group Policy Object via the Group
Policy Management Console and then import it by selecting the “Import GPO” option in the new
tool at the top right corner (see image below).
Select the path to the backup of the individual GPO (see image below).
Once you click OK the policy will then import into the SCM tool.
Once the GPO is imported the tool will look at the registry path and if it is a known value it will
then match it up with the additional information already contained in the SCM database (very
smart).
Now that you have the GPO imported into the SCM tool, you can use the “Compare” option to see the
differences between this and the other baselines.
How to compare Baseline settings in the Security Compliance Manager tool
Simply select the policy you want to compare on the left hand column and then select the
“Compare” option on the right hand side (see image below).
Now select the Baseline policy you want to do the comparison with and press OK.
The result is a report showing the settings and values that differ between the two
policies.
The values tab will show you all the common settings between the policies that have different
values and the other tab will show you all the settings that are uniquely configured in either
policy.
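The two tabs of the compare report reduce to a simple diff over two setting maps. A sketch of that logic (the setting names below are hypothetical examples, not a real baseline):

```python
def compare_baselines(a, b):
    """Diff two baselines (setting name -> value), mirroring the compare
    report: common settings with different values, and settings unique
    to either policy."""
    different = {k: (a[k], b[k]) for k in a.keys() & b.keys() if a[k] != b[k]}
    unique_a  = {k: a[k] for k in a.keys() - b.keys()}
    unique_b  = {k: b[k] for k in b.keys() - a.keys()}
    return different, unique_a, unique_b

gpo      = {"MinPasswordLength": 8,  "LockoutThreshold": 5, "AuditLogon": "Success"}
baseline = {"MinPasswordLength": 14, "LockoutThreshold": 5, "RequireSigning": 1}
diff, only_gpo, only_baseline = compare_baselines(gpo, baseline)
```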
Auditing to verify security in practice
How to avoid risk from inconsistent network and security
configuration practices?
Regulations define specific traffic and firewall policies that must be deployed, monitored,
audited, and enforced. Unfortunately, due to organizational silos, many organizations lack the ability to
seamlessly assess when a network configuration allows traffic that is "out of policy" per
compliance, corporate mandate, or industry best practice.
Configuration Audit:
Configuration Audit tools provide automated collection, monitoring, and audit of configuration
across an organization's switches, routers, firewalls, and IDS/IPS. Through a unique ability to
normalize multi-vendor device configuration, these tools provide a detailed and intuitive assessment of how
devices are configured, including defined firewall rules, security policy, and network hierarchy.
These solutions maintain a history of configuration changes, audit configuration rules on a
device, and compare this across devices. Intelligently integrated with network activity data,
device configuration data is instrumental in building an enterprise-wide representation of a
network's topology. This topology mapping helps an organization to understand allowed and
denied activity across the entire network, resulting in improved consistency of device
configuration and flagged configuration changes that introduce risk to the network.
Configuration auditing solutions vary according to the following types:
1. Configuration Management Software – Usually provides a comparison between two
configuration sets, and also a comparison against a specific compliance template
2. Configuration Analyzers – Most commonly used for analyzing firewall configurations,
known as “Firewall Analyzer” and “Firewall Configuration Analyzer”
3. Local Security Compliance Scanners – Tools such as MBSA (Microsoft Baseline
Security Analyzer) provide local system configuration analysis
4. Vulnerability Assessment Products – aka “Security Scanners”
Vulnerability scanners can be used to audit the settings and configuration of operating systems,
applications, databases and network devices. Unlike vulnerability testing, an audit policy is used
to check various values to ensure that they are configured to the correct policy. Example policies
for auditing include password complexity, ensuring that logging is enabled and testing that
anti-virus software is installed properly.
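Such an audit policy is essentially a set of predicates over a host's reported configuration. A minimal sketch (the configuration keys and thresholds below are invented for illustration):

```python
def audit_host(cfg):
    """Audit a host configuration dict against a simple policy and
    return the list of failed checks (names are illustrative)."""
    checks = {
        "password_min_length": cfg.get("min_password_length", 0) >= 12,
        "password_complexity": cfg.get("complexity_enabled", False),
        "logging_enabled":     cfg.get("audit_logging", "off") == "on",
        "antivirus_installed": cfg.get("antivirus") is not None,
    }
    return [name for name, ok in checks.items() if not ok]

failed = audit_host({"min_password_length": 8,
                     "complexity_enabled": True,
                     "audit_logging": "on"})
```

An empty result means the host is compliant; here the short password minimum and missing anti-virus entry are flagged.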
Audit policies of common vulnerability scanners have been certified by the US Government or
Center for Internet Security to ensure that the auditing tool accurately tests for best practice and
required configuration settings.
When combined with vulnerability scanning and real-time monitoring, these auditing
tools offer some powerful features such as:
 Detecting system change events in real-time and then performing a configuration audit
 Ensuring that logging is configured correctly for Windows and Unix hosts
 Auditing the configuration of a web application's operating system, application and SQL
database
Audit policies may also be deployed to search for documents that contain sensitive data such as
credit card or Social Security numbers. A basic tenet of most IT management practices is to
minimize variance. Even though your organization may consist of certain types of operating
systems and hardware, small changes in drivers, software, security policies, patch updates and
sometimes even usage can have dramatic effects on the underlying configuration. As time goes
by, these servers and desktop computers can have their configuration drift further away from a
"known good" standard, which makes maintaining them more difficult.
The following are the most common types of auditing provided by security auditing tools:
 Application Auditing
Configuration settings of applications such as web servers and anti-virus can be tested
against a policy.
 Content Auditing
Office documents and files can be searched for credit card numbers and other sensitive
content.
 Database Auditing
SQL database settings, as well as settings of the host operating system, can be tested for
compliance.
 Operating System Auditing
Access control, system hardening, error reporting, security settings and more can be
tested against many types of industry and government policies.
 Router Auditing
Authentication, security and configuration settings can be audited against a policy.
Agentless vs. Agent-Based Security Auditing Solutions
The chart below provides a high level view of agent-based versus agentless systems; details
follow.
Solution Characteristic      Agentless       Agent-Based
-----------------------      ---------       -----------
Asset Discovery              Advantage       None/Limited
Asset Coverage               Advantage       Limited
Audit Comprehensiveness      Par             Par
Target System Impact         Advantage       Variable
Target System Security       Advantage       Variable
Network Impact               Variable/Low    Low
Cost of Deployment           Advantage       High
Cost of Ownership            Advantage       High
Scalability                  Advantage       Limited
Functionalities:
1. Asset Discovery: the ability to discover and maintain an accurate inventory of IT
assets and applications.
Agentless solutions typically have broader discovery capabilities – including both active
and passive technologies – that permit them to discover a wider range of assets. This
includes discovery of assets that may be unknown to administrators or should not be on
your network.
2. Asset Coverage: the breadth of IT assets and applications that can be assessed.
Many IT assets that need to be audited simply cannot accept agent software. Examples
include network devices like routers and switches, point-of-sale systems, IP phones and
many firewalls.
3. Audit Comprehensiveness: the degree of completeness with which the auditing
system can assess the target system’s security and compliance status.
Using credentialed access, agentless solutions can assess any configuration or data item
on the target system, including an analysis of system file integrity (file integrity
monitoring).
4. Target System Impact: the impact on the stability and performance of the scan
target.
Agentless solutions use well-defined remote access interfaces to log in and retrieve the
desired data, and as a result have a much more benign impact on the stability of the assets
being scanned than agent-based systems do.
5. Target System Security: the impact of the auditing system on the security of the
target system.
Agentless auditing solutions are uniquely positioned to conduct objective and trusted
security analyses because they do not run on the target system.
6. Network Impact: the impact on the performance of the associated network.
Although agentless auditing solutions gather target system configuration information
using a network-based remote login, actual network impact is marginal due to bandwidth
throttling and overall low usage.
7. Cost of Deployment: the time and effort required to make the auditing system
operational.
Since there are no agents to install, getting started with agentless solutions is significantly
faster than with agent-based solutions – typically hours rather than days or weeks.
8. Cost of Ownership: the time and effort required to update and adjust the
configuration of the auditing system.
Agentless solutions typically have much lower costs of ownership than agent-based
systems; deployment is easier and faster, there are fewer components to update and
configuration is centralized on one or two systems.
9. Scalability: the number of target systems that a single instance of the audit system
can reliably audit in a typical audit interval.
Agentless auditing solutions excel in scalability, as audit capacity can be expanded virtually
without limit simply by increasing the number of management servers.
10. Simplified configuration compliance
Simplifies configuration compliance with drag-and-drop templates for Windows and
Linux operating systems and applications from FDCC, NIST, STIGS, USGCB and
Microsoft. Prioritize and manage risk, audit configurations against internal policy or
external best practice and centralize reporting for monitoring and regulatory purposes.
11. Complete configuration assessment
Provides a comprehensive view of Windows devices by retrieving software configuration
that includes audit settings, security settings, user rights, logging configuration and
hardware information including memory, processors, display adapters, storage devices,
motherboard details, printers, services, and ports in use.
12. Out-of-the-box configuration auditing
Out-of-the-box configuration auditing, reporting, and alerting for common industry
guidelines and best practices to keep your network running, available, and accessible.
13. Datasheet configuration auditing
Compare assets to industry baselines and best practices to check whether any software or
hardware changes were made since the last scan that could impact your security and
compliance objectives.
14. Up-to-date baselines
With this module, a complete configuration compliance benchmark library keeps systems
up-to-date with industry benchmarks including changes to benchmarks and adjustments
for newer operating systems and applications.
15. Customized best practices
Customized best practices for improved policy enforcement and implementation for a
broad set of industry templates and standards including built-in configuration templates
for NIST, Microsoft, and more.
16. Built-in templates
Built-in templates for Windows and Linux operating systems and applications from
FDCC, NIST, STIGS, USGCB, and Microsoft.
17. OVAL 5.6 SCAP support
18. Streamlined reporting
Streamlined reporting for government and corporate standards with built-in vulnerability
reporting.
Case Studies Summary: Top 10 Mistakes -
Managing Windows Networks
“The shoemaker's son always goes barefoot”
 Network Administrators who use Windows XP or Windows 7 without UAC on their
own computer
 Network Administrators who have a weak password for a local administrator account on
their machine
o An Example from a real client: Zorik:12345
 Network Administrators whose computers are excluded from security scans
 Network Administrators whose computers lack security patches
 Network Administrators whose computers don’t have an Anti-Virus
 Network Administrators with unencrypted Laptops
Domain Administrators on Users VLAN
 In most organizations administrators and users are connected to the same VLAN
 In this case, a user/attacker can:
o Attack the administrators’ computers using NetBIOS Brute Force
o Spoof a NetBIOS name of a local server and attack using an NBNS Race
Condition Name Spoofing
o Take Over the network traffic using a variety of Layer 2 attacks and:
 Replace/Infect EXE files that will execute with network administrator
privileges
 Steal Passwords & Hashes of Domain Administrators
 Execute Man-In-The-Middle attacks on encrypted connections (RDP,
SSH, SSL)
Domain Administrator with a Weak Password
Domain Administrator without the Conficker Patch (MS08-
067)
(LM and NTLM v1) vs. (NTLM v2)
 Once the hash of a network administrator is sent over the network, his identity can be
stolen in two ways:
o The hash can be used in a Pass-the-Hash attack
o The hash can be broken via Dictionary, Hybrid, Brute Force or Rainbow Tables
attacks
Pass the Hash Attack
Daily logon as a Domain Administrator
1. Is there any entity among men that fits the definition of “God”? (Obviously not…)
a. Computers shouldn’t have one either (refers to “Domain Administrator” default
privilege level)
b. Isn’t a network administrator a normal user when he connects to his machine?
c. Doesn’t the network administrator surf the internet?
d. Doesn’t he visit Facebook?
e. Doesn’t he receive emails and open them?
f. Doesn’t he download and install applications?
g. Can’t an application he downloaded contain malware or a virus?
h. What can a virus do running under Domain Administrator privileges?
i. What is the potential damage to data, confidentiality and operability in costs?
Using Domain Administrator for Services
 Why does MSSQL “require” Domain Administrator privileges? (It doesn’t…)
 When a password is assigned to a service, the raw data of the password is stored locally
and can be extracted by a remote user with a local administrative account
 The scenario of a service actually requiring Domain Administrator privileges is
extremely rare (it almost doesn’t exist) and mostly reflects a wrong analysis of the real
requirements, or laziness, by the decision maker
 In the most common case where a service requires an account which is different from
SYSTEM, it only requires a local/domain user with LOCAL administrative
privileges
 In the cases where a network manager or a service requires “the highest privileges”, they
only require local administrator on clients and/or operational servers, but not Domain
Administrator privileges (which include login rights to the domain controllers, DNS
servers, backup servers and most of today’s enterprise applications that integrate into
Active Directory)
Managing the network with Local Administrator Accounts
 In most cases the operational requirement is:
o The ability to install software on servers and client endpoint machines
o Connecting remotely to machines via C$ (NetBIOS) and Remote Registry
o Executing remote network scanning
o It is possible to execute 99% of the tasks using Separation of Duties,
assigning each privilege to a single user/account
 Users_Administrator_Group – Local Administrators
 Servers_Administrators_Group – Local Administrators
 Change Password Privilege
The NetLogon Folder
 Improper use of the Netlogon folder is the classic way to gain Domain Administrator
privileges for the long term
 The most common cases are:
o Administrative Logon scripts with clear text passwords to domain administrator
accounts or local administrator account on all machines
o Free write/modify permission into the directory
 A logical problem, completely unnoticed, almost undetectable
 The longer the organization’s IT systems exist, the more “treasures” to discover
The NetLogon Folder - test.kix – Revealing the Citrix UI Password
The NetLogon Folder - addgroup.cmd – Revealing the local Administrator of THE
ENTIRE NETWORK
The NetLogon Folder - password.txt – can’t get any better for a hacker
LSA Secrets & Protected Storage
 The Windows operating system implements an API to work securely with passwords
 Encryption keys are stored on the system and the encrypted data is stored in the registry
o Internet Explorer
o NetBIOS Saved Passwords
o Windows Service Manager
LSA Secrets
Protected Storage
Wireless Passwords
Cached Logons
 A user at his home, unplugged from the organizational internal network, trying to log
into his laptop cannot log into the domain
 Therefore, the network logon is simulated:
o The hash of the user’s password is saved on his machine
o When the user inputs his password, it is converted into a hash and compared to
the list of saved hashes, if a match is found, the system logs the user in
 The vulnerability: the default setting in Windows is to save the hashes of the last 10
unique/different passwords used to connect to this machine locally
 In most cases, the hash of a domain administrator privileged account is on that list
 Most organizations don’t distinguish between PCs, Servers and Laptops when it comes to
the settings for this feature
 Most organizations don’t harden the cached logons setting:
o On local PCs, the cached logons amount should be set to 0
o On laptops, to 1
o On servers, to 0 (unless the server is mission critical, in which case 1 to 3 is
recommended)
 This means that at least 50% of the machines contain a domain administrator’s hash that
can be used to take over the entire network
 Conclusion: A user/attacker with local administrator privileges can get a domain
administrator account from most of the organization’s computers
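The cached-logon mechanism described above can be sketched as follows. This is a simplified illustration: SHA-256 stands in for the real cached-credential hash format (MSCACHE/DCC), and the class is invented, not Windows internals.

```python
import hashlib

def cache_hash(password):
    # Stand-in for the real cached-credential hash; SHA-256 for illustration.
    return hashlib.sha256(password.encode()).hexdigest()

class CachedLogons:
    """Keeps hashes of the last `limit` unique passwords used on a machine
    (the Windows default is 10) and verifies offline logons against them."""

    def __init__(self, limit=10):
        self.limit = limit
        self.hashes = []

    def record_logon(self, password):
        h = cache_hash(password)
        if h in self.hashes:
            return
        self.hashes.append(h)
        self.hashes = self.hashes[-self.limit:]   # keep only the newest `limit`

    def offline_logon(self, password):
        # Entered password is hashed and compared to the saved hashes.
        return cache_hash(password) in self.hashes

laptop = CachedLogons(limit=1)      # hardened laptop: only 1 cached logon
laptop.record_logon("user-pass")
laptop.record_logon("admin-pass")   # evicts the previous entry
```

With `limit=1`, only the most recent logon can authenticate offline, which is exactly why the hardening values above shrink the window an attacker can exploit.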
Password History
 In order to prevent users from recycling their passwords on every forced password
change, the system saves the password hashes locally
 By default, their last 24 passwords are saved on the machine
 An attacker with local administrator privileges on the machine gets all the “password
patterns” of all the user accounts who ever logged into this machine
 A computer that was used by only 2 people will contain up to 48 different passwords
 Some of these passwords are usually used for other accounts in the organization
Users as Local Administrators
 When a user is logged on with local administrator privileges, the local system’s entire
integrity is at risk
 He can install privileged software and drivers such as promiscuous network drivers for
advanced network and Man-In-The-Middle attacks and Rootkits
 He is able to extract the hashes of all the old passwords of the users who ever logged
into the current machine
 He is able to extract the hashes of all the CURRENT passwords of the users who ever
logged into the current machine
Forgetting to Harden: RestrictAnonymous=1
Weak Passwords / No Complexity Enforcement
 Weak Passwords = A successful Brute Force
 Complexity-compliant passwords which appear in a password dictionary, e.g.
“Password1!”
 Old passwords or default passwords of the organization
Guess what the password was? (gma )
Firewalls
Understanding Firewalls (1, 2, 3, 4, 5 generations)
A firewall is a device or set of devices designed to permit or deny network transmissions based upon a set
of rules and is frequently used to protect networks from unauthorized access while permitting legitimate
communications to pass.
Many personal computer operating systems include software-based firewalls to protect against threats from
the public Internet. Many routers that pass data between networks contain firewall components and,
conversely, many firewalls can perform basic routing functions.
First generation: packet filters
The first paper published on firewall technology was in 1988, when engineers from Digital Equipment
Corporation (DEC) developed filter systems known as packet filter firewalls. This fairly basic system was
the first generation of what became a highly involved and technical internet security feature. At AT&T Bell
Labs, Bill Cheswick and Steve Bellovin were continuing their research in packet filtering and developed a
working model for their own company based on their original first generation architecture.
Packet filters act by inspecting the "packets" which transfer between computers on the Internet. If a packet
matches the packet filter's set of rules, the packet filter will drop (silently discard) the packet, or reject it
(discard it, and send "error responses" to the source).
This type of packet filtering pays no attention to whether a packet is part of an existing stream of traffic
(i.e. it stores no information on connection "state"). Instead, it filters each packet based only on information
contained in the packet itself (most commonly using a combination of the packet's source and destination
address, its protocol, and, for TCP and UDP traffic, the port number).
TCP and UDP protocols constitute most communication over the Internet, and because TCP and UDP
traffic by convention uses well known ports for particular types of traffic, a "stateless" packet filter can
distinguish between, and thus control, those types of traffic (such as web browsing, remote printing, email
transmission, file transfer), unless the machines on each side of the packet filter are both using the same
non-standard ports.
Packet filtering firewalls work mainly on the first three layers of the OSI reference model, which means
most of the work is done between the network and physical layers, with a little bit of peeking into the
transport layer to figure out source and destination port numbers.[8]
When a packet originates from the
sender and filters through a firewall, the device checks for matches to any of the packet filtering rules that
are configured in the firewall and drops or rejects the packet accordingly. When the packet passes through
the firewall, it filters the packet on a protocol/port number basis (GSS). For example, if a rule in the
firewall exists to block telnet access, then the firewall will block the TCP protocol for port number 23.
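A first-generation stateless filter judges each packet in isolation against its rule set, as in the telnet example above. A minimal sketch (the rule format is invented for illustration):

```python
from collections import namedtuple

Packet = namedtuple("Packet", "src dst proto dport")
Rule   = namedtuple("Rule", "proto dport action")     # None matches anything

def filter_packet(rules, pkt, default="accept"):
    """Stateless filtering: each packet is judged only on its own fields,
    with no memory of any connection it may belong to."""
    for rule in rules:
        if ((rule.proto is None or rule.proto == pkt.proto) and
                (rule.dport is None or rule.dport == pkt.dport)):
            return rule.action
    return default

rules  = [Rule("tcp", 23, "drop")]    # block telnet (TCP port 23)
telnet = Packet("10.0.0.5", "10.0.0.9", "tcp", 23)
web    = Packet("10.0.0.5", "10.0.0.9", "tcp", 80)
```

Note that nothing here records connection state; two identical packets always get the same verdict, which is precisely the limitation stateful filters address.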
Second generation: "stateful" filters
From 1989-1990 three colleagues from AT&T Bell Laboratories, Dave Presetto, Janardan Sharma, and
Kshitij Nigam, developed the second generation of firewalls, calling them circuit level firewalls.
Second-generation firewalls perform the work of their first-generation predecessors but operate up to layer
4 (transport layer) of the OSI model. They examine each data packet as well as its position within the data
stream. Known as stateful packet inspection, this records all connections passing through the firewall and
determines whether a packet is the start of a new connection, a part of an existing connection, or not part of
any connection. Though static rules are still used, these rules can now contain connection state as one of their
test criteria.
Certain denial-of-service attacks bombard the firewall with thousands of fake connection packets in an
attempt to overwhelm it by filling up its connection state memory.
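Second-generation behavior can be sketched as a connection table with a bounded size (a toy model; real firewalls track full 5-tuples, sequence numbers and timeouts):

```python
def stateful_filter(packets, max_states=1000):
    """Track connections and classify each packet as 'new', 'established',
    or 'invalid'. The state table is bounded, which is the resource the
    flooding attack described above tries to exhaust."""
    states = set()
    results = []
    for pkt in packets:
        conn = (pkt["src"], pkt["dst"], pkt["dport"])
        if conn in states:
            results.append("established")
        elif pkt.get("syn") and len(states) < max_states:
            states.add(conn)              # start of a new connection
            results.append("new")
        else:
            results.append("invalid")     # mid-stream packet with no state
    return results

results = stateful_filter([
    {"src": "a", "dst": "b", "dport": 80, "syn": True},
    {"src": "a", "dst": "b", "dport": 80},
    {"src": "c", "dst": "b", "dport": 80},   # no SYN, no state -> invalid
])
```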
Third generation: application layer
The key benefit of application layer filtering is that it can "understand" certain applications and protocols
(such as File Transfer Protocol, DNS, or web browsing), and it can detect if an unwanted protocol is
sneaking through on a non-standard port or if a protocol is being abused in any harmful way.
The existing deep packet inspection functionality of modern firewalls can be shared by
Intrusion Prevention Systems (IPS).
Currently, the Middlebox Communication Working Group of the Internet Engineering Task Force (IETF)
is working on standardizing protocols for managing firewalls and other middleboxes.
Another axis of development is about integrating identity of users into Firewall rules. Many firewalls
provide such features by binding user identities to IP or MAC addresses, which is very approximate and
can be easily circumvented. The NuFW firewall provides real identity-based firewalling, by requesting the
user's signature for each connection. Authpf on BSD systems loads firewall rules dynamically per user,
after authentication via SSH.
Application firewall
An application firewall is a form of firewall which controls input, output, and/or access from, to,
or by an application or service. It operates by monitoring and potentially blocking the input,
output, or system service calls which do not meet the configured policy of the firewall. The
application firewall is typically built to control all network traffic on any OSI layer up to
the application layer. It is able to control applications or services specifically, unlike a stateful
network firewall which is - without additional software - unable to control network traffic
regarding a specific application. There are two primary categories of application
firewalls, network-based application firewalls and host-based application firewalls.
Network-based application firewalls
A network-based application layer firewall is a computer networking firewall operating at the application
layer of a protocol stack, and is also known as a proxy-based or reverse-proxy firewall. Application
firewalls specific to a particular kind of network traffic may be titled with the service name, such as a web
application firewall. They may be implemented through software running on a host or a stand-alone piece
of network hardware. Often, it is a host using various forms of proxy servers to proxy traffic before passing
it on to the client or server. Because it acts on the application layer, it may inspect the contents of the
traffic, blocking specified content, such as certain websites, viruses, and attempts to exploit known logical
flaws in client software.
Modern application firewalls may also offload encryption from servers, block application input/output from
detected intrusions or malformed communication, manage or consolidate authentication, or block content
which violates policies.
Host-based application firewalls
A host-based application firewall can monitor any application input, output, and/or system service calls
made from, to, or by an application. This is done by examining information passed through system calls
instead of or in addition to a network stack. A host-based application firewall can only provide protection
to the applications running on the same host.
Application firewalls function by determining whether a process should accept any given connection.
Application firewalls accomplish their function by hooking into socket calls to filter the connections
between the application layer and the lower layers of the OSI model. Application firewalls that hook into
socket calls are also referred to as socket filters. Application firewalls work much like a packet filter but
application filters apply filtering rules (allow/block) on a per process basis instead of filtering connections
on a per port basis. Generally, prompts are used to define rules for processes that have not yet received a
connection. It is rare to find application firewalls not combined or used in conjunction with a packet filter.
Also, application firewalls further filter connections by examining the process ID of data packets against a
ruleset for the local process involved in the data transmission. The extent of the filtering that occurs is
defined by the provided ruleset. Given the variety of software that exists, application firewalls only have
more complex rule sets for standard services, such as sharing services. These per-process rule sets have
limited efficacy in filtering every possible association that may occur with other processes. These per-
process rule sets also cannot defend against modification of the process via exploitation, such as memory
corruption exploits. Because of these limitations, application firewalls are beginning to be supplanted by a
new generation of application firewalls that rely on mandatory access control (MAC), also referred to as
sandboxing, to protect vulnerable services. Examples of next generation host-based application firewalls
which control system service calls by an application are AppArmor and the TrustedBSD MAC framework
(sandboxing) in Mac OS X.
Host-based application firewalls may also provide network-based application firewalling.
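The per-process model described above can be sketched as follows. The rule table and process names are hypothetical, not any vendor's format:

```python
# Hypothetical sketch: rules keyed by process name rather than by port,
# with a prompt for processes that have no rule defined yet.
RULES = {
    "firefox.exe": "allow",
    "unknown_tool.exe": "block",
}

def decide(process_name, rules=RULES, default="prompt"):
    """Return the firewall action for a connection attempt by a process."""
    return rules.get(process_name, default)

print(decide("firefox.exe"))  # allow
print(decide("nc.exe"))       # prompt - ask the user to define a rule
```

The `prompt` default mirrors the behavior described above, where rules are created interactively for processes that have not yet received a connection.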
Distributed web application firewalls
Distributed Web Application Firewall (also called a dWAF) is a member of the web application firewall
(WAF) and Web applications security family of technologies. Purely software-based, the dWAF
architecture is designed as separate components able to physically exist in different areas of the network.
This advance in architecture allows the resource consumption of the dWAF to be spread across a network
rather than depend on one appliance, while allowing complete freedom to scale as needed. In particular, it
allows the addition / subtraction of any number of components independently of each other for better
resource management. This approach is ideal for large and distributed virtualized infrastructures such as
private, public or hybrid cloud models.
Cloud-based web application firewalls
A cloud-based Web Application Firewall is also a member of the web application firewall (WAF)
and web application security family of technologies. This technology is unique in that it is
platform-agnostic and does not require any hardware or software changes on the host,
just a DNS change. By applying this DNS change, all web traffic is routed through the WAF
where it is inspected and threats are thwarted. Cloud-based WAFs are typically centrally
orchestrated, which means that threat detection information is shared among all the tenants of the
service. This collaboration results in improved detection rates and lower false positives. Like
other cloud-based solutions, this technology is elastic, scalable and is typically offered as a pay-
as-you-grow service. This approach is ideal for cloud-based web applications and small or
medium sized websites that require web application security but are not willing or able to make
software or hardware changes to their systems.
 In 2010, Imperva spun out Incapsula to provide a cloud-based WAF for small to medium
sized businesses.
 Since 2011, United Security Providers provides the Secure Entry Server as an Amazon EC2
Cloud-based Web Application Firewall
 Akamai Technologies offers a cloud-based WAF that incorporates advanced features such as
rate control and custom rules enabling it to address both layer 7 and DDoS attacks.
The Common Firewall’s Limits
1. The common firewall works on ACL rules where something is allowed or denied based
on a simple set of parameters such as Source IP, Destination IP, Source Port and
Destination Port.
2. Most firewalls don’t support application level rules that would allow the creation of smart
rules that match today’s more active application-rich technology world.
3. Every hacker knows that 99.9% of the firewalls on planet Earth are configured to
allow connections to remote machines at TCP port 80, since this is the port of the
“WEB”, used by HTTP.
4. Today’s firewalls will allow any kind of traffic to leave the organization on port 80, which
means that:
 Hackers can use “network tunneling” technology to transfer ANY kind of
information on port 80 and therefore bypass all of the currently deployed firewalls
 In terms of traffic and content going through a port defined to be open, such as port 80,
firewalls are configured to act as a blacklist; therefore tunneling an ENCRYPTED
connection such as SSL or SSH on port 80 will bypass all of the firewall’s
potential inspection features.
 The problem gets worse when ports that allow encrypted connections are commonly
available, such as port 443, which carries the encrypted HTTPS protocol. Hackers
can tunnel any communication on port 443 and encrypt it with HTTPS to imitate
the behavior of any standard browser.
 The firewalls that do inspect SSL traffic rely on generating and signing a certificate of their
own for the browsed domain, which the browser accepts because the firewall is defined on
the machine as a trusted Certificate Authority. However, as firewalls work mostly in
blacklist mode, they will still forward any traffic that they fail to open and inspect.
Implementing Application Aware Firewalls
Features
Palo Alto Networks has built a next-generation firewall with several innovative technologies
enabling organizations to fix the firewall. These technologies bring business-relevant elements
(applications, users, and content) under policy control on high performance firewall architecture.
This technology runs on a high-performance, purpose-built platform based on Palo Alto
Networks' Single-Pass Parallel Processing (SP3) Architecture. Unique to the SP3 Architecture,
traffic is only examined once, using hardware with dedicated processing resources for security,
networking, content scanning and management to provide line-rate, low-latency performance
under load.
Application Traffic Classification
Accurate traffic classification is the heart of any firewall, with the result becoming the basis of
the security policy. Traditional firewalls classify traffic by port and protocol, which, at one point,
was a satisfactory mechanism for securing the perimeter.
Today, applications can easily bypass a port-based firewall; hopping ports, using SSL and SSH,
sneaking across port 80, or using non-standard ports. App-IDTM, a patent-pending traffic
classification mechanism that is unique to Palo Alto Networks, addresses the traffic classification
limitations that plague traditional firewalls by applying multiple classification mechanisms to the
traffic stream, as soon as the device sees it, to determine the exact identity of applications
traversing the network.
Classify traffic based on applications, not ports.
App-ID uses multiple identification mechanisms to determine the exact identity of applications
traversing the network. The identification mechanisms are applied in the following manner:
 Traffic is first classified based on the IP address and port.
 Signatures are then applied to the allowed traffic to identify the application based on unique
application properties and related transaction characteristics.
 If App-ID determines that encryption (SSL or SSH) is in use and a decryption policy is in
place, the application is decrypted and application signatures are applied again on the
decrypted flow.
 Decoders for known protocols are then used to apply additional context-based signatures to
detect other applications that may be tunneling inside of the protocol (e.g., Yahoo! Instant
Messenger used across HTTP).
 For applications that are particularly evasive and cannot be identified through advanced
signature and protocol analysis, heuristics or behavioral analysis may be used to determine the
identity of the application.
As the applications are identified by the successive mechanisms, the policy check determines how to
treat the applications and associated functions: block them, or allow them and scan for threats, inspect
for unauthorized file transfer and data patterns, or shape using QoS.
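The successive mechanisms listed above can be sketched as a simple pipeline. This is a hedged illustration of the logic only, not Palo Alto Networks' actual implementation; every field name below is invented:

```python
def classify(flow):
    """Apply successive classification mechanisms to a flow (a dict of
    observed attributes, all hypothetical) and return the best identity."""
    # 1. Port/IP-based first guess
    app = {80: "web-browsing", 443: "ssl"}.get(flow.get("dst_port"), "unknown")
    # 2. Application signatures override the port-based guess
    if flow.get("signature_match"):
        app = flow["signature_match"]
    # 3. If SSL is in use and a decryption policy exists, re-apply signatures
    if app == "ssl" and flow.get("decryption_allowed"):
        app = flow.get("decrypted_signature_match", app)
    # 4. Protocol decoders catch applications tunneling inside the protocol
    if flow.get("tunneled_app"):
        app = flow["tunneled_app"]
    # 5. Fall back to heuristics/behavioral analysis for evasive traffic
    if app == "unknown" and flow.get("heuristic_guess"):
        app = flow["heuristic_guess"]
    return app

# Yahoo! IM tunneled inside HTTP is identified despite arriving on port 80:
flow = {"dst_port": 80, "signature_match": "http", "tunneled_app": "yahoo-im"}
print(classify(flow))  # yahoo-im
```

The point of the layering is that each later mechanism refines or overrides the cruder guess made before it, so the final identity, not the port number, is what the policy check acts on.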
Always on, always the first action taken across all ports.
Classifying traffic with App-ID is always the first action taken when traffic hits the firewall, which
means that all App-IDs are always enabled, by default. There is no need to enable a series of
signatures to look for an application that is thought to be on the network; App-ID is always classifying
all of the traffic, across all ports - not just a subset of the traffic (e.g., HTTP). All App-IDs are looking
at all of the traffic passing through the device; business applications, consumer applications, network
protocols, and everything in between.
App-ID continually monitors the state of the application to determine if the application changes
midstream, provides the updated information to the administrator in ACC, applies the appropriate
policy and logs the information accordingly. Like all firewalls, Palo Alto Networks next-generation
firewalls use positive control: they deny all traffic by default, then allow only those applications that
are within the policy. All else is blocked.
All classification mechanisms, all application versions, all OSes.
App-ID operates at the services layer, monitoring how the application interacts between the client and
the server. This means that App-ID is indifferent to new features, and it is client or server operating
system agnostic. The result is that a single App-ID for BitTorrent is roughly equal to the
many BitTorrent OS and client signatures that need to be enabled to try to control this application in
other offerings.
Full visibility and control of custom and internal applications.
Internally developed or custom applications can be managed using either an application override or
custom App-IDs. An application override effectively renames the traffic stream to that of the internal
application. The other mechanism is to use customizable App-IDs based on context-based
signatures for HTTP, HTTPS, FTP, IMAP, SMTP, RTSP, Telnet, and unknown TCP/UDP traffic.
Organizations can use either of these mechanisms to exert the same level of control over their internal
or custom applications that may be applied to SharePoint, Salesforce.com, or Facebook.
Securely Enabling Applications Based on Users & Groups
Traditionally, security policies were applied based on IP addresses, but the increasingly dynamic
nature of users and applications means that IP addresses alone have become ineffective as a
mechanism for monitoring and controlling user activity. Palo Alto Networks next-generation firewalls
integrate with a wide range of user repositories and terminal service offerings, enabling organizations
to incorporate user and group information into their security policies. Through User-ID, organizations
also get full visibility into user activity on the network as well as user-based policy-control, log
viewing and reporting.
Transparent use of users and groups for secure application enablement.
User-ID seamlessly integrates Palo Alto Networks next-generation firewalls with the widest range of
enterprise directories on the market; Active Directory, eDirectory, OpenLDAP and most other LDAP
based directory servers. The User-ID agent communicates with the domain controllers, forwarding the
relevant user information to the firewall, making the policy tie-in completely transparent to the end-
user.
Identifying users via a browser challenge.
In cases where a user cannot be automatically identified through a user repository, a captive portal can
be used to identify users and enforce user based security policy. In order to make the authentication
process completely transparent to the user, Captive Portal can be configured to send a NTLM
authentication request to the web browser instead of an explicit username and password prompt.
Integrate user information from other user repositories.
In cases where organizations have a user repository or application that already has knowledge of users
and their current IP addresses, an XML-based REST API can be used to tie the repository to the Palo
Alto Networks next-generation firewall.
Transparently extend user-based policies to non-Windows devices.
User-ID can be configured to constantly monitor for logon events produced by Mac OS X, Apple iOS,
Linux/UNIX clients accessing their Microsoft Exchange email. By expanding the User-ID support to
non-Windows platforms, organizations can deploy consistent application enablement policies.
Visibility and control over terminal services users.
In addition to support for a wide range of directory services, User-ID provides visibility and policy
control over users whose identity is obfuscated by a Terminal Services deployment (Citrix or
Microsoft). Completely transparent to the user, every session is correlated to the appropriate user,
which allows the firewall to associate network connections with users and groups sharing one host on
the network. Once the applications and users are identified, full visibility and control within ACC,
policy editing, logging and reporting is available.
High Performance Threat Prevention
Content-ID combines a real-time threat prevention engine with a comprehensive URL database and
elements of application identification to limit unauthorized data and file transfers, detect and block a
wide range of threats and control non-work related web surfing. The application visibility and control
delivered by App-ID, combined with the content inspection enabled by Content-ID means that IT
departments can regain control over application traffic and the related content.
NSS-rated IPS.
The NSS-rated IPS blocks known and unknown vulnerability exploits, buffer overflows, DoS attacks
and port scans from compromising and damaging enterprise information resources. IPS mechanisms
include:
 Protocol decoder-based analysis statefully decodes the protocol and then intelligently applies
signatures to detect vulnerability exploits.
 Protocol anomaly-based protection detects non-RFC-compliant protocol usage such as the use
of an overlong URI or an overlong FTP login.
 Stateful pattern matching detects attacks across more than one packet, taking into account
elements such as the arrival order and sequence.
 Statistical anomaly detection prevents rate-based DoS flooding attacks.
 Heuristic-based analysis detects anomalous packet and traffic patterns such as port scans and
host sweeps.
 Custom vulnerability or spyware phone-home signatures can be used in either the anti-
spyware or vulnerability protection profiles.
 Other attack protection capabilities such as blocking invalid or malformed packets, IP
defragmentation and TCP reassembly are utilized for protection against evasion and
obfuscation methods employed by attackers.
Traffic is normalized to eliminate invalid and malformed packets, while TCP reassembly and IP de-
fragmentation are performed to ensure the utmost accuracy and protection despite any attack evasion
techniques.
URL Filtering
Complementing the threat prevention and application control capabilities is a fully integrated, URL
filtering database consisting of 20 million URLs across 76 categories that enables IT departments to
monitor and control employee web surfing activities. The on-box URL database can be augmented to
suit the traffic patterns of the local user community with a custom, 1 million URL database. URLs that
are not categorized by the local URL database can be pulled into cache from a hosted, 180 million
URL database.
In addition to database customization, administrators can create custom URL categories to further
tailor the URL controls to suit their specific needs. URL filtering visibility and policy controls can be
tied to specific users through the transparent integration with enterprise directory services (Active
Directory, LDAP, eDirectory) with additional insight provided through customizable reporting and
logging.
File and Data Filtering
Data filtering features enable administrators to implement policies that will reduce the risks associated
with the transfer of unauthorized files and data.
 File blocking by type: Control the flow of a wide range of file types by looking deep within the
payload to identify the file type (as opposed to looking only at the file extension).
 Data filtering: Control the transfer of sensitive data patterns such as credit card and social
security numbers in application content or attachments.
 File transfer function control: Control the file transfer functionality within an individual
application, allowing application use yet preventing undesired inbound or outbound file
transfer.
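Looking "deep within the payload" rather than at the extension amounts to checking magic numbers. A minimal sketch follows; the signature list is abbreviated, where real products match hundreds of file types:

```python
# Minimal sketch: identify a file's real type from its leading bytes
# (magic numbers) instead of trusting the file extension.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"%PDF-": "pdf",
    b"PK\x03\x04": "zip",  # also docx/xlsx containers
    b"MZ": "exe",          # Windows PE executables
}

def sniff_type(payload: bytes) -> str:
    for magic, ftype in MAGIC.items():
        if payload.startswith(magic):
            return ftype
    return "unknown"

# An executable renamed to report.pdf is still caught as an exe:
print(sniff_type(b"MZ\x90\x00"))  # exe
```

Because the decision is made on payload content, renaming a blocked file type does not evade the policy.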
Checkpoint R75 – Application Control Blade
Granular application control
 Identify, allow, block or limit usage of thousands of applications by user or group
 UserCheck technology alerts users about controls, educates on Web 2.0 risks, policies
 Embrace the power of Web 2.0 Social Technologies and applications while protecting
against threats and malware
Largest application library with AppWiki
 Leverages the world's largest application library with over 240,000 Web 2.0 applications
and social network widgets
 Identifies, detects, classifies and controls applications for safe use of Web 2.0 social
technologies and communications
 Intuitively grouped in over 80 categories—including Web 2.0, IM, P2P, Voice & Video
and File Share
Integrated into Check Point Software Blade Architecture
 Centralized management of security policy via a single console
 Activate application control on any Check Point security gateway
 Supported gateways include: UTM-1, Power-1, IP Appliances and IAS Appliances
Main Functionalities
 Application detection and usage control
 Enables application security policies to identify, allow, block or limit usage of thousands
of applications, including Web 2.0 and social networking, regardless of port, protocol or
evasive technique used to traverse the network.
 AppWiki application classification library
 AppWiki enables application scanning and detection of more than 4,500 distinct
applications and over 240,000 Web 2.0 widgets including instant messaging, social
networking, video streaming, VoIP, games and more.
 Inspect SSL Encrypted Traffic
 Scan and secure SSL encrypted traffic passing through the gateway, such as HTTPS.
 UserCheck
 UserCheck technology alerts employees in real-time about their application access
limitations, while educating them on Internet risk and corporate usage policies.
 User and machine awareness
 Integration with the Identity Awareness Software Blade enables users of the Application
Control Software Blade to define granular policies to control applications usage.
 Central policy management
 Centralized management offers unmatched leverage and control of application security
policies and enables organizations to use a single repository for user and group
definitions, network objects, access rights and security policies.
 Unified event management
 Using SmartEvent to view user’s online behavior and application usage provides
organizations with the most granular level of visibility.
Utilizing Firewalls for Maximum Security
1. Don’t use an old, non-application aware firewall
2. The first firewall rule must be to deny all protocols on all ports from all IPs to all IPs
3. Only rules for required systems should be allowed. For example:
a. HTTP, HTTPS – to all
b. IMAPS to the internal mail server
c. NetBIOS to the internal file server, etc.
4. Activate application inspection on all traffic on all ports
5. Enforce that only the defined traffic types are allowed on each port. For example, on
port 80 only identified HTTP traffic would be allowed.
6. Don’t allow forwarding of any traffic that failed inspection.
7. Define the Domain Controller as the DNS server, do not allow other hosts to make
recursive/authoritative DNS requests, and make sure the firewall inspects the Domain
Controller’s outgoing DNS requests in STRICT mode.
8. Activate egress filtering to avoid sending spoofed packets and unknowingly or
unwillingly participating in DDoS attacks.
Implementing a Back-Bone Application-Aware Firewall
Implementing a back-bone application-aware firewall is the perfect security solution for
absolute network management.
The best configuration is:
1. Combining full Layer 2 security in switch and router equipment
2. Dividing all of the organization’s devices into VLANs that represent the organization’s
logical groups
3. Implementing each port in each one of the VLANs as PVLAN Edge, so that no endpoint
can talk with any other endpoint via Layer 2
4. Defining all routers to forward all traffic to the firewall (their next hop)
5. Placing an application-aware firewall on the backbone, before the backbone router
Network Inventory & Monitoring
How to map your network connections?
1. Since everyday IT management involves many tasks, no one really inspects the currently
open connections.
2. It is possible to configure the firewall to log every established TCP connection and every
host which sent any packet (ICMP, UDP) to any non-TCP port.
3. The results of such configuration would be a list of unknown IPs. It is possible to write an
automatic script to execute a Reverse-DNS lookup and an IP WHOIS search on each IP
and create a “resolved list” which has some meaning to it.
4. For any unknown/unfamiliar IP accessed from within the network, it is necessary to
identify the stations that accessed it and to make a basic forensic investigation on them
in order to discover the software that made the connection.
5. This process is very technical, time consuming, requires especially skilled security
professionals and therefore is not executed unless a Security Incident was reported.
6. The only solution that turns this process from impossible to very reasonable and
simple is IP/domain/URL whitelisting, which denies everything except a database of
the world’s known, well-reputed and malware-free approved IPs/websites.
7. IP/domain/URL whitelisting is very hard to implement and requires a high amount of
maintenance; it is up to you to make your choice.
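Item 3's "automatic script" might look like the sketch below. The resolver is injectable so the example runs without network access; a WHOIS step (for instance, shelling out to the system `whois` command) could be bolted on per IP:

```python
import socket

def reverse_lookup(ip, resolver=socket.gethostbyaddr):
    """Reverse-DNS one IP; fall back to the bare IP when the lookup fails."""
    try:
        return resolver(ip)[0]
    except OSError:
        return ip

def build_resolved_list(logged_ips, resolver=socket.gethostbyaddr):
    """Turn the firewall's raw IP log into a de-duplicated resolved list."""
    return {ip: reverse_lookup(ip, resolver) for ip in sorted(set(logged_ips))}

def stub(ip):  # stands in for socket.gethostbyaddr in this offline demo
    if ip == "8.8.8.8":
        return ("dns.google", [], [ip])   # gethostbyaddr's (name, aliases, addrs)
    raise OSError("unknown host")

print(build_resolved_list(["8.8.8.8", "8.8.8.8", "203.0.113.9"], resolver=stub))
```

IPs that fail to resolve stay in the list as bare addresses, which are exactly the entries worth investigating first.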
How to discover all network devices?
1. Mapping of the network is provided by Firewalls, Anti-Viruses, NACs, SIEM and
Configuration Management products.
2. Some products include an agent that runs on the endpoint, acts as a network sensor and
reports all the machines that passively or actively communicated on its subnet.
3. It is possible to purchase a “Network Inventory Management” solution.
The most reliable way to detect all machines on the network is to combine:
1. The switches know all the ports that carry an electric signal and know the MACs of all
devices that ever sent a non-spoofed Layer 2 frame on that port.
2. Connect via SNMP to switches and extract all MACs and IPs on all ports
3. Full network TCP and UDP scan of ports 1 to 65535 of the entire network (without any
ping or is-alive scans). If there is a hidden machine that is listening on a self-defined IP
on a specific TCP/UDP port, it will answer at least one packet and will be detected by the
scan.
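Step 3's full connect scan can be sketched with plain sockets. A real sweep would use `ports = range(1, 65536)` across every subnet address and run in parallel; the demo below scans a listener it opens itself, so it is self-contained:

```python
import socket

def tcp_scan(host, ports, timeout=0.2):
    """Return the subset of `ports` that accept a TCP connection on host."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 = connection succeeded
                open_ports.append(port)
    return open_ports

# Self-contained demo: a "hidden" listener is found without any ping probe.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))          # the OS picks a free port
listener.listen(1)
hidden_port = listener.getsockname()[1]
print(tcp_scan("127.0.0.1", [hidden_port]))  # the listener's port is reported
listener.close()
```

Because the scan connects to every port directly, it does not depend on the target answering ping or other is-alive probes.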
Detecting “Hidden” Machines – Machines behind a NAT INSIDE Your Network
1. Looking for timing anomalies in ICMP and TCP
2. Looking for IP ID strangeness
a. A NAT with a Windows machine behind a Linux host might show non-incremental
IP ID packets interspersed with incremental IP ID packets
3. Looking for unusual headers in packets
a. Timestamps and other optional parameters may have inherent patterns
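The IP ID heuristic in item 2 can be sketched as follows. The thresholds here are invented for illustration, not tuned values from any real detection tool:

```python
def ipid_looks_mixed(ipids, runs_needed=3):
    """Flag an IP whose observed IP ID sequence mixes incremental steps
    with random-looking jumps - a possible hint (per the text) that
    several OSes share one address behind a NAT."""
    deltas = [b - a for a, b in zip(ipids, ipids[1:])]
    small = sum(1 for d in deltas if 0 < d <= 16)         # incremental steps
    jumps = sum(1 for d in deltas if d <= 0 or d > 1024)  # random-looking
    return small >= runs_needed and jumps >= runs_needed

# Incremental (Windows-style) IP IDs interleaved with random-looking ones:
observed = [100, 101, 102, 40000, 40500, 103, 104, 105, 61000, 106]
print(ipid_looks_mixed(observed))            # True
print(ipid_looks_mixed([1, 2, 3, 4, 5, 6]))  # False - one well-behaved host
```

A single host with a monotonic IP ID counter never trips both conditions at once, which is what makes the mixture suspicious.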
How to discover all cross-network installed software?
There are two common ways to discover the software installed on the network’s machines:
1. Agent-Less – discovery is done by connecting to the machine remotely through:
a. RPC/WMI
b. SNMP
On Windows systems, WMI provides most of the classical functionality, though it only
detects software installed by “Windows Installer” and software registered in the
“Uninstall” registry key.
Some machines can’t be “managed”/connected to remotely over the network since:
1. They have a firewall installed or configured to block WMI/RPC access
2. They have a permission error – “Domain Administrator” was removed from the “Local
Administrators” group
3. They are not part of the domain – they were never reported and registered
2. Agent-Based – provides the maximum level of discovery, can scan the memory, raw disk,
files, folders locally and report back all of the detected software.
Once the agent is installed, most of the common permission, firewall, connectivity
and latency problems are solved.
The main problems are machines the agent was removed from and foreign machines which
never had the agent installed.
3. The Ultimate Solution – Combining agent-based with agent-less technology, this way all
devices get detected and most of the possible information is extracted from them.
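The combined approach in item 3 reduces to merging per-host inventories from both sources. The data format below is hypothetical:

```python
def merge_inventories(agentless, agent_based):
    """Union per-host software sets: the agent adds depth where installed,
    agent-less scans cover hosts the agent never reached.
    Format (hypothetical): {hostname: set of software names}."""
    merged = {host: set(sw) for host, sw in agentless.items()}
    for host, sw in agent_based.items():
        merged[host] = merged.get(host, set()) | set(sw)
    return merged

wmi_scan = {"pc1": {"Office"}, "pc2": {"Chrome"}}            # agent-less (WMI)
agents = {"pc1": {"Office", "PortableApp"}, "pc3": {"Git"}}  # agent reports
print(merge_inventories(wmi_scan, agents))
```

Hosts seen by only one source (pc2, pc3 above) still appear in the merged view, which is the whole point of combining the techniques.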
NAC
The Problem: Ethernet Network
 Authenticate (Who?):
o distinguish between valid and rogue members
 Control (Where to and How?):
o all network members at the network level
 Authorize (Application Layer Conditions):
o check device compliance according to company policy
What is a NAC originally?
 The concept was invented in 2003 originally called “Network Admission Control”
 The idea: checking the software version on machines connecting to the network
 The Action: denying connection for those below the standard
Today’s NAC?
 Re-Invented as: Network Access Control
 Adding to the old idea: preventing ANY foreign machine from connecting to a
computer network
 The Actions:
o Shuts down the power on that port of the switch
o Move foreign machine to Guest VLAN
Why Invent Today’s NAC?
Dynamic Solution for a Dynamic Environment
Did We EVER Manage Who Gets IP Access?
What is a NAC?
Network Access Control (NAC) is a computer networking solution that uses a set of protocols to
define and implement a policy that describes how to secure access to network nodes by devices
when they initially attempt to access the network. NAC might integrate the automatic remediation
process (fixing non-compliant nodes before allowing access) into the network systems, allowing
the network infrastructure such as routers, switches and firewalls to work together with back
office servers and end user computing equipment to ensure the information system is operating
securely before interoperability is allowed.
Network Access Control aims to do exactly what the name implies—control access to
a network with policies, including pre-admission endpoint security policy checks and post-
admission controls over where users and devices can go on a network and what they can do.
Initially 802.1X was also thought of as NAC. Some still consider 802.1X as the simplest form of
NAC, but most people think of NAC as something more.
Simple Explanation
When a computer connects to a computer network, it is not permitted to access anything unless it
complies with a business defined policy, including anti-virus protection level, system update level
and configuration.
While the computer is being checked by a pre-installed software agent, it can only access
resources that can remediate (resolve or update) any issues. Once the policy is met, the computer
is able to access network resources and the Internet, within the policies defined within the NAC
system.
NAC is mainly used for endpoint health checks, but it is often tied to role-based access. Access
to the network will be given according to the profile of the person and the results of a posture/health
check. For example, in an enterprise, the HR department could access only HR department files if
both the role and the endpoint meet anti-virus minimums.
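The admission logic described above (authenticate, posture-check, then role-check) can be sketched like this. The policy fields and thresholds are invented for illustration:

```python
POLICY = {"min_av_version": 12, "min_patch_level": 3}  # hypothetical policy

def admission_decision(endpoint, role, policy=POLICY):
    """Return 'deny', 'remediate' or 'allow' for a connecting endpoint."""
    if not endpoint.get("authenticated"):
        return "deny"                       # unknown/rogue device
    if (endpoint.get("av_version", 0) < policy["min_av_version"]
            or endpoint.get("patch_level", 0) < policy["min_patch_level"]):
        return "remediate"                  # only remediation servers reachable
    if role not in endpoint.get("allowed_roles", []):
        return "deny"
    return "allow"

laptop = {"authenticated": True, "av_version": 11, "patch_level": 4,
          "allowed_roles": ["HR"]}
print(admission_decision(laptop, "HR"))  # remediate - AV below the minimum
```

A quarantined endpoint moves from `remediate` to `allow` once its posture meets the policy, matching the lifecycle described above.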
Goals of NAC
Because NAC represents an emerging category of security products, its definition is both
evolving and controversial.
The overarching goals of the concept can be distilled to:
1. Mitigation of zero-day attacks
The key value proposition of NAC solutions is the ability to prevent end-stations that lack
antivirus, patches, or host intrusion prevention software from accessing the network and
placing other computers at risk of cross-contamination of computer worms.
2. Policy enforcement
NAC solutions allow network operators to define policies, such as the types of computers
or roles of users allowed to access areas of the network, and enforce them in switches,
routers, and network middle boxes.
3. Identity and access management
Where conventional IP networks enforce access policies in terms of IP addresses, NAC
environments attempt to do so based on authenticated user identities, at least for user end-
stations such as laptops and desktop computers.
NAC Approaches
 Agent-Full
o Smarter, Unlimited Features
o Faster
o Works Offline (Settings Cache Mode)
o Endpoint Management Itself is more secure
 Agent-Less
o Modular
o Easy to integrate
o Credentials constantly travel the network
o SNMP Traps and DHCP Requests
NAC – Behavior Lifecycle
NAC = LAN Mini IPS?
 NAC is one of the functions that a full end to end IPS product should provide
 Some vendors don’t sell NAC as a proprietary module, for example:
o ForeScout CounterAct
 NAC only Solutions by
o Trustwave
o McAfee
NAC as Part of Endpoint Security Solutions
 Antivirus Vendors provide NAC (Network Admission Control) on managed endpoints
 Vendors like Symantec, McAfee and Sophos
 A great solution IF:
o The AV Management server controls the switches and disconnects all non-
managed hosts
o With exclusions for printers, cameras and physical access devices
Talking Endpoints: What’s a NAP?
 NAP is Microsoft’s built-in support client for NAC
 NAP interoperates with every switch and access point
 Controlled by Group Policy
General Basic NAC Deployment
NAC Deployment Types:
1. Pre-admission and post-admission
There are two prevailing design philosophies in NAC, based on whether policies are
enforced before or after end-stations gain access to the network. In the former case,
called pre-admission NAC, end-stations are inspected prior to being allowed on the
network. A typical use case of pre-admission NAC would be to prevent clients with out-
of-date antivirus signatures from talking to sensitive servers. Alternatively, post-
admission NAC makes enforcement decisions based on user actions, after those users
have been provided with access to the network.
2. Agent versus agentless
The fundamental idea behind NAC is to allow the network to make access control
decisions based on intelligence about end-systems, so the manner in which the network is
informed about end-systems is a key design decision. A key difference among NAC
systems is whether they require agent software to report end-system characteristics, or
whether they use scanning and network inventory techniques to discern those
characteristics remotely.
As NAC has matured, Microsoft now provides their network access protection
(NAP) agent as part of their Windows 7, Vista and XP releases. There are NAP
compatible agents for Linux and Mac OS X that provide near equal intelligence for these
operating systems.
3. Out-of-band versus inline
In some out-of-band systems, agents are distributed on end-stations and report
information to a central console, which in turn can control switches to enforce policy. In
contrast, inline solutions can be single-box solutions which act as internal firewalls
for access-layer networks and enforce the policy. Out-of-band solutions have the
advantage of reusing existing infrastructure; inline products can be easier to deploy on
new networks, and may provide more advanced network enforcement capabilities,
because they are directly in control of individual packets on the wire. However, there are
products that are agentless, and have both the inherent advantages of easier, less risky
out-of-band deployment, but use techniques to provide inline effectiveness for non-
compliant devices, where enforcement is required.
NAC Acceptance Tests
1. Attempting to get an IP using DHCP on a regular Windows machine.
2. Attempting to get an IP using DHCP on a regular Linux machine.
3. Multiple attempts to get an IP using DHCP with a custom DHCP client that uses different
values than the operating system's defaults in the DHCP packet fields
4. Manually configuring a local IP of type “Link-Local”
5. Manually configuring an IP in the network’s IP range with “Gratuitous ARP” on
6. Manually configuring an IP in the network’s IP range with “Gratuitous ARP” off
7. Inspecting the NAC’s response to DHCP attacks and network attacks in the “1-2 minutes of
grace”
8. Restricting the WMI (RPC) support on the local machine (even using a firewall to block RPC
on TCP port 135)
9. Copy-Catting/Stealing the identity (IP or IP+MAC) of an existing user (received via passive
network sniffing of broadcasts)
10. Using private Denial of Service 0-day exploits in a loop on a specific machine to obtain its
identity on the network
11. Impersonating a printer or other non-smart device (printers, biometric devices, turnstile
controllers, door controllers, etc.)
12. Testing proper enforcement of common basic NAC protection features such as:
 Duplicate MAC
 Duplicate IP
 Foreign MAC
 Foreign IP
 Wake Up On LAN
 Domain Membership
 Anti-Virus + Definitions
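As an illustration of acceptance test 3 above, a custom DHCP DISCOVER can be assembled with packet field values that differ from any stock operating system's defaults. The following stdlib-only sketch builds the payload but does not put it on the wire; the MAC address and vendor-class strings are made up for the example:

```python
import struct

def build_dhcp_discover(mac: bytes, vendor_class: bytes = b"MSFT 5.0") -> bytes:
    """Build a minimal DHCP DISCOVER payload (RFC 2131) with a spoofed
    vendor-class option (option 60), as a NAC acceptance test probe.
    The packet is only constructed here, never transmitted."""
    xid = 0x12345678                      # transaction id (fixed for the sketch)
    # op=1 (request), htype=1 (ethernet), hlen=6, hops=0, secs=0, flags=broadcast
    header = struct.pack("!BBBBIHH", 1, 1, 6, 0, xid, 0, 0x8000)
    header += b"\x00" * 16                # ciaddr, yiaddr, siaddr, giaddr
    header += mac + b"\x00" * 10          # chaddr padded to 16 bytes
    header += b"\x00" * 192               # sname + file fields, unused
    header += b"\x63\x82\x53\x63"         # DHCP magic cookie
    options = b"\x35\x01\x01"             # option 53: message type = DISCOVER
    options += bytes([60, len(vendor_class)]) + vendor_class  # option 60: vendor class
    options += b"\x37\x03\x01\x03\x06"    # option 55: request subnet, router, DNS
    options += b"\xff"                    # end option
    return header + options

# A vendor class that does not match the host OS, as in test 3:
pkt = build_dhcp_discover(b"\xde\xad\xbe\xef\x00\x01", vendor_class=b"udhcp 1.30")
assert pkt[236:240] == b"\x63\x82\x53\x63"  # magic cookie follows the 236-byte header
```

A NAC that fingerprints clients by DHCP option ordering or vendor class can be probed this way to see whether a forged fingerprint changes the admission decision.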
NAC Vulnerabilities
Attacking a NAC is mostly based on network attacks and focuses on several aspects:
 Vulnerabilities by Integration Process - Wrong product positioning in the network
architecture, wrong design of the data flow which results in different levels of security.
These mistakes are caused mostly by the following reasons:
o Integrator’s Lack of understanding of the organization’s requirements, systems
and network architecture
o Integrator’s Lack of understanding of the organization’s security policies and its
expectations from the product
o Insufficient involvement of the organization’s IT personnel in the integration
process
o Lack of security auditing to determine the product's real-life performance by a
certified information security professional
 Vulnerabilities caused by configuration – Wrong configuration of the functionalities the
product enforces within the organization, such as:
o Not enforcing/monitoring lab/development environments
o Not enforcing /monitoring different VLANs and networks, such as the VoIP
network
o Not blocking/monitoring non-interactive network sniffing modes such as Wake
Up On LAN
o Not analyzing and responding to anomalies in relevant elements/protocols, and
insufficient network lock-out times
 Vulnerabilities in the product (vendor's code)
The common attack – Bypassing & Killing the NAC
1. Some of today’s NACs are event based: the network equipment (switch/router) allows
you to connect to the network and get an IP, but some time after you connect, it sends a
message notifying the NAC of your IP and MAC, and the NAC tries to connect to your
machine and validate that it is an approved member of the network.
2. The alerting mechanism from the switches is mostly SNMP alerts called “SNMP Traps”.
3. This behavior grants the attacker one to two minutes to attack/take over/infect some
machines on the network before his switch port is shut down.
4. In most cases, if the port is shut down, the NAC wakes it back to life after about 5
minutes in order to keep the organization operable and to accept new devices.
5. For a well-prepared attacker, with automatic scripts exploiting the most common
vulnerabilities and utilizing the latest exploits, this window is sufficient.
6. The real problem is that a large number of NAC vendors provide a product which is
software based and therefore installed mostly on common Windows or Linux
machines.
7. As is well known, common Windows and Linux machines are vulnerable to many
application layer and operating system vulnerabilities, but virtually all of them are
vulnerable to network attacks, especially layer 2 attacks.
8. This means that during those 1 or 2 minutes that become available every 5 minutes or
so, roughly 5 to 10 minutes per hour, the attacker can find the Windows/Linux machine
hosting the NAC software and kill the communication to it using basic layer 2 attacks
such as ARP Spoofing.
Open Source Solutions
 OpenNAC/FreeNAC
 PacketFence
OpenNAC/FreeNAC – Keeping It Simple
PacketFence – Almost Commercial Quality
SIEM - (Security Information Event
Management)
SIEM solutions are a combination of the formerly disparate product categories of SIM (security information management)
and SEM (security event management). SIEM technology provides real-time analysis of security alerts
generated by network hardware and applications. SIEM solutions come as software, appliances or managed
services, and are also used to log security data and generate reports for compliance purposes.
The acronyms SEM, SIM and SIEM have been used interchangeably, though there are differences in
meaning and product capabilities. The segment of security management that deals with real-time
monitoring, correlation of events, notifications and console views is commonly known as Security Event
Management (SEM). The second area provides long-term storage, analysis and reporting of log data and is
known as Security Information Management (SIM).
The term Security Information Event Management (SIEM), coined by Mark Nicolett and Amrit Williams
of Gartner in 2005, describes the product capabilities of gathering, analyzing and presenting information
from network and security devices; identity and access management
applications; vulnerability management and policy compliance tools; operating system, database and
application logs; and external threat data. A key focus is to monitor and help manage user and service
privileges, directory services and other system configuration changes; as well as providing log auditing and
review and incident response.
As of January 2012, Mosaic Security Research identified 85 unique SIEM products.
SIEM Capabilities
 Data Aggregation: SIEM/LM (log management) solutions aggregate data from many sources,
including network and security devices, servers, databases and applications, providing the ability to consolidate
monitored data to help avoid missing crucial events.
 Correlation: looks for common attributes, and links events together into meaningful bundles. This
technology provides the ability to perform a variety of correlation techniques to integrate different
sources, in order to turn data into useful information.
 Alerting: the automated analysis of correlated events and production of alerts, to notify recipients of
immediate issues.
 Dashboards: SIEM/LM tools take event data and turn it into informational charts to assist in seeing
patterns, or identifying activity that is not forming a standard pattern.
 Compliance: SIEM applications can be employed to automate the gathering of compliance data,
producing reports that adapt to existing security, governance and auditing processes.
 Retention: SIEM/SIM solutions employ long-term storage of historical data to facilitate correlation of
data over time, and to provide the retention necessary for compliance requirements.
SIEM Architecture
 Low level, real-time detection of known threats and anomalous activity (unknown
threats)
 Compliance automation
 Network, host and policy auditing
 Network behavior analysis and situational awareness
 Log Management
 Intelligence that enhances the accuracy of threat detection
 Risk oriented security analysis
 Executive and technical reports
 A scalable high performance architecture
A SIEM Solution is Comprised of a Few Main Modules:
1. Detector
 Intrusion Detection
 Anomaly Detection
 Vulnerability Detection
 Discovery, Learning and Network Profiling systems
 Inventory systems
2. Collector
 Connectors to Windows Machines
 Connectors to Linux Machines
 Connectors to Network Devices
 Classifies the information and events
 Normalizes the information
3. SIEM
 Risk Assessment
 Correlation
 Risk metrics
 Vulnerability scanning
 Data mining for events
 Real-time monitoring
4. Logger
 Stores the data in the filesystem/DB
 Allows storage of an unlimited number of events
 Supports SAN/NAS storage
5. Management Console & Dashboard
 Configuration changes
 Access to Dashboard and Metrics
 Multi-tenant and Multi-user management
 Access to Real-time information
 Reports generation
 Ticketing system
 Vulnerability Management
 Network Flows Management
 Responses configuration
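The Collector's classification and normalization step can be sketched as follows. This is a minimal illustration, not any vendor's actual schema: the field names, regular expression, and sample log lines are invented for the example.

```python
import re

def normalize(raw: str, source: str) -> dict:
    """Map a raw log line from a known source type onto one common schema,
    as a SIEM collector would before correlation."""
    if source == "linux-auth":
        m = re.match(r"(\w+ +\d+ [\d:]+) (\S+) sshd\[\d+\]: "
                     r"Failed password for (\S+) from (\S+)", raw)
        if m:
            return {"host": m.group(2), "user": m.group(3),
                    "src_ip": m.group(4), "event": "auth_failure"}
    elif source == "windows-security":
        # e.g. "4625,WORKSTATION1,alice,10.0.0.5" (event id 4625 = failed logon)
        eid, host, user, ip = raw.split(",")
        if eid == "4625":
            return {"host": host, "user": user, "src_ip": ip,
                    "event": "auth_failure"}
    return {"event": "unclassified", "raw": raw}

e1 = normalize("Mar  3 10:01:22 srv01 sshd[812]: Failed password for root "
               "from 10.0.0.9 port 4711 ssh2", "linux-auth")
e2 = normalize("4625,WORKSTATION1,alice,10.0.0.5", "windows-security")
assert e1["event"] == e2["event"] == "auth_failure"  # one schema, two sources
```

Once both sources land in the same schema, the correlation engine can compare them without knowing which device produced each event.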
Commonly Used Open Source SIEM Sensors:
1. Snort (Network Intrusion Detection System)
2. Ntop (Network and usage Monitor)
3. OpenVAS (Vulnerability Scanning)
4. P0f (Passive operating system detection)
5. Pads (Passive Asset Detection System)
6. Arpwatch (Ethernet/IP address pairings monitor)
7. OSSEC (Host Intrusion Detection System)
8. Osiris (Host Integrity Monitoring)
9. Nagios (Availability Monitoring)
10. OCS (Inventory)
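The pairing-monitor idea behind Arpwatch (sensor 6 above) can be sketched in a few lines. This is an illustrative reimplementation of the logic, not Arpwatch's actual code:

```python
def check_pairing(table: dict, ip: str, mac: str) -> str:
    """Arpwatch-style check: record new IP->MAC pairings and flag changes.
    A changed pairing often indicates ARP spoofing or a duplicate IP."""
    known = table.get(ip)
    if known is None:
        table[ip] = mac
        return "new station"
    if known != mac:
        old, table[ip] = known, mac
        return f"flip flop: {ip} moved {old} -> {mac}"
    return "ok"

pairings = {}
assert check_pairing(pairings, "10.0.0.5", "aa:bb:cc:00:00:01") == "new station"
assert check_pairing(pairings, "10.0.0.5", "aa:bb:cc:00:00:01") == "ok"
assert check_pairing(pairings, "10.0.0.5", "de:ad:be:ef:00:02").startswith("flip flop")
```

A SIEM would feed the "flip flop" result into correlation as a possible layer 2 attack indicator.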
SIEM Logics
Planning for the right amounts of data
Introduction
Critical business systems and their associated technologies are typically held to performance
benchmarks. In the security space, benchmarks of speed, capacity and accuracy are common for
encryption, packet inspection, assessment, alerting and other critical protection technologies. But
how do you set benchmarks for a tool based on collection, normalization and correlation of
security events from multiple logging devices? And how do you apply these benchmarks to
today’s diverse network environments?
This is the problem with benchmarking Security Information Event Management (SIEM)
systems, which collect security events from one to thousands of devices, each with its own
different log data format. If we take every conceivable environment into consideration, it is
impossible to benchmark SIEM systems. We can, however, set one baseline environment against
which to benchmark and then include equations so that organizations can extrapolate their own
benchmark requirements.
Consider that network and application firewalls, network and host Intrusion Detection/Prevention
(IDS/IPS), access controls, sniffers, and Unified Threat Management systems (UTM)—all log
security events that must be monitored. Every switch, router, load balancer, operating system,
server, badge reader, custom or legacy application, and many other IT systems across the
enterprise, produce logs of security events, along with every new system to follow (such as
virtualization). Most have their own log expression formats. Some systems, like legacy
applications, don’t produce logs at all.
First we must determine what is important. Do we need all log data from every critical system in
order to perform security, response, and audit? Will we need all that data at lightning speed?
(Most likely, we will not.) How much data can the network and collection tool actually handle
under load? What is the threshold before networks bottleneck and/or the SIEM is rendered
unusable, not unlike a denial of service (DOS)? These are variables that every organization must
consider as they hold SIEM to standards that best suit their operational goals.
Why is benchmarking SIEM important? According to the National Institute of Standards and Technology (NIST),
SIEM software is a relatively new type of centralized logging software compared to syslog. Our
SANS Log Management Survey shows 51 percent of respondents ranked collecting logs as their
most critical challenge – and collecting logs is a basic feature a SIEM system can provide.
Further, a recent NetworkWorld article explains how different SIEM products typically integrate
well with selected logging tools, but not with all tools. This is due to the disparity between
logging and reporting formats from different systems. There is an effort under way to standardize
logs through MITRE’s Common Event Expression (CEE) standard event log language.
But until all logs look alike, normalization is an important SIEM benchmark, which is measured
in events per second (EPS).
Event performance characteristics provide a metric against which most enterprises can judge a
SIEM system. The true value of a SIEM platform, however, will be in terms of Mean Time To
Remediate (MTTR) or other metrics that can show the ability of rapid incident response to
mitigate risk and minimize operational and financial impact. In our second set of benchmarks for
storage and analysis, we have addressed the ability of SIEM to react within a reasonable MTTR
rate to incidents that require automatic or manual intervention.
Because this document is a benchmark, it does not cover the important requirements that cannot
be benchmarked, such as requirements for integration with existing systems (agent vs. agent-less,
transport mechanism, ports and protocols, interface with change control, usability of user
interface, storage type, integration with physical security systems, etc.). Other requirements that
organizations should consider but aren’t benchmarked include the ability to process connection-
specific flow data from network elements, which can be used to further enhance forensic and root-
cause analysis.
Other features, such as the ability to learn from new events, make recommendations and store
them locally, and filter out incoming events from known infected devices that have been sent to
remediation, are also important features that should be considered, but are not benchmarked here.
Variety and type of reports available, report customization features, role-based policy
management and workflow management are more features to consider as they apply to an
individual organization’s needs but are not included in this benchmark. In addition, organizations
should look at a SIEM tool’s overall history of false positives, something that can be
benchmarked, but is not within the scope of this paper. In place of false positives, Table 2
focuses on accuracy rates within applicable categories. These and other considerations are
included in the following equations, sample EPS baseline for a medium-sized enterprise, and
benchmarks that can be applied to storage and analysis. As appendices, we’ve included a device
map for our sample network and a calculation worksheet for organizations to use in developing
their own EPS benchmarks.
SIEM Benchmarking Process
The matrices that follow are designed as guidelines to assist readers in setting their own
benchmark requirements for SIEM system testing. While this is a benchmark checklist, readers
must remember that benchmarking, itself, is governed by variables specific to each organization.
For a real-life example, consider an article in eSecurity Planet, in which Aurora Health in
Michigan estimated that they produced 5,000–10,000 EPS, depending upon the time of day.
We assume that means during the normal ebb and flow of network traffic. What would that load
look like if it were under attack? How many security events would an incident, such as a virus
outbreak on one, two or three subnets, produce?
An organization also needs to consider their devices. For example, a Nokia high-availability
firewall is capable of handling more than 100,000 connections per second, each of which could
theoretically create a security event log. This single device would seem to imply a need for
100,000 minimum EPS just for firewall logs. However, research shows that SIEM products
typically handle 10,000–15,000 EPS per collector.
Common sense tells us that we should be able to handle as many events as ALL our devices could
simultaneously produce as a result of a security incident. But that isn’t a likely scenario, nor is it
practical or necessary. Aside from the argument that no realistic scenario would involve all
devices sending maximum EPS, so many events at once would create bottlenecks on the network
and overload and render the SIEM collectors useless. So, it is critical to create a methodology for
prioritizing event relevance during times of load so that even during a significant incident, critical
event data is getting through, while ancillary events are temporarily filtered.
Speed of hardware, NICs (network interface cards), operating systems, logging configurations,
network bandwidth, load balancing and many other factors must also go into benchmark
requirements. One may have two identical server environments with two very different EPS
requirements due to any or all of these and other variables. With consideration of these variables,
EPS can be established for normal and peak usage times. We developed the equations included
here, therefore, to determine Peak Events (PE) per second and to establish normal usage by
exchanging the PEx for NEx (Normal Events per second).
List all of the devices in the environment expected to report to the SIEM. Be sure to consider any
planned changes, such as adding new equipment, consolidating devices, or removing end of life
equipment. First, determine the PE (or NE) for each device with these steps:
1. Carefully select only the security events intended to be collected by the SIEM. Make
sure those are the only events included in the sample being used for the formula.
2. Select reasonable time frames of known activity: Normal and Peak (under attack, if
possible). This may be any period from minutes to days. A longer period of time, such
as a minimum of 90 days, will give a more accurate average, especially for “normal”
activity.
Total the number of Normal or Peak events during the chosen period. (It will also be
helpful to consider computing a “low” activity set of numbers, because fewer events may
be interesting as well.)
3. Determine the number of seconds within the time frame selected.
4. Divide the number of events by the number of seconds to determine PE or NE for the
selected device.
Formula 1:
EPS = (# of Security Events) / (Time Period in Seconds)
5. The resulting EPS is the PE or NE depending upon whether we began with peak activity
or normal activity. Once we have completed this computation for every device needing
security information event management, we can insert the resulting numbers in the
formula below to determine Normal EPS and Peak EPS totals for a benchmark
requirement.
Formula 2:
1. In your production environment, determine the peak number of security events (PEx)
created by each device that requires logging, using Formula 1. (If you have identical
devices with identical hardware, configurations, load, traffic, etc., you may use
[PEx x (# of identical devices)] to avoid having to determine PE for every device.)
2. Sum all PE numbers to come up with a grand total for your environment.
3. Add at least 10% to the sum for headroom and another 10% for growth.
So, the resulting formula looks like this:
Step 1: (PE1 + PE2 + PE3 ... + (PE4 x D4) + (PE5 x D5) ...) = SUM1 [baseline PE]
Step 2: SUM1 + (SUM1 x 10%) = SUM2 [adds 10% headroom]
Step 3: SUM2 + (SUM2 x 10%) = Total PE benchmark requirement [adds 10% growth
potential]
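The calculation above can be sketched as a small helper. The function bodies follow Formula 1 and Steps 1 to 3 directly; the sample device numbers at the bottom are hypothetical:

```python
def eps(event_count, seconds):
    """Formula 1: events per second over the sampled time period."""
    return event_count / seconds

def total_pe(device_eps, headroom=0.10, growth=0.10):
    """Steps 1-3: sum per-device peak EPS (SUM1), add 10% headroom (SUM2),
    then add 10% growth on top of that."""
    sum1 = sum(device_eps)               # Step 1: baseline PE
    sum2 = sum1 * (1 + headroom)         # Step 2: adds 10% headroom
    return sum2 * (1 + growth)           # Step 3: adds 10% growth potential

# Hypothetical sample: one firewall that peaked at 900 events in 10 seconds,
# plus 7 identical routers at 0.6 EPS each (PEx x number of identical devices).
firewall_pe = eps(900, 10)               # 90.0 EPS
all_pe = [firewall_pe] + [0.6] * 7
benchmark = total_pe(all_pe)
assert round(benchmark, 3) == round(94.2 * 1.1 * 1.1, 3)
```

Swapping NE values in for PE values gives the normal-usage benchmark the same way.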
Once these computations are complete, the resulting Peak EPS set of numbers will reflect that
grand, but impractical, peak total mentioned above. Again, it is unlikely that all devices will ever
simultaneously produce log events at maximum rate. Seek consultation from SMEs and the
system engineers provided by the vendor in order to establish a realistic Peak EPS that the SIEM
system must be able to handle, and then set filters for getting required event information through
to SIEM analysis, should an overload occur.
We have used these equations to evaluate a hypothetical mid-market network with a set number
of devices. If readers have a similar infrastructure, similar rates may apply. If the organization is
different, the benchmark can be adjusted to fit organizational infrastructures using our equations.
The Baseline Network
A mid-sized organization is defined as having 500–1000 users, according to a December guide by
Gartner, Inc., titled “Gartner’s New SMB Segmentation and Methodology.” Gartner Principal
Analyst Adam Hils, together with a team of Gartner analysts, helped us determine that a 750–
1000 user organization is a reasonable base point for our benchmark. As Hils puts it, this number
represents some geo and technical diversity found in large enterprises without being too complex
to scope and benchmark.
With Gartner’s advice, we set our hypothetical organization to have 750 employees, 750 user end
points, five offices, six subnets, five databases, and a central data center. Each subnet will have
an IPS, a switch and gateway/router. The data center has four firewalls and a VPN. (See the
matrix below and Appendix A, “Baseline Network Device Map,” for more details.)
Once the topography is defined, the next stage is to average EPS collected from these devices
during normal and peak periods. Remember that demanding all log data at the highest speed
24x7 could, in itself, become problematic, causing a potential DOS situation with network or SIEM
system overload. So realistic speeds based on networking and SIEM product restrictions must
also be considered in the baseline.
Protocols and data sources present other variables to consider when determining average and peak load
requirements. In terms of effect on EPS rates, our experience is that systems using UDP can
generate more events more quickly, but this creates a higher load for the management tool, which
actually slows collection and correlation when compared to TCP. One of our reviewing analysts
has seen UDP packets dropped at 3,000 EPS, while TCP could maintain a 100,000 EPS load. It’s
also been our experience that both protocols are typically used in a single environment. Table 1, “Baseline
Network Device EPS Averages,” provides a breakdown of Average, Peak and Averaged Peak
EPS for the different systems that logs are collected from. Each total below is the result of device
quantity (column 1) x EPS calculated for the device. For example, 0.60 Average EPS for Cisco
Gateway/Routers has already been multiplied by the quantity of 7 devices. So the EPS per single
device is not displayed in the matrix, except when the quantity is 1.
To calculate Average Peak EPS, we determined two subnets under attack, with affected devices
sending 80 percent of their EPS capacity to the SIEM. These numbers are by no means scientific.
But they do represent research against product information (number of events devices are capable
of producing), other research, and the consensus of expert SANS Analysts contributing to this
paper.
A single security incident, such as a quickly replicating worm in a subnet, may fire off thousands
of events per second from the firewall, IPS, router/switch, servers, and other infrastructure at a
single gateway. What if another subnet falls victim and the EPS are at peak in two subnets?
Using our baseline, such a scenario with two infected subnets representing 250 infected end
points could theoretically produce 8,119 EPS.
We used this as our Average Peak EPS baseline because this midline number is more
representative of a serious attack on an organization of this size. In this scenario, we still have
event information coming from servers and applications not directly under attack, but there is
potential impact to those devices. It is important, therefore, that these normal logs, which are
useful in analysis and automatic or manual reaction, continue to be collected as needed.
SIEM Storage and Analysis
Now that we have said so much about EPS, it is important to note that no one ever analyzes a
single second’s worth of data. An EPS rating is simply designed as a guideline to be used for
evaluation, planning and comparison. When designing a SIEM system, one must also consider
the volume of data that may be analyzed for a single incident. If an organization collects an
average of 20,000 EPS
over eight hours of an ongoing incident, that will require sorting and analysis of 576,000,000 data
records. Using a 300 byte average size, that amounts to 172.8 gigabytes of data. This
consideration will help put into perspective some reporting and analysis baselines set in the below
table. Remember that some incidents may last for extended periods of time, perhaps tapering off,
then spiking in activity at different points during the attack.
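The sizing arithmetic above can be checked with a short helper; the 20,000 EPS rate, eight-hour duration, and 300-byte average event size come from the example in the text:

```python
def incident_volume_bytes(eps, hours, avg_event_bytes=300):
    """Raw log volume for a sustained collection rate:
    EPS x duration in seconds x average event size."""
    return eps * hours * 3600 * avg_event_bytes

records = 20_000 * 8 * 3600              # events over the 8-hour incident
assert records == 576_000_000            # matches the record count in the text

volume_gb = incident_volume_bytes(20_000, 8) / 1_000_000_000
assert abs(volume_gb - 172.8) < 0.001    # matches the 172.8 GB in the text
```

Running the same formula against an organization's own peak EPS and expected incident durations gives a first-order estimate for local event database sizing.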
While simple event performance characteristics provide a metric against which most enterprises
can judge a SIEM, as mentioned earlier, the ultimate value of a well-deployed SIEM platform
will be in terms of MTTR (Mean Time To Remediate) or other metrics that can equate rapid
incident response to improved business continuity and minimal operational/fiscal impact.
It should be noted in this section, as well, that event storage may refer to multiple data facilities
within the SIEM deployment model. There is a local event database, used to perform active
investigations and forensic analysis against recent activities; long-term storage, used as an archive
of summarized event information that is no longer granular enough for comprehensive forensics;
and read/only and encrypted raw log storage, used to preserve the original event for forensic
analysis and nonrepudiation—guaranteeing chain of custody for regulatory compliance.
Baseline Network Device Map
This network map is the diagram for our sample network. Traffic flow, points for collecting
and/or forwarding event data, and throttle points were all considered in setting the benchmark
baseline in Table 1.
EPS Calculation Worksheet
Common SIEM Report Types
1. Security SIEM DB
2. Logger DB
3. Alarms
4. Incidents
5. Vulnerabilities
6. Availability
7. Network Statistics
8. Asset Information and Inventory
9. Ticketing system
10. Network
Custom Reports
Defining the right Rules – It’s all about the rules
When it comes to a SIEM, it is all about the rules.
The SIEM can be configured to be most effective and produce the best results by:
1. Defining the right rules that define “what is considered a security event/incident”
2. Implementing an automated response/mitigation action to stop it at real time
3. Configuring it to alert the right person for each incident - in real time
An example of a subset of a few events, which together represent a security incident:
1. Some IP on the Internet port-scans the organization’s IP; the port scan is detected
and logged
2. 10 days later, a machine from the internal network connects to that IP = Intrusion!
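The two-event incident above maps naturally onto a correlation rule. This is a minimal sketch of such a rule, assuming a simplified event-dictionary format invented for the example; a real SIEM's rule language is far richer:

```python
from datetime import datetime, timedelta

class ScanThenConnectRule:
    """Correlate a logged inbound port scan with a later outbound
    connection to the same external IP (field names are illustrative)."""
    def __init__(self, window_days=30):
        self.window = timedelta(days=window_days)
        self.scanners = {}                       # external ip -> time of last scan

    def feed(self, event):
        if event["type"] == "port_scan":
            self.scanners[event["src_ip"]] = event["time"]
        elif event["type"] == "outbound_conn":
            seen = self.scanners.get(event["dst_ip"])
            if seen and event["time"] - seen <= self.window:
                return (f"ALERT: internal {event['src_ip']} connected "
                        f"to known scanner {event['dst_ip']} = Intrusion!")
        return None

rule = ScanThenConnectRule()
t0 = datetime(2024, 1, 1)
rule.feed({"type": "port_scan", "src_ip": "203.0.113.9", "time": t0})
alert = rule.feed({"type": "outbound_conn", "src_ip": "10.0.0.7",
                   "dst_ip": "203.0.113.9", "time": t0 + timedelta(days=10)})
assert alert is not None and "203.0.113.9" in alert
```

Neither event is alarming alone; only the correlation across a ten-day window turns them into an incident, which is the point of the rule.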
IDS/IPS
Intrusion prevention systems (IPS), also known as intrusion detection and prevention systems (IDPS),
are network security appliances that monitor network and/or system activities for malicious activity. The
main functions of intrusion prevention systems are to identify malicious activity, log information about said
activity, attempt to block/stop activity, and report activity.
Intrusion prevention systems are considered extensions of intrusion detection systems because they both
monitor network traffic and/or system activities for malicious activity. The main differences are, unlike
intrusion detection systems, intrusion prevention systems are placed in-line and are able to actively
prevent/block intrusions that are detected. More specifically, IPS can take such actions as sending an
alarm, dropping the malicious packets, resetting the connection and/or blocking the traffic from the
offending IP address. An IPS can also correct Cyclic Redundancy Check (CRC) errors, un-fragment packet
streams, prevent TCP sequencing issues, and clean up unwanted transport and network layer options.
IPS Types
1. Network-based intrusion prevention system (NIPS): monitors the entire network for suspicious
traffic by analyzing protocol activity.
2. Wireless intrusion prevention systems (WIPS): monitors a wireless network for suspicious
traffic by analyzing wireless networking protocols.
3. Network behavior analysis (NBA): examines network traffic to identify threats that generate
unusual traffic flows, such as distributed denial of service (DDoS) attacks, certain forms of
malware, and policy violations.
4. Host-based intrusion prevention system (HIPS): an installed software package which monitors
a single host for suspicious activity by analyzing events occurring within that host.
Detection Methods
1. Signature-Based Detection: This method of detection utilizes signatures, which are attack
patterns that are preconfigured and predetermined. A signature-based intrusion prevention system
monitors the network traffic for matches to these signatures. Once a match is found the intrusion
prevention system takes the appropriate action. Signatures can be exploit-based or vulnerability-
based. Exploit-based signatures analyze patterns appearing in exploits being protected against,
while vulnerability-based signatures analyze vulnerabilities in a program, its execution, and
conditions needed to exploit said vulnerability.
2. Statistical anomaly-based detection: This method of detection baselines performance of average
network traffic conditions. After a baseline is created, the system intermittently samples network
traffic, using statistical analysis to compare the sample to the set baseline. If the activity is outside
the baseline parameters, the intrusion prevention system takes the appropriate action.
3. Stateful Protocol Analysis Detection: This method identifies deviations of protocol states by
comparing observed events with “predetermined profiles of generally accepted definitions of
benign activity.”
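The first two detection methods above can be sketched in a few lines of code. This is a minimal, illustrative sketch, not a real IPS engine: the signature patterns, the packets-per-second metric, and the 3-sigma threshold are all assumptions chosen for the example.

```python
# Minimal sketch of the two main detection approaches described above.
# Signatures and thresholds here are illustrative examples only.
import statistics

# --- Signature-based detection: match known attack patterns ---
SIGNATURES = {
    "sql-injection": b"' OR 1=1",          # hypothetical exploit-based signature
    "path-traversal": b"../../etc/passwd",  # hypothetical exploit-based signature
}

def match_signatures(payload: bytes):
    """Return the names of all signatures found in a packet payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

# --- Statistical anomaly-based detection: compare a sample to a baseline ---
def is_anomalous(sample_pps: float, baseline_pps: list, z_limit: float = 3.0) -> bool:
    """Flag a traffic sample whose packets-per-second rate deviates more than
    z_limit standard deviations from the baselined average."""
    mean = statistics.mean(baseline_pps)
    stdev = statistics.stdev(baseline_pps)
    return abs(sample_pps - mean) > z_limit * stdev

print(match_signatures(b"GET /?id=' OR 1=1-- HTTP/1.1"))          # ['sql-injection']
print(is_anomalous(950.0, [100.0, 110.0, 95.0, 105.0, 90.0]))     # True
```

A production signature engine would compile patterns into a single multi-pattern matcher (e.g. Aho-Corasick) rather than scanning each signature separately, and an anomaly baseline would be re-sampled periodically, but the classification logic is the same.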
Signature Catalog:
Alert Monitoring:
Security Reporting:
Alert Monitor:
Anti-Virus:
Web content protection & filtering
Session Hi-Jacking and Internal Network Man-In-The-Middle
XSS Attack Vector
The attack flow:
1. The attacker finds an XSS vulnerability in the server/website/web application
2. The attacker creates an encoded URL attack string to decrease suspicion level
3. The attacker spreads the link to a targeted victim or to a distribution list
4. The victim logs into the web application and clicks the link
5. The attacker’s code is executed under the victim’s credentials and sends the unique
session identifier to the attacker
6. The attacker plants the unique session identifier in their own browser and is now
connected to the system as the victim
The Man-In-The-Middle Attack Vector
• Taking over an active session to a computer system
• In order to attack the system, the attacker must know the protocol/method being used to
handle the active sessions with the system
• In order to attack the system, the attacker must obtain the user’s session identifier
(session id, session hash, token, IP)
• The most common use of Session Hi-Jacking involves textual protocols such as HTTP,
where the identifier is the ASPSESSID/PHPSESSID/JSESSIONID parameter located in the
HTTP Cookie header, aka “The Session Cookie”
• The most common Session Hi-Jacking scenarios are carried out in combination with:
• XSS - Where the session cookie is read by an attacker’s JavaScript code
• Man-In-The-Middle – Where the cookie is sent over clear-text HTTP through the
attacker’s machine, which becomes the victim’s gateway
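Because both scenarios above target the session cookie, a standard server-side control is to set the HttpOnly flag (blocking the XSS read in the first scenario) and the Secure flag (keeping the cookie off clear-text HTTP in the second). A minimal sketch using Python's standard-library cookie handling; the cookie name and value are illustrative:

```python
# Hardening the session cookie against the two hijacking scenarios above:
# HttpOnly keeps JavaScript (XSS) from reading it via document.cookie,
# Secure keeps it off clear-text HTTP (man-in-the-middle).
# Cookie name and value are illustrative examples.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["PHPSESSID"] = "d41d8cd98f00b204e9800998ecf8427e"
cookie["PHPSESSID"]["httponly"] = True       # not readable from script
cookie["PHPSESSID"]["secure"] = True         # only sent over HTTPS
cookie["PHPSESSID"]["samesite"] = "Strict"   # not sent on cross-site requests

header = cookie["PHPSESSID"].OutputString()  # value for a Set-Cookie header
print(header)
```

These flags do not remove the underlying XSS or network weakness, but they take the session identifier out of reach for the two attack vectors this section describes.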

More Related Content

Viewers also liked

Java secure development part 2
Java secure development   part 2Java secure development   part 2
Java secure development part 2Rafel Ivgi
 
Java secure development part 3
Java secure development   part 3Java secure development   part 3
Java secure development part 3Rafel Ivgi
 
Implementing and auditing security controls part 1
Implementing and auditing security controls   part 1Implementing and auditing security controls   part 1
Implementing and auditing security controls part 1Rafel Ivgi
 
Java secure development part 1
Java secure development   part 1Java secure development   part 1
Java secure development part 1Rafel Ivgi
 
Issa security in a virtual world
Issa   security in a virtual worldIssa   security in a virtual world
Issa security in a virtual worldRafel Ivgi
 
Cyber attacks 101
Cyber attacks 101Cyber attacks 101
Cyber attacks 101Rafel Ivgi
 
Ciso back to the future - network vulnerabilities
Ciso   back to the future - network vulnerabilitiesCiso   back to the future - network vulnerabilities
Ciso back to the future - network vulnerabilitiesRafel Ivgi
 
Siem & log management
Siem & log managementSiem & log management
Siem & log managementRafel Ivgi
 
Totally Excellent Tips for Righteous Local SEO
Totally Excellent Tips for Righteous Local SEOTotally Excellent Tips for Righteous Local SEO
Totally Excellent Tips for Righteous Local SEOGreg Gifford
 
Agriculture connectée 4.0
Agriculture connectée 4.0Agriculture connectée 4.0
Agriculture connectée 4.0Jérôme Monteil
 
The Next Tsunami AI Blockchain IOT and Our Swarm Evolutionary Singularity
The Next Tsunami AI Blockchain IOT and Our Swarm Evolutionary SingularityThe Next Tsunami AI Blockchain IOT and Our Swarm Evolutionary Singularity
The Next Tsunami AI Blockchain IOT and Our Swarm Evolutionary SingularityDinis Guarda
 
Beyond the Gig Economy
Beyond the Gig EconomyBeyond the Gig Economy
Beyond the Gig EconomyJon Lieber
 
Recovery: Job Growth and Education Requirements Through 2020
Recovery: Job Growth and Education Requirements Through 2020Recovery: Job Growth and Education Requirements Through 2020
Recovery: Job Growth and Education Requirements Through 2020CEW Georgetown
 
3 hard facts shaping higher education thinking and behavior
3 hard facts shaping higher education thinking and behavior3 hard facts shaping higher education thinking and behavior
3 hard facts shaping higher education thinking and behaviorGrant Thornton LLP
 
African Americans: College Majors and Earnings
African Americans: College Majors and Earnings African Americans: College Majors and Earnings
African Americans: College Majors and Earnings CEW Georgetown
 
The Online College Labor Market
The Online College Labor MarketThe Online College Labor Market
The Online College Labor MarketCEW Georgetown
 
Game Based Learning for Language Learners
Game Based Learning for Language LearnersGame Based Learning for Language Learners
Game Based Learning for Language LearnersShelly Sanchez Terrell
 

Viewers also liked (20)

Java secure development part 2
Java secure development   part 2Java secure development   part 2
Java secure development part 2
 
Java secure development part 3
Java secure development   part 3Java secure development   part 3
Java secure development part 3
 
Implementing and auditing security controls part 1
Implementing and auditing security controls   part 1Implementing and auditing security controls   part 1
Implementing and auditing security controls part 1
 
Java secure development part 1
Java secure development   part 1Java secure development   part 1
Java secure development part 1
 
Issa security in a virtual world
Issa   security in a virtual worldIssa   security in a virtual world
Issa security in a virtual world
 
Cyber attacks 101
Cyber attacks 101Cyber attacks 101
Cyber attacks 101
 
Ciso back to the future - network vulnerabilities
Ciso   back to the future - network vulnerabilitiesCiso   back to the future - network vulnerabilities
Ciso back to the future - network vulnerabilities
 
Siem & log management
Siem & log managementSiem & log management
Siem & log management
 
Darknet
DarknetDarknet
Darknet
 
Cyber crime
Cyber crimeCyber crime
Cyber crime
 
Totally Excellent Tips for Righteous Local SEO
Totally Excellent Tips for Righteous Local SEOTotally Excellent Tips for Righteous Local SEO
Totally Excellent Tips for Righteous Local SEO
 
Endocarditis
EndocarditisEndocarditis
Endocarditis
 
Agriculture connectée 4.0
Agriculture connectée 4.0Agriculture connectée 4.0
Agriculture connectée 4.0
 
The Next Tsunami AI Blockchain IOT and Our Swarm Evolutionary Singularity
The Next Tsunami AI Blockchain IOT and Our Swarm Evolutionary SingularityThe Next Tsunami AI Blockchain IOT and Our Swarm Evolutionary Singularity
The Next Tsunami AI Blockchain IOT and Our Swarm Evolutionary Singularity
 
Beyond the Gig Economy
Beyond the Gig EconomyBeyond the Gig Economy
Beyond the Gig Economy
 
Recovery: Job Growth and Education Requirements Through 2020
Recovery: Job Growth and Education Requirements Through 2020Recovery: Job Growth and Education Requirements Through 2020
Recovery: Job Growth and Education Requirements Through 2020
 
3 hard facts shaping higher education thinking and behavior
3 hard facts shaping higher education thinking and behavior3 hard facts shaping higher education thinking and behavior
3 hard facts shaping higher education thinking and behavior
 
African Americans: College Majors and Earnings
African Americans: College Majors and Earnings African Americans: College Majors and Earnings
African Americans: College Majors and Earnings
 
The Online College Labor Market
The Online College Labor MarketThe Online College Labor Market
The Online College Labor Market
 
Game Based Learning for Language Learners
Game Based Learning for Language LearnersGame Based Learning for Language Learners
Game Based Learning for Language Learners
 

Similar to Implementing and auditing security controls part 2

Intent Based Networking: turning intentions into reality with network securit...
Intent Based Networking: turning intentions into reality with network securit...Intent Based Networking: turning intentions into reality with network securit...
Intent Based Networking: turning intentions into reality with network securit...shira koper
 
1Low Cost automated inventory system.docx
1Low Cost automated inventory system.docx1Low Cost automated inventory system.docx
1Low Cost automated inventory system.docxfelicidaddinwoodie
 
Service Assurance Constructs for Achieving Network Transformation - Sunku Ran...
Service Assurance Constructs for Achieving Network Transformation - Sunku Ran...Service Assurance Constructs for Achieving Network Transformation - Sunku Ran...
Service Assurance Constructs for Achieving Network Transformation - Sunku Ran...Liz Warner
 
Service Assurance Constructs for Achieving Network Transformation by Sunku Ra...
Service Assurance Constructs for Achieving Network Transformation by Sunku Ra...Service Assurance Constructs for Achieving Network Transformation by Sunku Ra...
Service Assurance Constructs for Achieving Network Transformation by Sunku Ra...Liz Warner
 
SplunkLive! Zurich 2018: Integrating Metrics and Logs
SplunkLive! Zurich 2018: Integrating Metrics and LogsSplunkLive! Zurich 2018: Integrating Metrics and Logs
SplunkLive! Zurich 2018: Integrating Metrics and LogsSplunk
 
Building Secure Services in the Cloud
Building Secure Services in the CloudBuilding Secure Services in the Cloud
Building Secure Services in the CloudSumo Logic
 
Advanced Authorization for SAP Global Deployments Part II of III
Advanced Authorization for SAP Global Deployments Part II of IIIAdvanced Authorization for SAP Global Deployments Part II of III
Advanced Authorization for SAP Global Deployments Part II of IIINextLabs, Inc.
 
Cybersecurity Strategy Must Include Software License Optimization
Cybersecurity Strategy Must Include Software License OptimizationCybersecurity Strategy Must Include Software License Optimization
Cybersecurity Strategy Must Include Software License OptimizationFlexera
 
Whitepaper factors to consider commercial infrastructure management vendors
Whitepaper  factors to consider commercial infrastructure management vendorsWhitepaper  factors to consider commercial infrastructure management vendors
Whitepaper factors to consider commercial infrastructure management vendorsapprize360
 
Whitepaper factors to consider when selecting an open source infrastructure ...
Whitepaper  factors to consider when selecting an open source infrastructure ...Whitepaper  factors to consider when selecting an open source infrastructure ...
Whitepaper factors to consider when selecting an open source infrastructure ...apprize360
 
Software Engineering Important Short Question for Exams
Software Engineering Important Short Question for ExamsSoftware Engineering Important Short Question for Exams
Software Engineering Important Short Question for ExamsMuhammadTalha436
 
Sap grc process control 10.0
Sap grc process control 10.0Sap grc process control 10.0
Sap grc process control 10.0Latha Kamal
 
15 hacks for better ITAM with ServiceDesk Plus
15 hacks for better ITAM with ServiceDesk Plus15 hacks for better ITAM with ServiceDesk Plus
15 hacks for better ITAM with ServiceDesk PlusLeeben Amirthavasagam
 
stackArmor - FedRAMP and 800-171 compliant cloud solutions
stackArmor - FedRAMP and 800-171 compliant cloud solutionsstackArmor - FedRAMP and 800-171 compliant cloud solutions
stackArmor - FedRAMP and 800-171 compliant cloud solutionsGaurav "GP" Pal
 
System analysis and design
System analysis and designSystem analysis and design
System analysis and designRobinsonObura
 
Smart Asset & Tower Service Management Solution updated.pdf
Smart Asset & Tower Service Management Solution updated.pdfSmart Asset & Tower Service Management Solution updated.pdf
Smart Asset & Tower Service Management Solution updated.pdfHunterZhang13
 

Similar to Implementing and auditing security controls part 2 (20)

Intent Based Networking: turning intentions into reality with network securit...
Intent Based Networking: turning intentions into reality with network securit...Intent Based Networking: turning intentions into reality with network securit...
Intent Based Networking: turning intentions into reality with network securit...
 
What is SCADA system? SCADA Solutions for IoT
What is SCADA system? SCADA Solutions for IoTWhat is SCADA system? SCADA Solutions for IoT
What is SCADA system? SCADA Solutions for IoT
 
1Low Cost automated inventory system.docx
1Low Cost automated inventory system.docx1Low Cost automated inventory system.docx
1Low Cost automated inventory system.docx
 
Service Assurance Constructs for Achieving Network Transformation - Sunku Ran...
Service Assurance Constructs for Achieving Network Transformation - Sunku Ran...Service Assurance Constructs for Achieving Network Transformation - Sunku Ran...
Service Assurance Constructs for Achieving Network Transformation - Sunku Ran...
 
Service Assurance Constructs for Achieving Network Transformation by Sunku Ra...
Service Assurance Constructs for Achieving Network Transformation by Sunku Ra...Service Assurance Constructs for Achieving Network Transformation by Sunku Ra...
Service Assurance Constructs for Achieving Network Transformation by Sunku Ra...
 
SplunkLive! Zurich 2018: Integrating Metrics and Logs
SplunkLive! Zurich 2018: Integrating Metrics and LogsSplunkLive! Zurich 2018: Integrating Metrics and Logs
SplunkLive! Zurich 2018: Integrating Metrics and Logs
 
Building Secure Services in the Cloud
Building Secure Services in the CloudBuilding Secure Services in the Cloud
Building Secure Services in the Cloud
 
Validation
ValidationValidation
Validation
 
Advanced Authorization for SAP Global Deployments Part II of III
Advanced Authorization for SAP Global Deployments Part II of IIIAdvanced Authorization for SAP Global Deployments Part II of III
Advanced Authorization for SAP Global Deployments Part II of III
 
Ridge weigh technical writeup
Ridge weigh technical writeupRidge weigh technical writeup
Ridge weigh technical writeup
 
Cybersecurity Strategy Must Include Software License Optimization
Cybersecurity Strategy Must Include Software License OptimizationCybersecurity Strategy Must Include Software License Optimization
Cybersecurity Strategy Must Include Software License Optimization
 
Whitepaper factors to consider commercial infrastructure management vendors
Whitepaper  factors to consider commercial infrastructure management vendorsWhitepaper  factors to consider commercial infrastructure management vendors
Whitepaper factors to consider commercial infrastructure management vendors
 
Whitepaper factors to consider when selecting an open source infrastructure ...
Whitepaper  factors to consider when selecting an open source infrastructure ...Whitepaper  factors to consider when selecting an open source infrastructure ...
Whitepaper factors to consider when selecting an open source infrastructure ...
 
Software Engineering Important Short Question for Exams
Software Engineering Important Short Question for ExamsSoftware Engineering Important Short Question for Exams
Software Engineering Important Short Question for Exams
 
Sap grc process control 10.0
Sap grc process control 10.0Sap grc process control 10.0
Sap grc process control 10.0
 
15 hacks for better ITAM with ServiceDesk Plus
15 hacks for better ITAM with ServiceDesk Plus15 hacks for better ITAM with ServiceDesk Plus
15 hacks for better ITAM with ServiceDesk Plus
 
stackArmor - FedRAMP and 800-171 compliant cloud solutions
stackArmor - FedRAMP and 800-171 compliant cloud solutionsstackArmor - FedRAMP and 800-171 compliant cloud solutions
stackArmor - FedRAMP and 800-171 compliant cloud solutions
 
System analysis and design
System analysis and designSystem analysis and design
System analysis and design
 
Jon shende fbcs citp q&a
Jon shende fbcs citp q&aJon shende fbcs citp q&a
Jon shende fbcs citp q&a
 
Smart Asset & Tower Service Management Solution updated.pdf
Smart Asset & Tower Service Management Solution updated.pdfSmart Asset & Tower Service Management Solution updated.pdf
Smart Asset & Tower Service Management Solution updated.pdf
 

More from Rafel Ivgi

Hacker techniques, exploit and incident handling
Hacker techniques, exploit and incident handlingHacker techniques, exploit and incident handling
Hacker techniques, exploit and incident handlingRafel Ivgi
 
Top 10 mistakes running a windows network
Top 10 mistakes   running a windows networkTop 10 mistakes   running a windows network
Top 10 mistakes running a windows networkRafel Ivgi
 
Advanced web application hacking and exploitation
Advanced web application hacking and exploitationAdvanced web application hacking and exploitation
Advanced web application hacking and exploitationRafel Ivgi
 
Firmitas Cyber Solutions - Inforgraphic - Mirai Botnet - A few basic facts on...
Firmitas Cyber Solutions - Inforgraphic - Mirai Botnet - A few basic facts on...Firmitas Cyber Solutions - Inforgraphic - Mirai Botnet - A few basic facts on...
Firmitas Cyber Solutions - Inforgraphic - Mirai Botnet - A few basic facts on...Rafel Ivgi
 
Firmitas Cyber Solutions - Inforgraphic - ICS & SCADA Vulnerabilities
Firmitas Cyber Solutions - Inforgraphic - ICS & SCADA VulnerabilitiesFirmitas Cyber Solutions - Inforgraphic - ICS & SCADA Vulnerabilities
Firmitas Cyber Solutions - Inforgraphic - ICS & SCADA VulnerabilitiesRafel Ivgi
 
United States O1 Visa Approval
United States O1 Visa ApprovalUnited States O1 Visa Approval
United States O1 Visa ApprovalRafel Ivgi
 
Comptia Security+ CE Certificate
Comptia Security+ CE CertificateComptia Security+ CE Certificate
Comptia Security+ CE CertificateRafel Ivgi
 
ISACA Membership
ISACA MembershipISACA Membership
ISACA MembershipRafel Ivgi
 
Iso 27001 Pecb Ismsla 100193 Rafel Ivgi
Iso 27001 Pecb Ismsla 100193 Rafel IvgiIso 27001 Pecb Ismsla 100193 Rafel Ivgi
Iso 27001 Pecb Ismsla 100193 Rafel IvgiRafel Ivgi
 
Webapplicationsecurity05 2010 100601100553 Phpapp02
Webapplicationsecurity05 2010 100601100553 Phpapp02Webapplicationsecurity05 2010 100601100553 Phpapp02
Webapplicationsecurity05 2010 100601100553 Phpapp02Rafel Ivgi
 

More from Rafel Ivgi (14)

Hacker techniques, exploit and incident handling
Hacker techniques, exploit and incident handlingHacker techniques, exploit and incident handling
Hacker techniques, exploit and incident handling
 
Top 10 mistakes running a windows network
Top 10 mistakes   running a windows networkTop 10 mistakes   running a windows network
Top 10 mistakes running a windows network
 
Advanced web application hacking and exploitation
Advanced web application hacking and exploitationAdvanced web application hacking and exploitation
Advanced web application hacking and exploitation
 
Firmitas Cyber Solutions - Inforgraphic - Mirai Botnet - A few basic facts on...
Firmitas Cyber Solutions - Inforgraphic - Mirai Botnet - A few basic facts on...Firmitas Cyber Solutions - Inforgraphic - Mirai Botnet - A few basic facts on...
Firmitas Cyber Solutions - Inforgraphic - Mirai Botnet - A few basic facts on...
 
Firmitas Cyber Solutions - Inforgraphic - ICS & SCADA Vulnerabilities
Firmitas Cyber Solutions - Inforgraphic - ICS & SCADA VulnerabilitiesFirmitas Cyber Solutions - Inforgraphic - ICS & SCADA Vulnerabilities
Firmitas Cyber Solutions - Inforgraphic - ICS & SCADA Vulnerabilities
 
United States O1 Visa Approval
United States O1 Visa ApprovalUnited States O1 Visa Approval
United States O1 Visa Approval
 
Comptia Security+ CE Certificate
Comptia Security+ CE CertificateComptia Security+ CE Certificate
Comptia Security+ CE Certificate
 
ISACA Membership
ISACA MembershipISACA Membership
ISACA Membership
 
CISSP
CISSPCISSP
CISSP
 
CISM
CISMCISM
CISM
 
LPIC-1
LPIC-1LPIC-1
LPIC-1
 
CRISC
CRISCCRISC
CRISC
 
Iso 27001 Pecb Ismsla 100193 Rafel Ivgi
Iso 27001 Pecb Ismsla 100193 Rafel IvgiIso 27001 Pecb Ismsla 100193 Rafel Ivgi
Iso 27001 Pecb Ismsla 100193 Rafel Ivgi
 
Webapplicationsecurity05 2010 100601100553 Phpapp02
Webapplicationsecurity05 2010 100601100553 Phpapp02Webapplicationsecurity05 2010 100601100553 Phpapp02
Webapplicationsecurity05 2010 100601100553 Phpapp02
 

Recently uploaded

Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024Scott Keck-Warren
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024The Digital Insurer
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr LapshynFwdays
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):comworks
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 

Recently uploaded (20)

E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
SQL Database Design For Developers at php[tek] 2024
Implementing and auditing security controls part 2

Main Functionalities:
 Real-time, subnet-level tracking of unmanaged, networked devices
 Detailed hardware information, including slot description, memory configuration and network adapter configuration
 Extended plug-and-play monitor data, including secondary monitor information
 Detailed asset-tag and serial number information, as well as embedded pointing device, fixed drive and CD-ROM data
 Multi-layer information model – the same equipment and connections are represented in several layers, with technology-specific information confined to a dedicated layer, giving the operator a consistent view of the network without information overload
 The layers represent both physical and logical information about the managed network, including: physical network resources, infrastructure, physical connections, the digital transmission layer (SDH/SONET (STM-n, VC-4, VC-12, OC-n), PDH (E1, T1)), the telephony layer, IP-related layers, GSM/CDMA/UMTS-related layers, as well as ATM and FR layers
 History tracking – inventory objects (equipment, connections, numbering resources, etc.) are stored with a full history of changes, which enables change tracking; a new history entry is made in three cases: object creation (the first entry), object modification (a new entry for each change) and object removal (the last entry)
 Auto-discovery and reconciliation – keeps the stored information up to date with the changes occurring in the network. The auto-discovery tool can add new network elements to the inventory database, remove network elements that no longer exist, and update the database when cards, ports or interfaces change
 Network planning – support for planning future objects (storing future changes to equipment, switch configuration, connections, etc.); when plans are executed or applied by the system logic, object creation and changes actually take place and the planned objects become active in the inventory system; this also enables visualization of the future state of the network
 Inventory-Based Billing – enables accurate calculation of customer charges for inventory products and services (e.g. equipment, locations, connections, capacity); this module can also calculate charges for services leased from another operator (vendor) and resold (with profit) to customers, and can generate invoices
 Inventory and Console Tools – allow user-friendly management of the important objects used in the application: creating templates (Logical View, Report Management, Charts), editing symbols and links, searching for objects, encrypting passwords and notifying users of various actions/events
 Wizards and templates – provide flexibility without allowing inconsistent manipulation of data; new objects are created with an object-creation wizard (a so-called template), which enables defining all attributes and the necessary referential objects (path details for connections, detailed elements such as cards and ports for equipment, etc.); the user can define which attributes of an object should be mandatory or predefined and whether they should have a constant value
 Process-driven Inventory – with automated processes, all user tasks related to inventory data are done in the context of a process instance; the state of the network cannot be changed (e.g. by provisioning a new service) without updating the inventory, which assures real-time accuracy of the inventory database
 Information theft – a network inventory management system keeps track not only of your hardware but also of your software, and shows who has access to that software. A regular check of your system's inventory will tell you who has downloaded and used software they may not be authorized to use.
 Equipment theft – a network management system automatically detects every piece of equipment and software connected to your system. It will also tell you which items are not working properly, which items need to be replaced, and which items have mysteriously disappeared. Curb workplace theft simply by running a regularly scheduled inventory check.
 Licensing agreements – an inventory of your software and licensing agreements will tell you whether you have the necessary licensing agreements for all your software. Insufficient licensing can cost you usage fees and fines, and duplicating software that you already own is an unnecessary expense.
 System Upgrades – outdated equipment and software can cost your company time, money and resources; downtime and slow response times are two of the biggest time killers for your business. Set filters in your network inventory management system to alert you when it is time to upgrade software or replace hardware with newer technology, to keep your system running as smoothly and efficiently as possible.

Benefits:
 End-to-end view of multi-vendor, multi-technology networks
 Reduced network operating cost
 Improved utilization of existing resources
 Quicker, more efficient change management
 Visualization and control of distributed resources
 Seamless integration within the existing environment
 Automatically discovers and diagrams network topology
 Automatic generation of network maps in Microsoft Office Visio
 Automatically detects new devices and changes to network topology
 Simplifies inventory management for hardware and software assets
 Addresses reporting needs for PCI compliance and other regulatory requirements
 Powerful capabilities, including:
o Inventory management for all systems
o Direct access to Windows, Macintosh and Linux devices
o Automatically saving hardware and software configuration information in a SQL database
o Generating systems continuity and backup profiler reports
o Remote management capabilities to shut down, restart and launch applications
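The resale arithmetic behind Inventory-Based Billing (lease a resource from a vendor, resell it to the customer with a profit margin) can be sketched in a few lines. This is a minimal illustration, not the product's actual billing logic; the field names and the flat 20% margin are assumptions.

```python
def invoice_lines(items, margin=0.20):
    """Build customer invoice lines for resold inventory items.

    Each item carries the vendor's monthly lease cost; the customer
    price is the vendor cost marked up by `margin` (assumed flat 20%).
    Returns the per-item lines and the invoice total.
    """
    lines = []
    total = 0.0
    for item in items:
        price = round(item["vendor_cost"] * (1 + margin), 2)
        lines.append((item["name"], price))
        total += price
    return lines, round(total, 2)
```

A real billing module would also handle per-service tariffs, billing periods and currency rounding rules; the point here is only the cost-plus-margin calculation.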
Completing the gaps with scripts
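The gap-filling this section title refers to – reconciling what auto-discovery sees on the network with what the inventory database stores – is easy to script. A minimal sketch (Python; the `mac`/`ip` record format is an assumption, not any specific product's schema):

```python
def reconcile(discovered, inventory):
    """Compare freshly discovered hosts against the stored inventory.

    Returns hosts to add (seen on the network but not stored), hosts to
    flag for removal (stored but no longer seen), and pairs of
    (stored, discovered) records whose attributes have drifted.
    """
    discovered = {h["mac"]: h for h in discovered}  # key records by MAC address
    inventory = {h["mac"]: h for h in inventory}
    to_add = [h for mac, h in discovered.items() if mac not in inventory]
    to_remove = [h for mac, h in inventory.items() if mac not in discovered]
    changed = [
        (inventory[mac], discovered[mac])
        for mac in discovered.keys() & inventory.keys()
        if discovered[mac] != inventory[mac]
    ]
    return to_add, to_remove, changed
```

Running such a comparison on a schedule is the essence of the auto-discovery and reconciliation feature described earlier: the three result lists map directly onto "add new elements", "remove stale elements" and "update changed cards/ports/interfaces".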
Creating Device Groups (Security Level, Same Version…)

Creating Policies

Microsoft released Security Compliance Manager along with a heap of new security baselines for you to compare against your environment. If you are not familiar with SCM, it is a great product from Microsoft that consolidates all the best practices for their software, with an in-depth explanation for each setting. Notably, this new version has security baselines for Exchange Server 2010 and 2007, and these baselines are customized for the specific role of the server. Also interesting is that the baseline settings not only include Group Policy computer settings but also PowerShell commands to configure aspects of the product that are not as simple to change as a registry key.
As you can see from the image below, the PowerShell script to perform the required configuration is listed in the detail pane.

Attachments and Guidelines

Another new feature you might notice is a section called Attachments and Guidelines, which contains a lot of supporting documentation related to the security baseline. This section also allows you to add your own supporting documentation to your custom baseline templates.
How to Import an Existing GPO into Microsoft Security Compliance Manager v2

To start, simply make a backup of the existing Group Policy Object via the Group Policy Management Console, then import it by selecting the "Import GPO" option in the top right corner of the new tool (see image below).
Select the path to the backup of the individual GPO (see image below).
Once you click OK, the policy will import into the SCM tool. Once the GPO is imported, the tool looks at each registry path and, if it is a known value, matches it up with the additional information already contained in the SCM database (very smart).
Now that you have the GPO imported into the SCM tool, you can use "Compare" to see the differences between it and the other baselines.

How to Compare Baseline Settings in the Security Compliance Manager Tool

Simply select the policy you want to compare in the left-hand column, then select the "Compare" option on the right-hand side (see image below).
Now select the baseline policy you want to compare against and press OK.
The result is a report showing the settings and values that differ between the two policies.
The Values tab shows all the common settings between the policies that have different values, and the other tab shows all the settings that are uniquely configured in either policy.
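The comparison SCM performs here splits two baselines into "common settings with different values" and "settings unique to either side". A minimal sketch of the same idea (Python; policy baselines represented as plain `{setting: value}` dicts, which is an assumption – SCM's internal format is richer):

```python
def compare_baselines(a, b):
    """Compare two policy baselines given as {setting_name: value} dicts.

    Returns (different, only_a, only_b): common settings whose values
    differ, and the settings unique to each baseline.
    """
    common = a.keys() & b.keys()
    different = {k: (a[k], b[k]) for k in sorted(common) if a[k] != b[k]}
    only_a = {k: a[k] for k in sorted(a.keys() - b.keys())}
    only_b = {k: b[k] for k in sorted(b.keys() - a.keys())}
    return different, only_a, only_b
```

The three return values correspond to the Values tab (same setting, different value) and the unique-settings tab for each policy.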
Auditing to Verify Security in Practice

How do you avoid risk from inconsistent network and security configuration practices? Regulations define specific traffic and firewall policies that must be deployed, monitored, audited, and enforced. Unfortunately, because of organizational silos, organizations often lack the ability to seamlessly assess when a network configuration allows traffic that is "out of policy" per compliance requirements, corporate mandate, or industry best practice.

Configuration Audit:

Configuration audit tools provide automated collection, monitoring, and auditing of configuration across an organization's switches, routers, firewalls, and IDS/IPS. Through a unique ability to normalize multi-vendor device configuration, they provide a detailed and intuitive assessment of how devices are configured, including defined firewall rules, security policy, and network hierarchy. These solutions maintain a history of configuration changes, audit configuration rules on a device, and compare them across devices. Intelligently integrated with network activity data, device configuration data is instrumental in building an enterprise-wide representation of a network's topology. This topology mapping helps an organization understand allowed and denied activity across the entire network, resulting in improved consistency of device configuration and in the flagging of configuration changes that introduce risk to the network.

Configuration auditing solutions vary; the common types are:

1. Configuration Management Software – usually provides a comparison between two configuration sets, as well as a comparison against a specific compliance template
2. Configuration Analyzers – most common for analyzing firewall configurations, known as "Firewall Analyzers" or "Firewall Configuration Analyzers"
3. Local Security Compliance Scanners – tools such as MBSA (Microsoft Baseline Security Analyzer) provide local system configuration analysis
4. Vulnerability Assessment Products (aka "security scanners") – vulnerability scanners can be used to audit the settings and configuration of operating systems, applications, databases and network devices. Unlike vulnerability testing, an audit policy is used to check various values to ensure that they are configured according to the correct policy. Example audit policies include password complexity, ensuring that logging is enabled, and testing that anti-virus software is installed properly. The audit policies of common vulnerability scanners have been certified by the US Government or the Center for Internet Security to ensure that the auditing tool accurately tests for best-practice and required configuration settings.

When combined with vulnerability scanning and real-time monitoring, the auditing tools offer some powerful features, such as:
 Detecting system change events in real time and then performing a configuration audit
 Ensuring that logging is configured correctly for Windows and Unix hosts
 Auditing the configuration of a web application's operating system, application and SQL database

Audit policies may also be deployed to search for documents that contain sensitive data such as credit card or Social Security numbers.
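An audit policy of the kind described above boils down to comparing observed values against expected ones. A minimal sketch (Python; the setting names and thresholds are invented for illustration, not taken from any certified audit policy):

```python
# Expected values for a hypothetical host audit policy: each setting
# maps to a predicate the observed value must satisfy.
POLICY = {
    "min_password_length": lambda v: v >= 12,
    "logging_enabled": lambda v: v is True,
    "antivirus_installed": lambda v: v is True,
}

def audit_host(config):
    """Return the list of policy checks the host fails.

    `config` is a dict of observed settings; any setting that is
    missing or fails its predicate is reported as a finding.
    """
    findings = []
    for setting, check in POLICY.items():
        if setting not in config or not check(config[setting]):
            findings.append(setting)
    return findings
```

Real audit policies (e.g. CIS benchmarks) contain hundreds of such checks plus remediation text, but the pass/fail mechanic is the same.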
A basic tenet of most IT management practices is to minimize variance. Even though your organization may standardize on certain types of operating systems and hardware, small changes in drivers, software, security policies, patch updates and sometimes even usage can have dramatic effects on the underlying configuration. As time goes by, these servers and desktop computers can drift further and further away from a "known good" standard, which makes maintaining them more difficult.

The following are the most common types of auditing provided by security auditing tools:
 Application Auditing – configuration settings of applications such as web servers and anti-virus can be tested against a policy.
 Content Auditing – Office documents and files can be searched for credit card numbers and other sensitive content.
 Database Auditing – SQL database settings, as well as settings of the host operating system, can be tested for compliance.
 Operating System Auditing – access control, system hardening, error reporting, security settings and more can be tested against many types of industry and government policies.
 Router Auditing – authentication, security and configuration settings can be audited against a policy.

Agentless vs. Agent-Based Security Auditing Solutions

The table below provides a high-level view of agentless versus agent-based systems; details follow.

Solution Characteristic     Agentless       Agent-Based
Asset Discovery             Advantage       None/Limited
Asset Coverage              Advantage       Limited
Audit Comprehensiveness     Par             Par
Target System Impact        Advantage       Variable
Target System Security      Advantage       Variable
Network Impact              Variable/Low    Low
Cost of Deployment          Advantage       High
Cost of Ownership           Advantage       High
Scalability                 Advantage       Limited

Functionalities:

1. Asset Discovery: the ability to discover and maintain an accurate inventory of IT assets and applications. Agentless solutions typically have broader discovery capabilities – including both active and passive technologies – that permit them to discover a wider range of assets, including assets that may be unknown to administrators or that should not be on your network.
2. Asset Coverage: the breadth of IT assets and applications that can be assessed. Many IT assets that need to be audited simply cannot accept agent software; examples include network devices like routers and switches, point-of-sale systems, IP phones and many firewalls.
3. Audit Comprehensiveness: the degree of completeness with which the auditing system can assess the target system's security and compliance status. Using credentialed access, agentless solutions can assess any configuration or data item on the target system, including an analysis of system file integrity (file integrity monitoring).
4. Target System Impact: the impact on the stability and performance of the scan target. Agentless solutions use well-defined remote access interfaces to log in and retrieve the desired data, and as a result have a much more benign impact on the stability of the assets being scanned than agent-based systems do.
5. Target System Security: the impact of the auditing system on the security of the target system. Agentless auditing solutions are uniquely positioned to conduct objective and trusted security analyses because they do not run on the target system.
6. Network Impact: the impact on the performance of the associated network. Although agentless auditing solutions gather target system configuration information using a network-based remote login, the actual network impact is marginal due to bandwidth throttling and overall low usage.
7. Cost of Deployment: the time and effort required to make the auditing system operational. Since there are no agents to install, getting started with agentless solutions is significantly faster than with agent-based solutions – typically hours rather than days or weeks.
8. Cost of Ownership: the time and effort required to update and adjust the configuration of the auditing system. Agentless solutions typically have much lower costs of ownership than agent-based systems; deployment is easier and faster, there are fewer components to update, and configuration is centralized on one or two systems.
9. Scalability: the number of target systems that a single instance of the audit system can reliably audit in a typical audit interval. Agentless auditing solutions excel here, as auditing scalability is virtually unlimited – simply increase the number of management servers.
10. Simplified configuration compliance: simplifies configuration compliance with drag-and-drop templates for Windows and Linux operating systems and applications from FDCC, NIST, STIGs, USGCB and Microsoft. Prioritize and manage risk, audit configurations against internal policy or external best practice, and centralize reporting for monitoring and regulatory purposes.
11. Complete configuration assessment: provides a comprehensive view of Windows devices by retrieving software configuration – including audit settings, security settings, user rights and logging configuration – and hardware information, including memory, processors, display adapters, storage devices, motherboard details, printers, services, and ports in use.
12. Out-of-the-box configuration auditing: configuration auditing, reporting, and alerting for common industry guidelines and best practices, to keep your network running, available, and accessible.
13. Datasheet configuration auditing: compare assets to industry baselines and best practices to check whether any software or hardware changes were made since the last scan that could impact your security and compliance objectives.
14. Up-to-date baselines: a complete configuration compliance benchmark library keeps systems up to date with industry benchmarks, including changes to benchmarks and adjustments for newer operating systems and applications.
15. Customized best practices: customized best practices for improved policy enforcement and implementation, for a broad set of industry templates and standards, including built-in configuration templates for NIST, Microsoft, and more.
16. Built-in templates: built-in templates for Windows and Linux operating systems and applications from FDCC, NIST, STIGs, USGCB, and Microsoft.
17. OVAL 5.6 / SCAP support.
18. Streamlined reporting: streamlined reporting for government and corporate standards, with built-in vulnerability reporting.
Case Studies Summary: Top 10 Mistakes – Managing Windows Networks

"The shoemaker's son always goes barefoot"
 Network administrators who use Windows XP, or Windows 7 without UAC, on their own computer
 Network administrators who have a weak password for a local administrator account on their machine
o An example from a real client: Zorik:12345
 Network administrators whose computer is excluded from security scans
 Network administrators whose computer lacks security patches
 Network administrators whose computer doesn't have an anti-virus
 Network administrators with unencrypted laptops

Domain Administrators on the Users' VLAN
 In most organizations, administrators and users are connected to the same VLAN
 In this case, a user/attacker can:
o Attack the administrators' computers using NetBIOS brute force
o Spoof the NetBIOS name of a local server and attack using NBNS race-condition name spoofing
o Take over the network traffic using a variety of Layer 2 attacks, and:
 Replace/infect EXE files that will execute with network administrator privileges
 Steal passwords and hashes of Domain Administrators
 Execute Man-in-the-Middle attacks on encrypted connections (RDP, SSH, SSL)
Domain Administrator with a Weak Password
Domain Administrator without the Conficker Patch (MS08-067)
(LM and NTLM v1) vs. NTLM v2
 Once the hash of a network administrator is sent over the network, his identity can be stolen:
o The hash can be used in a Pass-the-Hash attack
o The hash can be broken via dictionary, hybrid, brute-force or rainbow-table attacks
Pass the Hash Attack
Daily Logon as a Domain Administrator

1. Is there an entity among men which fits the definition "God"? (Obviously not…)
a. Computers shouldn't have one either (this refers to the default "Domain Administrator" privilege level)
b. Isn't a network administrator a normal user when he connects to his machine?
c. Doesn't the network administrator surf the internet?
d. Doesn't he visit Facebook?
e. Doesn't he receive emails and open them?
f. Doesn't he download and install applications?
g. Can't an application he downloaded contain malware or a virus?
h. What can a virus do running under Domain Administrator privileges?
i. What is the potential damage to data, confidentiality and operability, in costs?

Using Domain Administrator for Services
 Why does MSSQL "require" Domain Administrator privileges? (It doesn't…)
 When a password is assigned to a service, the raw data of the password is stored locally and can be extracted by a remote user with a local administrative account
 The scenario of a service actually requiring Domain Administrator privileges is extremely rare (it almost doesn't exist) and is mostly the result of a wrong analysis of the real requirements – or laziness – by the decision maker
 In the most common case, where a service requires an account other than SYSTEM, it only requires a local/domain user with LOCAL administrative privileges only
 In the cases where a network manager or a service requires "the highest privileges", they only require local administrator on clients and/or operational servers – not the Domain Administrator privilege (which has login privileges to the domain controllers, DNS servers, backup servers, and most of today's enterprise applications that integrate with Active Directory)

Managing the Network with Local Administrator Accounts
 In most cases the operational requirement is:
o The ability to install software on servers and client endpoint machines
o Connecting remotely to machines via C$ (NetBIOS) and Remote Registry
o Executing remote network scanning
o It is possible to execute 99% of these tasks using Separation of Duties, assigning each privilege to a single group/account:
 Users_Administrator_Group – Local Administrators
 Servers_Administrators_Group – Local Administrators
 Change Password privilege

The NetLogon Folder
 Improper use of the NetLogon folder is the classic way to get Domain Administrator privileges for the long term
 The most common cases are:
o Administrative logon scripts with clear-text passwords for domain administrator accounts, or for the local administrator account on all machines
o Free write/modify permission on the directory
 A logical problem, completely unnoticed, almost undetectable
 The longer the organization's IT systems exist, the more "treasures" there are to discover
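Hunting for these "treasures" in the NetLogon share is easy to automate on the defensive side. A minimal sketch (Python; the regex patterns are illustrative assumptions – a real audit would use a much larger pattern set and walk the share recursively):

```python
import re

# Patterns that commonly betray a hard-coded credential in a logon
# script: "net use" with explicit credentials, password assignments,
# and "runas /savecred" invocations.
SUSPICIOUS = [
    re.compile(r"net\s+use\b.*/user:", re.IGNORECASE),
    re.compile(r"\bpassword\s*[=:]\s*\S+", re.IGNORECASE),
    re.compile(r"\brunas\b.*\b/savecred\b", re.IGNORECASE),
]

def audit_script(name, text):
    """Return a (script name, line number, line) finding for each
    line of the script that matches a suspicious pattern."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SUSPICIOUS):
            hits.append((name, lineno, line.strip()))
    return hits
```

Run against every .cmd/.bat/.kix/.vbs file in the NetLogon share, this kind of check surfaces exactly the test.kix / addgroup.cmd / password.txt cases shown on the following slides.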
The NetLogon Folder – test.kix: revealing the Citrix UI password

The NetLogon Folder – addgroup.cmd: revealing the local Administrator of THE ENTIRE NETWORK
The NetLogon Folder – password.txt: it can't get any better for a hacker

LSA Secrets & Protected Storage
 The Windows operating system implements an API for working securely with passwords
 The encryption keys are stored on the system and the encrypted data is stored in its registry, for example for:
o Internet Explorer
o NetBIOS saved passwords
o Windows Service Manager
LSA Secrets
Protected Storage
Wireless Passwords

Cached Logons
 A user at home, unplugged from the organizational internal network and trying to log into his laptop, cannot log into the domain
 Therefore, the network logon is simulated:
o The hash of the user's password is saved on his machine
o When the user inputs his password, it is converted into a hash and compared against the list of saved hashes; if a match is found, the system logs the user in
 The vulnerability: by default, Windows locally saves the hashes of the last 10 unique/different passwords used to connect to the machine
 In most cases, the hash of a domain-administrator-privileged account is on that list
 Most organizations don't distinguish between PCs, servers and laptops when it comes to the settings for this feature
 Most organizations don't harden:
o The cached logons count on local PCs to 0
o The cached logons count on laptops to 1
o The cached logons count on servers to 0 (unless mission critical, in which case 1 to 3 is recommended)
 This means that at least 50% of the machines contain a domain administrator's hash and can be used to take over the entire network
 Conclusion: a user/attacker with local administrator privileges can obtain a domain administrator account from most of the organization's computers

Password History
 In order to prevent users from recycling their passwords, at every forced password change the system saves the password hashes locally
 By default, the last 24 passwords are saved on the machine
 An attacker with local administrator privileges on the machine gets all the "password patterns" of all the user accounts that ever logged into the machine
 A computer that was used by only 2 people will contain up to 48 different passwords
 Some of these passwords are usually in use for other accounts in the organization

Users as Local Administrators
 When a user is logged on with local administrator privileges, the integrity of the entire local system is at risk
 He can install privileged software and drivers, such as promiscuous network drivers for advanced network and Man-in-the-Middle attacks, and rootkits
 He is able to extract the hashes of all the old passwords of the users who ever logged into the machine
 He is able to extract the hashes of all the CURRENT passwords of the users who ever logged into the machine
Forgetting to Harden: RestrictAnonymous=1

Weak Passwords / No Complexity Enforcement
 Weak passwords = a successful brute force
 Complexity-compliant passwords that nevertheless appear in a password dictionary, e.g. "Password1!"
 Old passwords or default passwords of the organization

Guess what the password was? (gma )
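The "Password1!" point can be made concrete: a password can satisfy classic complexity rules yet still be a dictionary word with predictable decorations. A minimal sketch (Python; the banned-word list is a toy assumption – a real deployment would check against a large breached-password list):

```python
import re

# Toy dictionary of banned base words standing in for a real
# breached-password / dictionary list.
BANNED = {"password", "welcome", "summer"}

def is_complex(pw):
    """Classic complexity: minimum length plus lower, upper and
    digit/symbol character classes."""
    return bool(len(pw) >= 8
                and re.search(r"[a-z]", pw)
                and re.search(r"[A-Z]", pw)
                and re.search(r"[\d!@#$%^&*]", pw))

def is_dictionary_based(pw):
    """Strip common trailing decorations (digits/symbols) and check
    whether the remaining base word is in the banned list."""
    base = re.sub(r"[\d!@#$%^&*]+$", "", pw).lower()
    return base in BANNED

def acceptable(pw):
    # A password must be complex AND not a decorated dictionary word.
    return is_complex(pw) and not is_dictionary_based(pw)
```

"Password1!" passes `is_complex` but fails `acceptable`, which is exactly the gap between policy compliance and actual password strength that the bullet above describes.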
Firewalls

Understanding Firewalls (generations 1–5)

A firewall is a device or set of devices designed to permit or deny network transmissions based upon a set of rules; it is frequently used to protect networks from unauthorized access while permitting legitimate communications to pass. Many personal-computer operating systems include software-based firewalls to protect against threats from the public Internet. Many routers that pass data between networks contain firewall components and, conversely, many firewalls can perform basic routing functions.

First generation: packet filters

The first paper published on firewall technology appeared in 1988, when engineers from Digital Equipment Corporation (DEC) developed filter systems known as packet filter firewalls. This fairly basic system was the first generation of what became a highly involved and technical internet security feature. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin continued research in packet filtering and developed a working model for their own company, based on the original first-generation architecture.

Packet filters act by inspecting the "packets" which transfer between computers on the Internet. If a packet matches the packet filter's set of rules, the packet filter will drop the packet (silently discard it) or reject it (discard it and send "error responses" to the source). This type of packet filtering pays no attention to whether a packet is part of an existing stream of traffic (i.e. it stores no information on connection "state"). Instead, it filters each packet based only on information contained in the packet itself (most commonly using a combination of the packet's source and destination address, its protocol, and, for TCP and UDP traffic, the port number).
TCP and UDP protocols constitute most communication over the Internet, and because TCP and UDP traffic by convention uses well-known ports for particular types of traffic, a "stateless" packet filter can distinguish between, and thus control, those types of traffic (such as web browsing, remote printing, email transmission, file transfer) – unless the machines on each side of the packet filter both use the same non-standard ports.

Packet-filtering firewalls work mainly on the first three layers of the OSI reference model, which means most of the work is done between the network and physical layers, with a little bit of peeking into the transport layer to figure out source and destination port numbers. When a packet originates from the sender and filters through a firewall, the device checks for matches to any of the packet-filtering rules configured in the firewall and drops or rejects the packet accordingly. When the packet passes through the firewall, it is filtered on a protocol/port-number basis. For example, if a rule exists in the firewall to block telnet access, the firewall will block TCP traffic for port number 23.
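The stateless rule matching described above – each packet judged alone, first matching rule wins – can be sketched in a few lines (Python; the simplified rule fields and dict-based "packet" are illustrative assumptions, since real filters match on far more header fields):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str               # "allow" or "deny"
    proto: str                # "tcp", "udp", or "*" for any
    dst_ip: str               # destination address, "*" for any
    dst_port: Optional[int]   # destination port, None for any

    def matches(self, pkt):
        return ((self.proto in ("*", pkt["proto"]))
                and (self.dst_ip in ("*", pkt["dst_ip"]))
                and (self.dst_port in (None, pkt["dst_port"])))

def filter_packet(rules, pkt, default="deny"):
    """Stateless filtering: the first matching rule decides; packets
    matching no rule fall through to the default action."""
    for rule in rules:
        if rule.matches(pkt):
            return rule.action
    return default
```

The telnet example from the text maps directly onto a `Rule("deny", "tcp", "*", 23)` entry; note that nothing here remembers previous packets, which is precisely the limitation the second generation addresses.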
Second generation: "stateful" filters

From 1989 to 1990, three colleagues from AT&T Bell Laboratories – Dave Presotto, Janardan Sharma, and Kshitij Nigam – developed the second generation of firewalls, calling them circuit-level firewalls. Second-generation firewalls perform the work of their first-generation predecessors but operate up to layer 4 (the transport layer) of the OSI model. They examine each data packet as well as its position within the data stream. Known as stateful packet inspection, this technique records all connections passing through the firewall and determines whether a packet is the start of a new connection, a part of an existing connection, or not part of any connection. Though static rules are still used, these rules can now include connection state as one of their test criteria. Certain denial-of-service attacks bombard the firewall with thousands of fake connection packets in an attempt to overwhelm it by filling up its connection-state memory.

Third generation: application layer

The key benefit of application-layer filtering is that it can "understand" certain applications and protocols (such as File Transfer Protocol, DNS, or web browsing), so it can detect whether an unwanted protocol is sneaking through on a non-standard port or whether a protocol is being abused in some harmful way. The deep-packet-inspection functionality of modern firewalls can be shared with Intrusion Prevention Systems (IPS). Currently, the Middlebox Communication Working Group of the Internet Engineering Task Force (IETF) is working on standardizing protocols for managing firewalls and other middleboxes. Another axis of development is integrating the identity of users into firewall rules. Many firewalls provide such features by binding user identities to IP or MAC addresses, which is very approximate and can easily be circumvented. The NuFW firewall provides real identity-based firewalling by requesting the user's signature for each connection.
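The stateful inspection described above can be sketched as a connection table keyed by the flow 5-tuple (Python; heavily simplified – no timeouts, sequence-number tracking or UDP pseudo-state, and the dict-based "packet" is an assumption):

```python
class StatefulFilter:
    """Classify TCP packets against a connection-state table: a packet
    either opens a new connection (SYN), belongs to a tracked one, or
    has no matching state at all."""

    def __init__(self):
        self.table = set()   # 5-tuples of opening/established flows

    def check(self, pkt):
        flow = (pkt["proto"], pkt["src_ip"], pkt["src_port"],
                pkt["dst_ip"], pkt["dst_port"])
        if flow in self.table:
            if pkt.get("fin"):           # connection teardown
                self.table.discard(flow)
            return "part-of-connection"
        if pkt.get("syn"):               # start of a new connection
            self.table.add(flow)
            return "new-connection"
        return "no-connection"           # mid-stream packet, no state
```

This also makes the state-exhaustion DoS mentioned above easy to see: every spoofed SYN grows `self.table`, so a flood of fake connection openings fills the connection-state memory.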
Authpf on BSD systems loads firewall rules dynamically per user, after authentication via SSH. Application firewall An application firewall is a form of firewall which controls input, output, and/or access from, to, or by an application or service. It operates by monitoring and potentially blocking the input, output, or system service calls which do not meet the configured policy of the firewall. The application firewall is typically built to control all network traffic on any OSI layer up to the application layer. It is able to control applications or services specifically, unlike a stateful network firewall which is - without additional software - unable to control network traffic regarding a specific application. There are two primary categories of application firewalls, network-based application firewalls and host-based application firewalls.
Network-based application firewalls A network-based application layer firewall is a computer networking firewall operating at the application layer of a protocol stack, and is also known as a proxy-based or reverse-proxy firewall. Application firewalls specific to a particular kind of network traffic may be titled with the service name, such as a web application firewall. They may be implemented through software running on a host or a stand-alone piece of network hardware. Often, it is a host using various forms of proxy servers to proxy traffic before passing it on to the client or server. Because it acts on the application layer, it may inspect the contents of the traffic, blocking specified content, such as certain websites, viruses, and attempts to exploit known logical flaws in client software. Modern application firewalls may also offload encryption from servers, block application input/output from detected intrusions or malformed communication, manage or consolidate authentication, or block content which violates policies. Host-based application firewalls A host-based application firewall can monitor any application input, output, and/or system service calls made from, to, or by an application. This is done by examining information passed through system calls instead of, or in addition to, a network stack. A host-based application firewall can only provide protection to the applications running on the same host. Application firewalls function by determining whether a process should accept any given connection. Application firewalls accomplish their function by hooking into socket calls to filter the connections between the application layer and the lower layers of the OSI model. Application firewalls that hook into socket calls are also referred to as socket filters. 
Application firewalls work much like a packet filter, but application filters apply filtering rules (allow/block) on a per-process basis instead of filtering connections on a per-port basis. Generally, prompts are used to define rules for processes that have not yet received a connection. It is rare to find application firewalls not combined or used in conjunction with a packet filter. Also, application firewalls further filter connections by examining the process ID of data packets against a ruleset for the local process involved in the data transmission. The extent of the filtering that occurs is defined by the provided ruleset. Given the variety of software that exists, application firewalls only have more complex rule sets for the standard services, such as sharing services. These per-process rule sets have limited efficacy in filtering every possible association that may occur with other processes. Also, these per-process rulesets cannot defend against modification of the process via exploitation, such as memory corruption exploits. Because of these limitations, application firewalls are beginning to be supplanted by a new generation of application firewalls that rely on mandatory access control (MAC), also referred to as sandboxing, to protect vulnerable services. Examples of next-generation host-based application firewalls which control system service calls by an application are AppArmor and the TrustedBSD MAC framework (sandboxing) in Mac OS X. Host-based application firewalls may also provide network-based application firewalling.
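The per-process rule lookup described above, including the prompt for processes that have no rule yet, can be sketched as a simple table. Process names and the prompt-by-default behavior are illustrative assumptions, not any product's configuration.

```python
# Sketch of per-process application firewall rules: the decision keys on
# the local process that owns the connection, not on the port number.

PROCESS_RULES = {
    "firefox": "allow",
    "sshd": "allow",
    "unknown_uploader": "block",
}

def check_connection(process_name: str) -> str:
    # A process with no rule yet would normally trigger a user prompt,
    # whose answer is then stored back into PROCESS_RULES.
    return PROCESS_RULES.get(process_name, "prompt")
```

The limitation the text raises is visible here: if an exploit hijacks a process named "firefox", the table still answers "allow", which is why MAC-based sandboxing is taking over.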
Distributed web application firewalls A Distributed Web Application Firewall (also called a dWAF) is a member of the web application firewall (WAF) and web application security family of technologies. Purely software-based, the dWAF architecture is designed as separate components able to physically exist in different areas of the network. This advance in architecture allows the resource consumption of the dWAF to be spread across a network rather than depending on one appliance, while allowing complete freedom to scale as needed. In particular, it allows the addition/subtraction of any number of components independently of each other for better resource management. This approach is ideal for large and distributed virtualized infrastructures such as private, public or hybrid cloud models. Cloud-based web application firewalls A cloud-based Web Application Firewall is also a member of the web application firewall (WAF) and web application security family of technologies. This technology is unique in that it is platform agnostic and does not require any hardware or software changes on the host, just a DNS change. By applying this DNS change, all web traffic is routed through the WAF, where it is inspected and threats are thwarted. Cloud-based WAFs are typically centrally orchestrated, which means that threat detection information is shared among all the tenants of the service. This collaboration results in improved detection rates and lower false positives. Like other cloud-based solutions, this technology is elastic, scalable and is typically offered as a pay-as-you-grow service. This approach is ideal for cloud-based web applications and small or medium sized websites that require web application security but are not willing or able to make software or hardware changes to their systems.  In 2010, Imperva spun out Incapsula to provide a cloud-based WAF for small to medium sized businesses. 
 Since 2011, United Security Providers has provided the Secure Entry Server as an Amazon EC2 cloud-based Web Application Firewall  Akamai Technologies offers a cloud-based WAF that incorporates advanced features such as rate control and custom rules, enabling it to address both layer 7 and DDoS attacks. The Common Firewall's Limits 1. The common firewall works on ACL rules where something is allowed or denied based on a simple set of parameters such as source IP, destination IP, source port and destination port. 2. Most firewalls don't support application-level rules that would allow the creation of smart rules matching today's more active, application-rich technology world. 3. Every hacker knows that 99.9% of the firewalls on planet earth are configured to allow connections to remote machines on TCP port 80, since this is the port of the "WEB", used by HTTP. 4. Today's firewalls will allow any kind of traffic to leave the organization on port 80, which means that:
 Hackers can use "network tunneling" technology to transfer ANY kind of information on port 80 and therefore bypass all of the currently deployed firewalls  In terms of traffic and content going through a port defined to be open, such as port 80, firewalls are configured to act as a blacklist; therefore tunneling an ENCRYPTED connection such as SSL or SSH on port 80 will bypass all of the firewall's potential inspection features.  The problem gets worse when ports that allow encrypted connections are commonly available, such as port 443, which supports the encrypted HTTPS protocol. Hackers can tunnel any communication on port 443 and encrypt it with HTTPS to imitate the behavior of any standard browser.  The firewalls that do inspect SSL traffic rely on the assumption that they can generate and sign their own certificate for the browsed domain and that the browser will accept it, since they are defined on the machine as a trusted Certificate Authority. However, as firewalls work mostly in blacklist mode, they will still forward any traffic that they fail to open and inspect. Implementing Application Aware Firewalls Features Palo Alto Networks has built a next-generation firewall with several innovative technologies enabling organizations to fix the firewall. These technologies bring business-relevant elements (applications, users, and content) under policy control on a high-performance firewall architecture. This technology runs on a high-performance, purpose-built platform based on Palo Alto Networks' Single-Pass Parallel Processing (SP3) Architecture. Unique to the SP3 Architecture, traffic is examined only once, using hardware with dedicated processing resources for security, networking, content scanning and management to provide line-rate, low-latency performance under load. Application Traffic Classification Accurate traffic classification is the heart of any firewall, with the result becoming the basis of the security policy. 
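The tunneling problem above is exactly why port numbers alone cannot classify traffic: a classifier has to look at the payload itself. The sketch below checks the first bytes of a stream claiming to be web traffic; the signature set (HTTP methods, the TLS handshake record, the SSH banner) is a small illustrative subset of what a real classifier would carry.

```python
# Sketch of payload-based classification: rather than trusting the port
# number, inspect the first bytes of the stream. Detecting a TLS or SSH
# handshake on port 80 reveals an encrypted tunnel hiding behind "HTTP".

HTTP_METHODS = (b"GET ", b"POST", b"HEAD", b"PUT ", b"OPTI")

def classify_payload(first_bytes: bytes) -> str:
    if first_bytes.startswith(HTTP_METHODS):
        return "http"
    if first_bytes[:2] == b"\x16\x03":      # TLS handshake record header
        return "tls"                        # e.g. SSL tunneled over port 80
    if first_bytes.startswith(b"SSH-"):     # SSH version banner
        return "ssh"
    return "unknown"
```

A policy of "on port 80, allow only what classifies as http" then closes the blacklist gap described above, since anything that fails to classify is no longer forwarded by default.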
Traditional firewalls classify traffic by port and protocol, which, at one point, was a satisfactory mechanism for securing the perimeter. Today, applications can easily bypass a port-based firewall; hopping ports, using SSL and SSH, sneaking across port 80, or using non-standard ports. App-IDTM, a patent-pending traffic classification mechanism that is unique to Palo Alto Networks, addresses the traffic classification limitations that plague traditional firewalls by applying multiple classification mechanisms to the
  • 41. 213 | P a g e traffic stream, as soon as the device sees it, to determine the exact identity of applications traversing the network. Classify traffic based on applications, not ports. App-ID uses multiple identification mechanisms to determine the exact identity of applications traversing the network. The identification mechanisms are applied in the following manner:  Traffic is first classified based on the IP address and port.  Signatures are then applied to the allowed traffic to identify the application based on unique application properties and related transaction characteristics.  If App-ID determines that encryption (SSL or SSH) is in use and a decryption policy is in place, the application is decrypted and application signatures are applied again on the decrypted flow.  Decoders for known protocols are then used to apply additional context-based signatures to detect other applications that may be tunneling inside of the protocol (e.g., Yahoo! Instant Messenger used across HTTP).  For applications that are particularly evasive and cannot be identified through advanced signature and protocol analysis, heuristics or behavioral analysis may be used to determine the identity of the application. As the applications are identified by the successive mechanisms, the policy check determines how to treat the applications and associated functions: block them, or allow them and scan for threats, inspect for unauthorized file transfer and data patterns, or shape using QoS.
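The successive identification mechanisms listed above can be sketched as a pipeline where each mechanism is tried in order and the first confident answer wins. The mechanism functions and signatures below are illustrative stand-ins, not Palo Alto Networks' actual App-ID internals.

```python
# Sketch of layered traffic classification: signatures first, then
# protocol decoders for tunneled apps, then behavioral heuristics.

def signature_match(pkt):
    # Application signatures applied to the allowed traffic.
    sigs = {b"ymsg": "yahoo-im", b"BitTorrent protocol": "bittorrent"}
    for marker, app in sigs.items():
        if marker in pkt.get("payload", b""):
            return app
    return None

def protocol_decoder(pkt):
    # Decoders look for applications tunneled inside known protocols,
    # e.g. an instant messenger carried over HTTP.
    if pkt.get("proto") == "http" and b"messenger.yahoo.com" in pkt.get("payload", b""):
        return "yahoo-im-over-http"
    return None

def heuristics(pkt):
    # Last resort for evasive apps: behavioral traits, not content.
    return "evasive-p2p" if pkt.get("small_encrypted_flows", 0) > 100 else None

def classify(pkt):
    """Apply mechanisms in order; the first identification wins."""
    for mech in (signature_match, protocol_decoder, heuristics):
        app = mech(pkt)
        if app is not None:
            return app
    return "unknown"
```

The policy check then keys on the returned application name (block, allow-and-scan, or shape with QoS) rather than on a port number.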
Always on, always the first action taken across all ports. Classifying traffic with App-ID is always the first action taken when traffic hits the firewall, which means that all App-IDs are always enabled, by default. There is no need to enable a series of signatures to look for an application that is thought to be on the network; App-ID is always classifying all of the traffic, across all ports - not just a subset of the traffic (e.g., HTTP). All App-IDs are looking at all of the traffic passing through the device: business applications, consumer applications, network protocols, and everything in between. App-ID continually monitors the state of the application to determine if the application changes midstream, providing the updated information to the administrator in ACC, applying the appropriate policy and logging the information accordingly. Like all firewalls, Palo Alto Networks next-generation firewalls use positive control: deny all traffic by default, then allow only those applications that are within the policy. All else is blocked. All classification mechanisms, all application versions, all OSes. App-ID operates at the services layer, monitoring how the application interacts between the client and the server. This means that App-ID is indifferent to new features, and is client and server operating system agnostic. The result is that a single App-ID for BitTorrent is roughly equal to the many BitTorrent OS and client signatures that need to be enabled to try to control this application in other offerings. Full visibility and control of custom and internal applications. Internally developed or custom applications can be managed using either an application override or custom App-IDs. An application override effectively renames the traffic stream to that of the internal application. 
The other mechanism would be to use the customizable App-IDs based on context-based signatures for HTTP, HTTPs, FTP, IMAP, SMTP, RTSP, Telnet, and unknown TCP /UDP traffic. Organizations can use either of these mechanisms to exert the same level of control over their internal or custom applications that may be applied to SharePoint, Salesforce.com, or Facebook. Securely Enabling Applications Based on Users & Groups Traditionally, security policies were applied based on IP addresses, but the increasingly dynamic nature of users and applications means that IP addresses alone have become ineffective as a mechanism for monitoring and controlling user activity. Palo Alto Networks next-generation firewalls integrate with a wide range of user repositories and terminal service offerings, enabling organizations to incorporate user and group information into their security policies. Through User-ID, organizations also get full visibility into user activity on the network as well as user-based policy-control, log viewing and reporting.
  • 43. 215 | P a g e Transparent use of users and groups for secure application enablement. User-ID seamlessly integrates Palo Alto Networks next-generation firewalls with the widest range of enterprise directories on the market; Active Directory, eDirectory, OpenLDAP and most other LDAP based directory servers. The User-ID agent communicates with the domain controllers, forwarding the relevant user information to the firewall, making the policy tie-in completely transparent to the end- user. Identifying users via a browser challenge. In cases where a user cannot be automatically identified through a user repository, a captive portal can be used to identify users and enforce user based security policy. In order to make the authentication process completely transparent to the user, Captive Portal can be configured to send a NTLM authentication request to the web browser instead of an explicit username and password prompt. Integrate user information from other user repositories. In cases where organizations have a user repository or application that already has knowledge of users and their current IP addresses, an XML-based REST API can be used to tie the repository to the Palo Alto Networks next-generation firewall.
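The pattern described above, an external identity source feeding a user-to-IP table so that policy can be written against users and groups instead of addresses, can be sketched as follows. The class, group names and resources are hypothetical illustrations, not the firewall's actual data model or API.

```python
# Sketch of user-based policy: a directory agent, captive portal, or
# identity API populates the IP-to-user mapping; rules then reference
# users/groups, and an unidentified IP gets no access (captive portal).

class UserPolicy:
    def __init__(self):
        self.ip_to_user = {}                                  # fed by identity source
        self.group_of = {"alice": "hr", "bob": "engineering"} # from the directory
        self.allowed = {("hr", "hr-fileshare"), ("engineering", "git")}

    def login(self, ip, user):
        # Called when the identity source reports a user at an address.
        self.ip_to_user[ip] = user

    def permits(self, ip, resource):
        user = self.ip_to_user.get(ip)
        if user is None:
            return False          # unidentified source: challenge first
        return (self.group_of.get(user), resource) in self.allowed
```

Because the table is keyed by IP but the rules are keyed by group, a user keeps the same policy when DHCP hands them a new address, which is the point of decoupling identity from addressing.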
  • 44. 216 | P a g e Transparently extend user-based policies to non-Windows devices. User-ID can be configured to constantly monitor for logon events produced by Mac OS X, Apple iOS, Linux/UNIX clients accessing their Microsoft Exchange email. By expanding the User-ID support to non-Windows platforms, organizations can deploy consistent application enablement policies. Visibility and control over terminal services users. In addition to support for a wide range of directory services, User-ID provides visibility and policy control over users whose identity is obfuscated by a Terminal Services deployment (Citrix or Microsoft). Completely transparent to the user, every session is correlated to the appropriate user, which allows the firewall to associate network connections with users and groups sharing one host on the network. Once the applications and users are identified, full visibility and control within ACC, policy editing, logging and reporting is available. High Performance Threat Prevention Content-ID combines a real-time threat prevention engine with a comprehensive URL database and elements of application identification to limit unauthorized data and file transfers, detect and block a wide range of threats and control non-work related web surfing. The application visibility and control delivered by App-ID, combined with the content inspection enabled by Content-ID means that IT departments can regain control over application traffic and the related content.
NSS-rated IPS. The NSS-rated IPS blocks known and unknown vulnerability exploits, buffer overflows, DoS attacks and port scans from compromising and damaging enterprise information resources. IPS mechanisms include:  Protocol decoder-based analysis statefully decodes the protocol and then intelligently applies signatures to detect vulnerability exploits.  Protocol anomaly-based protection detects non-RFC-compliant protocol usage, such as an overlong URI or overlong FTP login.  Stateful pattern matching detects attacks across more than one packet, taking into account elements such as arrival order and sequence.  Statistical anomaly detection prevents rate-based DoS flooding attacks.  Heuristic-based analysis detects anomalous packet and traffic patterns such as port scans and host sweeps.  Custom vulnerability or spyware phone-home signatures can be used in either the anti-spyware or vulnerability protection profiles.  Other attack protection capabilities such as blocking invalid or malformed packets, IP defragmentation and TCP reassembly are utilized for protection against evasion and obfuscation methods employed by attackers. Traffic is normalized to eliminate invalid and malformed packets, while TCP reassembly and IP defragmentation are performed to ensure the utmost accuracy and protection despite any attack evasion techniques. URL Filtering Complementing the threat prevention and application control capabilities is a fully integrated URL filtering database consisting of 20 million URLs across 76 categories that enables IT departments to monitor and control employee web surfing activities. The on-box URL database can be augmented to suit the traffic patterns of the local user community with a custom, 1 million URL database. URLs that
  • 46. 218 | P a g e are not categorized by the local URL database can be pulled into cache from a hosted, 180 million URL database. In addition to database customization, administrators can create custom URL categories to further tailor the URL controls to suit their specific needs. URL filtering visibility and policy controls can be tied to specific users through the transparent integration with enterprise directory services (Active Directory, LDAP, eDirectory) with additional insight provided through customizable reporting and logging. File and Data Filtering Data filtering features enable administrators to implement policies that will reduce the risks associated with the transfer of unauthorized files and data.  File blocking by type: Control the flow of a wide range of file types by looking deep within the payload to identify the file type (as opposed to looking only at the file extension).  Data filtering: Control the transfer of sensitive data patterns such as credit card and social security numbers in application content or attachments.  File transfer function control: Control the file transfer functionality within an individual application, allowing application use yet preventing undesired inbound or outbound file transfer. Checkpoint R75 – Application Control Blade Granular application control  Identify, allow, block or limit usage of thousands of applications by user or group  UserCheck technology alerts users about controls, educates on Web 2.0 risks, policies
  • 47. 219 | P a g e  Embrace the power of Web 2.0 Social Technologies and applications while protecting against threats and malware Largest application library with AppWiki  Leverages the world's largest application library with over 240,000 Web 2.0 applications and social network widgets  Identifies, detects, classifies and controls applications for safe use of Web 2.0 social technologies and communications  Intuitively grouped in over 80 categories—including Web 2.0, IM, P2P, Voice & Video and File Share Integrated into Check Point Software Blade Architecture  Centralized management of security policy via a single console  Activate application control on any Check Point security gateway  Supported gateways include: UTM-1, Power-1, IP Appliances and IAS Appliances Main Functionalities  Application detection and usage control  Enables application security policies to identify, allow, block or limit usage of thousands of applications, including Web 2.0 and social networking, regardless of port, protocol or evasive technique used to traverse the network.  AppWiki application classification library  AppWiki enables application scanning and detection of more than 4,500 distinct applications and over 240,000 Web 2.0 widgets including instant messaging, social networking, video streaming, VoIP, games and more.  Inspect SSL Encrypted Traffic  Scan and secure SSL encrypted traffic passing through the gateway, such as HTTPS.  UserCheck  UserCheck technology alerts employees in real-time about their application access limitations, while educating them on Internet risk and corporate usage policies.  User and machine awareness  Integration with the Identity Awareness Software Blade enables users of the Application Control Software Blade to define granular policies to control applications usage. 
 Central policy management  Centralized management offers unmatched leverage and control of application security policies and enables organizations to use a single repository for user and group definitions, network objects, access rights and security policies.  Unified event management  Using SmartEvent to view user’s online behavior and application usage provides organizations with the most granular level of visibility.
Utilizing Firewalls for Maximum Security 1. Don't use an old, non-application-aware firewall 2. The first firewall rule must deny all protocols on all ports from all IPs to all IPs 3. Only rules for required systems must be allowed. For example: a. HTTP, HTTPS - to all b. IMAPS to the internal mail server c. NetBIOS to the internal file server, etc. 4. Activate application inspection on all traffic on all ports 5. Enforce that only the defined traffic types are allowed on each port. For example, on port 80 only identified HTTP traffic would be allowed. 6. Don't allow forwarding of any traffic that failed to be inspected. 7. Define the DNS server as the Domain Controller, do not allow recursive/authoritative DNS requests, and make sure the firewall inspects the Domain Controller's outgoing DNS requests in STRICT mode. 8. Activate egress filtering to avoid unknowingly sending spoofed packets and unwillingly participating in DDoS attacks. Implementing a Back-Bone Application-Aware Firewall Implementing a back-bone application-aware firewall is the perfect security solution for absolute network management. The best configuration is: 1. Combining full Layer 2 security in switch and router equipment 2. Dividing all of the organization's devices into VLANs which represent the organization's logical groups 3. Implementing each port in each one of the VLANs as PVLAN Edge, so that no endpoint can talk with any other endpoint via Layer 2. 4. Defining all routers to forward all traffic to the firewall (their higher-level hop) 5. Placing an application-aware firewall as the backbone, before the backbone router Network Inventory & Monitoring How to map your network connections? 1. Since everyday IT management has many tasks, no one really inspects the currently open connections. 2. It is possible to configure the firewall to log every established TCP connection and every host which sent any packet (ICMP, UDP) to any non-TCP port.
3. The result of such a configuration would be a list of unknown IPs. It is possible to write an automatic script to execute a reverse-DNS lookup and an IP WHOIS search on each IP and create a "resolved list" which has some meaning to it. 4. Any unknown/unfamiliar IP accessed from within the network requires matching the number of stations which accessed it and making a basic forensic investigation on them in order to discover the software which made the connection. 5. This process is very technical, time consuming and requires especially skilled security professionals, and is therefore not executed unless a security incident was reported. 6. The only solution that turns this process from impossible into very reasonable and simple is IP/domain/URL whitelisting, which denies everything except a database of the entire world's known, well-reputed and malware-clean approved IPs/websites. 7. IP/domain/URL whitelisting is very hard to implement and requires a high amount of maintenance; it is up to you to make your choice. How to discover all network devices? 1. Mapping of the network is provided by firewalls, anti-viruses, NACs, SIEMs and configuration management products. 2. Some products include an agent that runs on the endpoint, acts as a network sensor and reports all the machines that passively or actively communicated on its subnet. 3. It is possible to purchase a "Network Inventory Management" solution. The most reliable way to detect all machines on the network is to combine: 1. The switches, which know all the ports that have an electric power signal and know all the devices' MACs if they ever sent a non-spoofed layer 2 frame on that port. 2. Connecting via SNMP to the switches and extracting all MACs and IPs on all ports 3. A full network TCP and UDP scan of ports 1 to 65535 of the entire network (without any ping or is-alive scans). 
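The full connect scan without ping probes can be sketched with nothing but the standard socket library; host, port list and timeout here are illustrative, and a real sweep of 65535 ports per host would run these probes concurrently.

```python
# Sketch of a TCP connect scan: no ICMP is-alive probe, just a direct
# connection attempt to each port; any port that accepts reveals a host.

import socket

def tcp_scan(host, ports, timeout=0.5):
    """Return the subset of ports that accepted a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            try:
                if s.connect_ex((host, port)) == 0:   # 0 means connected
                    open_ports.append(port)
            except OSError:
                pass   # treat timeouts and errors as closed/filtered
    return open_ports
```

A UDP sweep works on the same loop but needs different evidence of life (a reply datagram, or the absence of an ICMP port-unreachable), which is why UDP scans are slower and less reliable.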
If there is a hidden machine that is listening on a self-defined IP on a specific TCP/UDP port, it will answer at least one packet and will be detected by the scan. Detecting "Hidden" Machines - Machines behind a NAT INSIDE Your Network 1. Looking for timing anomalies in ICMP and TCP 2. Looking for IP ID strangeness a. NAT with Windows on a Linux host might have non-incremental IPID packets, interspersed with incremental IPID packets 3. Looking for unusual headers in packets a. Timestamps and other optional parameters may have inherent patterns How to discover all cross-network installed software? There are two common ways to discover the software installed on the network's machines:
1. Agent-less - discovery is done by connecting to the machine remotely through: a. RPC/WMI b. SNMP On Windows systems, WMI provides most of the classical functionality, though it only detects software installed by "Windows Installer" and software registered in the "Uninstall" registry key. Some machines can't be "managed"/connected to remotely over the network because: 1. They have a firewall installed or configured to block WMI/RPC access 2. They have a permission error, e.g. "Domain Administrator" removed from the "Local Administrators" group 3. They are not part of the domain - they were never reported and registered 2. Agent-based - provides the maximum level of discovery; the agent can scan the memory, raw disk, files and folders locally and report back all of the detected software. Once the agent is installed, most of the common permission, firewall, connectivity and latency problems are solved. The main problem is machines the agent was removed from and stray machines which never had the agent installed. 3. The ultimate solution - combining agent-based with agent-less technology; this way all devices get detected and most of the possible information is extracted from them. NAC The Problem: Ethernet Network  Authenticate (Who): o distinguish between a valid and a rogue member  Control (Where to and How?): o all network members at the network level  Authorize (Application Layer Conditions): o check device compliance according to company policy
  • 51. 223 | P a g e What is a NAC originally?  The concept was invented in 2003 originally called “Network Admission Control”  The idea: checking the software version on machines connecting to the network  The Action: denying connection for those below the standard Today’s NAC?  Re-Invented as: Network Access Control  Adding to the old idea: Disabling ANY foreign machines from connecting into a computer network  The Actions: o Shuts down the power on that port of the switch o Move foreign machine to Guest VLAN Why Invent Today’s NAC?
  • 52. 224 | P a g e Dynamic Solution for a Dynamic Environment Did We EVER Manage Who Gets IP Access? What is a NAC? Network Access Control (NAC) is a computer networking solution that uses a set of protocols to define and implement a policy that describes how to secure access to network nodes by devices when they initially attempt to access the network. NAC might integrate the automatic remediation process (fixing non-compliant nodes before allowing access) into the network systems, allowing
  • 53. 225 | P a g e the network infrastructure such as routers, switches and firewalls to work together with back office servers and end user computing equipment to ensure the information system is operating securely before interoperability is allowed. Network Access Control aims to do exactly what the name implies—control access to a network with policies, including pre-admission endpoint security policy checks and post- admission controls over where users and devices can go on a network and what they can do. Initially 802.1X was also thought of as NAC. Some still consider 802.1X as the simplest form of NAC, but most people think of NAC as something more. Simple Explanation When a computer connects to a computer network, it is not permitted to access anything unless it complies with a business defined policy, including anti-virus protection level, system update level and configuration. While the computer is being checked by a pre-installed software agent, it can only access resources that can remediate (resolve or update) any issues. Once the policy is met, the computer is able to access network resources and the Internet, within the policies defined within the NAC system. NAC is mainly used for endpoint health checks, but it is often tied to Role based Access. Access to the network will be given according to profile of the person and the results of a posture/health check. For example, in an enterprise, the HR department could access only HR department files if both the role and the endpoint meet anti-virus minimums. Goals of NAC Because NAC represents an emerging category of security products, its definition is both evolving and controversial. The overarching goals of the concept can be distilled to: 1. 
Mitigation of zero-day attacks The key value proposition of NAC solutions is the ability to prevent end-stations that lack antivirus, patches, or host intrusion prevention software from accessing the network and placing other computers at risk of cross-contamination of computer worms. 2. Policy enforcement NAC solutions allow network operators to define policies, such as the types of computers or roles of users allowed to access areas of the network, and enforce them in switches, routers, and network middle boxes.
  • 54. 226 | P a g e 3. Identity and access management Where conventional IP networks enforce access policies in terms of IP addresses, NAC environments attempt to do so based on authenticated user identities, at least for user end- stations such as laptops and desktop computers. NAC Approaches  Agent-Full o Smarter, Unlimited Features o Faster o Works Offline (Settings Cache Mode) o Endpoint Management Itself is more secure  Agent-Less o Modular o Easy to integrate o Credentials constantly travel the network o SNMP Traps and DHCP Requests
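The compliance-check-then-admit behavior that both NAC approaches implement can be sketched as a simple policy decision: the endpoint (via an agent or a remote scan) reports its posture, and the NAC admits it or quarantines it to a remediation VLAN. The policy thresholds and posture fields below are illustrative, not any vendor's defaults.

```python
# Sketch of a pre-admission NAC check: compare the reported endpoint
# posture against policy; non-compliant machines only reach remediation.

POLICY = {"min_av_version": 12, "min_patch_level": 7}

def admission_decision(posture: dict) -> str:
    if posture.get("av_version", 0) < POLICY["min_av_version"]:
        return "quarantine"   # only remediation servers reachable
    if posture.get("patch_level", 0) < POLICY["min_patch_level"]:
        return "quarantine"
    return "admit"
```

An unknown machine reports nothing, so the `get(..., 0)` defaults fail the check and it lands in quarantine, which matches the "deny any foreign machine" behavior of today's NAC.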
  • 55. 227 | P a g e NAC – Behavior Lifecycle NAC = LAN Mini IPS?  NAC is one of the functions that a full end-to-end IPS product should provide  Some vendors don’t sell NAC as a proprietary module, for example: o ForeScout CounterACT  NAC-only Solutions by o Trustwave o McAfee NAC as Part of Endpoint Security Solutions  Antivirus Vendors provide NAC (Network Admission Control) on managed endpoints  Vendors like Symantec, McAfee and Sophos  A great solution IF: o The AV Management server controls the switches and disconnects all non-managed hosts o Except for exclusions (Printers, Cameras, Physical Access Devices) Talking Endpoints: What’s a NAP?  NAP is Microsoft’s built-in support client for NAC  NAP interoperates with every switch and access point  Controlled by Group Policy
  • 56. 228 | P a g e General Basic NAC Deployment NAC Deployment Types: 1. Pre-admission and post-admission There are two prevailing design philosophies in NAC, based on whether policies are enforced before or after end-stations gain access to the network. In the former case, called pre-admission NAC, end-stations are inspected prior to being allowed on the network. A typical use case of pre-admission NAC would be to prevent clients with out-of-date antivirus signatures from talking to sensitive servers. Alternatively, post-admission NAC makes enforcement decisions based on user actions, after those users have been provided with access to the network. 2. Agent versus agentless The fundamental idea behind NAC is to allow the network to make access control decisions based on intelligence about end-systems, so the manner in which the network is informed about end-systems is a key design decision. A key difference among NAC systems is whether they require agent software to report end-system characteristics, or
  • 57. 229 | P a g e whether they use scanning and network inventory techniques to discern those characteristics remotely. As NAC has matured, Microsoft now provides its Network Access Protection (NAP) agent as part of its Windows 7, Vista and XP releases. There are NAP-compatible agents for Linux and Mac OS X that provide near-equal intelligence for these operating systems. 3. Out-of-band versus inline In some out-of-band systems, agents are distributed on end-stations and report information to a central console, which in turn can control switches to enforce policy. In contrast, inline solutions can be single-box solutions which act as internal firewalls for access-layer networks and enforce the policy. Out-of-band solutions have the advantage of reusing existing infrastructure; inline products can be easier to deploy on new networks, and may provide more advanced network enforcement capabilities, because they are directly in control of individual packets on the wire. However, there are agentless products that retain the inherent advantages of easier, less risky out-of-band deployment, yet use techniques to provide inline effectiveness for non-compliant devices where enforcement is required. NAC Acceptance Tests 1. Attempting to get an IP using DHCP in a regular Windows machine. 2. Attempting to get an IP using DHCP in a regular Linux machine.
  • 58. 230 | P a g e 3. Multiple attempts to get an IP using DHCP with a private DHCP client, using different values than the operating system’s defaults in the DHCP packet fields 4. Manually configuring a local IP of the “Link-Local” type 5. Manually configuring an IP in the network’s IP range with “Gratuitous ARP” on 6. Manually configuring an IP in the network’s IP range with “Gratuitous ARP” off 7. Inspecting the NAC’s response to DHCP attacks and network attacks during the “1-2 minutes of grace” 8. Restricting WMI (RPC) support on the local machine (even using a firewall to block RPC on TCP port 135) 9. Copying/stealing the identity (IP or IP+MAC) of an existing user (obtained via passive network sniffing of broadcasts) 10. Using private denial-of-service 0-day exploits in a loop on a specific machine in order to obtain its identity on the network 11. Posing as a printer or another non-smart device (printers, biometric devices, turnstile controllers, door devices, etc.) 12. Testing the proper enforcement of common NAC protection features such as:  Duplicate MAC  Duplicate IP  Foreign MAC  Foreign IP  Wake Up On LAN  Domain Membership  Anti-Virus + Definitions NAC Vulnerabilities Attacking a NAC is mostly based on network attacks and focuses on several aspects:  Vulnerabilities introduced by the integration process - wrong product positioning in the network architecture and wrong design of the data flow, which result in different levels of security. These mistakes are mostly caused by the following: o The integrator’s lack of understanding of the organization’s requirements, systems and network architecture o The integrator’s lack of understanding of the organization’s security policies and its expectations from the product
  • 59. 231 | P a g e o Insufficient involvement of the organization’s IT personnel in the integration process o Lack of security auditing by a certified information security professional to determine the product’s real-life performance  Vulnerabilities caused by configuration – wrong configuration of the functionalities the product enforces within the organization, such as: o Not enforcing/monitoring lab/development environments o Not enforcing/monitoring different VLANs and networks, such as the VoIP network o Not blocking/monitoring non-interactive network sniffing modes such as Wake Up On LAN o Not analyzing and responding to anomalies in relevant elements/protocols, and insufficient network lock-out times  Vulnerabilities in the product (vendor’s code) The common attack – Bypassing & Killing the NAC 1. Some of today’s NACs are event-based: the network equipment (switch/router) allows you to connect to the network and get an IP, but some time after you connect, it sends a message notifying the NAC of your IP and MAC, and the NAC tries to connect to your machine and validate that it is an approved member of the network. 2. The alerting mechanism from the switches is mostly SNMP alerts called “SNMP Traps”. 3. This behavior grants the attacker one to two minutes to attack/take over/infect machines on the network before his port is disconnected. 4. In most cases, if the port is shut down, the NAC wakes it back to life after about 5 minutes in order to keep the organization operable and to accept new devices. 5. For a well-prepared hacker, with automatic scripts exploiting the most common vulnerabilities and utilizing the latest exploits, this is sufficient. 6. The real problem is that a large number of NAC vendors provide a product which is software-based and is therefore installed mostly on common Windows or Linux machines. 7. 
As is well known, common Windows and Linux machines are vulnerable to many application-layer and operating-system vulnerabilities, but virtually all of them are vulnerable to network attacks, especially layer 2 attacks. 8. This means that in those 1 or 2 minutes available every 5 minutes (which comes out to 5 to 10 minutes per hour), the attacker can find the Windows/Linux machine hosting the NAC software and kill the communication to it using basic layer 2 attacks such as ARP spoofing.
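The layer 2 attack window described above hinges on forged ARP traffic. As a hedged illustration only (the MAC and IP addresses are hypothetical placeholders, and the frame is assembled in memory, never sent), this Python sketch shows the byte layout of the forged ARP reply that ARP-spoofing tools emit:

```python
import struct

def build_arp_reply(attacker_mac: bytes, victim_mac: bytes,
                    spoofed_ip: bytes, victim_ip: bytes) -> bytes:
    """Build a raw Ethernet frame carrying a forged ARP reply
    (the primitive behind ARP cache poisoning)."""
    eth_header = victim_mac + attacker_mac + struct.pack("!H", 0x0806)  # EtherType: ARP
    arp_payload = struct.pack(
        "!HHBBH6s4s6s4s",
        1,                           # hardware type: Ethernet
        0x0800,                      # protocol type: IPv4
        6, 4,                        # hardware/protocol address lengths
        2,                           # opcode 2 = ARP reply
        attacker_mac, spoofed_ip,    # sender: attacker claims the spoofed IP
        victim_mac, victim_ip)       # target: the host being poisoned
    return eth_header + arp_payload

# Hypothetical example addresses (not taken from the text)
frame = build_arp_reply(
    attacker_mac=bytes.fromhex("aabbccddeeff"),
    victim_mac=bytes.fromhex("112233445566"),
    spoofed_ip=bytes([10, 0, 0, 1]),     # e.g. the gateway IP the attacker impersonates
    victim_ip=bytes([10, 0, 0, 42]))
print(len(frame))  # 14-byte Ethernet header + 28-byte ARP payload = 42
```

Defensively, this is exactly the frame shape that arpwatch-style monitors and NAC anomaly detection look for.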
  • 60. 232 | P a g e Open Source Solutions  OpenNac/FreeNAC  PacketFence OpenNAC/FreeNAC – Keeping It Simple
  • 62. 234 | P a g e PacketFence – Almost Commercial Quality
  • 66. 238 | P a g e SIEM (Security Information Event Management) SIEM solutions, also known as “SIM” (Security Information Management) and “SEM” (Security Event Management), are a combination of those formerly disparate product categories. SIEM technology provides real-time analysis of security alerts generated by network hardware and applications. SIEM solutions come as software, appliances or managed services, and are also used to log security data and generate reports for compliance purposes. The acronyms SEM, SIM and SIEM have been used interchangeably, though there are differences in meaning and product capabilities. The segment of security management that deals with real-time monitoring, correlation of events, notifications and console views is commonly known as Security Event Management (SEM). The second area provides long-term storage, analysis and reporting of log data and is known as Security Information Management (SIM). The term Security Information Event Management (SIEM), coined by Mark Nicolett and Amrit Williams of Gartner in 2005, describes the product capabilities of gathering, analyzing and presenting information from network and security devices; identity and access management applications; vulnerability management and policy compliance tools; operating system, database and application logs; and external threat data. A key focus is to monitor and help manage user and service privileges, directory services and other system configuration changes, as well as providing log auditing and review and incident response. As of January 2012, Mosaic Security Research identified 85 unique SIEM products. SIEM Capabilities  Data Aggregation: SIEM/LM (log management) solutions aggregate data from many sources, including network, security, servers, databases and applications, providing the ability to consolidate monitored data to help avoid missing crucial events. 
 Correlation: looks for common attributes, and links events together into meaningful bundles. This technology provides the ability to perform a variety of correlation techniques to integrate different sources, in order to turn data into useful information.  Alerting: the automated analysis of correlated events and production of alerts, to notify recipients of immediate issues.  Dashboards: SIEM/LM tools take event data and turn it into informational charts to assist in seeing patterns, or identifying activity that is not forming a standard pattern.  Compliance: SIEM applications can be employed to automate the gathering of compliance data, producing reports that adapt to existing security, governance and auditing processes.  Retention: SIEM/SIM solutions employ long-term storage of historical data to facilitate correlation of data over time, and to provide the retention necessary for compliance requirements.
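The Correlation capability above ("looks for common attributes, and links events together into meaningful bundles") can be made concrete with a toy sketch. The event schema (`ts`, `src_ip`, `msg`) and the one-minute window are assumptions for illustration, not any product's model:

```python
from collections import defaultdict

def correlate(events, window_seconds=60):
    """Bundle events sharing a source IP within a time window --
    a toy version of SIEM correlation by common attribute."""
    bundles = defaultdict(list)
    for ev in sorted(events, key=lambda e: e["ts"]):
        bucket = ev["ts"] // window_seconds          # coarse time window
        bundles[(ev["src_ip"], bucket)].append(ev)
    # Only multi-event bundles are "meaningful" in this sketch
    return [b for b in bundles.values() if len(b) > 1]

# Hypothetical events: two alerts from one host, one unrelated event
events = [
    {"ts": 100, "src_ip": "10.0.0.5", "msg": "fw deny"},
    {"ts": 110, "src_ip": "10.0.0.5", "msg": "ids alert"},
    {"ts": 500, "src_ip": "10.0.0.9", "msg": "fw deny"},
]
print(len(correlate(events)))  # → 1 bundle (the 10.0.0.5 pair)
```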
  • 67. 239 | P a g e SIEM Architecture  Low level, real-time detection of known threats and anomalous activity (unknown threats)  Compliance automation  Network, host and policy auditing  Network behavior analysis and situational behavior  Log Management  Intelligence that enhances the accuracy of threat detection  Risk oriented security analysis  Executive and technical reports  A scalable high performance architecture
  • 68. 240 | P a g e A SIEM Solution is Comprised of a Few Main Modules: 1. Detector  Intrusion Detection  Anomaly Detection  Vulnerability Detection  Discovery, Learning and Network Profiling systems  Inventory systems 2. Collector  Connectors to Windows Machines  Connectors to Linux Machines  Connectors to Network Devices  Classifies the information and events  Normalizes the information 3. SIEM  Risk Assessment  Correlation
  • 69. 241 | P a g e  Risk metrics  Vulnerability scanning  Data mining for events  Real-time monitoring 4. Logger  Stores the data in the filesystem/DB  Allows storage of an unlimited number of events  Supports SAN/NAS storage 5. Management Console & Dashboard  Configuration changes  Access to Dashboard and Metrics  Multi-tenant and Multi-user management  Access to Real-time information  Reports generation  Ticketing system  Vulnerability Management  Network Flows Management  Responses configuration A SIEM Detector Module is Comprised of Sensors:  Intrusion Detection  Anomaly Detection  Vulnerability Detection  Discovery, Learning and Network Profiling systems  Inventory systems Commonly Used Open Source Sensors in a SIEM: 1. Snort (Network Intrusion Detection System) 2. Ntop (Network and usage Monitor) 3. OpenVAS (Vulnerability Scanning) 4. P0f (Passive operating system detection) 5. Pads (Passive Asset Detection System) 6. Arpwatch (Ethernet/IP address pairings monitor) 7. OSSEC (Host Intrusion Detection System) 8. Osiris (Host Integrity Monitoring) 9. Nagios (Availability Monitoring) 10. OCS (Inventory)
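The Collector's "classifies and normalizes" responsibility can be sketched in a few lines. The log line format and the normalized field names below are assumptions for illustration; real connectors support many device-specific formats:

```python
import re

# Hypothetical firewall syslog-style line (illustrative format only)
LINE = "Jan 12 03:14:07 fw01 DENY src=192.168.1.10 dst=10.0.0.5 dpt=445"

PATTERN = re.compile(
    r"(?P<ts>\w{3} +\d+ [\d:]+) (?P<host>\S+) (?P<action>\S+) "
    r"src=(?P<src>\S+) dst=(?P<dst>\S+) dpt=(?P<dport>\d+)")

def normalize(line: str) -> dict:
    """Turn a raw device log line into a common event schema,
    flagging lines that could not be classified."""
    m = PATTERN.match(line)
    if not m:
        return {"raw": line, "parsed": False}
    ev = m.groupdict()
    ev["dport"] = int(ev["dport"])   # typed fields, not raw strings
    ev["parsed"] = True
    return ev

print(normalize(LINE)["dst"])  # → 10.0.0.5
```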
  • 70. 242 | P a g e SIEM Logics
  • 71. 243 | P a g e Planning for the right amounts of data Introduction Critical business systems and their associated technologies are typically held to performance benchmarks. In the security space, benchmarks of speed, capacity and accuracy are common for encryption, packet inspection, assessment, alerting and other critical protection technologies. But how do you set benchmarks for a tool based on collection, normalization and correlation of security events from multiple logging devices? And how do you apply these benchmarks to today’s diverse network environments? This is the problem with benchmarking Security Information Event Management (SIEM) systems, which collect security events from one to thousands of devices, each with its own different log data format. If we take every conceivable environment into consideration, it is impossible to benchmark SIEM systems. We can, however, set one baseline environment against which to benchmark and then include equations so that organizations can extrapolate their own benchmark requirements. Consider that network and application firewalls, network and host Intrusion Detection/Prevention (IDS/IPS), access controls, sniffers, and Unified Threat Management systems (UTM)—all log security events that must be monitored. Every switch, router, load balancer, operating system, server, badge reader, custom or legacy application, and many other IT systems across the enterprise, produce logs of security events, along with every new system to follow (such as virtualization). Most have their own log expression formats. Some systems, like legacy applications, don’t produce logs at all. First we must determine what is important. Do we need all log data from every critical system in order to perform security, response, and audit? Will we need all that data at lightning speed? (Most likely, we will not.) How much data can the network and collection tool actually handle under load? 
What is the threshold before networks bottleneck and/or the SIEM is rendered unusable, not unlike a denial of service (DoS)? These are variables that every organization must consider as they hold SIEM to standards that best suit their operational goals. Why is benchmarking SIEM important? According to the National Institute of Standards and Technology (NIST), SIEM software is a relatively new type of centralized logging software compared to syslog. Our SANS Log Management Survey shows 51 percent of respondents ranked collecting logs as their most critical challenge – and collecting logs is a basic feature a SIEM system can provide. Further, a recent NetworkWorld article explains how different SIEM products typically integrate well with selected logging tools, but not with all tools. This is due to the disparity between logging and reporting formats from different systems. There is an effort under way to standardize logs through MITRE’s Common Event Expression (CEE) standard event log language.
  • 72. 244 | P a g e But until all logs look alike, normalization is an important SIEM benchmark, which is measured in events per second (EPS). Event performance characteristics provide a metric against which most enterprises can judge a SIEM system. The true value of a SIEM platform, however, will be in terms of Mean Time To Remediate (MTTR) or other metrics that can show the ability of rapid incident response to mitigate risk and minimize operational and financial impact. In our second set of benchmarks for storage and analysis, we have addressed the ability of SIEM to react within a reasonable MTTR rate to incidents that require automatic or manual intervention. Because this document is a benchmark, it does not cover the important requirements that cannot be benchmarked, such as requirements for integration with existing systems (agent vs. agent-less, transport mechanism, ports and protocols, interface with change control, usability of user interface, storage type, integration with physical security systems, etc.). Other requirements that organizations should consider but aren’t benchmarked include the ability to process connection- specific flow data from network elements, which can be used to further enhance forensic and root- cause analysis. Other features, such as the ability to learn from new events, make recommendations and store them locally, and filter out incoming events from known infected devices that have been sent to remediation, are also important features that should be considered, but are not benchmarked here. Variety and type of reports available, report customization features, role-based policy management and workflow management are more features to consider as they apply to an individual organization’s needs but are not included in this benchmark. In addition, organizations should look at a SIEM tool’s overall history of false positives, something that can be benchmarked, but is not within the scope of this paper. 
In place of false positives, Table 2 focuses on accuracy rates within applicable categories. These and other considerations are included in the following equations, sample EPS baseline for a medium-sized enterprise, and benchmarks that can be applied to storage and analysis. As appendices, we’ve included a device map for our sample network and a calculation worksheet for organizations to use in developing their own EPS benchmarks. SIEM Benchmarking Process The matrices that follow are designed as guidelines to assist readers in setting their own benchmark requirements for SIEM system testing. While this is a benchmark checklist, readers must remember that benchmarking, itself, is governed by variables specific to each organization. For a real-life example, consider an article in eSecurity Planet, in which Aurora Health in Michigan estimated that they produced 5,000–10,000 EPS, depending upon the time of day. We assume that means during the normal ebb and flow of network traffic. What would that load look like if it were under attack? How many security events would an incident, such as a virus outbreak on one, two or three subnets, produce?
  • 73. 245 | P a g e An organization also needs to consider their devices. For example, a Nokia high-availability firewall is capable of handling more than 100,000 connections per second, each of which could theoretically create a security event log. This single device would seem to imply a need for 100,000 minimum EPS just for firewall logs. However, research shows that SIEM products typically handle 10,000–15,000 EPS per collector. Common sense tells us that we should be able to handle as many events as ALL our devices could simultaneously produce as a result of a security incident. But that isn’t a likely scenario, nor is it practical or necessary. Aside from the argument that no realistic scenario would involve all devices sending maximum EPS, so many events at once would create bottlenecks on the network and overload and render the SIEM collectors useless. So, it is critical to create a methodology for prioritizing event relevance during times of load so that even during a significant incident, critical event data is getting through, while ancillary events are temporarily filtered. Speed of hardware, NICs (network interface cards), operating systems, logging configurations, network bandwidth, load balancing and many other factors must also go into benchmark requirements. One may have two identical server environments with two very different EPS requirements due to any or all of these and other variables. With consideration of these variables, EPS can be established for normal and peak usage times. We developed the equations included here, therefore, to determine Peak Events (PE) per second and to establish normal usage by exchanging the PEx for NEx (Normal Events per second). List all of the devices in the environment expected to report to the SIEM. Be sure to consider any planned changes, such as adding new equipment, consolidating devices, or removing end of life equipment. First, determine the PE (or NE) for each device with these steps: 1. 
Carefully select only the security events intended to be collected by the SIEM. Make sure those are the only events included in the sample being used for the formula. 2. Select reasonable time frames of known activity: Normal and Peak (under attack, if possible). This may be any period from minutes to days. A longer period of time, such as a minimum of 90 days, will give a more accurate average, especially for “normal” activity. Total the number of Normal or Peak events during the chosen period. (It will also be helpful to consider computing a “low” activity set of numbers, because fewer events may be interesting as well.) 3. Determine the number of seconds within the time frame selected. 4. Divide the number of events by the number of seconds to determine PE or NE for the selected device. Formula 1: EPS = # of Security Events / Time Period in Seconds
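Formula 1 translates directly into code. The sample numbers below are hypothetical, chosen only to show the computation:

```python
def eps(event_count: int, period_seconds: int) -> float:
    """Formula 1: EPS = # of security events / time period in seconds."""
    return event_count / period_seconds

# Hypothetical sample: 864,000 security events over a 24-hour window
print(eps(864_000, 24 * 3600))  # → 10.0
```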
  • 74. 246 | P a g e 5. The resulting EPS is the PE or NE depending upon whether we began with peak activity or normal activity. Once we have completed this computation for every device needing security information event management, we can insert the resulting numbers in the formula below to determine Normal EPS and Peak EPS totals for a benchmark requirement. Formula 2: 1. In your production environment, determine the peak number of security events (PEx) created by each device that requires logging, using Formula 1. (If you have identical devices with identical hardware, configurations, load, traffic, etc., you may use [PEx x (# of identical devices)] to avoid having to determine PE for every device.) 2. Sum all PE numbers to come up with a grand total for your environment. 3. Add at least 10% to the Sum for headroom and another 10% for growth. So, the resulting formula looks like this: Step 1: (PE1+PE2+PE3...+ (PE4 x D4) + (PE5 x D5)...) = SUM1 [baseline PE] Step 2: SUM1 + (SUM1 x 10%) = SUM2 [adds 10% headroom] Step 3: SUM2 + (SUM2 x 10%) = Total PE benchmark requirement [adds 10% growth potential] Once these computations are complete, the resulting Peak EPS set of numbers will reflect that grand, but impractical, peak total mentioned above. Again, it is unlikely that all devices will ever simultaneously produce log events at maximum rate. Seek consultation from SMEs and the system engineers provided by the vendor in order to establish a realistic Peak EPS that the SIEM system must be able to handle, and then set filters for getting required event information through to SIEM analysis, should an overload occur. We have used these equations to evaluate a hypothetical mid-market network with a set number of devices. If readers have a similar infrastructure, similar rates may apply. If the organization is different, the benchmark can be adjusted to fit organizational infrastructures using our equations. 
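Steps 1 through 3 of Formula 2 can be sketched as follows. The device counts and per-device peak EPS values are hypothetical examples, not the baseline figures from the paper:

```python
def total_pe_benchmark(device_pe, headroom=0.10, growth=0.10):
    """Formula 2: sum per-device peak EPS (PEx x quantity), then add
    10% headroom and another 10% growth on top of that."""
    sum1 = sum(pe * count for pe, count in device_pe)   # Step 1: baseline PE
    sum2 = sum1 * (1 + headroom)                        # Step 2: +10% headroom
    return sum2 * (1 + growth)                          # Step 3: +10% growth

# Hypothetical devices: (peak EPS per device, quantity)
devices = [(500, 4),    # e.g. firewalls
           (1000, 1),   # e.g. an IPS
           (250, 4)]    # e.g. gateway/routers
print(total_pe_benchmark(devices))  # 4000 baseline -> ~4840 total requirement
```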
The Baseline Network A mid-sized organization is defined as having 500–1000 users, according to a December guide by Gartner, Inc., titled “Gartner’s New SMB Segmentation and Methodology.” Gartner Principal Analyst Adam Hils, together with a team of Gartner analysts, helped us determine that a 750–1000 user organization is a reasonable base point for our benchmark. As Hils puts it, this number represents some geo and technical diversity found in large enterprises without being too complex to scope and benchmark. With Gartner’s advice, we set our hypothetical organization to have 750 employees, 750 user end points, five offices, six subnets, five databases, and a central data center. Each subnet will have
  • 75. 247 | P a g e an IPS, a switch and a gateway/router. The data center has four firewalls and a VPN. (See the matrix below and Appendix A, “Baseline Network Device Map,” for more details.) Once the topography is defined, the next stage is to average EPS collected from these devices during normal and peak periods. Remember that demanding all log data at the highest speed 24x7 could, in itself, become problematic, causing a potential DoS situation with network or SIEM system overload. So realistic speeds based on networking and SIEM product restrictions must also be considered in the baseline. Protocols and data sources present other variables to consider when determining average and peak load requirements. In terms of effect on EPS rates, our experience is that systems using UDP can generate more events more quickly, but this creates a higher load for the management tool, which actually slows collection and correlation when compared to TCP. One of our reviewing analysts has seen UDP packets dropped at 3,000 EPS, while TCP could maintain a 100,000 EPS load. It has also been our experience that both protocols are often used in a single environment. Table 1, “Baseline Network Device EPS Averages,” provides a breakdown of Average, Peak and Averaged Peak EPS for the different systems logs are collected from. Each total below is the result of device quantity (column 1) x EPS calculated for the device. For example, 0.60 Average EPS for Cisco Gateway/Routers has already been multiplied by the quantity of 7 devices. So the EPS per single device is not displayed in the matrix, except when the quantity is 1. To calculate Average Peak EPS, we determined two subnets under attack, with affected devices sending 80 percent of their EPS capacity to the SIEM. These numbers are by no means scientific. But they do represent research against product information (number of events devices are capable of producing), other research, and the consensus of expert SANS Analysts contributing to this paper.
  • 76. 248 | P a g e A single security incident, such as a quickly replicating worm in a subnet, may fire off thousands of events per second from the firewall, IPS, router/switch, servers, and other infrastructure at a single gateway. What if another subnet falls victim and the EPS are at peak in two subnets? Using our baseline, such a scenario with two infected subnets representing 250 infected end points could theoretically produce 8,119 EPS. We used this as our Average Peak EPS baseline because this midline number is more representative of a serious attack on an organization of this size. In this scenario, we still have event information coming from servers and applications not directly under attack, but there is potential impact to those devices. It is important, therefore, that these normal logs, which are useful in analysis and automatic or manual reaction, continue to be collected as needed.
  • 77. 249 | P a g e SIEM Storage and Analysis Now that we have said so much about EPS, it is important to note that no one ever analyzes a single second’s worth of data. An EPS rating is simply designed as a guideline to be used for evaluation, planning and comparison. When designing a SIEM system, one must also consider the volume of data that may be analyzed for a single incident. If an organization collects an average of 20,000 EPS over eight hours of an ongoing incident, that will require sorting and analysis of 576,000,000 data records. Using a 300-byte average size, that amounts to 172.8 gigabytes of data. This consideration will help put into perspective some reporting and analysis baselines set in the table below. Remember that some incidents may last for extended periods of time, perhaps tapering off, then spiking in activity at different points during the attack. While simple event performance characteristics provide a metric against which most enterprises can judge a SIEM, as mentioned earlier, the ultimate value of a well-deployed SIEM platform will be in terms of MTTR (Mean Time To Remediate) or other metrics that can equate rapid incident response to improved business continuity and minimal operational/fiscal impact. It should be noted in this section, as well, that event storage may refer to multiple data facilities within the SIEM deployment model. There is a local event database, used to perform active investigations and forensic analysis against recent activities; long-term storage, used as an archive of summarized event information that is no longer granular enough for comprehensive forensics; and read-only, encrypted raw log storage, used to preserve the original event for forensic analysis and non-repudiation, guaranteeing chain of custody for regulatory compliance.
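The worked example above (20,000 EPS sustained for eight hours at an average of 300 bytes per event) can be verified with a few lines:

```python
def incident_storage(eps: int, hours: float, avg_event_bytes: int = 300):
    """Reproduce the worked example: total records collected during an
    incident, and the raw storage (in GB) those records imply."""
    records = int(eps * hours * 3600)               # events/sec x seconds
    gigabytes = records * avg_event_bytes / 1e9     # decimal gigabytes
    return records, gigabytes

records, gb = incident_storage(20_000, 8)
print(records, gb)  # → 576000000 172.8
```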
  • 79. 251 | P a g e Baseline Network Device Map This network map is the diagram for our sample network. Traffic flow, points for collecting and/or forwarding event data, and throttle points were all considered in setting the benchmark baseline in Table 1.
  • 80. 252 | P a g e EPS Calculation Worksheet Common SIEM Report Types 1. Security SIEM DB 2. Logger DB 3. Alarms 4. Incidents 5. Vulnerabilities 6. Availability 7. Network Statistics 8. Asset Information and Inventory 9. Ticketing system 10. Network
  • 81. 253 | P a g e Custom Reports Defining the right Rules – It’s all about the rules When it comes to a SIEM, it is all about the rules. The SIEM can be configured to be most effective and produce the best results by: 1. Defining the right rules that define “what is considered a security event/incident” 2. Implementing an automated response/mitigation action to stop it at real time 3. Configuring it to alert the right person for each incident - in real time An example of a subset of a few events, which together represent a security incident: 1. Some IP on the internet does port scanning on the organization’s IP, port scan is detected and logged 2. 10 days later, a machine from the internal network connects to that IP = Intrusion!
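The two-event example above (a port scan followed by an internal host connecting out to the scanner's IP) maps naturally onto a small correlation rule. This is a toy sketch with an assumed event schema and an assumed 10-day lookback window, not any vendor's rule language:

```python
TEN_DAYS = 10 * 24 * 3600  # assumed lookback window, in seconds

def find_intrusions(events):
    """Flag an incident when an internal host connects OUT to an IP
    that previously port-scanned the organization."""
    scans = {}       # external scanner IP -> time of last observed scan
    incidents = []
    for ev in sorted(events, key=lambda e: e["ts"]):
        if ev["type"] == "port_scan":
            scans[ev["src_ip"]] = ev["ts"]
        elif ev["type"] == "outbound_conn":
            seen = scans.get(ev["dst_ip"])
            if seen is not None and ev["ts"] - seen <= TEN_DAYS:
                incidents.append(ev)   # scan + later outbound = intrusion
    return incidents

# Hypothetical event stream: scan at t=0, outbound connection 9 days later
events = [
    {"ts": 0, "type": "port_scan", "src_ip": "203.0.113.7"},
    {"ts": 9 * 24 * 3600, "type": "outbound_conn",
     "src_ip": "10.0.0.12", "dst_ip": "203.0.113.7"},
]
print(len(find_intrusions(events)))  # → 1
```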
  • 82. 254 | P a g e IDS/IPS Intrusion prevention systems (IPS), also known as intrusion detection and prevention systems (IDPS), are network security appliances that monitor network and/or system activities for malicious activity. The main functions of intrusion prevention systems are to identify malicious activity, log information about said activity, attempt to block/stop the activity, and report it. Intrusion prevention systems are considered extensions of intrusion detection systems because they both monitor network traffic and/or system activities for malicious activity. The main difference is that, unlike intrusion detection systems, intrusion prevention systems are placed in-line and are able to actively prevent/block intrusions that are detected. More specifically, an IPS can take such actions as sending an alarm, dropping the malicious packets, resetting the connection and/or blocking the traffic from the offending IP address. An IPS can also correct Cyclic Redundancy Check (CRC) errors, de-fragment packet streams, prevent TCP sequencing issues, and clean up unwanted transport and network layer options.
  • 83. 255 | P a g e IPS Types 1. Network-based intrusion prevention system (NIPS): monitors the entire network for suspicious traffic by analyzing protocol activity. 2. Wireless intrusion prevention system (WIPS): monitors a wireless network for suspicious traffic by analyzing wireless networking protocols. 3. Network behavior analysis (NBA): examines network traffic to identify threats that generate unusual traffic flows, such as distributed denial of service (DDoS) attacks, certain forms of malware, and policy violations. 4. Host-based intrusion prevention system (HIPS): an installed software package which monitors a single host for suspicious activity by analyzing events occurring within that host. Detection Methods 1. Signature-Based Detection: This method of detection utilizes signatures, which are attack patterns that are preconfigured and predetermined. A signature-based intrusion prevention system monitors the network traffic for matches to these signatures. Once a match is found, the intrusion prevention system takes the appropriate action. Signatures can be exploit-based or vulnerability-based. Exploit-based signatures analyze patterns appearing in exploits being protected against, while vulnerability-based signatures analyze vulnerabilities in a program, its execution, and the conditions needed to exploit said vulnerability. 2. Statistical anomaly-based detection: This method of detection baselines performance of average network traffic conditions. After a baseline is created, the system intermittently samples network traffic, using statistical analysis to compare the sample to the set baseline. If the activity is outside the baseline parameters, the intrusion prevention system takes the appropriate action. 3. Stateful Protocol Analysis Detection: This method identifies deviations of protocol states by comparing observed events with “predetermined profiles of generally accepted definitions of benign activity.”
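Signature-based detection, as described in method 1 above, can be reduced to a minimal sketch. The two signatures below are simplified illustrations invented for this example; production catalogs (e.g. Snort rulesets) are far richer and match on much more than payload strings:

```python
import re

# Hypothetical exploit-based signatures: named patterns over payloads
SIGNATURES = {
    "directory traversal": re.compile(r"\.\./\.\./"),
    "sql injection probe": re.compile(r"(?i)' or 1=1"),
}

def match_signatures(payload: str):
    """Return the names of all signatures the payload matches --
    the core loop of a signature-based IDS/IPS."""
    return [name for name, pat in SIGNATURES.items() if pat.search(payload)]

print(match_signatures("GET /../../etc/passwd HTTP/1.0"))  # → ['directory traversal']
```

On a match, a real IPS would then take the configured action (alert, drop, reset, or block), as described above.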
  • 84. 256 | P a g e Signature Catalog:
  • 85. 257 | P a g e Alert Monitoring:
  • 86. 258 | P a g e Security Reporting:
  • 87. 259 | P a g e Alert Monitor:
  • 88. 260 | P a g e Anti-Virus: Web content protection & filtering Session Hi-Jacking and Internal Network Man-In-The-Middle XSS Attack Vector The attack flow: 1. The attacker finds an XSS vulnerability in the server/website/web application 2. The attacker creates an encoded URL attack string to decrease the suspicion level 3. The attacker spreads the link to a targeted victim or to a distribution list 4. The victim logs into the web application and clicks the link 5. The attacker’s code is executed under the victim’s credentials and sends the unique session identifier to the attacker
  • 89. 261 | P a g e 6. The attacker plants the unique session identifier in his browser and is now connected to the system as the victim The Man-In-The-Middle Attack Vector • Taking over an active session to a computer system • In order to attack the system, the attacker must know the protocol/method being used to handle the active sessions with the system • In order to attack the system, the attacker must obtain the user’s session identifier (session id, session hash, token, IP) • The most common use of Session Hi-Jacking revolves around textual protocols such as the HTTP protocol, where the identifier is the ASPSESSID/PHPSESSID/JSESSION parameter located in the HTTP Cookie header, aka “The Session Cookie” • The most common scenarios of Session Hi-Jacking are done in combination with: • XSS - where the session cookie is read by the attacker’s JavaScript code • Man-In-The-Middle - where the cookie is sent over clear-text HTTP through the attacker’s machine, which becomes the victim’s gateway
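Step 2 of the XSS attack flow above ("creates an encoded URL attack string to decrease suspicion level") can be illustrated with standard-library URL encoding. The payload, domain and parameter name are hypothetical, shown only to demonstrate how percent-encoding obscures a script inside a link:

```python
from urllib.parse import quote, unquote

# Hypothetical reflected-XSS payload placed in a search parameter
payload = "<script>alert(document.cookie)</script>"
encoded = quote(payload, safe="")   # percent-encode every reserved character

link = "https://example.com/search?q=" + encoded
print(encoded.startswith("%3Cscript%3E"))  # → True: '<script>' no longer visible
print(unquote(encoded) == payload)         # → True: the server decodes it back
```

This is also why filtering on the literal string `<script>` alone is insufficient: the payload only reappears after the server or browser decodes the parameter.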