This talk discusses the idea, approach, and possibilities of firewall rule reviews, which identify incorrect and inefficient rules in current firewall configurations.
Security Information and Event Management (SIEM) - k33a
This document provides an overview of security information and event management (SIEM). It defines SIEM as software and services that combine security information management (SIM) and security event management (SEM). The key objectives of SIEM are to identify threats and breaches, collect audit logs for security and compliance, and conduct investigations. SIEM solutions centralize log collection, correlate events in real-time, generate reports, and provide log retention, forensics and compliance reporting capabilities. The document discusses typical SIEM features, architecture, deployment options, and reasons for SIEM implementation failures.
Effective Cyber Defense Using CIS Critical Security Controls - BSides Delhi
The CIS Critical Security Controls are a recommended set of actions for cyber defense that provide specific and actionable ways to stop today’s most pervasive and dangerous attacks. They are developed, renewed, validated, and supported by a large volunteer community of security experts under the stewardship of the Center for Internet Security (www.cisecurity.org). Contributors, adopters, and supporters are found around the world and come from all types of roles, backgrounds, missions, and businesses. State and local governments, power distributors, transportation agencies, academic institutions, financial services, federal government, and defense contractors are among the hundreds of organizations that have adopted the Controls. They have all implemented the Controls to address the key question: “What needs to be done right now to protect my organization from advanced and targeted attacks?”
This document provides an overview of security information and event management (SIEM) tools and related topics. It discusses getting started with Security Onion and Docker, then covers SIEM concepts like collecting events, creating incidents, and example tools like IBM QRadar and Splunk. It also summarizes related areas like user entity behavior analytics, security orchestration automation and response, threat intelligence attribution and distribution, and security analytics hunting techniques.
Security information and event management (SIEM) technology supports threat detection, compliance and security incident management through the collection and analysis (both near real time and historical) of security events, as well as a wide variety of other event and contextual data sources.
SIEM systems provide security event monitoring and log management by collecting security data from across an organization's network and systems. The first SIEM was developed in 1996 and major players today include IBM QRadar, HP ArcSight, and McAfee Nitro. SIEMs aggregate logs from various sources, use correlation engines to identify related security events, and generate alerts when multiple events indicate a higher risk threat. They provide visibility across an organization's security infrastructure and help with compliance, operations, and forensic investigations. SIEM is important for threat detection, compliance, and gaining insights from security event data.
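The correlation step described above can be sketched in a few lines. The rule below is a hypothetical example, not tied to any particular SIEM product: it raises an alert when several failed logins from one source are followed by a successful login within a short window.

```python
from collections import defaultdict

# Each event is a (timestamp_seconds, source_ip, outcome) tuple,
# as a normalized SIEM pipeline might emit them.
FAILED_THRESHOLD = 3   # failures before a success become suspicious
WINDOW = 300           # correlation window in seconds

def correlate(events):
    """Alert on sources with >= FAILED_THRESHOLD failed logins
    followed by a successful login inside WINDOW seconds."""
    failures = defaultdict(list)  # source_ip -> timestamps of failures
    alerts = []
    for ts, src, outcome in sorted(events):
        if outcome == "failure":
            failures[src].append(ts)
        elif outcome == "success":
            recent = [t for t in failures[src] if ts - t <= WINDOW]
            if len(recent) >= FAILED_THRESHOLD:
                alerts.append((src, len(recent), ts))
            failures[src].clear()
    return alerts

events = [
    (100, "10.0.0.5", "failure"),
    (130, "10.0.0.5", "failure"),
    (160, "10.0.0.5", "failure"),
    (200, "10.0.0.5", "success"),   # brute-force pattern -> alert
    (150, "10.0.0.9", "success"),   # normal login, no alert
]
print(correlate(events))  # [('10.0.0.5', 3, 200)]
```

Real correlation engines apply many such rules at once and weight them by asset criticality, but the pattern of aggregating low-value events into one higher-risk alert is the same.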
Log management involves collecting logs from various sources, normalizing the data into a readable format, and using log intelligence and monitoring tools to detect threats, enable incident response and forensic investigations, and ensure regulatory compliance. It provides a centralized way to search logs, correlate events, and generate reports which can help security teams more efficiently investigate issues compared to traditional methods of reviewing raw logs. Challenges include a lack of standard log formats and capturing all activity, but the market for log management software is large and growing due to its importance for compliance needs.
LTS Secure Security Information and Event Management (SIEM) is a technology that provides real-time analysis of security alerts generated by network hardware and applications.
Use Cases are a formal technique taught in most IS/IT disciplines. This presentation discusses a model to take that methodology and apply it to developing Security Operations and SIEM focused use cases. The template discussed is in use at a major SIEM provider today, and is based on 10 years of implementing SIEM and building up SecOps across 15+ organizations.
The document describes a company's SIEM (Security Information and Event Management) design and integration services. It details a typical 4-phase SIEM project approach: 1) Assessment and requirements gathering, 2) System design, 3) Integration services, and 4) Long-term SIEM co-sourcing services. The company works collaboratively with clients to understand their needs, design a customized SIEM solution, implement the system in development and production environments, and provide ongoing support services.
Get advice from security gurus on how to get up & running with SIEM quickly and painlessly. You'll learn about log collection, log management, log correlation, integrated data sources and how-to leverage threat intelligence into your SIEM implementation.
The document provides a review and comparison of the QRadar, ArcSight, and Splunk SIEM platforms. It summarizes their key capabilities and components. For each solution, it outlines strengths such as integrated monitoring, analytics features, and scalability. It also notes weaknesses such as complexity, customization limitations, and high data volume licensing costs. The comparison finds QRadar well-suited for smaller deployments, ArcSight for medium-large organizations, and notes Splunk's log collection strengths but limited out-of-the-box correlations compared to competitors. Gartner assessments for each platform cover visibility trends, deployment challenges, and roadmap monitoring advice.
End-to-End Security Analytics with the Elastic Stack - Elasticsearch
Interested in staying ahead of the adversary in a shifting security landscape? Learn how to create a centralized security analytics platform with the speed and scale you need for ad hoc analysis during threat detection and hunting exercises.
IoT is an emerging technology that brings both benefits and risks. Because IoT devices can be misused by hackers and attackers, the field of IoT forensics has emerged to identify, collect, and analyze the data on IoT devices.
This document provides an overview of data loss prevention (DLP). It discusses cyber security risks and increasing data breach statistics and costs. It defines DLP and the lifecycle of data protection. Key aspects of a DLP implementation are outlined, including defining objectives and scope, policy setup, data discovery and classification, monitoring and tuning, and reporting. The benefits of visibility, monitoring, and improved protection are highlighted.
SplunkLive! Frankfurt 2019: Splunk at Dachser - Splunk
1. Dachser implemented Splunk as their SIEM to gain evidence-based security management and standardized log management after previously facing challenges evaluating security risks without good log data.
2. They established a Splunk architecture with separate search heads for IT security and IT teams and integrate various data sources like email logs, file access logs, and NetApp storage logs.
3. Use cases like detecting malware in email quarantines and addressing "fat finger syndrome" of accidental file deletions demonstrate how Splunk enhances their security operations and incident response.
"In this session, we will address the current threat landscape, present DDoS attacks that we have seen on AWS, and discuss the methods and technologies we use to protect AWS services. You will leave this session with a better understanding of:
DDoS attacks on AWS as well as the actual threats and volumes that we typically see.
What AWS does to protect our services from these attacks.
How this all relates to the AWS Shared Responsibility Model."
The USB device plugin identified that a USB thumb drive had been used on Tom Warner's computer on October 29, 2004 based on registry entries showing the mounting of drive E on that date. This suggests potential transfer of files from his work computer to an external storage device.
Managing Personally Identifiable Information (PII) - KP Naidu
This document discusses personally identifiable information (PII) and provides guidance on managing PII. It defines PII as information that can be used to identify an individual. The document notes that data breaches involving PII are common and outlines legal issues related to PII. It recommends assessing the confidentiality impact of PII and implementing appropriate controls based on the impact level. Specific steps are outlined to help organizations properly manage PII.
This document discusses Splunk Enterprise Security and its frameworks for analyzing security data. It provides an overview of Splunk's security portfolio and how it addresses challenges with legacy SIEM solutions. Key frameworks covered include Notable Events for streamlining incident management, Asset and Identity for enriching incidents with contextual data, Risk Analysis for prioritizing incidents based on quantitative risk scores, and Threat Intelligence for detecting indicators of compromise in machine data. Interactive dashboards and incident review interfaces are highlighted as ways to investigate threats and monitor the security posture.
- The Security Posture dashboard provides a near real-time overview of an organization's security posture by displaying notable security events.
- The analyst can pivot from this dashboard to the Incident Review dashboard to begin investigating critical notable events.
- Drilling into a notable event on the Incident Review dashboard provides important context about the event such as the affected systems, compliance data, and location to assist the analyst's investigation.
This document discusses best practices for log monitoring. It recommends developing a logging policy to determine what information to collect, centralizing log collection on a dedicated secure server, normalizing log formats, regularly reviewing logs both manually and automatically, implementing log rotation policies based on volume and retention requirements, and using monitoring tools to analyze logs.
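The normalization step recommended above can be illustrated with a short sketch. The log formats and field names here are invented for illustration: two different raw formats are parsed into one common schema so they can be searched and correlated together.

```python
import re

# Two hypothetical raw log formats feeding the same pipeline.
SSH_RE = re.compile(r"sshd\[\d+\]: Failed password for (\S+) from (\S+)")
WEB_RE = re.compile(r'(\S+) - - \[.*?\] "(\S+) (\S+)')

def normalize(line):
    """Map a raw log line onto a common {source, user, src_ip, action} schema."""
    m = SSH_RE.search(line)
    if m:
        return {"source": "sshd", "user": m.group(1),
                "src_ip": m.group(2), "action": "auth_failure"}
    m = WEB_RE.search(line)
    if m:
        return {"source": "web", "user": None,
                "src_ip": m.group(1), "action": f"http_{m.group(2).lower()}"}
    return None  # unknown format -> route to a dead-letter queue for review

print(normalize("Jan  1 sshd[912]: Failed password for root from 203.0.113.7"))
print(normalize('198.51.100.2 - - [01/Jan/2024] "GET /index.html HTTP/1.1" 200'))
```

Once both formats land in the same schema, a single query over `src_ip` or `action` covers every source, which is what makes centralized search and correlation practical.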
Using Assessment Tools on ICS (English) - Digital Bond
Dale Peterson of Digital Bond describes the methodology of using security assessment tools on an operational ICS. He also discusses how to best use the features and functions of these tools.
Stuxnet is a complex malware that targeted industrial control systems in Iran. It used four zero-day exploits and spread through removable drives and local networks to find computers with Siemens Step 7 software to modify PLC code and sabotage industrial systems while avoiding detection. The malware infected over 100,000 hosts worldwide but about 60% were in Iran, its main target. It conducted five attack waves against Iranian organizations from 2009 to 2010.
The New Pentest? Rise of the Compromise Assessment - Infocyte
If an attacker had a foothold in your network today, would you know it?
If they made it past your real-time defense measures (EDR, EPP, AV, UEBA, firewalls, etc.) or an analyst misinterpreted a critical alert, chances are they've entrenched themselves for the long haul. Skilled and organized attackers know long-term persistence in your network is the most critical component to meeting their goal of stealing information, causing damage, or pivoting attacks on other organizations.
Threat hunting is the proactive practice of finding attackers in your environment before they can cause damage (or at least stop the bleeding from continued exposure). Unfortunately, effective threat hunting practices remain out-of-reach for most organizations due to lack of security infrastructure and qualified people to manage advanced endpoint security solutions.
One solution to this problem is to hire a third party to conduct a periodic assessment geared toward discovery of unauthorized access and compromised systems. This is called a "compromise assessment," and compromise assessments have recently become one of the most requested services from top security service providers.
Customers don’t just want to know whether they can be hacked (a good penetration tester will generally conclude “yes”); they want to know if they ARE hacked—right now—and if so, which endpoints/hosts/servers on their network are compromised.
In this presentation, which was originally prepared for Black Hat 2018, Chris Gerritz outlines the growing practice of compromise assessments and the best practices being utilized by some of the largest and most sophisticated managed security service providers (MSSPs) with this offering.
What approaches are most effective?
What data is being utilized?
What are some of the top challenges?
To request a free 100-node compromise assessment or to learn more about Infocyte HUNT — our comprehensive threat hunting platform — and start a free trial, please visit https://try.infocyte.com.
The Next Generation of Security Operations Centre (SOC) - PECB
The document discusses the key aspects of building a next generation Security Operations Centre (SOC). It emphasizes that skilled people, well-defined processes, and integrating new technologies are critical. Specifically, it recommends adopting automation and analytics to analyze large datasets, integrating threat intelligence from multiple sources, and establishing red and blue teams to continuously test defenses. The goal of a next generation SOC is to use predictive analysis of vast security data to improve threat detection, response, and the overall security posture of an organization.
The document provides installation and configuration instructions for the ArcSight Forwarding Connector. It discusses:
1. Sending events from an ArcSight ESM Source Manager to various destinations including another ESM Manager, ArcSight Logger, NSP, CSV files, and McAfee ePO.
2. Standard installation procedures including installing ESM, assigning privileges on the source Manager, and installing the Forwarding Connector.
3. Configuring the Forwarding Connector to send events to different destinations such as another ESM Manager, Logger, NSP, CSV files, McAfee ePO, and HP Operations Manager. It also discusses upgrading, uninstalling, and rolling back the connector.
Detecting and Resolving Firewall Policy Anomalies - Ali Habeeb
This document proposes a framework for detecting and resolving firewall policy anomalies. It first identifies policy conflicts by segmenting the packet space. It then generates action constraints based on a risk assessment and works to resolve conflicts by reordering rules to satisfy the constraints. Finally, it aims to eliminate redundant rules by analyzing the properties of rule subspaces. The overall goal is to provide an innovative approach for managing firewall policy anomalies.
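The shadowing anomaly that such reports flag has a concrete definition: a rule is shadowed when an earlier rule with a different action matches every packet it matches, so it can never fire. A minimal sketch of that check, with a rule format invented for illustration, using Python's standard `ipaddress` module:

```python
import ipaddress

# A rule: (source network, destination port or None for any, action).
rules = [
    ("10.0.0.0/8",     None, "deny"),   # broad deny, evaluated first
    ("10.1.2.0/24",    22,   "allow"),  # shadowed: 10.1.2.0/24 sits inside 10.0.0.0/8
    ("192.168.0.0/16", 80,   "allow"),  # not shadowed
]

def covers(a, b):
    """True if rule a matches every packet that rule b matches."""
    net_a, port_a, _ = a
    net_b, port_b, _ = b
    net_ok = ipaddress.ip_network(net_b).subnet_of(ipaddress.ip_network(net_a))
    port_ok = port_a is None or port_a == port_b
    return net_ok and port_ok

def shadowed(rules):
    """Indices of rules fully covered by an earlier rule with a different action."""
    return [j for j, rj in enumerate(rules)
            if any(covers(ri, rj) and ri[2] != rj[2] for ri in rules[:j])]

print(shadowed(rules))  # [1]
```

Redundancy detection is the symmetric case: an earlier rule with the *same* action covers the later one, so the later rule can be deleted without changing behavior.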
This document provides an overview of firewall policy anomaly reports generated by Firewall Analyzer to help optimize firewall performance. The reports identify shadowed, redundant, generalized, and correlated policies, and recommend policy grouping and cleanup. The latest version also features faster log processing, support for additional firewall brands, and new protocols for fetching firewall configurations.
Model-driven Extraction and Analysis of Network Security Policies (at MoDELS'13) - Jordi Cabot
1. The document describes the configuration files for two firewalls (FW1 and FW2) that control network traffic between public hosts, a DMZ, and an internal network.
2. FW1's Netfilter/iptables configuration sets default policies to drop all traffic, then uses custom chains to allow outgoing SMTP and HTTP from the DMZ to public hosts, as well as incoming SMTP to the DMZ server, while denying access from the local network.
3. FW2's Cisco PIX configuration uses an access list to deny SMTP and HTTP connections from two specific hosts in the DMZ to the server, but permits them from other hosts in the DMZ subnet.
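The first-match semantics that both configurations rely on (Netfilter chains and PIX access lists alike) can be modeled in a few lines. The networks and rules below are a hypothetical reconstruction of the policy described, not the actual configuration files:

```python
import ipaddress

DMZ      = ipaddress.ip_network("172.16.0.0/24")   # assumed DMZ range
INTERNAL = ipaddress.ip_network("192.168.0.0/24")  # assumed internal range

# First-match rule table: (source network, destination port or None, action).
RULES = [
    (INTERNAL, None, "drop"),    # deny all access from the local network
    (DMZ,      25,   "accept"),  # outgoing SMTP from the DMZ
    (DMZ,      80,   "accept"),  # outgoing HTTP from the DMZ
]
DEFAULT = "drop"  # default policy: drop everything not explicitly allowed

def evaluate(src_ip, dst_port):
    """Return the action of the first matching rule, else the default policy."""
    src = ipaddress.ip_address(src_ip)
    for net, port, action in RULES:
        if src in net and (port is None or port == dst_port):
            return action
    return DEFAULT

print(evaluate("172.16.0.10", 25))   # accept  (DMZ host sending mail)
print(evaluate("192.168.0.5", 80))   # drop    (internal net denied first)
print(evaluate("172.16.0.10", 443))  # drop    (no rule -> default policy)
```

Because evaluation stops at the first match, rule *order* is part of the policy: moving the internal-network deny below the accepts would silently change what the firewall permits, which is exactly the class of anomaly the model-driven analysis above is meant to surface.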
5 Under-utilized PCI Requirements and how you can leverage them - Praveen Vackayil
This document discusses the Payment Card Industry Data Security Standards (PCI DSS) and provides an overview of some key requirements including firewall rule reviews, log reviews, penetration testing, and risk assessments. It notes that PCI DSS requirements focus on protecting credit card data and emphasizes security over mere compliance. Various approaches and best practices for conducting reviews, testing, and assessments according to the PCI standards are also outlined.
This document discusses different types of firewalls and how to configure them. It defines application layer firewalls, packet filtering firewalls, and hybrid firewalls. It also describes how to develop a firewall configuration based on an organization's internet policy, including rules for single firewall, dual firewall, and external internet systems architectures. Finally, it provides guidance on designing an effective firewall rule set by ranking traffic types and placing specific rules above general rules.
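The "specific rules above general rules" guidance follows from first-match semantics: whichever rule matches first decides the packet's fate. A minimal sketch (rule predicates invented for illustration):

```python
def first_match(rules, packet):
    """Return the action of the first rule whose predicate matches the packet."""
    for predicate, action in rules:
        if predicate(packet):
            return action
    return "deny"  # default policy when nothing matches

block_telnet  = (lambda p: p["port"] == 23,      "deny")   # specific
allow_all_tcp = (lambda p: p["proto"] == "tcp",  "allow")  # general

pkt = {"proto": "tcp", "port": 23}
```

With the specific rule first the telnet packet is dropped; with the general rule first it slips through, so the specific rule is effectively dead.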
This document provides an overview of networking concepts including the OSI model, IP addressing, and subnetting. It begins with an agenda for a basic training on transmission network management systems and IP networking topics. It then covers each layer of the OSI model in detail and explains concepts like encapsulation and peer-to-peer communication between layers. Finally, it discusses common LAN devices, technologies like Ethernet, IP addressing schemes, subnetting using CIDR notation, and how bits are borrowed from the host portion of an IP address to create subnets.
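The bit-borrowing idea can be demonstrated with the standard-library `ipaddress` module: borrowing 2 host bits from a /24 yields four /26 subnets of 64 addresses each (62 usable hosts after network and broadcast addresses):

```python
import ipaddress

net = ipaddress.ip_network("192.168.0.0/24")

# Borrow 2 host bits -> prefix grows from /24 to /26, producing 2**2 subnets.
subnets = list(net.subnets(prefixlen_diff=2))
for s in subnets:
    print(s)  # 192.168.0.0/26, 192.168.0.64/26, 192.168.0.128/26, 192.168.0.192/26
```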
The document is a presentation on continuous time analog systems. It introduces analog and digital signals, classifies signals as continuous or discrete, and discusses elementary continuous time signals like unit step, ramp, and impulse functions. It describes continuous time systems as those with continuous input and output signals. Key properties of linear and time-invariant continuous systems are explained. Analog signal processing tools like convolution, Fourier transforms, and Laplace transforms are also introduced.
The document discusses web crawling and provides an overview of the process. It defines web crawling as the process of gathering web pages to index them and support search. The objective is to quickly gather useful pages and link structures. The presentation covers the basic operation of crawlers including using a seed set of URLs and frontier of URLs to crawl. It describes common modules in crawler architecture like URL filtering tests. It also discusses topics like politeness, distributed crawling, DNS resolution, and types of crawlers.
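The seed-set-plus-frontier loop described above can be sketched without any networking by substituting a toy link graph for real page fetches (the graph and the filter hook are invented for illustration):

```python
from collections import deque

# Toy link graph standing in for real fetches; an actual crawler would
# download each page and extract its outlinks here.
LINKS = {
    "a": ["b", "c"],
    "b": ["a", "d"],
    "c": [],
    "d": ["c"],
}

def crawl(seeds, url_filter=lambda u: True):
    """Breadth-first crawl: pop from the frontier, 'fetch', enqueue new links."""
    frontier, seen, order = deque(seeds), set(seeds), []
    while frontier:
        url = frontier.popleft()
        order.append(url)                    # "fetch" the page
        for link in LINKS.get(url, []):      # extract outlinks
            if link not in seen and url_filter(link):
                seen.add(link)
                frontier.append(link)
    return order
```

The `url_filter` hook is where the URL-filtering tests (robots.txt, duplicate elimination, politeness constraints) mentioned in the summary would plug in.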
This document provides guidelines for administering the 2014 National Achievement Test (NAT). It outlines the roles and responsibilities of testing staff at different levels, including the Division Testing Coordinator, Private School Supervisor, Chief Examiner, Room Examiner, and Room Supervisor. It also provides instructions for preparing and submitting enrolment data, assigning school IDs, and administering the test for special groups of examinees.
How to Audit Firewall, what are the standard Practices for Firewall Audit, by keyuradmin
Firewalls continue to secure countless organizations across the world and remain the first line of defense against known cyber attacks and network risks. An avalanche of IT-driven change and an evolving threat landscape have placed increased onus on firewalls. At the same time, as enterprises extend their business through internet-driven business models and increasingly collaborative networks, embracing cloud and virtual environments, there is a need to understand how this ties in with the changing role of security technologies such as the firewall. This webinar explains how a tectonic shift in enterprise networking requires rethinking firewall deployment and management for effective security management.
The document discusses the design of FIR (finite impulse response) filters. It introduces FIR filters and covers their advantages and disadvantages. It then discusses various methods for designing FIR filters, including windowing techniques, optimum filter design using the Parks-McClellan algorithm, and the alternation theorem as it relates to filter design. The document provides examples and comparisons of different windowing techniques and concludes by discussing the advantages of FIR filters and limitations.
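The windowing technique mentioned above amounts to truncating the ideal (infinite) sinc impulse response and shaping it with a window. A minimal sketch using a Hamming window and only the standard library:

```python
import math

def fir_lowpass(num_taps, cutoff):
    """Windowed-sinc FIR lowpass: ideal sinc response shaped by a Hamming window.

    cutoff is the normalized cutoff frequency in cycles/sample (0 < cutoff < 0.5).
    """
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        x = n - m / 2  # center the sinc for linear phase
        ideal = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        window = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)  # Hamming
        taps.append(ideal * window)
    return taps

taps = fir_lowpass(21, 0.1)
```

The symmetric tap values are what give the FIR filter its linear-phase property, one of the advantages the document lists.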
A firewall protects networks and computers from unauthorized access. There are two main types - software firewalls that protect individual computers, and hardware firewalls that protect entire networks. A firewall works by inspecting all incoming and outgoing data packets and determining whether to allow or block them based on a set of rules. Firewalls can block hackers, enforce security policies to protect private information, and log internet activity. However, firewalls cannot protect against insider threats, connections not routed through the firewall, or completely new viruses.
Threats are possible attacks. They include:
- The spread of computer viruses
- Infiltration and theft of data by external hackers
- Engineered network overloads triggered by malicious mass e-mailing
- Misuse of computer resources and confidential information by employees
- Unauthorized financial transactions and other kinds of computer fraud conducted in the company's name
- Electronic inspection of corporate computer data by outside parties
- Damage from failure, fire, or natural disasters
The document provides an overview of key concepts for understanding firewalls. It discusses three basic types of firewalls: packet filters, application-level gateways, and stateful inspection firewalls. It also describes how firewalls work by processing packets at different locations including the network interface card, kernel, and application levels using techniques like packet filtering, proxy applications, and user authentication.
NAT maps private IP addresses to public IP addresses, allowing multiple devices on a private network to share a single public IP address to access the Internet. It is commonly used when there is a shortage of IPv4 addresses. There are different types of NAT, including dynamic NAT which maps private addresses to public addresses on a need basis, and NAPT which allows thousands of devices to share one IP address by also mapping port numbers. NAT solves issues like merging networks with duplicate private addresses and changing ISPs without renumbering an entire network.
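The port-mapping trick behind NAPT can be sketched as a pair of translation tables: outbound flows are assigned fresh public ports, and inbound packets are mapped back via the port. Everything below (addresses, port range) is invented for illustration:

```python
import itertools

class Napt:
    """Toy NAPT: maps (private_ip, private_port) pairs onto ports of one public IP."""

    def __init__(self, public_ip, first_port=40000):
        self.public_ip = public_ip
        self.ports = itertools.count(first_port)  # next free public port
        self.out, self.back = {}, {}

    def translate_out(self, src_ip, src_port):
        """Rewrite an outbound flow's source to (public_ip, allocated_port)."""
        key = (src_ip, src_port)
        if key not in self.out:
            port = next(self.ports)
            self.out[key] = (self.public_ip, port)
            self.back[port] = key
        return self.out[key]

    def translate_in(self, dst_port):
        """Map an inbound packet back to the private endpoint; None if unsolicited."""
        return self.back.get(dst_port)

nat = Napt("203.0.113.7")
mapped = nat.translate_out("192.168.1.10", 5555)
```

Because the public port disambiguates flows, thousands of private hosts can share the single public address, which is the point made above.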
Cable modems allow high-speed internet access over existing cable TV networks. A cable modem connects a computer to the cable network via Ethernet, converting signals for transmission. It receives a signal from the cable network and provides internet access to connected devices. Cable modems isolate TV and internet signals to avoid interference, using different frequencies for each. Accessing the internet through cable has advantages like high speeds and using existing infrastructure, but disadvantages include limited availability and security risks.
The optimization and implementation of iptables rules set, by POOJA MEHTA
The document discusses optimizing iptables rules on Linux. It introduces iptables as a packet filter that operates on the TCP/IP stack. It covers the iptables framework and working, and describes iptables rule structure. The main topic is optimizing and realizing the rule set by eliminating duplication, improving filtration efficiency with algorithms, and enhancing system performance and network throughput.
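The simplest of the optimizations mentioned, eliminating duplication, exploits first-match semantics: a later rule identical to an earlier one can never fire. A minimal sketch with an assumed rule representation:

```python
def dedupe(rules):
    """Drop exact duplicate rules, keeping the first occurrence.

    Under first-match evaluation a repeated rule is dead weight: every packet
    it could match is already consumed by the earlier copy.
    """
    seen, out = set(), []
    for r in rules:
        key = (r["src"], r["dst"], r["port"], r["action"])
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out

rules = [
    {"src": "10.0.0.0/8", "dst": "any", "port": 80,  "action": "allow"},
    {"src": "10.0.0.0/8", "dst": "any", "port": 443, "action": "allow"},
    {"src": "10.0.0.0/8", "dst": "any", "port": 80,  "action": "allow"},  # duplicate
]
cleaned = dedupe(rules)
```

A shorter rule set also means fewer comparisons per packet, which is the throughput improvement the document targets.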
The document outlines various techniques used in business analysis across different phases including requirements elicitation, requirements management and communication, enterprise analysis, and solution assessment and validation. It provides a comprehensive overview of planning, conducting, and managing business analysis activities from initial stakeholder engagement through validating solutions.
Advanced Project Analysis and Project Benchmarking with Acumen Cloud™, by Acumen
A presentation on project analysis, visualization and resolution including tips, tools and techniques for improved project intelligence through advanced analytics. Project benchmarking with Acumen Cloud was also introduced.
The document provides an overview of the Foundations of Business Analysis certificate course. The course consists of 3 modules that cover the disciplines and practices of business analysis: Foundations of Business Analysis, Leadership in Business Analysis, and Tools and Techniques in Business Analysis. The introductory module outlines the course content over 12 weeks, covering topics such as business analysis competencies, techniques, requirements elicitation, and case study assignments. The document defines business analysis and compares the roles and certifications of business analysts and project managers.
The document provides a table of contents for a National Guard Black Belt training module on continuous process improvement (CPI). It outlines the course schedule and content by week and phase, including modules on defining problems, measuring processes, analyzing data, improving processes, and controlling results. The training integrates Lean Six Sigma tools and methods and uses simulations and projects to teach CPI approaches.
The document summarizes the usability testing process for the knowledge sharing website Knetwit. It describes personas and scenarios created for target users, a comparative evaluation against other sites, individual heuristic evaluations, a user survey of 65 respondents, and usability testing with 5 students completing tasks. Key findings included issues with finding and downloading notes, limited search capabilities, and difficulties joining courses. The evaluation provided insights to improve the usability of Knetwit's features.
Software can impact many aspects of society and is found almost everywhere. Common problems in software development include projects not fulfilling customer needs, being difficult to extend and improve, lacking documentation, and having poor quality. Software engineering aims to produce software on time, reliably, and completely by applying a systematic and disciplined approach.
Substructural surrogates for learning decomposable classification problems: i..., by kknsastry
This paper presents a learning methodology based on a substructural classification model for solving decomposable classification problems. The proposed method consists of three components: (1) a structural model that represents salient interactions between attributes of the given data, (2) a surrogate model that provides a functional approximation of the output as a function of the attributes, and (3) a classification model that predicts the class of new inputs. The structural model is used to infer the functional form of the surrogate, and its coefficients are estimated using linear regression methods. The classification model uses a maximally accurate, least complex surrogate to predict the output for given inputs. The structural model that yields an optimal classification model is found using an iterative greedy search heuristic. Results show that the proposed method successfully detects the interacting variables in hierarchical problems, groups them into linkage groups, and builds maximally accurate classification models. The initial results on non-trivial hierarchical test problems indicate that the proposed method holds promise and have also shed light on several improvements that could enhance its capabilities.
Usability testing involves identifying users, understanding their needs and goals as well as client needs. It includes conceptual design research, prototyping, and production testing. Methods include interviews, surveys, observations, card sorting, focus groups and user testing. Tests involve an opening, pre-session questionnaire, tasks with pre and post task questionnaires, and a post-session questionnaire to collect performance, issue, behavioral and self-reported metrics. Planning considers equipment, location, questionnaires, tasks and forms.
I. The presentation discusses layout design methodology, including advanced technology, nanometer solutions, and library-based design.
II. It covers semi-automation flows that utilize Virtuoso tools to generate layout from schematics in a connectivity-driven manner, reducing DRC/LVS errors.
III. Expertise in analog layout is also discussed, focusing on topics like references, pumps, regulators, proximity, and design-layout interaction.
A platform for the decision support studio, by jhjsmits
Scenario Navigator is a platform that was originally desktop software, mainly used by analysts and modelers to support simulation projects. It has evolved into a cloud-based simulation-as-a-service platform with a web interface that allows models to be run on servers. The vision is for Scenario Navigator to become the platform for a Decision Support Studio: an integrated, collaborative environment that supports multi-actor decision making through dynamic modeling and simulation across devices as part of a corporate infrastructure.
The document discusses statistical process control and statistical thinking. It outlines key concepts of statistical thinking including process and variation thinking. It emphasizes the importance of understanding variation and using data to quantify variation and measure effects in order to improve processes. It also discusses how statistical thinking can be applied at different levels from executives to managers to workers.
This document discusses quick changeover techniques to improve process efficiency. It begins by outlining an 8-step process improvement methodology. It then defines changeover times and differentiates between traditional and lean thinking regarding changeovers. The key steps to reducing changeover times are identified as separating internal and external changeover activities, converting internal activities to external where possible, and reducing all remaining activities through techniques like parallel operations and automation. The goal is to standardize and simplify changeovers to allow for smaller batch sizes and increased flexibility.
This document discusses quick changeover techniques to improve process efficiency. It begins by outlining an 8-step process improvement methodology. It then defines changeover times and differentiates between traditional and continuous process improvement thinking regarding changeovers. The document explains that quick changeovers can decrease downtime and waste, allowing for increased flexibility through smaller batch sizes. It provides steps to identify internal and external changeover activities, convert internal activities to external to reduce downtime, and further reduce all remaining activities through techniques like parallel operations and automation.
This document summarizes Justyna Zander-Nowicka's doctoral thesis defense on December 19th, 2008 regarding her research on model-based testing of embedded real-time systems in the automotive domain. The thesis proposed a model-based testing approach called MiLEST that uses signal features for automatic test data generation and evaluation. The approach aims to systematically generate functional test cases from models to test embedded systems starting from early development phases.
This document describes Global Catalyst, which provides engineering students projects in various domains and technologies to help nurture their talents. It highlights that projects are guided by experienced professionals and follow an SDLC model. Students can choose from titles in areas like cloud computing, mobile apps, software engineering and more, using technologies like Java, .NET and Android. The process involves initial research, deciding on a domain and title, meeting experts, and executing the project through reviews. Students have the option to obtain government certification for their project.
Research design for Evaluation of Strongly Sustainability Business Model Onto..., by Antony Upward
This document summarizes my overall research design for the strongly sustainable business model ontology (chapter 1) and then provides the detailed research design for the evaluation phase of my design science research in Environmental Studies (chapters 2-10).
For more details about the background on Strongly Sustainable Business Models please see http://slab.ocad.ca/SSBMs_Defining_the_Field and http://www.EdwardJames.biz/Research.
The document discusses fundamentals of performance evaluation of computer and telecommunication systems. It covers topics such as performance evaluation techniques, probability theory, measurement, benchmarking, queueing theory, simulation modeling, and analysis of simulation results. The intended audience are students and professionals interested in learning about performance evaluation of computer and telecommunication systems.
This document provides information on the improve phase of the CPI roadmap for a National Guard Black Belt training module. It outlines an 8-step process for improvement that includes identifying performance gaps, determining root causes, developing and testing countermeasures, and standardizing successful processes. The document also lists activities and tools that can be used in the improve phase, as well as mandatory and recommended deliverables for the improve tollgate, such as a future state process map, implementation plan, pilot results, and storyboard.
This document provides information on the 8-step CPI Roadmap process for improvement projects and the requirements to pass through the "Improve" tollgate. The 8 steps are: 1) Validate the problem 2) Identify performance gaps 3) Set improvement targets 4) Determine root cause 5) Develop countermeasures 6) See countermeasures through 7) Confirm results 8) Standardize successful processes. The tollgate requirements include delivering a solution prioritization, future state process map, implementation plan, pilot plan and results, process capability analysis, control charts, storyboard and barriers/risks identification.
Similar to Firewall Rule Review and Modelling (20)
Source Code Analysis - A Practical Approach, by Marc Ruef
This past Wednesday, Digicomp's Hacking Day was held once again. Over a full day, talks, demos and workshops on information security are given across two tracks. Among others, Marc Ruef gave a talk on source code analysis.
Adventures in a Decade of Tracking and Consolidating Security Vulnerabilities, by Marc Ruef
This document discusses the design and maintenance of a vulnerability database. It outlines important considerations for the design such as what information to include in each entry and prioritizing which details are most important to collect. It also evaluates different sources of vulnerability data like databases, vendor advisories, and vulnerability contributors. Key factors discussed include coverage, speed of updates, visibility of listings, inclusion of technical details and risk ratings.
Code Plagiarism - Technical Detection and Legal Prosecution, by Marc Ruef
The talk discusses the basic problem of code theft and license violations. As an example, the popular case "ATK vs. XXXX" is retold. Using this case, the coderecon tool is introduced to show how stolen code can be identified with technical utilities. Afterwards, the legal aspects of plagiarism and code theft are discussed, including current law and statute articles in Switzerland, Europe/the EU, and worldwide.
This presentation was created for the civil protection organization (Bevölkerungsschutz) of Wettingen. It introduces the new POLYCOM radio technology, which is also used by the police, the border guard corps, and the military police. In addition to a historical overview, it covers the technology's rollout in Switzerland, some of its technical characteristics, its operation, and the basics of radio discipline.
http://www.computec.ch/download.php?view.728
Security Scanner Design Using the Example of httprecon, by Marc Ruef
On 24 September 2009, Marc Ruef (CTO of scip AG) gave a talk at OpenExpo 2009 in Winterthur. It was titled "Security Scanner Design Using the Example of httprecon" and covered three aspects:
* Basic operating principles of security scanners
* Ideal implementation of such a solution
* A concrete comparison with the httprecon project, including its advantages and disadvantages
Link: http://www.scip.ch/?labs.20090925
Programming Foundation Models with DSPy - Meetup Slides, by Zilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
FREE A4 Cyber Security Awareness Posters - Social Engineering part 3, by Data Hops
Free A4 downloadable and printable posters for cyber security, social engineering, and safety and security training. Promote security awareness in the home or workplace. From training provider datahops.com.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf, by Chart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A..., by Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Driving Business Innovation: Latest Generative AI Advancements & Success Story, by Safe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Dandelion Hashtable: beyond billion requests per second on a commodity server, by Antonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, which go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server with a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
HCL Notes and Domino License Cost Reduction in the World of DLAU, by panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your costs through an optimized configuration and keep them low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
HCL Notes and Domino License Cost Reduction in the World of DLAU, by panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and the CCB and CCX licensing model have been a hot topic in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we would like to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary expenses, for example when a person document is used instead of a mail-in for shared mailboxes. We show you such cases and their solutions. And of course we explain the new licensing model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep track of everything. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to use it best
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices to implement right away
Main news related to the CCS TSI 2023 (2023/1695), by Jakub Marek
An English 🇬🇧 translation of the presentation accompanying the talk I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on communications and signalling systems on railways, held at the Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). It was attended by around 500 participants and 200 online followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack, by shyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Monitoring and Managing Anomaly Detection on OpenShift.pdf, by Tosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
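Before wiring a model into Kafka, Prometheus and ArgoCD as the topics above describe, it helps to see the core idea in isolation. The following is a minimal, self-contained sketch of the kind of detector such a pipeline might deploy; the rolling z-score approach, window size and threshold are illustrative assumptions, not the tutorial's actual model.

```python
from collections import deque
from statistics import mean, stdev

class RollingZScoreDetector:
    """Flag readings that deviate strongly from a rolling baseline.

    A deliberately simple stand-in for the kind of model the tutorial
    trains and deploys; window size and threshold are illustrative.
    """

    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value):
        """Return True if `value` is anomalous relative to the window."""
        is_anomaly = False
        if len(self.window) >= 10:  # need a minimal baseline first
            mu = mean(self.window)
            sigma = stdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                is_anomaly = True
        self.window.append(value)
        return is_anomaly

detector = RollingZScoreDetector()
readings = [20.0 + 0.1 * (i % 5) for i in range(40)] + [35.0]
flags = [detector.update(r) for r in readings]
print(flags[-1])  # True: the spike at 35.0 is flagged
```

In a deployment like the one the tutorial describes, `update` would be fed from a Kafka consumer and the flag count exposed as a Prometheus metric.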
1. Firewall Rule Modelling and Review
Marc Ruef
www.scip.ch
SwiNOG 24
10 May 2012
Berne, Switzerland
2. Agenda | Firewall Rule Modelling and Review
1. Intro
   Introduction 2 min
   Who am I? 2 min
   What is the Goal? 2 min
2. Firewall Rule Modelling and Review
   Extraction 4 min
   Parsing 4 min
   Dissection 4 min
   Review 10 min
   Additional Settings 10 min
   Routing Criticality 7 min
   Statistical Analysis 5 min
3. Outro
   Summary 2 min
   Questions 5 min
SwiNOG 24 2/28
3. Introduction | Who am I?
Name: Marc Ruef
Job: Co-Owner / CTO, scip AG, Zürich
Private Website: http://www.computec.ch
Last Book: „The Art of Penetration Testing“ (translated title), Computer & Literatur Böblingen, ISBN 3-936546-49-5
4. Introduction | What is our Goal?
◦ A Firewall Rule Review shall determine
  ◦ Insecure rules
  ◦ Wrong rules
  ◦ Inefficient rules
  ◦ Obsolete rules
◦ I will show
  ◦ Approaches
  ◦ Our methodology
  ◦ Possibilities
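The review rests on the extraction, parsing and dissection steps named in the agenda. A minimal sketch of the parsing/dissection idea, using a deliberately simplified, hypothetical one-line rule syntax (real firewall exports, such as the SonicWALL format discussed in the talk's literature, are considerably more involved):

```python
import re

# Hypothetical rule syntax for illustration: "<action> <src> -> <dst> <proto>/<port>"
RULE_RE = re.compile(
    r"(?P<action>allow|deny)\s+(?P<src>\S+)\s*->\s*(?P<dst>\S+)\s+"
    r"(?P<proto>tcp|udp|icmp)/(?P<port>\d+|any)"
)

def parse_rule(line):
    """Dissect one extracted rule line into a structured record."""
    m = RULE_RE.match(line.strip())
    if not m:
        raise ValueError(f"unparseable rule: {line!r}")
    return m.groupdict()

rule = parse_rule("allow Internet -> DMZ tcp/80")
print(rule)  # {'action': 'allow', 'src': 'Internet', 'dst': 'DMZ', 'proto': 'tcp', 'port': '80'}
```

Once every rule is a structured record like this, the review checks for insecure, wrong, inefficient and obsolete rules become simple queries over the records.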
21. Routing Criticality | Weight Indexing (Example)
Description                             | Source   | Destination | Port  | AV | AC | Au | CI | II | AI | Score
External Web to Web Server              | Internet | DMZ         | t80   | N  | L  | N  | N  | C  | C  | 9.4
External Web for Internal Clients (in)  | LAN      | Internet    | t80   | N  | M  | N  | C  | C  | C  | 9.3
External Web to Customer Site           | Internet | DMZ         | t443  | N  | L  | S  | C  | C  | C  | 9.0
External Mail to Public Mail Server     | Internet | DMZ         | t110  | N  | M  | S  | C  | C  | C  | 8.5
External Remote Access to Servers       | Internet | DMZ         | t22   | N  | M  | S  | C  | C  | C  | 8.5
Internal Access to DNS Servers          | LAN      | DMZ         | u53   | L  | L  | N  | C  | C  | C  | 7.2
Intranet Access for Internal Clients    | LAN      | DMZ         | t80   | L  | L  | N  | P  | C  | C  | 6.8
External Web for Internal Clients (out) | LAN      | Internet    | t80   | L  | L  | S  | C  | C  | C  | 6.8
Internal Remote Access to Servers       | LAN      | DMZ         | t3389 | L  | M  | S  | P  | C  | P  | 5.5
Internal ICMP Echo for Servers          | DMZ      | Internet    | i0,8  | L  | M  | S  | P  | P  | C  | 5.5
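The AV/AC/Au and impact columns in the table are CVSS v2 base metrics, and the listed scores match the CVSS v2 base score equation (the talk's literature slide also references a CVSS article). A sketch of that calculation, with the metric weights from the CVSS v2 specification:

```python
# CVSS v2 base score, as used for the weight indexing above.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}     # Access Vector: Local/Adjacent/Network
AC = {"H": 0.35, "M": 0.61, "L": 0.71}      # Access Complexity: High/Medium/Low
AU = {"M": 0.45, "S": 0.56, "N": 0.704}     # Authentication: Multiple/Single/None
IMPACT = {"N": 0.0, "P": 0.275, "C": 0.66}  # Conf/Integ/Avail: None/Partial/Complete

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - IMPACT[c]) * (1 - IMPACT[i]) * (1 - IMPACT[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f_impact = 0.0 if impact == 0 else 1.176
    score = (0.6 * impact + 0.4 * exploitability - 1.5) * f_impact
    return int(score * 10 + 0.5) / 10  # round to one decimal, half up

# First table row: "External Web to Web Server" with AV:N, AC:L, Au:N, C:N, I:C, A:C
print(cvss2_base("N", "L", "N", "N", "C", "C"))  # 9.4
```

Reproducing a few other rows (e.g. L/L/N with C/C/C gives 7.2, L/M/S with P/C/P gives 5.5) confirms the table's scoring scheme.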
23. Statistical Analysis | Top Findings (Median Last 11 Projects)
[Chart: top findings, median over the last 11 review projects]
24. Statistical Analysis | Reasons for Risks
◦ There are several possible reasons why firewalls are not configured in the most secure way:
  ◦ Mistakes (wrong click, wrong copy & paste, …)
  ◦ Forgotten/Laziness (“I will improve that later…”)
  ◦ Misinformation (vendor suggests ports 10000-50000)
  ◦ Misunderstanding (technical, conceptual)
  ◦ Unknown features (hidden settings)
  ◦ Technical failure (e.g. broken backup import)
25. Outro | Summary
◦ Firewall Rule Reviews help to determine weaknesses in firewall rulesets.
◦ Extracting, parsing and dissecting a ruleset makes the analysis possible.
◦ Common weaknesses are broad definitions of objects, overlapping rules and unsafe protocols.
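The common weaknesses named in the summary can be checked mechanically once a ruleset is in structured form. A minimal sketch of such checks over hypothetical parsed rules (the field names, the "any" convention and the unsafe-port list are illustrative assumptions, not the talk's methodology):

```python
# Illustrative: cleartext remote-access protocols flagged as unsafe.
UNSAFE_PROTOCOLS = {23: "telnet", 21: "ftp"}

# Hypothetical parsed ruleset; "any" marks a broad object definition.
rules = [
    {"src": "any", "dst": "DMZ", "port": 23},   # unsafe protocol
    {"src": "any", "dst": "any", "port": 443},  # broad source and destination
    {"src": "LAN", "dst": "DMZ", "port": 443},  # shadowed by the rule above
]

def review(rules):
    findings = []
    for n, r in enumerate(rules, 1):
        if r["port"] in UNSAFE_PROTOCOLS:
            findings.append(f"rule {n}: unsafe protocol {UNSAFE_PROTOCOLS[r['port']]}")
        if r["src"] == "any" and r["dst"] == "any":
            findings.append(f"rule {n}: overly broad source and destination")
    # A rule fully covered by an earlier, broader rule never matches: overlap.
    for n, r in enumerate(rules, 1):
        for m, earlier in enumerate(rules[: n - 1], 1):
            if (earlier["src"] in ("any", r["src"])
                    and earlier["dst"] in ("any", r["dst"])
                    and earlier["port"] == r["port"]):
                findings.append(f"rule {n}: overlaps rule {m}")
    return findings

findings = review(rules)
for f in findings:
    print(f)
```

Real review tooling must additionally expand address objects and port ranges before comparing rules, which is where the extraction and dissection effort from the earlier slides pays off.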
26. Outro | Literature
◦ Firewall Rule Parsing am Beispiel von SonicWALL (Firewall Rule Parsing Using SonicWALL as an Example), http://www.scip.ch/?labs.20110113
◦ Common Vulnerability Scoring System und seine Probleme (The Common Vulnerability Scoring System and Its Problems), http://www.scip.ch/?labs.20101209
These slides and additional details will be published at http://www.scip.ch/?labs