Incident response, post-facto forensics, and network troubleshooting rely on the ability to quickly extract relevant information. To this end, security analysts and network operators need a system that (i) allows them to express a query directly in domain-specific constructs, (ii) delivers the performance required for interactive analysis, and (iii) keeps up with a continuously arriving stream of semi-structured data.
This talk covers the design and implementation plans of a distributed analytics platform that meets these requirements. Well-proven Google architectures like GFS, BigTable, Chubby, and Dremel heavily influenced the design of the system, which leverages bitmap indexes to meet the interactive query requirements. The goal is to develop a prototype ready for production usage in the next few months and obtain feedback from using it on various large-scale sites serving tens of thousands of machines.
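Bitmap indexes are what make the interactive-query goal plausible: a conjunctive filter reduces to a bitwise AND over precomputed bit vectors. A toy sketch of the idea (class and method names are invented for illustration, not the platform's API):

```python
class BitmapIndex:
    """Maps each distinct value of a field to a bitmap (a Python int
    used as a bit vector) whose set bits mark the rows holding it."""

    def __init__(self):
        self.bitmaps = {}    # value -> int bit vector
        self.num_rows = 0

    def add(self, value):
        # Set the bit for the next row id under this value's bitmap.
        self.bitmaps[value] = self.bitmaps.get(value, 0) | (1 << self.num_rows)
        self.num_rows += 1

    def rows(self, value):
        """Row ids matching `value` (linear scan of the bitmap)."""
        bm = self.bitmaps.get(value, 0)
        return [i for i in range(self.num_rows) if bm >> i & 1]

def intersect(idx_a, val_a, idx_b, val_b):
    # A conjunctive query is a single AND over two bitmaps.
    bm = idx_a.bitmaps.get(val_a, 0) & idx_b.bitmaps.get(val_b, 0)
    n = max(idx_a.num_rows, idx_b.num_rows)
    return [i for i in range(n) if bm >> i & 1]
```

Indexing, say, protocol and port per flow record then answers "all TCP flows on port 80" with one AND, which is why bitmap indexes suit interactive drill-down over append-only data.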
Network forensics - Follow the Bad Rabbit down the wire (casheeew)
We will take a sneak peek into the rabbit hole of network analysis and forensics.
For this I will show you how the recent ransomware Bad Rabbit hops around the wire. We are going to take a look at
basic procedures and tools that help us follow its traces.
Be prepared to dig your own rabbit hole with the links I will offer at the end and follow them at your own risk (;
The document summarizes a presentation on network forensics and lessons learned from the July 2005 London attacks. The presentation covered early adoption of firewalls and DMZs, intrusion prevention systems, the use of fingerprints and DNA in forensics, and the 2004 Madrid train bombings and 2005 London bombings. It discussed the police investigation into the London attacks, including identifying suspects from CCTV footage and a practice run captured on video. The presentation proposed the use of network monitoring tools as a forensic technique and discussed the challenges of detecting slow scan attacks and attacks using random ports or covert channels.
A 1-day short course developed for visiting guests from Tecsup on network forensics, prepared in a day : ]
The requirements/constraints were 5-7 hours of content and that the target audience had very little forensic or networking knowledge. [For that reason, flow analysis was not included as an exercise, discussion of network monitoring solutions was limited, and the focus was on end-node forensics, not networking devices/appliances themselves]
Network forensics is the capture, recording, and analysis of network events and traffic in order to discover the source of security attacks or other problem incidents. It involves systematically capturing and analyzing network traffic and events to trace and prove a network security incident. Network forensics provides crucial network-based evidence that can be used to successfully prosecute criminals. It is a difficult process that depends on maintaining high-quality network information.
This document provides an overview of network sniffing and packet analysis using Wireshark. It discusses why sniffing is useful for understanding network activity, troubleshooting issues, and performing computer forensics. The document outlines topics like the basic techniques of sniffing, an introduction to Wireshark and its features, analyzing common network protocols, and examples of case studies sniffing could be used for. It emphasizes that patience is a prerequisite and encourages interactive discussion.
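As a small concrete taste of the packet-analysis material, the classic libpcap capture format that Wireshark reads opens with a 24-byte global header; a minimal parser (a sketch handling only the classic format, not pcapng) might look like:

```python
import struct

def parse_pcap_header(buf: bytes):
    """Parse the 24-byte libpcap global header; returns (snaplen,
    linktype). Both byte orders are recognized via the magic number."""
    if len(buf) < 24:
        raise ValueError("truncated pcap header")
    magic = buf[:4]
    if magic == b"\xd4\xc3\xb2\xa1":
        endian = "<"          # little-endian capture
    elif magic == b"\xa1\xb2\xc3\xd4":
        endian = ">"          # big-endian capture
    else:
        raise ValueError("not a classic pcap file")
    # version major/minor, tz offset, sigfigs, snaplen, linktype
    (_vmaj, _vmin, _zone, _sigfigs,
     snaplen, linktype) = struct.unpack(endian + "HHiIII", buf[4:24])
    return snaplen, linktype
```

Linktype 1 is Ethernet (DLT_EN10MB), the common case in the kinds of captures discussed here.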
Think network forensics is just for security? Not with today’s 10G (and tomorrow’s 40G/100G) traffic, not to mention new 802.11ac wireless networks with multi-gigabit data rates. Data is traversing these networks so quickly that detailed, real-time analysis is at best a challenge. Network forensics provides key real-time statistics while saving a complete, packet-level recording of all network activity. You don’t need to worry about capturing the problem – your network forensics solution already has, allowing you to go back in time and analyze any network, application, or security condition.
Cloud Forensics: this presentation shows the current state of progress and the challenges that stand today in the world of cloud forensics. Based on extensive Google searching and white papers by Josiah Dykstra and Alan Sherman, the presentation builds up from the basics and compares the conflicting requirements of traditional and cloud forensics.
NIST Cloud Computing Forum and Workshop VIII
July 2015
Cloud Computing Forensic Science
Posted as a courtesy by:
Dave Sweigert
CISA CISSP HCISPP PMP SEC+
Team research paper and project on network vulnerabilities with multiple attacks and defenses:
Cybersecurity
-For this project, our class was paired into teams to attempt to find vulnerabilities in other teams' networks and to successfully breach them.
-My role in this group was to help breach other teams' networks through different attacks such as responder attacks, honeypots, etc.
-The main challenges of this project were finding the vulnerabilities successfully, as the whole team had trouble with each of our different attacks and defenses.
-We learned how to use cybersecurity tools to find vulnerabilities in networks and how to protect against them better. For example, we deployed a honeypot on port 80; when the attacker tried to access our fake server, we were notified. We also deployed a Palo Alto firewall to create our private and secure network. For an attack, we also used password crackers like John the Ripper. This project taught us how to breach networks as a team.
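The honeypot notification described above can be sketched in a few lines (this demo binds an ephemeral localhost port rather than port 80, and "notifies" by recording the source address; a real deployment would raise an alert):

```python
import socket
import threading

def run_honeypot(hits, ready, bound):
    """Listen on a decoy port and record every connection attempt.
    `hits` collects source IPs, `ready` signals the listener is up,
    and `bound` receives the chosen port number."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 0))           # 0 = any free port for the demo
    bound.append(srv.getsockname()[1])   # report which port was chosen
    srv.listen(1)
    ready.set()                          # listener is up
    conn, addr = srv.accept()            # one connection for the demo
    hits.append(addr[0])                 # "notification": log source IP
    conn.close()
    srv.close()
```

The trap's value is that nothing legitimate should ever connect to the decoy, so any hit is a high-confidence signal.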
SECURITY CONSIDERATION IN PEER-TO-PEER NETWORKS WITH A CASE STUDY APPLICATION (IJNSA Journal)
The wide adoption of Peer-to-Peer (P2P) overlay networks has also created vast dangers, because millions of users are not conversant with the potential security risks. The lack of centralized control creates great risks to P2P systems, mainly due to the inability to implement proper authentication approaches for threat management. The best available countermeasures include encryption, administrative oversight, cryptographic protocols, and avoiding personal file sharing and unauthorized downloads. Recently, a new non-DHT-based structured P2P system, built on a Linear Diophantine Equation (LDE) [1], has proved very suitable for designing secure communication protocols. P2P architectures based on this protocol offer simplified methods to integrate symmetric and asymmetric cryptographic solutions without using Transport Layer Security (TLS) or its predecessor, Secure Sockets Layer (SSL).
DDOS ATTACK DETECTION ON INTERNET OF THINGS USING UNSUPERVISED ALGORITHMS (ijfls)
The increase in the deployment of IoT networks has improved the productivity of humans and organisations. However, IoT networks are increasingly becoming platforms for launching DDoS attacks due to the inherently weaker security and resource-constrained nature of IoT devices. This paper focuses on detecting DDoS attacks in IoT networks by classifying incoming network packets at the transport layer as either "Suspicious" or "Benign" using unsupervised machine learning algorithms. In this work, two deep learning algorithms and two clustering algorithms were independently trained for mitigating DDoS attacks. We place emphasis on exploitation-based DDoS attacks, which include TCP SYN-flood attacks and UDP-lag attacks. We used the Mirai, BASHLITE, and CICDDoS2019 datasets to train the algorithms during the experimentation phase. The accuracy score and normalized mutual information score are used to quantify the classification performance of the four algorithms. Our results show that the autoencoder performed best overall, with the highest accuracy across all the datasets.
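The normalized mutual information score used above compares predicted cluster assignments against ground-truth labels without requiring the label names to match. A from-scratch sketch (in practice `sklearn.metrics.normalized_mutual_info_score` is the usual tool):

```python
from collections import Counter
from math import log, sqrt

def nmi(labels_true, labels_pred):
    """Normalized mutual information I(U;V) / sqrt(H(U) * H(V)),
    computed from empirical label frequencies (in nats)."""
    n = len(labels_true)
    pu = Counter(labels_true)              # marginal counts of U
    pv = Counter(labels_pred)              # marginal counts of V
    puv = Counter(zip(labels_true, labels_pred))  # joint counts
    mi = sum((c / n) * log((c / n) / ((pu[u] / n) * (pv[v] / n)))
             for (u, v), c in puv.items())
    hu = -sum((c / n) * log(c / n) for c in pu.values())
    hv = -sum((c / n) * log(c / n) for c in pv.values())
    if hu == 0 or hv == 0:
        return 0.0                         # a constant labeling carries no information
    return mi / sqrt(hu * hv)
```

Because the score is invariant to permuting cluster ids, it suits exactly the unsupervised setting described: a clusterer that flips "Suspicious" and "Benign" labels still scores 1.0.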
1. Statisticians analyzed data from an experiment measuring HTTPS reachability to understand who may have difficulty moving to secure HTTPS-only services.
2. Their analysis found the HTTPS test was statistically harder than the HTTP control test, and factors like ASN, browser, and OS were better predictors of failure than region/country.
3. Multiple causes like network and on-host problems likely contribute to the problem, and while small, it exists globally and could impact some users at internet scale. Reproducible statistical analysis of blinded data is important for policy issues.
The document discusses the history and development of virtual private networks (VPNs). It explains that early VPNs used IPSec but had problems with complexity and interoperability. This led to the development of user-space VPNs using virtual network interfaces and encapsulating IP packets in UDP for transmission over public networks like the internet. OpenVPN is highlighted as an open-source user-space VPN that follows this model and provides a more portable and easier to configure alternative to IPSec VPNs.
Identity theft through keyloggers has become very common in recent years. One of the most usual ways to intercept and steal a victim's data is to use a keylogger that transfers the data back to the attacker. Covert keyloggers exist either as hardware or as software: in the former case they are introduced as devices that can be attached to a computer (e.g. USB sticks), while in the latter case they try to stay invisible and undetectable as software in the operating system. Writing a static keylogger that operates locally on the victim's machine is not very complex. In contrast, creating covert communication between the attacker and the victim while remaining undetectable is more sophisticated. In such a scenario we have to define how data can be delivered to the attacker and how to make efficient use of the channel that transfers the information over the network in order to stay undetectable. In this paper we propose a system based on steganography that takes advantage of a seemingly innocuous social network (Tumblr) in order to avoid direct communication between the victim and the attacker. A core part of this study is the security analysis, which presents experimental results for the system and discusses its surveillance resistance as well as its limitations.
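The covert-channel idea, independent of the paper's Tumblr-specific protocol, can be illustrated with a generic text-steganography sketch that hides bits in the trailing whitespace of an innocuous post (space = 0, tab = 1; this is an illustration of the principle, not the paper's scheme):

```python
def hide(cover_lines, secret: bytes):
    """Append one whitespace character per cover line, encoding one
    bit of the secret payload each."""
    bits = "".join(f"{b:08b}" for b in secret)
    if len(bits) > len(cover_lines):
        raise ValueError("cover text too short for payload")
    out = []
    for i, line in enumerate(cover_lines):
        if i < len(bits):
            out.append(line + (" " if bits[i] == "0" else "\t"))
        else:
            out.append(line)
    return out

def reveal(stego_lines):
    """Recover the payload from trailing-whitespace bits."""
    bits = "".join("0" if line[-1] == " " else "1"
                   for line in stego_lines if line and line[-1] in " \t")
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8))
```

The channel is slow (one bit per line) but survives casual inspection, which mirrors the trade-off the paper analyzes for its social-network carrier.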
Layered Approach for Preprocessing of Data in Intrusion Prevention Systems (Editor IJCATR)
Due to the extensive growth of the Internet and the increasing availability of tools and methods for intruding into and attacking networks, intrusion detection has become a critical component of network security. The TCP/IP protocol suite is the de facto standard for communication on the Internet, and the underlying vulnerabilities in its protocols are the root cause of intrusions. An intrusion detection system is therefore an important element of network security; it processes real-time data, which leads to a high-dimensional problem. Processing large numbers of packets in real time is very difficult and costly, so data preprocessing is necessary to remove redundant and unwanted information from packets and clean the network data. Here, we focus on two important aspects of intrusion detection: accuracy and performance. The layered approach of the TCP/IP model can be applied to packet preprocessing to achieve earlier and faster intrusion detection. Motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIPS. In this paper we demonstrate that high attack detection accuracy can be achieved by using a layered approach to data preprocessing. To reduce the false positive rate and increase detection efficiency, the paper proposes a framework for preprocessing in an intrusion prevention system. We experimented with real-time network traffic as well as the KDDCup99 dataset.
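The layered idea is that cheap lower-layer checks run first and discard most traffic before any expensive higher-layer parsing happens. A hedged sketch (the field names, subnet, and port set below are illustrative, not the paper's framework):

```python
def ip_layer(pkt):
    # Network layer: drop traffic not addressed to the monitored subnet.
    return pkt if pkt.get("dst_ip", "").startswith("10.0.0.") else None

def transport_layer(pkt):
    # Transport layer: keep only TCP packets to ports the IDS models.
    ok = pkt.get("proto") == "tcp" and pkt.get("dport") in {22, 80, 443}
    return pkt if ok else None

def app_layer(pkt):
    # Application layer: strip fields the detector does not use
    # (the redundancy-removal step).
    return {k: v for k, v in pkt.items() if k != "raw_payload"}

def preprocess(packets, layers=(ip_layer, transport_layer, app_layer)):
    """Run each packet through the layer pipeline; a None from any
    layer drops the packet early."""
    for pkt in packets:
        for layer in layers:
            pkt = layer(pkt)
            if pkt is None:
                break
        else:
            yield pkt
```

Ordering the filters from cheapest to most expensive is what yields the "early and faster detection" the abstract claims.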
Network Security: Experiment of Network Health Analysis At An ISP (CSCJournals)
This paper presents the findings of an analysis performed at an internet service provider (ISP). Based on NetFlow data collected and analyzed using nfdump, it assessed how healthy the ISP's network was. The findings have been instrumental in prompting reflection about reshaping the network architecture, and they have also demonstrated the need for a consistent monitoring system.
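The kind of per-host health summary such an analysis produces can be sketched from flow records exported as CSV (the column names `srcip` and `bytes` are assumptions for the example; nfdump's actual export format depends on its output options):

```python
import csv
import io
from collections import Counter

def top_talkers(flow_csv: str, n: int = 3):
    """Aggregate bytes per source IP from CSV flow records and
    return the n heaviest senders, a basic network-health view."""
    totals = Counter()
    for row in csv.DictReader(io.StringIO(flow_csv)):
        totals[row["srcip"]] += int(row["bytes"])
    return totals.most_common(n)
```

Spotting a single host that dominates the byte count is often the first hint of misconfiguration, abuse, or an infected machine.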
How to detect middleboxes: guidelines on a methodology (csandit)
Internet middleboxes such as VPNs, firewalls, and proxies can significantly change the handling of traffic streams, and they play an increasingly important role in various types of IP networks. If end hosts can detect them, these hosts can make beneficial, and in some cases crucial, improvements in security and performance. But because middleboxes have widely varying behavior and effects on the traffic they handle, no single technique has been discovered that can detect all of them. Devising a mechanism to detect any particular type of middlebox interference involves many design decisions and has numerous dimensions. One approach to managing the complexity of this process is to provide a set of systematic guidelines. This paper is the first attempt to introduce a set of general guidelines (as well as the rationale behind them) to assist researchers in devising methodologies for end hosts to detect middleboxes. The guidelines presented here take some inspiration from the previous work of other researchers using various and often ad hoc approaches; they are, however, mainly based on our own experience with research on the detection of middleboxes. To assist researchers in using these guidelines, we also provide an example of how to bring them into play for the detection of network compression.
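The network-compression example lends itself to a small illustration: such probes exploit the fact that only compressible traffic shrinks in transit, so sending one compressible and one incompressible payload of equal size produces a measurable byte-count or timing asymmetry when a compressing middlebox is on the path. A local sketch of the asymmetry itself (not the paper's methodology):

```python
import os
import zlib

def compression_ratio(payload: bytes) -> float:
    """Deflated size relative to the original size."""
    return len(zlib.compress(payload)) / len(payload)

# The two probe payloads an end host might transfer: a compressor on
# the path shrinks only the first, revealing its presence.
compressible = b"A" * 4096          # deflates to a few dozen bytes
incompressible = os.urandom(4096)   # random data barely deflates at all
```

In a real probe the end host would compare transfer times or counted bytes for the two payloads rather than compressing locally; the local ratios merely show why the asymmetry exists.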
This document discusses security challenges in wireless sensor networks. It outlines key challenges like limited energy and communication capabilities as sensors are often deployed in accessible areas. It discusses approaches for secure key establishment, privacy concerns around surveillance, threats like denial of service attacks, and the need for secure routing, intrusion detection, and data aggregation given the resource constraints of sensor networks. Research is still needed to address security challenges posed by the unique aspects of sensor network environments and applications.
Burning Down the Haystack to Find the Needle: Security Analytics in Action (Josh Sokol)
This document discusses security analytics and how analyzing data from multiple security tools can provide greater visibility into threats. It introduces Josh Sokol and Walter Johnson who will discuss how security tools often work in silos and how an ecosystem where they can share data can help answer questions like whether a system is under attack. Network flow data is described as important "glue" that can tie events together to illustrate attack progressions.
USING A DEEP UNDERSTANDING OF NETWORK ACTIVITIES FOR SECURITY EVENT MANAGEMENT (IJNSA Journal)
With the growing deployment of host-based and network-based intrusion detection systems in increasingly large and complex communication networks, managing low-level alerts from these systems becomes critically important. Probes from multiple distributed firewalls (FWs), intrusion detection systems (IDSs), or intrusion prevention systems (IPSs) are collected throughout a monitored network, so that large series of alerts (alert streams) need to be fused. An alert indicates abnormal behavior, which could potentially be a sign of an ongoing cyber attack. Unfortunately, in a real data communication network, administrators cannot manage the large number of alerts occurring every second, particularly since most alerts are false positives. Hence, an emerging track of security research has focused on alert correlation to better separate true positives from false positives. To achieve this goal we introduce Mission Oriented Network Analysis (MONA). This method builds on data correlation to derive network dependencies and manage security events by linking incoming alerts to those dependencies.
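The dependency-linking step can be sketched as follows (the dependency map, host names, and alert format are invented for the example, not MONA's actual data model):

```python
# Known service dependencies: host -> hosts it depends on.
DEPENDENCIES = {
    "web01": {"db01", "auth01"},
    "auth01": {"db01"},
}

def related(a, b):
    """True if the two hosts share a dependency edge (or are equal)."""
    return (a == b
            or b in DEPENDENCIES.get(a, set())
            or a in DEPENDENCIES.get(b, set()))

def correlate(alerts):
    """Group alerts whose hosts lie on a common dependency edge, so
    one incident surfaces instead of many isolated alerts."""
    incidents = []                 # each: {"hosts": set, "alerts": list}
    for alert in alerts:
        host = alert["host"]
        for inc in incidents:
            if any(related(host, h) for h in inc["hosts"]):
                inc["hosts"].add(host)
                inc["alerts"].append(alert)
                break
        else:
            incidents.append({"hosts": {host}, "alerts": [alert]})
    return incidents
```

Alerts on a web server and on the database it depends on collapse into one incident, while an alert on an unrelated host stays separate, which is the false-positive triage the abstract describes.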
Network Attack and Intrusion Prevention System (Deris Stiawan)
(1) The document discusses network attack and intrusion prevention systems. It describes how intrusion prevention systems (IPS) aim to detect and block threats in online traffic in real-time, beyond just detecting threats like intrusion detection systems (IDS).
(2) Feature extraction from network traffic is important for IPS to analyze without being overwhelmed by raw data. The document examines relevant features to monitor and criteria for deciding what is important to track.
(3) Experimental testing is needed to evaluate IPS performance. The document outlines stages for training systems, testing methodologies, and summarizing test results. This helps the IPS avoid unexpected outcomes and ensures continuous monitoring.
AUTHENTICATION USING TRUST TO DETECT MISBEHAVING NODES IN MOBILE AD HOC NETWO... (IJNSA Journal)
Providing security in a mobile ad hoc network is a crucial problem due to its open shared wireless medium, multi-hop and dynamic nature, constrained resources, and lack of administration and cooperation. Routing protocols are traditionally designed to handle the routing operation, but in practice they may be affected by misbehaving nodes that try to disturb normal routing by launching different attacks intended to degrade or collapse overall network performance. Detecting trusted nodes therefore ensures authentication, and secure routing can be expected. In this article we propose a Trust and Q-learning based Security (TQS) model to detect misbehaving nodes over the Ad Hoc On-Demand Distance-Vector (AODV) routing protocol. We avoid misbehaving nodes by calculating an aggregated reward, based on the Q-learning mechanism, from their historical forwarding and responding behaviour, so that misbehaving nodes can be isolated.
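The aggregated-reward idea can be sketched as a tabular Q-value per neighbour, updated from observed forwarding behaviour, with a threshold for isolation (the learning rate and threshold below are illustrative, not the TQS paper's values):

```python
ALPHA = 0.3        # learning rate for the Q-update
THRESHOLD = 0.4    # below this, the node is treated as misbehaving

def update_trust(q, node, forwarded: bool):
    """Q(n) <- Q(n) + alpha * (reward - Q(n)); reward is 1 if the
    node forwarded the packet as expected, else 0. Unknown nodes
    start at a neutral 0.5."""
    old = q.get(node, 0.5)
    reward = 1.0 if forwarded else 0.0
    q[node] = old + ALPHA * (reward - old)
    return q[node]

def is_isolated(q, node):
    """A node whose aggregated trust fell below the threshold is
    excluded from route selection."""
    return q.get(node, 0.5) < THRESHOLD
```

The exponential averaging means a few dropped packets quickly push a neighbour below the threshold, while a long good history makes the trust score resistant to occasional losses.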
The Basics of Digital Forensics: The Primer for Getting Started in Dig... (AngelinaJacobs2)
The Basics of Digital Forensics provides a foundation for people new to the digital forensics field. This book teaches you how to conduct examinations by discussing what digital forensics is, the methodologies used, key tactical concepts, and the tools needed to perform examinations. Details on digital forensics for computers, networks, cell phones, GPS, the cloud, and the Internet are discussed. You will also learn how to collect evidence, document the scene, and recover deleted data. The new Second Edition provides completely up-to-date, real-world examples and all the key technologies used in digital forensics, as well as new coverage of network intrusion response, how hard drives are organized, and electronic discovery. You'll also learn how to incorporate quality assurance into an investigation, how to prioritize evidence items to examine (triage), case processing, and what goes into making an expert witness. The Second Edition also features expanded resources and references, including online resources that keep you current, sample legal documents, and suggested further reading. Learn what digital forensics entails; build a toolkit and prepare an investigative plan; understand the common artifacts to look for in an exam. The Second Edition features all-new coverage of hard drives, triage, network intrusion response, and electronic discovery, as well as updated case studies, expert interviews, and expanded resources and references.
LO-PHI: Low-Observable Physical Host Instrumentation for Malware Analysis (Pietro De Nicolao)
Presentation of the paper "LO-PHI: Low-Observable Physical Host Instrumentation for Malware Analysis" for the course Advanced Topics in Computer Security of Prof. Stefano Zanero.
Source and further information: https://github.com/pietrodn/lo-phi
Collecting and analyzing network-based evidence (CSITiaesprime)
Since nearly the beginning of the Internet, malware has been a significant deterrent to productivity for end users, both personal and business-related. Due to the pervasiveness of digital technologies in all aspects of human life, it is increasingly unlikely that no digital device is involved, as goal, medium, or simply 'witness', in a criminal event. Forensic investigations include the collection, recovery, analysis, and presentation of information stored on network devices and related to network crimes. These activities often involve a wide range of analysis tools and the application of different methods. This work presents methods that help digital investigators correlate and present information acquired from forensic data, with the aim of obtaining a more valuable reconstruction of events or actions to reach case conclusions. The main aim of network forensics is to gather evidence; additionally, the evidence obtained during the investigation must be produced through a rigorous procedure in a legal context.
Despite billions spent on enterprise cyber security, breaches from advanced attacks, costing millions, are occurring on a daily basis.
Our Solution: Complete Near Real-time Network Security Visibility and Awareness: If security analysts could see everything occurring on their network in real-time, breaches would occur but there would never be catastrophic damage – breach reaction would be almost instantaneous. Novetta Cyber Analytics is a linchpin enterprise security solution that enables security analysts, for the first time, to see a complete, near real-time, uncorrupted picture of their entire network. Security analysts then ask and receive answers to subtle questions – at the speed of thought – to enable detection, triage and response to breaches as they occur.
The Benefits: Increase events responded to by an estimated 30X.
Substantially reduce or eliminate damage from breaches.
Create a dramatically more effective and efficient security team.
Maximize current security infrastructure investment.
Be far more confident that your network is actually secure.
OUR DIFFERENTIATORS:
Understands the truth of what is happening on your network.
Detects advanced attacks that have breached perimeter defenses.
Develops a complete, near real-time understanding of suspicious behaviour.
Develops a battleground understanding of your entire security situation.
Augments current security solutions.
Proven speed, scale and effectiveness on the largest, most attacked networks on earth.
Cloud Forensics...this presentation shows you the current state of progress and challenges that stand today in the world of CLOUD FORENSICS.Based on lots of Google search and whites by Josiah Dykstra and Alan Sherman.The presentation builds right from basics and compares the conflicting requirements between traditional and Clod Forensics.
NIST Cloud Computing Forum and Workshop VIII
July 2015
Cloud Computing Forensic Science
Posted as a courtesy by:
Dave Sweigert
CISA CISSP HCISPP PMP SEC+
Team research paper and project on network vulnerabilities with multiple attacks and defesnses:
Cybersecurity
-For this project, our class was paired with teams to attempt to find vulnerabilities in other teams networks and to successfully beach their network.
-My role in this group was to help breach other team vulnerabilities through different attacks like responder attacks, honeypots, etc.
-The main challenges of this project were trying to find the vulnerabilities successfully, as the whole team had troubles with each of our different attacks and defenses.
-We learned how to use cybersecurity tools to help find vulnerabilities in networks and how to protect against them better. For example, in the honeypot we used we deployed it to port 80, when the attacker tried to access our fake server we were notified. We also deployed palto alto firewall to create our private and secure network. For an attack, we also used password crackers like john the ripper. This project taught us how to breach networks as a team.
SECURITY CONSIDERATION IN PEER-TO-PEER NETWORKS WITH A CASE STUDY APPLICATIONIJNSA Journal
Peer-to-Peer (P2P) overlay networks wide adoption has also created vast dangers due to the millions of users who are not conversant with the potential security risks. Lack of centralized control creates great risks to the P2P systems. This is mainly due to the inability to implement proper authentication approaches for threat management. The best possible solutions, however, include encryption, utilization of administration, implementing cryptographic protocols, avoiding personal file sharing, and unauthorized downloads. Recently a new non-DHT based structured P2P system is very suitable for designing secured communication protocols. This approach is based on Linear Diophantine Equation (LDE) [1]. The P2P architectures based on this protocol offer simplified methods to integrate symmetric and asymmetric cryptographies’ solutions into the P2P architecture with no need of utilizing Transport Layer Security (TLS), and its predecessor, Secure Sockets Layer (SSL) protocols.
DDOS ATTACK DETECTION ON INTERNET OF THINGS USING UNSUPERVISED ALGORITHMSijfls
The increase in the deployment of IoT networks has improved productivity of humans and organisations.
However, IoT networks are increasingly becoming platforms for launching DDoS attacks due to inherent
weaker security and resource-constrained nature of IoT devices. This paper focusses on detecting DDoS
attack in IoT networks by classifying incoming network packets on the transport layer as either
“Suspicious” or “Benign” using unsupervised machine learning algorithms. In this work, two deep
learning algorithms and two clustering algorithms were independently trained for mitigating DDoS
attacks. We lay emphasis on exploitation based DDOS attacks which include TCP SYN-Flood attacks and
UDP-Lag attacks. We use Mirai, BASHLITE and CICDDoS2019 dataset in training the algorithms during
the experimentation phase. The accuracy score and normalized-mutual-information score are used to
quantify the classification performance of the four algorithms. Our results show that the autoencoder
performed overall best with the highest accuracy across all the datasets.
1. Statisticians analyzed data from an experiment measuring HTTPS reachability to understand who may have difficulty moving to secure HTTPS-only services.
2. Their analysis found the HTTPS test was statistically harder than the HTTP control test, and factors like ASN, browser, and OS were better predictors of failure than region/country.
3. Multiple causes like network and on-host problems likely contribute to the problem, and while small, it exists globally and could impact some users at internet scale. Reproducible statistical analysis of blinded data is important for policy issues.
The document discusses the history and development of virtual private networks (VPNs). It explains that early VPNs used IPSec but had problems with complexity and interoperability. This led to the development of user-space VPNs using virtual network interfaces and encapsulating IP packets in UDP for transmission over public networks like the internet. OpenVPN is highlighted as an open-source user-space VPN that follows this model and provides a more portable and easier to configure alternative to IPSec VPNs.
Identity theft through keyloggers has become very popular the last years. One of the most common ways to intercept and steal victim's data are to use a keylogger that transfers data back to the attacker. Covert keyloggers exist either as hardware or software. In the former case they are introduced as devices that can be attached to a computer (e.g. USB sticks), while in the latter case they try to stay invisible and undetectable as a software in the operating system. Writing a static keylogger which operates locally in victim's machine is not very complex. In contrast, the creation of covert communication between the attacker and the victim, and still remain undetectable is more sophisticated. In such a scenario we have to define how data can be delivered to the attacker and how we can make an efficient use of the channel that transfers the information over the network in order to stay undetectable. In this paper we propose a system based on Steganography that takes advantage of a seemingly innocuous Social Network (Tumblr) in order to avoid direct communication between the victim and the attacker. A core part of this study is the security analysis which is also discussed by presenting experimental results of the system and describing issues regarding surveillance resistance of the system as well as limitations.
Layered Approach for Preprocessing of Data in Intrusion Prevention SystemsEditor IJCATR
Due to the extensive growth of the Internet and the increasing availability of tools and methods for intruding into and attacking networks, intrusion detection has become a critical component of network security. The TCP/IP protocol suite is the de facto standard for communication on the Internet, and the underlying vulnerabilities in its protocols are the root cause of intrusions. An intrusion detection system therefore becomes an important element of network security that monitors real-time data and leads to a high-dimensional problem. Processing large numbers of packets and data in real time is very difficult and costly, so data preprocessing is necessary to remove redundant and unwanted information from packets and clean network data. Here, we focus on two important aspects of intrusion detection: accuracy and performance. The layered approach of the TCP/IP model can be applied to packet preprocessing to achieve early and faster intrusion detection. The motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIPS. In this paper we demonstrate that high attack detection accuracy can be achieved by using a layered approach for data preprocessing. To reduce the false positive rate and increase detection efficiency, the paper proposes a framework for preprocessing in intrusion prevention systems. We experimented with real-time network traffic as well as the KDDCup99 dataset for our research.
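The layered idea can be illustrated with a short sketch: walk a packet one protocol layer at a time, keep only the header fields a detector needs, and discard the payload. The field selection below is an assumption for illustration, not the paper's exact feature set.

```python
import struct

def preprocess(packet: bytes) -> dict:
    """Layered preprocessing sketch: parse Ethernet, then IP, then TCP,
    keeping only detection-relevant header fields and dropping payload."""
    features = {}
    # Layer 2: Ethernet (14 bytes) - keep the EtherType only
    (eth_type,) = struct.unpack("!H", packet[12:14])
    features["eth_type"] = eth_type
    if eth_type != 0x0800:                 # not IPv4: stop early
        return features
    # Layer 3: IPv4 - keep protocol and addresses
    ip = packet[14:34]
    ihl = (ip[0] & 0x0F) * 4               # header length in bytes
    features["proto"] = ip[9]
    features["src"] = ".".join(map(str, ip[12:16]))
    features["dst"] = ".".join(map(str, ip[16:20]))
    # Layer 4: TCP - keep ports and flags, discard the payload entirely
    if features["proto"] == 6:
        tcp = packet[14 + ihl:]
        features["sport"], features["dport"] = struct.unpack("!HH", tcp[:4])
        features["flags"] = tcp[13]
    return features
```

Because each layer can terminate the walk early (e.g. non-IPv4 frames), the cheap checks run first, which is exactly the performance argument the paper makes for layering.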
Network Security: Experiment of Network Health Analysis At An ISP - CSCJournals
This paper presents the findings of an analysis performed at an internet service provider (ISP). Based on netflow data collected and analyzed using nfdump, it helped assess how healthy the ISP's network is. The findings have been instrumental in rethinking the network architecture and have also demonstrated the need for a consistent monitoring system.
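The kind of aggregation nfdump performs on exported flow records (e.g. ranking top talkers) can be mimicked in a few lines; the record fields below are illustrative, not nfdump's actual export format.

```python
from collections import Counter

def top_talkers(flows, n=3):
    """Rank source addresses by total bytes sent - the classic first
    question in a network health check. `flows` is a list of dicts
    with hypothetical 'src' and 'bytes' keys."""
    totals = Counter()
    for flow in flows:
        totals[flow["src"]] += flow["bytes"]
    return totals.most_common(n)
```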
How to Detect Middleboxes: Guidelines on a Methodology - csandit
Internet middleboxes such as VPNs, firewalls, and proxies can significantly change the handling of traffic streams. They play an increasingly important role in various types of IP networks. If end hosts can detect them, these hosts can make beneficial, and in some cases crucial, improvements in security and performance. But because middleboxes have widely varying behavior and effects on the traffic they handle, no single technique has been discovered that can detect all of them. Devising a mechanism to detect any particular type of middlebox interference involves many design decisions and has numerous dimensions. One approach to managing the complexity of this process is to provide a set of systematic guidelines. This paper is the first attempt to introduce a set of general guidelines (as well as the rationale behind them) to assist researchers in devising methodologies for end hosts to detect middleboxes. The guidelines presented here take some inspiration from the previous work of other researchers using various and often ad hoc approaches. These guidelines, however, are mainly based on our own experience with research on the detection of middleboxes. To assist researchers in using these guidelines, we also provide an example of how to bring them into play for the detection of network compression.
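As a concrete instance of the paper's running example, an end host can probe for a compressing middlebox by comparing how much a highly compressible payload shrinks relative to an incompressible one of equal length. The sketch below simulates the compressing link with zlib; on a real path the two on-wire sizes would come from transfer-time or byte-count measurements, and the threshold is a guess.

```python
import os
import zlib

def on_wire_size(payload: bytes) -> int:
    # Stand-in for a compressing middlebox: what a link-layer
    # compressor would actually transmit (illustrative, zlib-based).
    return len(zlib.compress(payload))

def detect_compression(probe_len: int = 2000, threshold: float = 0.5) -> bool:
    """End-host heuristic: if a repetitive probe 'travels' much smaller
    than an incompressible probe of the same length, a compressing
    middlebox is likely on the path."""
    compressible = b"A" * probe_len
    incompressible = os.urandom(probe_len)   # random data barely compresses
    ratio = on_wire_size(compressible) / on_wire_size(incompressible)
    return ratio < threshold
```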
This document discusses security challenges in wireless sensor networks. It outlines key challenges like limited energy and communication capabilities as sensors are often deployed in accessible areas. It discusses approaches for secure key establishment, privacy concerns around surveillance, threats like denial of service attacks, and the need for secure routing, intrusion detection, and data aggregation given the resource constraints of sensor networks. Research is still needed to address security challenges posed by the unique aspects of sensor network environments and applications.
Burning Down the Haystack to Find the Needle: Security Analytics in Action - Josh Sokol
This document discusses security analytics and how analyzing data from multiple security tools can provide greater visibility into threats. It introduces Josh Sokol and Walter Johnson who will discuss how security tools often work in silos and how an ecosystem where they can share data can help answer questions like whether a system is under attack. Network flow data is described as important "glue" that can tie events together to illustrate attack progressions.
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
USING A DEEP UNDERSTANDING OF NETWORK ACTIVITIES FOR SECURITY EVENT MANAGEMENT - IJNSA Journal
With the growing deployment of host-based and network-based intrusion detection systems in increasingly large and complex communication networks, managing low-level alerts from these systems becomes critically important. Probes of multiple distributed firewalls (FWs), intrusion detection systems (IDSs), or intrusion prevention systems (IPSs) are collected throughout a monitored network, such that large series of alerts (alert streams) need to be fused. An alert indicates abnormal behavior, which could potentially be a sign of an ongoing cyber attack. Unfortunately, in a real data communication network, administrators cannot manage the large number of alerts occurring per second, in particular since most alerts are false positives. Hence, an emerging track of security research has focused on alert correlation to better distinguish true positives from false positives. To achieve this goal we introduce Mission Oriented Network Analysis (MONA). This method builds on data correlation to derive network dependencies and manage security events by linking incoming alerts to network dependencies.
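A minimal sketch of the linking step, assuming the dependencies have already been derived: each alert is annotated with the mission services whose transitive dependency chains contain the alerting host, so alerts touching no chain can be deprioritized. The data model here is illustrative, not MONA's actual representation.

```python
def correlate(alerts, dependencies, mission_hosts):
    """Annotate each alert with the mission services that (transitively)
    depend on the alerting host. `dependencies` maps a node to the
    nodes it depends on; `mission_hosts` are the services we care about."""
    def reachable(start):
        # Depth-first walk of the dependency graph from one service.
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            for dep in dependencies.get(node, []):
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
        return seen

    impact = {m: reachable(m) for m in mission_hosts}
    ranked = []
    for alert in alerts:
        affected = [m for m, deps in impact.items() if alert["host"] in deps]
        ranked.append({**alert, "affects": affected})
    return ranked
```

An alert whose `affects` list is empty touches no mission dependency chain and is a candidate false positive under this model.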
Network Attack and Intrusion Prevention System - Deris Stiawan
(1) The document discusses network attack and intrusion prevention systems. It describes how intrusion prevention systems (IPS) aim to detect and block threats in online traffic in real-time, beyond just detecting threats like intrusion detection systems (IDS).
(2) Feature extraction from network traffic is important for IPS to analyze without being overwhelmed by raw data. The document examines relevant features to monitor and criteria for deciding what is important to track.
(3) Experimental testing is needed to evaluate IPS performance. The document outlines stages for training systems, testing methodologies, and summarizing test results. This helps IPS avoid unexpected outcomes and ensures continuous monitoring.
AUTHENTICATION USING TRUST TO DETECT MISBEHAVING NODES IN MOBILE AD HOC NETWO... - IJNSA Journal
Providing security in Mobile Ad Hoc Networks is a crucial problem due to their open shared wireless medium, multi-hop and dynamic nature, constrained resources, and lack of administration and cooperation. Routing protocols are traditionally designed to handle routing operations, but in practice they may be affected by misbehaving nodes that try to disturb normal routing by launching different attacks with the intention of degrading or collapsing overall network performance. Detecting trusted nodes therefore ensures authentication and enables secure routing. In this article we propose a Trust and Q-learning based Security (TQS) model to detect misbehaving nodes over the Ad Hoc On-Demand Distance-Vector (AODV) routing protocol. We avoid misbehaving nodes by calculating an aggregated reward, based on the Q-learning mechanism, from their historical forwarding and responding behaviour; in this way misbehaving nodes can be isolated.
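The reward-aggregation idea can be sketched with a one-value-per-neighbour Q table. The reward scheme, learning rate, and threshold below are illustrative simplifications of the paper's TQS model, not its actual parameters.

```python
def update_trust(q, node, forwarded, alpha=0.2, gamma=0.8):
    """Q-learning-style trust update per observed event: reward +1 for
    a forwarded packet, -1 for a dropped one. The 'next state' value is
    collapsed to the node's own current trust for brevity."""
    reward = 1.0 if forwarded else -1.0
    old = q.get(node, 0.0)
    q[node] = old + alpha * (reward + gamma * old - old)
    return q[node]

def is_misbehaving(q, node, threshold=-0.5):
    """A node whose aggregated reward has sunk below the (assumed)
    threshold is flagged and can be excluded from route selection."""
    return q.get(node, 0.0) < threshold
```

Because each update blends the new reward with the node's history, a single dropped packet does not condemn a well-behaved neighbour, while persistent dropping drives trust below the threshold.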
The Basics of Digital Forensics: The Primer for Getting Started in Dig... - AngelinaJacobs2
The Basics of Digital Forensics provides a foundation for people new to the digital forensics field. This book teaches you how to conduct examinations by discussing what digital forensics is, the methodologies used, key tactical concepts, and the tools needed to perform examinations. Details on digital forensics for computers, networks, cell phones, GPS, the cloud, and the Internet are discussed. Also, learn how to collect evidence, document the scene, and how deleted data can be recovered. The new Second Edition of this book provides you with completely up-to-date real-world examples and all the key technologies used in digital forensics, as well as new coverage of network intrusion response, how hard drives are organized, and electronic discovery. You'll also learn how to incorporate quality assurance into an investigation, how to prioritize evidence items to examine (triage), case processing, and what goes into making an expert witness. The Second Edition also features expanded resources and references, including online resources that keep you current, sample legal documents, and suggested further reading.
- Learn what digital forensics entails
- Build a toolkit and prepare an investigative plan
- Understand the common artifacts to look for in an exam
- Second Edition features all-new coverage of hard drives, triage, network intrusion response, and electronic discovery, as well as updated case studies, expert interviews, and expanded resources and references
LO-PHI: Low-Observable Physical Host Instrumentation for Malware Analysis - Pietro De Nicolao
Presentation of paper "LO-PHI: Low-Observable Physical Host Instrumentation for Malware Analysis" for the course of Advanced Topics in Computer Security of prof. Stefano Zanero.
Source and further information: https://github.com/pietrodn/lo-phi
Collecting and analyzing network-based evidence - CSITiaesprime
Since nearly the beginning of the Internet, malware has been a significant deterrent to productivity for end users, both personal and business-related. Due to the pervasiveness of digital technologies in all aspects of human life, it is increasingly likely that a digital device is involved as the goal, the medium, or simply a 'witness' of a criminal event. Forensic investigations include the collection, recovery, analysis, and presentation of information stored on network devices and related to network crimes. These activities often involve a wide range of analysis tools and the application of different methods. This work presents methods that help digital investigators correlate and present information acquired from forensic data, with the aim of producing more valuable reconstructions of events or actions to reach case conclusions. The main aim of network forensics is to gather evidence, and the evidence obtained during the investigation must be produced through a rigorous investigative procedure in a legal context.
Despite billions spent on enterprise cyber security, breaches from advanced attacks, costing millions, are occurring on a daily basis.
Our Solution: Complete Near Real-time Network Security Visibility and Awareness: If security analysts could see everything occurring on their network in real-time, breaches would occur but there would never be catastrophic damage – breach reaction would be almost instantaneous. Novetta Cyber Analytics is a linchpin enterprise security solution that enables security analysts, for the first time, to see a complete, near real-time, uncorrupted picture of their entire network. Security analysts then ask and receive answers to subtle questions – at the speed of thought – to enable detection, triage and response to breaches as they occur.
The Benefits: Increase events responded to by an estimated 30X.
Substantially reduce or eliminate damage from breaches.
Create a dramatically more effective and efficient security team.
Maximize current security infrastructure investment.
Be far more confident that your network is actually secure.
OUR DIFFERENTIATORS:
Understands the truth of what is happening on your network.
Detects advanced attacks that have breached perimeter defenses.
Develops a complete, near real-time understanding of suspicious behaviour.
Develops a battleground understanding of your entire security situation.
Augments current security solutions.
Proven speed, scale and effectiveness on the largest, most attacked networks on earth.
Keynote talk at the International Conference on Supercomputing 2009, at IBM Yorktown in New York. This is a major update of a talk first given in New Zealand last January. The abstract follows.
The past decade has seen increasingly ambitious and successful methods for outsourcing computing. Approaches such as utility computing, on-demand computing, grid computing, software as a service, and cloud computing all seek to free computer applications from the limiting confines of a single computer. Software that thus runs "outside the box" can be more powerful (think Google, TeraGrid), dynamic (think Animoto, caBIG), and collaborative (think Facebook, myExperiment). It can also be cheaper, due to economies of scale in hardware and software. The combination of new functionality and new economics inspires new applications, reduces barriers to entry for application providers, and in general disrupts the computing ecosystem. I discuss the new applications that outside-the-box computing enables, in both business and science, and the hardware and software architectures that make these new applications possible.
Apache Kafka and the Data Mesh | Ben Stopford and Michael Noll, Confluent - HostedbyConfluent
The document discusses the principles of a data mesh architecture using Apache Kafka for event streaming. It describes a data mesh as having four key principles: 1) domain-driven decentralization where each domain owns the data it creates, 2) treating data as a first-class product, 3) providing a self-serve data platform for easy access to real-time and historical data, and 4) establishing federated governance with global standards. Event streaming is presented as a good fit for data meshing due to its scalability, ability to handle real-time and historical data, and immutability. The document provides examples and recommendations for implementing each principle in a data mesh.
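Why event streaming suits the third principle (serving real-time and historical consumers from the same data) can be shown with a toy append-only log. The class below is a pure-Python illustration of the idea, not Kafka's API.

```python
class EventLog:
    """Minimal append-only log for one 'domain topic': events are
    immutable once written, and every consumer reads from its own
    offset, so replaying history and tailing live data are the same
    operation."""

    def __init__(self):
        self.events = []

    def append(self, event) -> int:
        """Append an event and return its offset."""
        self.events.append(event)
        return len(self.events) - 1

    def read_from(self, offset: int):
        """Historical consumers replay from offset 0; real-time
        consumers read from the current end of the log."""
        return self.events[offset:]
```

Immutability is what makes this dual use safe: a new consumer added months later sees exactly the same event sequence as one that subscribed on day one.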
A comparative study of social network analysis tools - David Combe
This document compares several social network analysis tools based on their functionalities and benchmarks them using sample datasets. It finds that Pajek, Gephi, igraph, and NetworkX are mature tools that handle network representation, visualization, characterization with indicators, and community detection well. Gephi is interactive but community detection is experimental. NetworkX is attribute-friendly and handles large networks but lacks visualization. Igraph is optimized for clustering but not custom attributes. The best tool depends on the specific analysis needs.
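The "characterization with indicators" that all the surveyed tools share can be as simple as degree centrality; a dependency-free sketch of one such indicator (libraries like igraph and NetworkX compute this and far more):

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalised degree centrality over an undirected edge list:
    each node's degree divided by the maximum possible degree (n - 1)."""
    degree = defaultdict(int)
    nodes = set()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        nodes.update((u, v))
    n = len(nodes)
    return {v: degree[v] / (n - 1) for v in nodes}
```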
Making Machine Learning Easy with H2O and WebFlux - Trayan Iliev
Machine learning is becoming a must for many business domains and applications. H2O is a best-of-breed, open-source, distributed machine learning library written in Java. The presentation shows how to create and train machine learning models easily using the H2O Flow web interface, including Deep Learning Neural Networks (DNNs). The session provides a tutorial on how to develop and deploy a fullstack-reactive face recognition demo using a React + RxJS WebSocket front-end, OpenCV, a Caffe CNN for image segmentation, an OpenFace CNN for feature extraction, and H2O Flow for interactive face recognition model training and export as a POJO. The trained POJO model is incorporated in a real-time streaming web service implemented using Spring 5 WebFlux and Spring Boot. The entire demo is 100% Java!
Proactive ops for container orchestration environments - Docker, Inc.
This document discusses different approaches to monitoring systems from manual and reactive to proactive monitoring using container orchestration tools. It provides examples of metrics to monitor at the host/hardware, networking, application, and orchestration layers. The document emphasizes applying the principles of observability including structured logging, events and tracing with metadata, and monitoring the monitoring systems themselves. Speakers provide best practices around failure prediction, understanding failure modes, and using chaos engineering to build system resilience.
Sideband Networks uses behavior analytics to identify anomalous network activity targeting critical assets. It applies deep packet inspection, machine learning, and behavioral analysis to network traffic in real-time. This allows it to build a profile of normal communication and detect abnormalities. When it identifies potential security issues, it generates contextual alerts to help security teams quickly analyze and respond to network threats. The solution integrates with security information and event management systems and provides operators with a dashboard to view network activity and alerts.
Evolution from EDA to Data Mesh: Data in Motion - confluent
ThoughtWorks' Zhamak Dehghani's observations on these traditional approaches' failure modes inspired her to develop an alternative big data management architecture that she aptly named the Data Mesh. This represents a paradigm shift that draws from modern distributed architecture and is founded on the principles of domain-driven design, self-serve platform, and product thinking with data. In the last decade Apache Kafka has established a new category of data management infrastructure for data in motion that has been leveraged in modern distributed data architectures.
IRJET - Network Traffic Monitoring and Botnet Detection using K-ANN Algorithm - IRJET Journal
This document discusses a system for network traffic monitoring and botnet detection using the K-ANN (Kohonen Artificial Neural Network) algorithm. The system aims to address challenges in analyzing large amounts of network traffic data in real-time to accurately detect abnormal network activities and security threats. It involves collecting network packets using packet capture libraries, filtering the packets using K-ANN to detect botnet behavior, and notifying administrators of any detected threats. The system architecture and K-ANN algorithm are described. Results show the system was able to detect a SYN flood attack based on analyzing packet attributes. Future work will involve detecting additional types of network threats.
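The SYN-flood case can be approximated even without a neural network: track SYNs that never complete a handshake per source. This heuristic is a far simpler stand-in for the K-ANN classifier, but it keys on the same packet attributes (flags and addresses); the record format and threshold are illustrative.

```python
from collections import Counter

def syn_flood_suspects(packets, threshold=100):
    """Flag sources with many outstanding SYNs, i.e. connection
    attempts that were never followed by a completing ACK."""
    outstanding = Counter()
    for p in packets:
        if p["flags"] == "S":        # SYN set, new half-open connection
            outstanding[p["src"]] += 1
        elif p["flags"] == "A":      # handshake completing
            outstanding[p["src"]] -= 1
    return [src for src, n in outstanding.items() if n > threshold]
```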
Cloud Camp Milan 2K9 Telecom Italia: Where P2P? - Gabriele Bozzi
1. The document discusses the potential for peer-to-peer (P2P) computing as an alternative or complement to the traditional client-server model, especially in the context of cloud computing.
2. It notes challenges with P2P such as the lack of centralized control, which makes it difficult to ensure reliability, performance, and security, and the potential for freeloading, but also advantages like harnessing unused resources.
3. Emerging technologies like autonomic and cognitive networking aim to address P2P challenges by enabling self-configuration, healing, optimization, and protection of distributed resources.
4. Future networking approaches like DirecNet envision high-speed mobile mesh networks that could further enable wide-scale distributed computing architectures.
The document discusses how networks and applications can become more aware of each other to improve the experience for end users. Currently, networks and applications operate independently without much visibility into each other. The document proposes that applications share information about end users and traffic with networks, and networks share information about topology, bandwidth, and resources with applications. This would allow applications to optimize content placement and resource usage, and networks to gain insights to better optimize traffic and provide new services. The document argues this type of programmable network can improve areas like security, performance, analytics and more.
DoS Forensic Exemplar Comparison to a Known Sample - CSCJournals
The investigation of any event or incident often involves the evaluation of physical evidence. Occasionally, a comparison is conducted between an evidentiary sample of unknown origin and that of an appropriate known sample. In a Denial of Service (DoS) attack, items of evidentiary value may cross the spectrum from anecdotes to useful information in firewall logs or complete packet captures. Because of the spoofed or reflective nature of DoS attacks, relevant information leading to the direct identification of the perpetrator is rarely available. In many instances, this underscores the significance of the investigator's ability to accurately identify the tool utilized by the suspect. For a DoS attack scenario, this would likely involve a commercially available stresser or criminal bot infrastructure. In this paper, we propose the concept of a DoS exemplar and determine if the comparison of evidentiary samples to an appropriate known sample of DoS attributes could add value in the investigative process. We also provide a simple tool to compare two DoS flows.
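At its simplest, the exemplar comparison reduces to measuring how many attributes an evidentiary flow shares with a known sample. A hedged sketch of such a comparison tool follows; the attribute names are illustrative, and the paper's actual tool may select and weight attributes differently.

```python
def flow_similarity(evidence: dict, exemplar: dict) -> float:
    """Fraction of attributes on which an evidentiary flow and a
    known exemplar agree, over the union of their attribute keys.
    1.0 means every recorded attribute matches."""
    keys = set(evidence) | set(exemplar)
    matches = sum(1 for k in keys if evidence.get(k) == exemplar.get(k))
    return matches / len(keys)
```

An investigator would compute this score against exemplars captured from candidate stressers or bot tooling and report the closest matches, rather than treating any single score as conclusive.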
The document provides an overview of grid computing, including:
1) Grid computing involves sharing distributed computational resources over a network and providing single login access for users. Resources may be owned by different organizations.
2) Examples of current grids discussed include the NSF PACI/NCSA Alliance Grid, the NSF PACI/SDSC NPACI Grid, and the NASA Information Power Grid.
3) The document also discusses various grid middleware tools and projects for using grid resources, such as Globus, Condor, Legion, Harness, and the Internet Backplane Protocol.
EFFICIENT IDENTIFICATION AND REDUCTION OF MULTIPLE ATTACKS ADD VICTIMISATION ... - IRJET Journal
This document discusses efficient identification and reduction of multiple attacks in IoT networks using deep learning techniques. It proposes a Deep Learning based secure RPL routing (DLRP) protocol to detect attacks like rank, version number, and Denial of Service attacks. The DLRP protocol first creates a complex dataset of normal and attack behaviors using network simulation. It then trains a machine learning model using this dataset to efficiently identify attack behaviors. Additionally, it classifies attack types using a Generative Adversarial Network to reduce the dataset dimensionality. Simulation results show the DLRP protocol improves attack detection accuracy and fits IoT environments well, achieving 80% packet delivery ratio using only 1474 control packets in a 30 node IoT scenario.
- The document discusses building a predictive anomaly detection model for network traffic using streaming data technologies.
- It proposes using Apache Kafka to ingest and process network packet and Netflow data in real-time, and Akka clustering to build predictive models that can guide human cybersecurity experts.
- The solution aims to more effectively guide human awareness of network threats by complementing localized rule-matching with predictive modeling of aggregate network behavior based on streaming metrics.
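One simple aggregate model that could sit behind such a pipeline is a rolling z-score over a streaming metric (e.g. packets per second per host). The window size and threshold below are guesses, and the described solution would use richer predictive models; this only illustrates the streaming shape of the computation.

```python
import math
from collections import deque

class StreamingAnomalyDetector:
    """Flag a metric sample as anomalous when it deviates from the
    rolling window's mean by more than z_threshold standard deviations."""

    def __init__(self, window=60, z_threshold=3.0):
        self.values = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, x: float) -> bool:
        anomalous = False
        if len(self.values) >= 10:            # need some history first
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = math.sqrt(var) or 1e-9      # guard against zero variance
            anomalous = abs(x - mean) / std > self.z_threshold
        self.values.append(x)
        return anomalous
```

In the architecture sketched above, one such detector instance per aggregate metric could run inside an Akka-style stream stage, consuming from Kafka and emitting only the flagged samples to human analysts.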
Applying Provenance in APT Monitoring and Analysis: Practical Challenges for S... - Graeme Jenkinson
Advanced Persistent Threats (APT) are a class of security threats in which a well-resourced attacker targets a specific individual or organisation with a predefined goal. This typically involves exfiltration of confidential material, although increasingly attacks target the encryption or destruction of mission critical data. With traditional prevention and detection mechanisms failing to stem the tide of such attacks, there is a pressing need for new monitoring and analysis tools that reduce both false-positive rates and the cognitive burden on human analysts. We propose that local and distributed provenance meta-data can simplify and improve monitoring and analysis of APTs by providing a single, authoritative sequence of events that captures the context (and side effects) of potentially malicious activities. Provenance metadata allows a human analyst to backtrack from detection of malicious activity to the point of intrusion and, similarly, to work forward to fully understand the consequences. Applying provenance to APT monitoring and analysis introduces some significantly different challenges and requirements in comparison to more traditional applications. Drawing from our experiences working with and adapting the OPUS (Observed Provenance in User Space) system to an APT monitoring and analysis use case, we introduce and discuss some of the key challenges in this space. These preliminary observations are intended to prime a discussion within the community about the design space for scalable, efficient and trustworthy distributed provenance for scenarios that impose different constraints from traditional provenance applications such as workflow and data processing frameworks.
Undertaking a digital journey starts with clearly articulating the success factors for the entire digital journey, and our experience from the field has shown it to be an Achilles heel for most CXOs across Fortune 500 organizations. Our findings were corroborated when a McKinsey study reported that only 15% of organizations are able to calculate the ROI of a digital initiative.
In this talk we will deliberate on demonstrated examples from multi-billion dollar businesses around proven methodologies to measure the value of a digital enterprise. The panel will share experiences as well as provide actionable advice for immediate next steps around the following:
Successful metrics for measuring the value for Digital / IoT / AI/ Machine learning engagements
How can 'Digital Traction Metrics' help with actionable insights even before the Financial Metrics have been reported
What are the best in-class organizational constructs and futuristic employee engagement methods to facilitate the digital revolution
Panelists for this session include:
• Christian Bilien - Head of Global Data at Societe Generale
• Pierre Alexandre Pautrat – Head of Big Data at BPCE/Natixis
• Ronny Fehling – VP, Airbus
• Juergen Urbanski – Silicon Valley Data Science
• Abhas Ricky - EMEA Lead, Innovation & Strategy, Hortonworks
Similar to Matthias Vallentin - Towards Interactive Network Forensics and Incident Response, Boundary Tech Talks November 17, 2011
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application... - Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol, based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-Efficiency - ScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf - Chart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
HCL Notes and Domino license cost reduction in the world of DLAU - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new type of licensing works and what benefits it brings you. Above all, you certainly want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts in order to save money. There are also some practices that can lead to unnecessary expenses, for example when a person document is used instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and know-how to keep track of everything. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices to implement immediately
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
Matthias Vallentin - Towards Interactive Network Forensics and Incident Response, Boundary Tech Talks November 17, 2011
1. Towards Interactive Network Forensics and Incident Response
Matthias Vallentin
UC Berkeley / ICSI
vallentin@icir.org
Boundary Tech Talk
San Francisco, CA
November 17, 2011
4. Motivation
What do the following activities have in common?
Network troubleshooting
Incident response
Network forensics
→ Data-intensive analysis of past activity
→ Interactive response times often critical
“How to build a platform that efficiently supports these activities?”
5. Outline
1. Incident Response and Network Forensics
2. Operational Network Monitoring using Bro
3. Building an Interactive Analytics Platform
6. About
4th-year PhD student at UC Berkeley, advised by Vern Paxson
Working with researchers at ICSI/ICIR and the AMPlab
Interests
Large-scale network intrusion detection
High-performance traffic analysis
Network forensics and incident response
→ with strong operational emphasis
Projects
The Bro network security monitor
VAST: Visibility Across Space and Time
HILTI: High-Level Intermediate Language for Traffic Inspection
7. Outline
1. Incident Response and Network Forensics
2. Operational Network Monitoring using Bro
3. Building an Interactive Analytics Platform
8. Use Case #1: Classic Incident Response
Goal: fast and comprehensive analysis of security incidents
Often begins with an external piece of intelligence
“IP X serves malware over HTTP”
“This MD5 hash is malware”
“Connections to 128.11.5.0/27 at port 42000 are malicious”
Analysis style: ad-hoc, interactive, several refinements/adaptations
Typical operations
Filter: project, select
Aggregate: mean, sum, quantile, min/max, histogram, top-k, unique
⇒ Concrete starting point, then widen scope (bottom-up)
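These filter and aggregate operations are easy to sketch over a list of connection records. A toy Python example (made-up records, standard library only, not tied to any particular log format):

```python
from collections import Counter

# Hypothetical connection log records
conns = [
    {"resp": "128.11.5.3", "port": 42000, "bytes": 1200},
    {"resp": "10.0.0.9",   "port": 80,    "bytes": 300},
    {"resp": "128.11.5.7", "port": 42000, "bytes": 900},
    {"resp": "128.11.5.3", "port": 42000, "bytes": 4000},
]

# Filter (select): connections to port 42000, as in the intelligence example
hits = [c for c in conns if c["port"] == 42000]
# Project: keep only the responder address
resps = [c["resp"] for c in hits]

# Aggregate: unique count, top-k by frequency, byte sum
print(len(set(resps)))                # unique responders -> 2
print(Counter(resps).most_common(1))  # top-1 responder by connection count
print(sum(c["bytes"] for c in hits))  # total bytes -> 6100
```

In an interactive session, the analyst would iterate on exactly these two steps: narrow the filter, then re-aggregate.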
9. Use Case #2: Network Troubleshooting
Goal: find root cause of component failure
Often no specific hint, merely symptomatic feedback
“I can’t access my Gmail”
Typical operations
Zoom: slice activity at different granularities
Time: seconds, minutes, days, . . .
Space: layer 2/3/4/7, host, subnet, port, URL, . . .
Study time series data of activity aggregates
Find abnormal activity
“Today we see 20% less outbound DNS compared to yesterday”
Infer dependency graphs: use joint behavior from past to assess present impact [KMV+09]
Judicious machine learning [SP10]
⇒ No concrete starting point, narrow scope (top-down)
10. Use Case #3: Combating Insider Abuse
Goal: uncover policy violations of personnel
Analysis procedure: connect the dots
Insider attack:
Chain of authorized actions, hard to detect individually
E.g., data exfiltration
1. User logs in to internal machine
2. Copies sensitive document to local machine
3. Sends document to third party via email
Typical operations
Compare activity profiles
“Jon never logs in to our backup machine at 3am”
“Seth accessed 10x more files on our servers today”
⇒ Relate temporally distant events, behavior-based detection
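A profile comparison like "Seth accessed 10x more files on our servers today" boils down to checking today's count against a historical baseline. A minimal sketch, assuming a simple fixed-factor threshold (the factor and data are illustrative):

```python
def anomalous(today: int, baseline: float, factor: float = 10.0) -> bool:
    """Flag when today's activity exceeds the historical baseline by `factor`."""
    return today > factor * baseline

history = [12, 9, 15, 11, 13]            # files accessed per day, past week
baseline = sum(history) / len(history)   # mean daily activity -> 12.0

print(anomalous(130, baseline))          # True: more than 10x the usual volume
print(anomalous(14, baseline))           # False: within the normal range
```

A real deployment would use a per-user, per-resource profile and a more robust statistic than the mean, but the behavior-based idea is the same.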
11. Outline
1. Incident Response and Network Forensics
2. Operational Network Monitoring using Bro
3. Building an Interactive Analytics Platform
12. Basic Network Monitoring
[Diagram: Internet → Tap → Monitor → Local Network]
Sites
UC Berkeley (10 Gbps, 50,000 hosts)
NCSA, IL (8×10 Gbps, 10,000 hosts)
LBNL, Berkeley (10 Gbps, 12,000 hosts)
ICSI, Berkeley (100 Mbps, 250 hosts)
AirJaldi, India (10 Mbps, 500 hosts)
13. High-Performance Network Monitoring: The NIDS Cluster [VSL+07]
[Diagram: Internet → Tap → Frontend → Workers, coordinated via Proxy and Manager, exchanging Packets, Logs, and State with the User]
14. The Bro Cluster
[Diagram: Internet → Tap → Frontend → Worker/Proxy nodes → Manager (Packets, Logs, State)]
We run it operationally at:
UC Berkeley (26 workers)
LBNL (15 workers)
NCSA (10 4-core workers)
Runs at numerous large sites:
Industry
Academia
Government
15. The Bro Network Security Monitor
Fundamentally different from other IDSs
Real-time network analysis framework
Policy-neutral at the core
Highly stateful
Key components
1. Event engine
TCP stream reassembly
Protocol analysis
Policy-neutral
2. Script interpreter
“Domain-specific Python”
Generates extensive logs
Applies site policy
[Architecture: Network → Packets → Event Engine → Events → Script Interpreter → Logs/Notifications → User Interface]
16. From Packets to High-Level Descriptions of Activity
Event declaration
type connection: record { orig: addr, resp: addr, ... }
event connection_established(c: connection)
event http_request(c: connection, method: string, URI: string)
event http_reply(c: connection, status: string, data: string)
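The declarations above are Bro script: the event engine raises typed events and the script layer registers handlers for them. A rough Python analogue of this dispatch model (the registry mechanism here is an illustration, not Bro's implementation):

```python
from dataclasses import dataclass

@dataclass
class Connection:
    """Mirrors the `connection` record from the slide: orig, resp, ..."""
    orig: str
    resp: str

handlers = {}

def event(fn):
    """Register a script-level handler, keyed by event name."""
    handlers[fn.__name__] = fn
    return fn

@event
def http_request(c: Connection, method: str, uri: str):
    # A site policy script would log or alert here
    return f"{c.orig} -> {c.resp}: {method} {uri}"

# The event engine would raise this after reassembling an HTTP request:
log = handlers["http_request"](Connection("10.0.0.5", "93.184.216.34"),
                               "GET", "/index.html")
print(log)   # 10.0.0.5 -> 93.184.216.34: GET /index.html
```

The point of the model: the engine stays policy-neutral, while the handlers encode site-specific policy.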
22. Log Analysis
What do we do with Bro logs?
Process (ad-hoc analysis)
Summarize (time series data, histogram/top-k, quantile)
Correlate (machine learning, statistical tests)
Age (elevate old data into higher levels of abstraction)
Visualize
How do we do it?
All eggs in one basket
SIEM: Splunk, ArcSight, NarusInsight, ... ($$$)
VAST
In-situ processing
Tools of the trade (awk, sort, uniq, ...)
MapReduce / Hadoop
23. Outline
1. Incident Response and Network Forensics
2. Operational Network Monitoring using Bro
3. Building an Interactive Analytics Platform
24. From Ephemeral to Persistent Activity
Bro events
Policy-neutral activity
Ephemeral: only inside the Bro process
→ Can I haz access?
Broccoli (= Bro client communications library)
Send/receive Bro events from third-party applications
Written in C
Language bindings: Ruby, Python, Perl
→ Send-them-while-they-are-hot
[Diagram: the Bro pipeline (Network → Packets → Event Engine → Events → Script Interpreter → Logs/Notifications → User Interface), with Broccoli connecting a 3rd-party application to the event stream]
25. From Ephemeral to Persistent Activity
[Diagram: Apache and OpenSSH feed events to Bro via Broccoli; the user sends queries to Bro and receives results over the same event channel]
28. Inspiration
1. Dremel
Columnar storage
Nested data model
2. Bigtable
Sharding: distributed tablets
3. GFS
Single master with meta data
Locate chunks via master
4. Sawzall
Aggregators: collection, sample, sum, maximum, quantile, top-k, unique
5. FastBit
Bitmap indexes “work” for high-cardinality attributes
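The columnar-storage idea borrowed from Dremel can be illustrated in a few lines: store one array per field, so a scan over a single column never touches the rest of the event. A toy sketch with made-up records:

```python
# Row-oriented: each event stored whole
rows = [
    {"ts": 1, "service": "dns",  "bytes": 64},
    {"ts": 2, "service": "http", "bytes": 900},
    {"ts": 3, "service": "dns",  "bytes": 72},
]

# Column-oriented: one array per field; row i is the i-th entry of each column
columns = {k: [r[k] for r in rows] for k in rows[0]}

# A scan over `bytes` now touches a single contiguous array:
print(sum(columns["bytes"]))             # 1036
# Reconstructing a full record means picking index i from every column:
print({k: v[1] for k, v in columns.items()})
```

On disk, this layout is what lets a query read only the columns it references instead of wasting I/O on whole events.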
31. Design Philosophy Touchstones [Lam11]
Storage
Keep data sorted → reduce seeks, easy random entry
Shard with access locality → minimize involved nodes
Store data in columns → don’t waste I/O
Use append-only disk format → avoid expensive index updates
Compute
Use disk appropriately → large sequential reads
Trade CPU for I/O → type-specific, aggressive compression
Use pipelined parallelism → hide latency
Ship compute to data → aggregation serving tree
Query
Make it user-friendly → declarative query interface
Provide query hooks → support complex analysis
32. VAST: Visibility Across Space and Time
Visibility
Deep understanding of the data
Visualization: you know how to do that already...
Across space:
Unify heterogeneous data formats
One query language
Apache logs, SSH logs, Bro events, sensor data, ...
Across time:
1. From the ancient past (old historical data)
2. To subscribing to data that may arrive in the future
33. Queries
Two types
1. Search: historical query
2. Feed: live query
→ use case: crawl archive first, then make query permanent
Unify two ends of a spectrum

                 Live            Historical
Operation        Push            Pull
Latency          O(|result|)     O(|data|)
Data location    In-memory       Disk (ideally cached)
Flexibility      Predefined      Ad-hoc, adjustable
Cost             Pay-as-you-go   Lump sum
34. VAST: Architecture Overview
Distributed architecture
Elasticity via MQ middle layer
Few component dependencies
DFS: fault-tolerance, replication
Archive: key-value store
Contains serialized events
Index: sharded column-store
Compressed bitmap indexes
In-memory store
Caches tablets (LRU)
Flushes in batches
[Diagram: Ingest and Query paths meet at the Store, which holds Archive and Index tablets on top of the DFS]
35. VAST: Ingestion Architecture
1. Events arrive at the Event Router
1.1 Assign UUID x
1.2 Put (x, event) in the archive
1.3 Forward the event to the Indexer
2. The Indexer writes the event into a tablet and updates the indexes
3. The Tablet Manager flushes “ripe” tablets
Capacity (space/rows)
Lifetime
[Diagram: Event Router → write/put into Archive tablets and Indexer → Index tablets; the Tablet Manager flushes ripe tablets to the DFS]
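The ingestion steps above can be sketched in Python, with a dict standing in for the key-value archive and a nested dict for the index (names such as `route` and the capacity-based ripeness rule are illustrative, not VAST's API):

```python
import uuid
from collections import defaultdict

archive = {}                                   # key-value store: UUID -> event
index = defaultdict(lambda: defaultdict(set))  # field -> value -> {UUIDs}
tablet, TABLET_CAPACITY = [], 2                # a tablet is "ripe" when full
flushed = []                                   # stand-in for writes to the DFS

def route(event):
    """1. Assign a UUID, 2. archive the event, 3. forward it to the indexer."""
    x = uuid.uuid4()
    archive[x] = event
    for field, value in event.items():         # indexer updates per-field indexes
        index[field][value].add(x)
    tablet.append(x)
    if len(tablet) >= TABLET_CAPACITY:         # capacity-based ripeness
        flushed.append(tuple(tablet))
        tablet.clear()
    return x

route({"orig": "10.0.0.1", "resp": "8.8.8.8", "service": "dns"})
route({"orig": "10.0.0.2", "resp": "1.2.3.4", "service": "http"})

# An index lookup yields UUIDs, which resolve to full events in the archive:
hits = index["service"]["dns"]
print([archive[x]["orig"] for x in hits])   # ['10.0.0.1']
```

This mirrors the query path on the next slide: the index answers with UUIDs, and the archive materializes the matching events.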
36. VAST: Query Architecture
1. A user or NIDS issues a query
2. The Query Manager distributes it to the relevant nodes
3. The Tablet Manager loads tablets (LRU cache)
4. The Query Proxy hits the index
(a) Returns a direct result
(b) Returns a set of UUIDs
[Diagram: Query Manager → Query Proxy → Tablet Manager, which loads Archive and Index tablets from the DFS]
37. Bitmap Indexes
Column cardinality: # distinct values
One bitmap b_i for each value i
Sparse, but compressible
WAH [WOSN01]
COMPAX [FSV10]
Concise [CDP10]
Can operate on compressed bitmaps
No need to decompress

Data   Bitmap Index
       b0 b1 b2 b3
2      0  0  1  0
1      0  1  0  0
2      0  0  1  0
0      1  0  0  0
0      1  0  0  0
1      0  1  0  0
3      0  0  0  1
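The equality-encoded bitmap index shown above is small enough to sketch directly; this toy version is uncompressed (Python integers as bit vectors) and ignores WAH-style compression:

```python
from collections import defaultdict

class BitmapIndex:
    """Toy equality-encoded bitmap index: one bit vector per distinct value."""
    def __init__(self):
        self.bitmaps = defaultdict(int)  # value -> bit vector (Python int)
        self.rows = 0

    def append(self, value):
        self.bitmaps[value] |= 1 << self.rows  # set this row's bit in b_value
        self.rows += 1

    def lookup(self, value):
        """Row IDs where the column equals `value`."""
        bm = self.bitmaps.get(value, 0)
        return [i for i in range(self.rows) if bm >> i & 1]

# The example column from the slide: 2 1 2 0 0 1 3
idx = BitmapIndex()
for v in [2, 1, 2, 0, 0, 1, 3]:
    idx.append(v)

print(idx.lookup(2))                     # rows holding value 2 -> [0, 2]
# Predicates compose with bitwise ops, e.g. value in {0, 1}:
union = idx.bitmaps[0] | idx.bitmaps[1]
print([i for i in range(idx.rows) if union >> i & 1])   # [1, 3, 4, 5]
```

Schemes like WAH, COMPAX, and Concise run-length-encode these bit vectors and, as the slide notes, evaluate the same bitwise operations without decompressing.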
38. Conclusion
1. Motivation: incident response, network troubleshooting, insider abuse
2. The Bro network security monitor
High-performance network monitoring
Expressive representation of activity
Publish/subscribe event model
3. Design sketch of a distributed analytics platform
40. References I
[CDP10] A. Colantonio and R. Di Pietro. Concise: Compressed ’n’ Composable Integer Set. Information Processing Letters, 110(16):644–650, 2010.
[FSV10] Francesco Fusco, Marc Ph. Stoecklin, and Michail Vlachos. NET-FLi: On-the-fly Compression, Archiving and Indexing of Streaming Network Traffic. Proceedings of the VLDB Endowment, 3:1382–1393, September 2010.
[KMV+09] Srikanth Kandula, Ratul Mahajan, Patrick Verkaik, Sharad Agarwal, Jitendra Padhye, and Paramvir Bahl. Detailed Diagnosis in Enterprise Networks. In Proceedings of the ACM SIGCOMM 2009 Conference on Data Communication, SIGCOMM ’09, pages 243–254, New York, NY, USA, 2009. ACM.
41. References II
[Lam11] Andrew Lamb. Building Blocks for Large Analytic Systems. In 5th Extremely Large Databases Conference, XLDB ’11, Menlo Park, California, October 2011.
[SP10] Robin Sommer and Vern Paxson. Outside the Closed World: On Using Machine Learning for Network Intrusion Detection. In Proceedings of the 2010 IEEE Symposium on Security and Privacy, SP ’10, pages 305–316, Washington, DC, USA, 2010. IEEE Computer Society.
42. References III
[VSL+07] Matthias Vallentin, Robin Sommer, Jason Lee, Craig Leres, Vern Paxson, and Brian Tierney. The NIDS Cluster: Scalably Stateful Network Intrusion Detection on Commodity Hardware. In Proceedings of the 10th International Conference on Recent Advances in Intrusion Detection, RAID ’07, pages 107–126. Springer-Verlag, September 2007.
[WOSN01] Kesheng Wu, Ekow J. Otoo, Arie Shoshani, and Henrik Nordberg. Notes on Design and Implementation of Compressed Bit Vectors. Technical Report LBNL-3161, Lawrence Berkeley National Laboratory, Berkeley, CA, USA, 2001.