A Survey on DPI Techniques for Regular Expression Detection in Network Intrus... — ijsrd.com
Deep Packet Inspection (DPI) is becoming widely used in virtually all networked applications and services, such as Intrusion Detection Systems (IDSs). DPI analyzes all the data in a packet as it passes an inspection point in order to determine the application and protocol being transported. DPI typically uses regular expression matching as a core operation: regular expressions (RegExes) flexibly represent complex string patterns in many applications, including network intrusion detection and prevention systems (NIDPSs), where they encode attack signatures. The inspection engine examines whether a packet's payload matches any of a set of predefined regular expressions. Various DPI techniques have been developed for regular expression matching; in this paper we survey these techniques with a view to further improving regular expression detection. We find that it is possible to reduce the RegEx matching memory required in network intrusion detection, and we consider the possible use of DPI techniques in wireless networks.
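The core operation described above, matching a packet payload against a set of RegEx signatures, can be sketched with Python's standard `re` module. The patterns below are invented for illustration and are far simpler than real NIDPS rule sets:

```python
import re

# Hypothetical attack signatures, loosely modeled on the kinds of
# string patterns an NIDPS rule set expresses as regular expressions.
SIGNATURES = [
    re.compile(rb"(?i)select\s.+\sfrom\s.+"),  # naive SQL-injection pattern
    re.compile(rb"\.\./\.\./"),                # directory traversal
    re.compile(rb"(?i)<script[^>]*>"),         # script injection
]

def inspect_payload(payload: bytes) -> bool:
    """Return True if any signature matches the packet payload."""
    return any(sig.search(payload) for sig in SIGNATURES)

benign = b"GET /index.html HTTP/1.1"
malicious = b"GET /cgi-bin/../../etc/passwd HTTP/1.1"
print(inspect_payload(benign))     # False
print(inspect_payload(malicious))  # True
```

Compiled patterns like these are what DFA/NFA-based DPI engines trade memory against speed for, which is why the surveyed techniques focus on reducing matching memory.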
Pre-filters in-transit malware packets detection in the network — TELKOMNIKA JOURNAL
Conventional malware detection systems cannot detect most new malware in the network without their signatures being available. To solve this problem, this paper proposes a technique to detect both metamorphic (mutated) and general (non-mutated) malware in the network using a combination of known malware sub-signatures and machine learning classification. This network-based malware detection is achieved through a middle path for efficient processing of non-malware packets. The proposed technique has been tested and verified using multiple data sets (metamorphic malware, non-mutated malware, and UTM real traffic); it can detect most malware packets in the network before they reach the host, outperforming previous works that detect malware on the host. Experimental results show that the proposed technique can speed up the transmission of more than 98% of normal packets without sending them to the slow path, while more than 97% of malware packets are detected and dropped in the middle path. Furthermore, more than 75% of metamorphic malware packets in the test dataset could be detected. The proposed technique is 37 times faster than the existing technique.
FLOODING ATTACK DETECTION AND MITIGATION IN SDN WITH MODIFIED ADAPTIVE THRESH... — IJCNCJournal
A flooding attack sends a large amount of traffic to victim networks or services in order to cause denial of service. In a Software-Defined Networking (SDN) environment, this attack may breach not only the hosts and services but also the SDN controller, and it can also disconnect the links between the controller and the switches. Thus, an effective technique for detecting and mitigating flooding attacks is required. Statistical analysis techniques are widely used for this purpose; however, their effectiveness strongly depends on the defined threshold. Defining a static threshold is a tedious job and most of the time produces a high false positive rate. In this paper, we propose a dynamic threshold calculated using a modified adaptive threshold algorithm (MATA). The original ATA is based on the Exponentially Weighted Moving Average (EWMA) formula, which produces a high number of false alarms; to reduce them, an alarm is generated only after a minimum number of consecutive violations of the threshold. This, however, increases the false negative rate when the network is under attack. To reduce this false negative rate, MATA incorporates baseline traffic information about the network infrastructure. MATA and ATA are compared through measurements of false negative rate and detection accuracy. Our experimental results show that MATA reduces false negative rates by up to 17.74% and increases detection accuracy by 16.11% across various types of flooding attacks at the transport layer.
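A minimal sketch of the adaptive-threshold idea just described: an EWMA baseline plus a consecutive-violation counter before an alarm fires. This is not the authors' MATA implementation (which additionally incorporates baseline traffic information), and the parameter values are illustrative:

```python
def adaptive_threshold_alarms(samples, alpha=0.5, beta=0.98, min_violations=3):
    """Return indices of samples that trigger an alarm.

    alpha: tolerance factor -- alarm threshold is (1 + alpha) * EWMA mean.
    beta:  EWMA smoothing factor for the baseline.
    min_violations: consecutive violations required before alarming,
                    which suppresses one-off spikes (and false positives).
    """
    mean = samples[0]
    consecutive = 0
    alarms = []
    for i, x in enumerate(samples[1:], start=1):
        threshold = (1.0 + alpha) * mean
        if x > threshold:
            consecutive += 1
            if consecutive >= min_violations:
                alarms.append(i)
        else:
            consecutive = 0
        mean = beta * mean + (1.0 - beta) * x  # EWMA baseline update
    return alarms

traffic = [100, 102, 98, 101, 300, 310, 320, 330, 99, 100]
print(adaptive_threshold_alarms(traffic))  # [6, 7]
```

Note how the flood starting at index 4 is only reported from index 6 onward: the two suppressed violations are exactly the false-negative window that motivates MATA's modification.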
ZMap: fast Internet-wide scanning and its security applications — losalamos
Internet-wide network scanning has numerous security applications, including exposing new vulnerabilities and tracking the adoption of defensive mechanisms, but probing the entire public address space with existing tools is both difficult and slow. We introduce ZMap, a modular, open-source network scanner specifically architected to perform Internet-wide scans and capable of surveying the entire IPv4 address space in under 45 minutes from user space on a single machine.
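The 45-minute figure above implies a striking sustained probe rate, which is easy to check with back-of-the-envelope arithmetic:

```python
# One probe per IPv4 address, sent within the claimed 45-minute budget.
addresses = 2 ** 32          # size of the IPv4 address space
seconds = 45 * 60            # 45 minutes
rate = addresses / seconds
print(f"{rate:,.0f} probes/second")  # 1,590,729 probes/second
```

Roughly 1.6 million packets per second sustained from a single machine is what makes ZMap's stateless, user-space design notable compared with earlier stateful scanners.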
ADRISYA: A FLOW BASED ANOMALY DETECTION SYSTEM FOR SLOW AND FAST SCAN — IJNSA Journal
Attackers perform port scans to find reachability, liveness, and running services in a system or network. Present-day scanning tools provide different scanning options and are capable of evading various security tools such as firewalls, IDSs, and IPSs. So, in order to detect and prevent attacks at an early stage, accurate real-time detection of scanning activity is essential. In this paper we present a flow-based protocol behaviour analysis system to detect TCP-based slow and fast scans. The system provides a scalable, accurate, and generic solution to TCP-based scanning by means of automatic behaviour analysis of the network traffic. The detection capability of the proposed system is compared with SNORT, and the results demonstrate the system's higher detection rate.
TRACEBACK OF DOS OVER AUTONOMOUS SYSTEMS — IJNSA Journal
Denial of service (DoS) is a significant security threat in open networks such as the Internet. The existing limitations of the Internet protocols and the common availability of attack tools make a DoS attack both effective and easy to launch. There are many different forms of DoS attack, and the attack can be amplified from a single attacker to a distributed attack such as a distributed denial of service (DDoS). IP traceback is one important tool proposed as part of DoS mitigation, and a number of traceback techniques have been proposed, including probabilistic packet marking (PPM). PPM is a promising technique that can trace the complete path back from a victim to the attacker by encoding each router's 32-bit IP address in at least one packet of a traffic flow. However, in a network with multiple hops through a number of autonomous systems (ASes), as is common with most Internet services, it may be undesirable for every router to contribute to packet marking or for an AS to reveal its internal routing structure. This paper proposes two new efficient autonomous system (AS) traceback techniques to identify the AS of the attacker by probabilistically marking packets. Traceback at the AS level has a number of advantages, including a reduction in the number of bits to be encoded and a reduction in the number of routers that need to participate in the marking. Our results show better performance compared with PPM and other techniques.
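A toy simulation of the probabilistic marking idea described above: each hop on the path overwrites the packet's mark field with some probability p, so over enough packets the victim observes every hop. AS-level marking works the same way with AS numbers instead of 32-bit router addresses; the path, packet count, and p below are all invented for illustration:

```python
import random

def mark_packets(path, n_packets, p=0.04, seed=1):
    """Simulate PPM: return the set of hops the victim sees as marks."""
    rng = random.Random(seed)
    seen = set()
    for _ in range(n_packets):
        mark = None
        for hop in path:            # packet traverses each hop in order
            if rng.random() < p:
                mark = hop          # a later hop may overwrite the mark
        if mark is not None:
            seen.add(mark)
    return seen

as_path = ["AS100", "AS200", "AS300", "AS400"]
recovered = mark_packets(as_path, n_packets=2000)
print(sorted(recovered))
```

Because overwriting favors hops nearer the victim, the hop farthest from the victim needs the most packets to appear; that sampling cost is one reason fewer, coarser AS-level marks are attractive.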
Optimal remote access trojans detection based on network behavior — IJECE (IAES)
RATs are among the most prevalent malware in the hyper-connected world. Data is leaked or disclosed every day because new remote access Trojans keep emerging and are used to steal confidential data from target hosts. Network-behavior-based detection has been used to provide an effective detection model for Remote Access Trojans. However, shortcomings remain: detection should happen as early as possible, and the false negative rate and accuracy may vary depending on the ratio of normal to malicious RAT sessions. Since a typical network contains a large amount of normal traffic and a small amount of malicious traffic, detection models in previous works were built on various ratios of normal to malicious sessions; the false negative rate was then less than 2%, but it varies with the ratio of normal to malicious instances, and an unbalanced dataset biases the prediction model towards the more common class. In this paper, each RAT is run many times in order to capture the variant behavior of a Remote Access Trojan in its early stage, and balanced instances of normal applications and Remote Access Trojans are used for the detection model. Our approach achieves 99% accuracy and a 0.3% false negative rate with the Random Forest algorithm.
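The two headline numbers (99% accuracy, 0.3% false negative rate) are simple functions of confusion-matrix counts. The counts below are invented to reproduce those rates on a hypothetical balanced 1000/1000 split; they are not taken from the paper:

```python
def accuracy_and_fnr(tp, tn, fp, fn):
    """Compute overall accuracy and false negative rate."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    fnr = fn / (tp + fn)  # missed malicious sessions / all malicious sessions
    return accuracy, fnr

# Hypothetical balanced evaluation: 1000 RAT sessions, 1000 normal sessions.
acc, fnr = accuracy_and_fnr(tp=997, tn=983, fp=17, fn=3)
print(f"accuracy={acc:.1%}  FNR={fnr:.1%}")  # accuracy=99.0%  FNR=0.3%
```

On a balanced dataset, accuracy and FNR move together sensibly; on the heavily skewed ratios the abstract criticizes, a model can post high accuracy while the FNR on the rare class quietly degrades.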
This is a brief overview of vulnerability assessment and penetration testing in information technology security. The focus is on the issues that commonly arise within a network security environment, and on how you, as an IT professional, can identify attack activity from hackers or from unknown individuals posing as clients.
WLI-FCM and Artificial Neural Network Based Cloud Intrusion Detection System — Eswar Publications
Security and performance are major issues that must be addressed in cloud computing, and intrusion is one such basic and important security problem. Consequently, it is essential to create an Intrusion Detection System (IDS) that detects both inside and outside attacks with high detection precision in a cloud environment. In this paper, a cloud intrusion detection system at the hypervisor layer is developed and assessed to detect malicious activities in a cloud computing environment. The system uses a hybrid algorithm, a fusion of the WLI-FCM clustering algorithm and a back-propagation artificial neural network, to improve detection accuracy. The proposed system is implemented and compared with K-means and classic FCM, using DARPA's KDD Cup 1999 dataset for simulation. The detailed performance analysis shows that the proposed system detects anomalies with high detection accuracy and a low false alarm rate.
Open source network forensics and advanced pcap analysis — GTKlondike
Speaker: GTKlondike
There is a lot of information freely available on the internet to get network administrators and security professionals started with network analysis tools such as Wireshark. However, the depth in which the topic is covered is quite limited. This intermediate-level talk aims to bridge the gap between a basic understanding of protocol analyzers (i.e., Wireshark and tcpdump) and practical real-world usage. Topics covered include network file carving, statistical flow analysis, GeoIP, exfiltration, the limitations of Wireshark, and other network-based attacks. The audience is assumed to have a working knowledge of protocol analysis tools (i.e., Wireshark and tcpdump), the OSI and TCP/IP models, and major protocols (e.g., DNS, HTTP(S), TCP, UDP, DHCP, ARP, IP).
Bio
GTKlondike is a local hacker/independent security researcher with a passion for network security, both attack and defense. He has several years of experience working as a network infrastructure and security consultant, mainly dealing with switching, routing, firewalls, and servers. Currently attending graduate school, he is constantly studying and learning new techniques to better defend or bypass network security mechanisms.
Network traffic analysis with cyber security — KAMALI PRIYA P
We are students of SRM University pursuing a B.Tech in the Computer Science Department. We took a small initiative to make a presentation about how network traffic can be analyzed for cyber security. We have also covered the well-known network analyzers and the future scope of network traffic analysis in cyber security.
Scaling DDS to Millions of Computers and Devices — Rick Warren
I gave this presentation at an Object Management Group (OMG) workshop in Arlington, VA, in March 2010. It describes some of the concerns that will impact DDS as it is scaled to very large, geographically distributed systems, and possible ways these challenges can be addressed.
IEEE 2014 DOTNET PARALLEL DISTRIBUTED PROJECTS A system-for-denial-of-service... — IEEEMEMTECHSTUDENTPROJECTS
Here are a few of the top problems a network analysis tool will help you solve. Are you worried about any of these network-related issues? Learn how ManageEngine NetFlow Analyzer gets to the root cause and lets you solve problems before they affect end users.
Online stream mining approach for clustering network traffic — eSAT Journals
Abstract: A large number of approaches to intrusion detection have been proposed, leading to implementations of agent-based intelligent IDSs (IIDSs), non-intelligent IDSs (NIDSs), signature-based IDSs, and so on. When building such IDS models, algorithms that learn from the flow of network traffic play a crucial role in the accuracy of the resulting systems. The proposed work focuses on a novel method to cluster network traffic that eliminates the limitations of existing online clustering algorithms and demonstrates robustness and accuracy over large streams of network traffic arriving at extremely high rates. We compare the existing algorithm with the novel methods to analyse accuracy and complexity. Keywords: NIDS, Data Stream Mining, Online Clustering, RAH algorithm, Online Efficient Incremental Clustering algorithm.
A New Way of Identifying DOS Attack Using Multivariate Correlation Analysis — ijceronline
JPD1424 A System for Denial-of-Service Attack Detection Based on Multivariat... — chennaijp
Network Traffic Trends Prediction Using Machine Learning Modelling of Packet ... — Rangaprasad Sampath
This deck proposes a method to predict network traffic trends and spot traffic anomalies. The machine learning modelling is done on the packet lengths that constitute network traffic, which provides an elegant way to digest a histogram of packet lengths in time t into a pair of data points. An unsupervised machine learning method is applied to the resulting dataset and the clusters it produces are labeled. Changes in cluster composition indicate traffic trends that may then be interpreted for network insights.
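The "histogram of packet lengths into a pair of data points" step can be sketched with the standard library. Using the (mean, standard deviation) of a window as the pair is an assumed choice for illustration, since the deck's exact features aren't specified here:

```python
import statistics

def digest_window(packet_lengths):
    """Collapse a window of packet lengths into a (mean, stdev) pair."""
    return (statistics.mean(packet_lengths),
            statistics.pstdev(packet_lengths))

window_normal = [64, 64, 1500, 1500, 576, 576]   # mixed ordinary traffic
window_scan   = [40, 40, 40, 40, 40, 40]         # e.g. bare SYN probes
print(digest_window(window_normal))
print(digest_window(window_scan))   # (40, 0.0): anomalously uniform
```

A stream of such pairs is cheap to cluster with any unsupervised method, and a window of uniformly tiny packets lands far from the clusters that normal mixed traffic forms.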
BIG DATA ANALYTICS FOR USER-ACTIVITY ANALYSIS AND USER-ANOMALY DETECTION IN... — Nexgen Technology
Network security monitoring elastic webinar - 16 june 2021 — Mouaz Alnouri
The difference between successfully defending against an attack and being compromised is your ability to understand what's happening in your network better than your adversary does. Choosing the right network security monitoring (NSM) toolset is crucial to effectively monitor, detect, and respond to any potential threats in an organisation's network.
In this webinar, we’ll uncover the best practices, trends, and challenges in network security monitoring (NSM) and how Elastic is being used as a core component to network security monitoring.
Highlights:
- What is network security monitoring (NSM)?
- Types of network data
- Common toolset
- Overcoming challenges with network security monitoring
- Using Machine Learning for network security monitoring
- Demo
Network Security: Experiment of Network Health Analysis At An ISP — CSCJournals
This paper presents the findings of an analysis performed at an internet service provider. Based on NetFlow data collected and analyzed using nfdump, it helped assess how healthy the network of an Internet Service Provider (ISP) is. The findings have been instrumental in reflections on reshaping the network architecture, and they have also demonstrated the need for a consistent monitoring system.
COPYRIGHTThis thesis is copyright materials protected under the .docx — voversbyobersby
COPYRIGHT
This thesis is copyright material protected under the Berne Convention, the Copyright Act 1999, and other international and national enactments in that behalf on intellectual property. It may not be reproduced by any means, in full or in part, except for short extracts in fair dealing for research or private study, critical scholarly review, or discourse with acknowledgment, with the written permission of the Dean, School of Graduate Studies, on behalf of both the author and XXX XXX University.

ABSTRACT
With Fast growing internet world the risk of intrusion has also increased, as a result Intrusion Detection System (IDS) is the admired key research field. IDS are used to identify any suspicious activity or patterns in the network or machine, which endeavors the security features or compromise the machine. IDS majorly use all the features of the data. It is a keen observation that all the features are not of equal relevance for the detection of attacks. Moreover every feature does not contribute in enhancing the system performance significantly. The main aim of the work done is to develop an efficient denial of service network intrusion classification model. The specific objectives included: to analyse existing literature in intrusion detection systems; what are the techniques used to model IDS, types of network attacks, performance of various machine learning tools, how are network intrusion detection systems assessed; to find out top network traffic attributes that can be used to model denial of service intrusion detection; to develop a machine learning model for detection of denial of service network intrusion.Methods: The research design was experimental and data was collected by simulation using NSL-KDD dataset. By implementing Correlation Feature Selection (CFS) mechanism using three search algorithms, a smallest set of features is selected with all the features that are selected very frequently. Findings: The smallest subset of features chosen is the most nominal among all the feature subset found. Further, the performances using Artificial neural networks(ANN), decision trees, Support Vector Machines (SVM) and K-Nearest Neighbour (KNN) classifiers is compared for 7 subsets found by filter model and 41 attributes. Results: The outcome indicates a remarkable improvement in the performance metrics used for comparison of the two classifiers. 
The results show that using 17/18 selected features improves DoS-type classification accuracies compared to using all 41 features in the NSL-KDD dataset. It was further observed that an ensemble of three classifiers with decision fusion performs better than a single classifier for DoS-type classification. Among the machine learning tools experimented with, ANN achieved the best classification accuracies, followed by SVM and DT; KNN registered the lowest classification accuracies. Application: The proposed work with such an improved detection rate and lesser classification time and lar.
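The decision-fusion step mentioned in the abstract can be illustrated with a simple majority vote over per-classifier predictions; the classifier outputs below are hypothetical, not results from the thesis:

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse the labels emitted by several classifiers for one sample."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-sample outputs of three classifiers (e.g. ANN, SVM, DT)
ann = ["dos", "normal", "dos", "dos"]
svm = ["dos", "normal", "normal", "dos"]
dt  = ["normal", "normal", "dos", "dos"]

fused = [majority_vote(votes) for votes in zip(ann, svm, dt)]
print(fused)  # ['dos', 'normal', 'dos', 'dos']
```

With an odd number of classifiers and binary labels, the vote can never tie, which is one reason three-classifier ensembles are a common choice for decision fusion.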
Detecting Hacks: Anomaly Detection on Networking Data (James Sirota)
See https://medium.com/@jamessirota for a series of blog entries that goes with this deck...
Defense in Depth for Big Data
Network Anomaly Detection Overview
Volume Anomaly Detection
Feature Anomaly Detection
Model Architecture
Deployment on OpenSOC Platform
Questions
A novel signature based traffic classification engine to reduce false alarms (IJCNC Journal)
Pattern matching plays a significant role in ascertaining network attacks, and the foremost prerequisite for a trusted intrusion detection system (IDS) is accurate pattern matching. During the pattern matching process, packets are scanned against pre-defined rule sets. After being scanned, the packets are marked as alert or benign by the detection system. Sometimes the detection system generates false alarms, i.e., good traffic being identified as bad traffic. The rate of false positives varies with the performance of the detection engines used to scan incoming packets. Intrusion detection systems deploy algorithmic procedures to reduce false positives, yet still produce a good number of false alarms. Accordingly, we have been working on the optimization of these algorithms and procedures so that false positives can be reduced to a great extent. As an effort, we have proposed a signature-based traffic classification technique that can categorize incoming packets based on their traffic characteristics and behaviour, which would eventually reduce the rate of false alarms.
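As a rough illustration of signature-based payload scanning (the engine proposed in the paper is far more sophisticated), a payload can be checked against a set of byte signatures; the signatures and labels below are invented examples, not real IDS rules:

```python
# Hypothetical signature set; real IDS rule sets (e.g. Snort's) are far richer.
SIGNATURES = {
    b"/etc/passwd": "path-traversal",
    b"' OR '1'='1": "sql-injection",
    b"\x90\x90\x90\x90": "nop-sled",
}

def classify_payload(payload: bytes):
    """Return the labels of all matched signatures; an empty list means benign."""
    return [label for sig, label in SIGNATURES.items() if sig in payload]

print(classify_payload(b"GET /../../etc/passwd HTTP/1.1"))  # ['path-traversal']
print(classify_payload(b"GET /index.html HTTP/1.1"))        # []
```

Every packet a signature matches but that is actually benign becomes a false positive, which is why the classification stage described above matters.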
Levelwise PageRank with Loop-Based Dead End Handling Strategy: SHORT REPORT (Subhajit Sahu)
Abstract — Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components, and processes them in topological order, one level at a time. This enables calculation of ranks in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It however comes with a precondition of the absence of dead ends in the input graph. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. The slowdown on the GPU is likely caused by a large submission of small workloads, and is expected to be a non-issue when the computation is performed on massive graphs.
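As a concrete illustration of the levelwise idea (not the report's actual implementation), the sketch below computes ranks one topological level at a time, assuming the SCC decomposition into levels is already known and the graph has no dead ends, as the precondition requires:

```python
def pagerank_levelwise(out, levels, d=0.85, iters=100):
    """Levelwise PageRank sketch. 'out' maps each vertex to its out-neighbours
    (no dead ends allowed); 'levels' lists vertices grouped by the topological
    level of their strongly connected component."""
    n = len(out)
    rank = {v: 1.0 / n for v in out}
    inn = {v: [] for v in out}              # build in-neighbour lists
    for u, nbrs in out.items():
        for v in nbrs:
            inn[v].append(u)
    for level in levels:                    # ranks of earlier levels are final
        for _ in range(iters):              # iterate only within this level
            new = {v: (1 - d) / n + d * sum(rank[u] / len(out[u]) for u in inn[v])
                   for v in level}
            rank.update(new)
    return rank

# Toy graph: SCC {0, 1} feeds SCC {2, 3}; no dead ends.
out = {0: [1], 1: [0, 2], 2: [3], 3: [2]}
ranks = pagerank_levelwise(out, levels=[[0, 1], [2, 3]])
print(round(sum(ranks.values()), 4))  # ranks sum to ~1.0
```

Because in-neighbours outside the current level always sit in earlier (already converged) levels, iterating within one level reaches the same fixed point as the monolithic computation, which is what makes the per-level processing communication-free.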
Adjusting primitives for graph: SHORT REPORT / NOTES (Subhajit Sahu)
Graph algorithms, like PageRank, commonly operate on Compressed Sparse Row (CSR), an adjacency-list based graph representation.
Multiply with different modes (map)
1. Performance of sequential execution based vs OpenMP based vector multiply.
2. Comparing various launch configs for CUDA based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential execution based vs OpenMP based vector element sum.
2. Performance of memcpy vs in-place based CUDA based vector element sum.
3. Comparing various launch configs for CUDA based vector element sum (memcpy).
4. Comparing various launch configs for CUDA based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA based vector element sum (in-place).
2. Experiment Time: 2017/05/20
PROPOSAL FOR SYSTEM ANALYSIS AND DESIGN
COMPUTER NETWORK TRAFFIC ANALYSIS
Project description
An unknown number of attacks on government computer networks occur every day. Some of these attacks are successful and/or undetected and can have disastrous consequences. One of the aims of this project is to detect and ultimately prevent these attacks. In today's digital age, we are surrounded by massive amounts of data. In many cases, we do not know the best way to store, manage, integrate, obtain information from, or visualize it. Such is the case for data regarding packet flows over a network. Research involving the analysis of this type of data is in its early stages. Interesting problems such as behavioral authentication of server flows and intrusion detection are beginning to be solved using this type of data. We are particularly interested in analyzing network data for the purposes of anomaly detection (attacks, masquerades, and network interruptions), user profiling, workload management, and application verification.
Our tasks include:
1. processing the data consisting of packets into a useful format
2. extracting information from the data flows
3. developing traffic flow models for the purposes mentioned above
4. visualizing the data
5. recognizing data patterns for the purposes mentioned above.
The client for this project is my honorable professor, Mr. Yi Ding, from the University of Electronic Science and Technology of China.
Computer Network Traffic Analysis Requirements:
Proper network planning can save time and expense, and can ensure a timely
deployment of Microsoft Speech Server (MSS).
Monitoring network bandwidth and traffic patterns at an interface-specific level
Drill down into interface-level details to discover traffic patterns and device performance
Get real-time insight into your network bandwidth with one-minute granularity reports
Network forensics and security analysis: detect a broad spectrum of external and internal security threats using continuous stream mining engine technology
Track network anomalies that surpass your network firewall
Network planning involves: knowing the number of telephone lines and the types of
associated services and equipment that are needed to support telephony (voice-only)
applications; anticipating increased TCP/IP network traffic; and subsequently
determining the optimal network architecture needed for the system.
TCP/IP Network: A physical TCP/IP network is required for MSS. All MSS computers, Web servers and load balancers communicate using this network. Install at least one network adapter in each computer running MSS. The use of a firewall between MSS computers is not supported.
To determine network planning requirements:
Load Balancers – This section applies to Enterprise Edition only. Load balancing is
required whenever two or more computers are used for running Speech Engine
Services (SES), Telephony Application Services (TAS), or Web server software in a
server farm or cluster configuration. Either hardware or software load balancing can
be used.
For a TAS server farm, a Private Branch Exchange (PBX) unit is needed to provide load balancing and call routing functionality.
Telephony Boards – Each computer that runs Telephony Application Services (TAS)
for supporting telephony (voice-only) applications requires telephony interface
manager software and possibly a hardware telephony board that accepts telephone
line connections.
Data Sets: Testing and evaluation are an important part of network traffic analysis. In order to evaluate the effectiveness of research works against a similar standard, it is recommended to use standard data sets. Several standard data sets have been used throughout recent years. We list a few important data sets that are being used by researchers for network traffic analysis.
DARPA data set: The KDD Cup data has been the most widely used for evaluating network traffic analysis with respect to intrusion detection. This data set was presented by Stolfo et al.
NSL-KDD data set: The NSL-KDD is publicly available for researchers and is an improved version of the original KDD Cup data set.
CAIDA data sets: These data sets contain DoS attacks.
Waikato data set: It contains internet traffic traces (Waikato Internet Traffic Storage).
Supervised and unsupervised methods.
Global and local methods.
Top-down and bottom-up methods: Top-down (splitting) discretization methods begin with one large interval covering the whole range of values, then divide it into smaller intervals at each iteration.
Direct and incremental methods.
Feature selection methods: Feature selection (FS) is a preprocessing method applied before data mining techniques. Feature selection is used to improve the performance of data mining techniques through the removal of redundant or irrelevant attributes.
We have identified some techniques, including principal component analysis, information entropy, and rough set theory; feature selection is used frequently for preprocessing network traffic data.
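As one simple filter-style illustration (not a specific method surveyed here), features can be ranked by the absolute Pearson correlation of each feature with the class label, and low-ranked features dropped; the feature names and values below are invented:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# Hypothetical traffic records and binary label (1 = attack)
features = {
    "duration":       [1, 2, 1, 8, 9, 7],
    "src_bytes":      [200, 180, 220, 9000, 8800, 9100],
    "wrong_fragment": [0, 1, 0, 1, 0, 1],
}
label = [0, 0, 0, 1, 1, 1]

ranked = sorted(features, key=lambda f: -abs(pearson(features[f], label)))
print(ranked)  # features ordered by relevance to the label
```

Filter approaches like this score each attribute independently of any classifier; wrapper and embedded methods, by contrast, use the classifier itself to judge subsets.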
Data mining: Data mining plays an important role in analyzing network traffic.
Clustering technique: Clustering is the process of partitioning data into groups
according to certain characteristics of data
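A minimal k-means sketch illustrates the clustering idea; the 2-D feature points are hypothetical (e.g. packets per second and mean packet size), and real traffic clustering would use many more features:

```python
import random

def kmeans(points, k, iters=20, seed=1):
    """Plain k-means: partition points into k groups by nearest centroid."""
    random.seed(seed)
    centroids = random.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # recompute centroids; keep the old one if a cluster went empty
        centroids = [tuple(sum(vals) / len(c) for vals in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical 2-D traffic features: one dense low-volume group, one high-volume group
points = [(1, 1), (1.5, 2), (1, 1.5), (8, 8), (8.5, 9), (9, 8)]
centroids, clusters = kmeans(points, k=2)
print([len(c) for c in clusters])  # two groups of 3
```

In anomaly detection, points far from every centroid (or tiny clusters) are then flagged as suspicious traffic.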
Hybrid models: Hybrid models are a combination of two or more approaches for the analysis of network traffic. Hybrid models have achieved good results in the analysis of network traffic.
Time-series graph mining: used for detecting anomalous packets in network traffic.
Evaluation metrics:
Many different metrics are used to evaluate data mining techniques. The detection rate, false positive rate, accuracy, and time cost are employed for measuring the performance of a classifier on different data sets. A number of metrics exist to express predictive accuracy; they are computed from the confusion matrix. Each metric is defined below.
a) True negatives (TN)
Total number of normal packets correctly classified.
b) True positives (TP)
Total number of malicious packets correctly classified.
c) False negatives (FN)
Total number of malicious packets incorrectly classified as normal packets.
d) False positives (FP)
Total number of normal packets incorrectly classified as malicious packets.
e) Detection Rate (DR)
It is the ratio of the total number of attacks detected (TP) to the total number of actual attacks (TP plus FN).
f) Precision Rate (PR)
It is the ratio of the total number of TP to the total number of TP plus the total number of FP.
g) Recall Rate (RR)
It is the ratio of the total number of TP to the total number of TP plus the total number of FN.
h) Overall Rate (OR)
It is the ratio of the total number of TP plus the total number of TN to the total number of TP plus FP plus FN plus TN.
i) Sensitivity
It is the ratio of the total number of TP to the total number of TP plus FN (equivalent to the recall rate).
j) Specificity
It is the ratio of the total number of TN to the total number of TN plus FP.
k) Accuracy
It is the ratio of the total number of TP plus the total number of TN to the total number of TP plus TN plus FP plus FN (equivalent to the overall rate).
l) Percentage of Successful Prediction (PSP)
It is the ratio of the total number of successfully classified instances to the total number of actual instances.
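These quantities can be computed directly from the four confusion-matrix counts; a minimal sketch using the standard formulas, with hypothetical counts:

```python
def metrics(tp, tn, fp, fn):
    """Standard confusion-matrix metrics for a binary packet classifier."""
    return {
        "detection_rate": tp / (tp + fn),          # attacks caught / all attacks
        "precision":      tp / (tp + fp),          # alerts that were real attacks
        "specificity":    tn / (tn + fp),          # normal packets kept normal
        "accuracy":       (tp + tn) / (tp + tn + fp + fn),
    }

m = metrics(tp=90, tn=880, fp=20, fn=10)  # hypothetical classifier counts
print(m["detection_rate"], m["accuracy"])  # 0.9 0.97
```

Note that a high accuracy alone can be misleading on traffic data, where normal packets vastly outnumber attacks; detection rate and precision expose that imbalance.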
Traffic Flows:
The nature of internet traffic can be better understood by knowing the concept of a flow. A flow is a sequence of packets (or a single packet) belonging to a certain network session between two hosts, delimited by the settings of the flow generation or analysis tool. The definition of a flow may also be coined as: a series of packets that share the same source IP, destination IP, source port, destination port, and protocol.
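Grouping packets into flows by this 5-tuple can be sketched as follows; the packet records are invented examples of what a capture tool might emit:

```python
from collections import defaultdict

# Hypothetical captured packets: (src_ip, dst_ip, src_port, dst_port, proto, bytes)
packets = [
    ("10.0.0.1", "10.0.0.9", 51000, 80, "TCP", 1500),
    ("10.0.0.1", "10.0.0.9", 51000, 80, "TCP", 400),
    ("10.0.0.2", "10.0.0.9", 51001, 53, "UDP", 80),
]

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
for src, dst, sport, dport, proto, size in packets:
    key = (src, dst, sport, dport, proto)   # the 5-tuple that defines a flow
    flows[key]["packets"] += 1
    flows[key]["bytes"] += size

print(len(flows))  # 2 distinct flows
```

Real tools additionally delimit flows in time (an idle timeout ends a flow), which is the "setting of the flow generation tool" mentioned above.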
E-R Diagram:
[Packet-flow diagram: an application generates traffic and sends packets to a socket, then down through the transport and network layers. When a packet arrives at a device, the device checks whether the packet is for this host: if yes, it is sent up to the transport layer; if no, the device looks up a route to the destination and either forwards the packet or drops it.]
Experiment Results:
App-centric Monitoring and Shaping App Traffic:
Recognize and classify non-standard applications that hog your network bandwidth using NetFlow Analyzer.
Reconfigure policies with traffic shaping techniques via ACL or class-based policy to gain control over bandwidth-hungry applications.
NetFlow Analyzer leverages Cisco NBAR to give you deep visibility into layer 7 traffic and recognize applications that use dynamic port numbers or hide behind well-known ports.
Capacity Planning and Billing:
Make informed decisions on your bandwidth using capacity planning reports.
Measure your bandwidth growth over a period of time with long-term reporting.
Get accurate trends over extended historical periods.
Generate on-demand billing for accounting and departmental chargebacks.
Monitor Voice, Video and Data Effectively:
Analyze IP service levels for network-based applications and services using the NetFlow Analyzer IP SLA monitor.
Ensure a high level of data and voice communication quality using Cisco IP SLA technology.
Keep a tab on key performance metrics of voice and data traffic.
Some common things that we need:
A computer mouse
A touch screen/normal screen
A program on your Mac or Windows that includes a translation, icons of disk drives, and folders
Pull-down menus
Principles of Human-Computer Interface Design:
Recognize Diversity - In order to recognize diversity, the designer must take into account the types of users frequenting the system, ranging from novice users, to knowledgeable but intermittent users, to expert frequent users. Each type of user expects the screen layout to accommodate their desires: novices needing extensive help, experts wanting to get where they want to go as quickly as possible. Accommodating both styles on the same page can be quite challenging. You can address the differences in users by including both menu or icon choices as well as commands (i.e. Command or Control P for Print as well as an icon or menu entry), or by providing an option for both full descriptive menus and single-letter commands.
Eight Golden Rules of Interface Design:
1. Strive for consistency
consistent sequences of actions should be required in similar situations
identical terminology should be used in prompts, menus, and help screens
consistent color, layout, capitalization, fonts, and so on should be employed
throughout
2. Enable frequent users to use shortcuts
to increase the pace of interaction use abbreviations, special keys, hidden
commands, and macros
3. Offer informative feedback
for every user action, the system should respond in some way (in web
design, this can be accomplished by DHTML - for example, a button will
make a clicking sound or change color when clicked to show the user
something has happened)
4. Design dialogs to yield closure
Sequences of actions should be organized into groups with a beginning,
middle, and end. The informative feedback at the completion of a group of
actions shows the user their activity has completed successfully
5. Offer error prevention and simple error handling
design the form so that users cannot make a serious error; for example,
prefer menu selection to form fill-in and do not allow alphabetic characters
in numeric entry fields
if users make an error, instructions should be written to detect the error
and offer simple, constructive, and specific instructions for recovery
segment long forms and send sections separately so that the user is not
penalized by having to fill the form in again - but make sure you inform
the user that multiple sections are coming up
6. Permit easy reversal of actions
7. Support internal locus of control
Experienced users want to be in charge. Surprising system actions, tedious
sequences of data entries, inability or difficulty in obtaining necessary
information, and inability to produce the action desired all build anxiety
and dissatisfaction
8. Reduce short-term memory load
A human can store only 7 (plus or minus 2) pieces of information in their
short-term memory. You can reduce short-term memory load by designing
screens where options are clearly visible, or by using pull-down menus and
icons
Prevent Errors - The third principle is to prevent errors whenever possible. Steps can be taken to design so that errors are less likely to occur, using methods such as organizing screens and menus functionally, designing screens to be distinctive, and making it difficult for users to commit irreversible actions. Expect users to make errors; try to anticipate where they will go wrong and design with those actions in mind.
Norman's Research
One researcher who has contributed extensively to the field of human-computer interface
design is Donald Norman. This psychologist has taken insights from the field of industrial
product design and applied them to the design of user interfaces. According to Norman,
design should:
Use both knowledge in the world and knowledge in the head. Knowledge in the world is overt - we don't have to overload our short-term memory by having to remember too many things (icons, buttons and menus provide us with knowledge in the world - we don't have to remember the command for printing, it's there in front of us). On the other hand, while knowledge in the head may be harder to retrieve and involves learning, it is more efficient for tasks which are used over and over again.
"Make it easy to determine what actions are possible at any moment (make use of constraints)."
For example:
well-designed things can only be put together certain ways (the trapezoidal
SCSI cable is an example of good design - I can only plug it in one way)
menus only display the actions which can be carried out at that time (other
options are dimmed).
"Make things visible, including the conceptual model of the system, the alternative actions
and the results of actions". You can also provide an overview map of your site so that your
user can design their own mental map of how things work.
"Make it easy to evaluate the current state of the system". You can do that by providing
feedback in the form of messages or flashing buttons.
"Follow natural mappings between intentions and the required actions; between actions and the resulting effect; and between the information that is visible and the interpretation of the system state".
For example:
It should be obvious what the function of a button or menu is - use
conventions already established for the web; don't try to design something
which changes what people are familiar with.
The underlined phrase on a web page is a well-known clue that a link is
present. From past experience, users understand that clicking on an
underlined phrase should take them somewhere else.
In other words, make sure that (1) the user can figure out what to do, and (2) the user can tell what is going on.
Summary
How can we relate the recommendations from human-computer interface design research
directly to web design?
1. Recognize Diversity
make your main navigation area fast loading for repeat users
provide a detailed explanation of your topics, symbols, and navigation
options for new users
provide a text index for quick access to all pages of the site
ensure your pages are readable in many formats, to accommodate users
who are blind or deaf, users with old versions of browsers, lynx users,
users on slow modems or those with graphics turned off
2. Strive for consistency in:
menus
help screens
color
layout
capitalization
fonts
sequences of actions
3. Offer informative feedback - rollover buttons, sounds when clicked
4. Build in error prevention in online forms
5. Give users control as much as possible
6. Reduce short-term memory load by providing menus, buttons or icons. If you use
icons, make sure you have a section which explains what they mean. Make things
obvious by using constraints - grayed-out items in menus for options not available on
that page
7. Make use of web conventions such as underlined links, color change in links for
visited pages, common terminology
8. Provide a conceptual model of your site using a site map or an index