The investigation of any event or incident often involves the evaluation of physical evidence, and frequently a comparison between an evidentiary sample of unknown origin and an appropriate known sample. In a Denial-of-Service (DoS) attack, items of evidentiary value range from anecdotes to useful information in firewall logs or complete packet captures. Because of the spoofed or reflective nature of DoS attacks, information leading directly to the identification of the perpetrator is rarely available. In many instances, this underscores the significance of the investigator's ability to accurately identify the tool the suspect used; in a DoS scenario, this would likely be a commercially available stresser or criminal bot infrastructure. In this paper, we propose the concept of a DoS exemplar and examine whether comparing evidentiary samples against an appropriate known sample of DoS attributes could add value to the investigative process. We also provide a simple tool to compare two DoS flows.
This document compares the k-means data-mining and outlier-detection approaches for network-based intrusion detection, analyzing four datasets of captured network traffic with both approaches. The k-means approach clusters traffic into normal and abnormal flows, while outlier detection assigns each flow an outlier score. The document finds that k-means was more accurate and precise, achieved a better classification rate, and required fewer computing resources than outlier detection. This comparison of the approaches can help network administrators choose the best intrusion-detection method.
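The clustering step described above can be sketched in a few lines. This is a toy, stdlib-only version assuming each flow is summarized by a single numeric feature (for example, packets per second); the feature choice and sample values are illustrative, not taken from the document.

```python
import random

def kmeans(points, k=2, iters=20, seed=0):
    """Toy 1-D k-means over per-flow features (e.g., packets/sec).

    Returns the final cluster centers and the points assigned to each.
    With k=2, the low-rate cluster would be treated as normal traffic
    and the high-rate cluster as abnormal."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each flow to its nearest center
            i = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[i].append(p)
        # recompute each center as its cluster mean (keep old center if empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters
```

A real deployment would use multi-dimensional flow features and a library implementation; this sketch only shows the normal/abnormal split the abstract refers to.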
DETECTION OF APPLICATION LAYER DDOS ATTACKS USING INFORMATION THEORY BASED ME... (cscpconf)
Distributed Denial-of-Service (DDoS) attacks are a critical threat to the Internet, and there has recently been an increasing number of DDoS attacks against online services and Web applications. These attacks target the application level. Detecting application-layer DDoS attacks is not an easy task: a more sophisticated mechanism is required to distinguish malicious flows from legitimate ones. This paper proposes a detection scheme based on information-theoretic metrics. The proposed scheme has two phases: behaviour monitoring and detection. In the first phase, Web users' browsing behaviour (HTTP request rate, page viewing time and sequence of requested objects) is captured from the system log during non-attack periods. Based on these observations, the entropy of requests per session and a trust score for each user are calculated. In the detection phase, suspicious requests are identified from variations in entropy, and a rate limiter is introduced to downgrade service to malicious users. In addition, a scheduler schedules sessions based on the user's trust score and the system workload.
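The per-session entropy the scheme relies on is standard Shannon entropy over the distribution of requested objects; a minimal sketch (the function name and threshold usage are illustrative, not the paper's):

```python
import math
from collections import Counter

def session_entropy(requests):
    """Shannon entropy (bits) of the requested-object distribution in one session.

    A bot hammering one URL yields near-zero entropy, while a bot cycling
    randomly through pages yields abnormally high entropy; both deviate
    from the entropy profile learned during non-attack periods."""
    counts = Counter(requests)
    total = len(requests)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())
```

Detection would then compare each live session's entropy against the range observed in the behaviour-monitoring phase.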
DETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAIN USING FREQUENCY ANALYSIS (ijcsit)
Malicious use and exploitation of Dynamic Domain Name Services (DDNS) capabilities pose a serious threat to the information security of organisations and businesses. In recent times, many malware writers have relied on DDNS to maintain their Command and Control (C&C) network infrastructure and ensure a persistent presence on a compromised host. Amongst the various DDNS techniques, the Domain Generation Algorithm (DGA) is often perceived as the most elusive and the most difficult to detect using traditional methods. This paper presents an approach for detecting DGAs using frequency analysis of the character distribution and weighted scores of domain names. The approach's feasibility is demonstrated on a range of legitimate domains and a number of malicious algorithmically-generated domain names. When a weighted score of < 45 is applied to the Alexa one-million list of domain names, only 15% of the domain names were treated as non-human generated.
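The character-frequency idea can be illustrated as follows. The weight table and scoring rule here are assumptions for illustration (approximate English letter frequencies); the paper's actual weighting scheme and its < 45 threshold are not reproduced.

```python
# Approximate relative frequencies of letters in English text (percent).
# These values are a common reference table, not the paper's weights.
ENGLISH_FREQ = {
    'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
    's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
    'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
    'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
    'q': 0.1, 'z': 0.07,
}

def freq_score(label):
    """Mean English-frequency weight per character of a domain label.

    Word-like labels score high; random-looking DGA labels, heavy in
    rare letters and digits, score low and can be flagged below a
    tuned threshold."""
    if not label:
        return 0.0
    return sum(ENGLISH_FREQ.get(c, 0.0) for c in label.lower()) / len(label)
```

For example, a dictionary-word label such as "google" scores well above a random string such as "xq7zkv9w".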
This document presents a multi-classification approach for detecting network attacks using a layered model. The proposed system consists of two stages - the first stage classifies network records as normal or an attack, while the second stage further classifies any detected attacks into four categories (DoS, Probe, R2L, U2R) using separate layers. Experimental results on the NSL-KDD dataset show the layered approach using the JRip classifier achieved very high classification accuracy of over 99% for each attack category, outperforming existing approaches. The multi-layered model is effective for improving detection of minority attack classes without reducing performance on majority classes.
A New Way of Identifying DOS Attack Using Multivariate Correlation Analysis (ijceronline)
This document summarizes a research paper that proposes a new method for identifying denial of service (DoS) attacks using multivariate correlation analysis (MCA). The method involves three main steps: 1) generating basic features from network traffic, 2) using MCA to extract correlations between features and generate triangle area maps, and 3) using an anomaly-based detection mechanism to distinguish attacks from normal traffic based on differences from pre-generated normal profiles. The researchers evaluate their method on the KDD Cup 99 dataset and achieve moderate detection performance. However, they identify issues related to differences in feature scales that reduce detection of some attacks. They propose using statistical normalization to address this.
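The triangle area map (TAM) in step 2 can be sketched under the common formulation of this technique: for each feature pair (i, j), the record is projected onto the (i, j) plane, and the triangle formed by the origin, (x_i, 0) and (0, x_j) has area |x_i| * |x_j| / 2. The distance function below, used to compare a record's TAM against a normal profile, is an illustrative choice.

```python
def triangle_area_map(x):
    """Lower-triangular triangle-area map for one traffic record x.

    Entry (i, j) is the area of the triangle with vertices
    (0, 0), (x_i, 0) and (0, x_j), i.e. |x_i| * |x_j| / 2."""
    n = len(x)
    return [[abs(x[i]) * abs(x[j]) / 2.0 for j in range(i)]
            for i in range(1, n)]

def tam_distance(t1, t2):
    """Euclidean distance between two maps; anomaly detection compares a
    record's map against the mean map of a pre-generated normal profile."""
    return sum((a - b) ** 2
               for r1, r2 in zip(t1, t2)
               for a, b in zip(r1, r2)) ** 0.5
```

A record whose map lies far from the normal-profile map (beyond a learned threshold) would be classified as an attack.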
PUBLIC INTEGRITY AUDITING FOR SHARED DYNAMIC DATA STORAGE UNDER ONE-TIME GENERA... (paperpublications3)
Abstract: Nowadays, verifying the results of remote computation plays a crucial role in addressing the issue of trust. The outsourced data collection comes from multiple data sources; to diagnose the originator of errors, each data source is allotted a unique secret key, which requires inner-product verification to be performed under the two parties' different keys. The proposed methods outperform the AISM technique in running time. In the multi-key setting, multiple data sources holding different secret keys can upload their data streams along with their respective verifiable homomorphic tags. Three join techniques are considered, depending on the availability of an Authenticated Data Structure (ADS): (i) Authenticated Indexed Sort Merge Join (AISM), which utilizes a single ADS on the join attribute; (ii) Authenticated Index Merge Join (AIM), which requires an ADS (on the join attribute) for both relations; and (iii) Authenticated Sort Merge Join (ASM), which does not rely on any ADS. The client is allowed to choose any portion of the data streams for queries, and the communication between client and server is independent of input size. The inner-product evaluation can be performed by any two sources, and the result can be verified using the corresponding tag.
Keywords: Outsourcing of computation, Data Stream, Multiple Keys, Homomorphic encryption.
Title: PUBLIC INTEGRITY AUDITING FOR SHARED DYNAMIC DATA STORAGE UNDER ONE-TIME GENERATED MULTIPLE KEYS
Author: C. NISHA MALAR, M. S. BONSHIA BINU
ISSN 2350-1049
International Journal of Recent Research in Interdisciplinary Sciences (IJRRIS)
Paper Publications
Image Based Relational Database Watermarking: A Survey (iosrjce)
This document summarizes and analyzes several existing image-based relational database watermarking techniques. It begins with background on database watermarking, including applications, classifications of techniques, desired characteristics of watermarked databases, and types of attacks. It then reviews four specific algorithms that embed image watermarks into database attributes. The algorithms are analyzed for robustness against different attacks such as modification, deletion and addition of tuples. The document concludes that various image-based techniques are effective for copyright protection and survive attacks while preserving data integrity.
FLOODING ATTACKS DETECTION OF MOBILE AGENTS IN IP NETWORKS (csandit)
This document summarizes a research paper that proposes a new framework for detecting flooding attacks in mobile agent networks. The framework integrates divergence measures like Hellinger distance and Chi-square over a sketch data structure. The sketch data structure is used to derive probability distributions from traffic data in fixed memory. Divergence measures compare the current and prior probability distributions to detect deviations indicating attacks. The performance of detecting attacks while minimizing false alarms is evaluated using real network traces with injected flooding attacks. Experimental results show the proposed approach outperforms existing solutions.
Limiting Self-Propagating Malware Based on Connection Failure Behavior (csandit)
Self-propagating malware (e.g., an Internet worm) exploits security loopholes in software to infect servers and then uses them to scan the Internet for more vulnerable servers. While the mechanisms of worm infection and their propagation models are well understood, defense against worms remains an open problem. One branch of defense research investigates the behavioral differences between worm-infected hosts and normal hosts in order to set them apart. One particular observation is that a worm-infected host, which scans the Internet with randomly selected addresses, has a much higher connection-failure rate than a normal host. Rate-limit algorithms have been proposed to control the spread of worms by traffic shaping based on connection-failure rate. However, these rate-limit algorithms can work properly only if the failure rates of individual hosts can be measured efficiently and accurately. This paper points out a serious problem in the prior method and proposes a new solution based on a highly efficient double-bitmap data structure, which leaves only a small memory footprint on routers while providing good measurement of connection-failure rates, with accuracy tunable by system parameters.
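The rate-limiting idea the paper builds on can be sketched as follows. A plain per-host counter stands in for the paper's memory-efficient double-bitmap measurement structure, and the threshold values are illustrative.

```python
class FailureRateLimiter:
    """Toy per-host throttle based on connection-failure behavior.

    A scanning worm hits mostly unused addresses, so its failure
    fraction is far higher than a normal host's. Once a host's
    observed failure fraction exceeds `threshold`, new connections
    are disallowed (i.e., traffic-shaped)."""

    def __init__(self, threshold=0.5, min_samples=10):
        self.threshold = threshold
        self.min_samples = min_samples  # evidence needed before throttling
        self.attempts = {}
        self.failures = {}

    def record(self, host, failed):
        """Record one connection attempt and whether it failed."""
        self.attempts[host] = self.attempts.get(host, 0) + 1
        if failed:
            self.failures[host] = self.failures.get(host, 0) + 1

    def allowed(self, host):
        """Allow a new connection unless the host's failure rate is too high."""
        a = self.attempts.get(host, 0)
        if a < self.min_samples:
            return True
        return self.failures.get(host, 0) / a <= self.threshold
```

The paper's contribution is precisely the measurement side (estimating per-host failure rates in small router memory); the dictionary counters above ignore that constraint for clarity.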
The document describes a system for automatically classifying bug reports from cloud infrastructure software using natural language processing techniques. It preprocesses bug descriptions to extract keywords, encodes them as vectors, performs hierarchical clustering to group similar bugs, and assigns classifications to new unlabeled bugs based on the cluster labels. Preliminary results show the system can accurately predict categories and specific classes for bug reports, especially with a large dataset. Future work aims to improve performance and apply the system to analyze bugs in other cloud infrastructures.
COMPARISON BETWEEN DIVERGENCE MEASURES FOR ANOMALY DETECTION OF MOBILE AGENTS... (ijwmn)
This paper deals with the detection of SYN flooding attacks, the most common type of attack in a mobile-agent world. We propose a new framework for the detection of flooding attacks by integrating divergence measures over a sketch data structure, and compare three divergence measures (Hellinger distance, Chi-square and Power divergence) to analyze their detection accuracy. The performance of the proposed framework is investigated in terms of detection probability and false-alarm ratio, with a focus on tuning the parameter of the divergence measures to optimize performance. We conduct performance analysis over publicly available real IP traces from a mobile-agent network, integrated with flooding attacks. Our experimental results show that Power divergence outperforms Chi-square divergence and Hellinger distance in detecting network anomalies, in terms of both detection and false alarms.
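Two of the compared measures are easy to state directly. Given a prior and a current traffic distribution (for example, per-sketch-bucket SYN counts normalized to probabilities), a sketch of their definitions:

```python
import math

def hellinger(p, q):
    """Hellinger distance between discrete distributions p and q.

    Bounded in [0, 1]; 0 means identical distributions, values near 1
    mean the current traffic mix has shifted sharply from the prior."""
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))

def chi_square(p, q):
    """Chi-square divergence of p from the reference distribution q.

    Unbounded; buckets where current mass departs from the reference
    contribute quadratically."""
    return sum((a - b) ** 2 / b for a, b in zip(p, q) if b > 0)
```

An alarm would be raised when the divergence between the prior-window and current-window distributions exceeds a tuned threshold; the thresholding and the Power-divergence parameterization are left out here.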
INFRINGEMENT PRECLUSION SYSTEM VIA SADEC: STEALTHY ATTACK DETECTION AND COUNT... (ijp2p)
In this paper we provide implementation details of a simulated solution to stealthy packet-drop attacks. Stealthy packet drop is a suite of four attack types: colluding collision, packet misrouting, identity delegation and power control. These attacks maliciously prevent packets from reaching their destination and can easily break down multi-hop wireless ad-hoc networks. The most widely preferred method for detecting attacks in a wireless network is behaviour-based detection, in which a normal node overhears the communication in its neighbourhood. Here we implement the SADEC protocol, a proposed solution to stealthy packet-drop attacks. SADEC extends baseline local monitoring: each neighbour maintains additional information about the routing path and takes on additional checking responsibilities for all its neighbours. SADEC proves more efficient than baseline local monitoring at successfully mitigating all the stealthy attack types.
Preemptive modelling towards classifying vulnerability of DDoS attack in SDN ... (IJECEIAES)
Software-Defined Networking (SDN) has become an essential networking concept for escalating the networking capabilities highly demanded by the future Internet, which is immensely distributed in nature. As a novel concept in the networking field, it is still shrouded in security problems, and the Distributed Denial-of-Service (DDoS) attack is found to be one of the prominent problems in the SDN environment. After reviewing existing research solutions for resisting DDoS attacks in SDN, it is found that many open issues remain. These issues are identified and addressed in this paper in the form of a preemptive security model. Unlike existing approaches, this model can identify any malicious activity leading to a DDoS attack by correctly classifying the attack strategy using a machine-learning approach. The paper also discusses which machine-learning classifiers are most effective against DDoS attacks.
Current issues - International Journal of Network Security & Its Applications... (IJNSA Journal)
International Journal of Network Security & Its Applications (IJNSA) is a bimonthly open-access peer-reviewed journal that publishes articles contributing new results in all areas of computer network security and its applications. The journal focuses on all technical and practical aspects of security and its applications for wired and wireless networks. Its goal is to bring together researchers and practitioners from academia and industry to focus on understanding modern security threats and countermeasures, and to establish new collaborations in these areas.
A novel algorithm to protect and manage memory locations (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
Internet Worm Classification and Detection using Data Mining Techniques (iosrjce)
Outsourced KP-ABE with Chosen Ciphertext Security (csandit)
Key-Policy Attribute-Based Encryption (KP-ABE) has always been criticized for its inefficiency. Based on cloud computing technology, computation outsourcing is one effective solution to this problem. Several papers have proposed such schemes; however, the adversaries in their attack models were divided into two categories assumed not to communicate with each other, which is obviously unrealistic. In this paper, we first prove that severe security vulnerabilities exist in these schemes because of this assumption, and then propose a security-enhanced Chosen Ciphertext Attack (SE-CCA) model that eliminates the improper limitations. By utilizing Proxy Re-Encryption (PRE) and one-time signature technology, we also construct a concrete KP-ABE outsourcing scheme (O-KP-ABE) and prove its security under the SE-CCA model. Comparisons with existing schemes show that our construction has clear overall advantages in both security and efficiency.
Toward a statistical framework for source anonymity in sensor networks (JPINFOTECH JAYAPRAKASH)
This document proposes a new statistical framework for modeling, analyzing, and evaluating anonymity in sensor networks. The framework introduces the notion of "interval indistinguishability" to quantitatively measure anonymity. It maps the source anonymity problem to the statistical problem of binary hypothesis testing with nuisance parameters. Existing solutions are analyzed using this framework, showing how anonymity can be improved by finding an appropriate data transformation to remove nuisance information. Mapping the problem to binary hypothesis testing opens opportunities to apply coding theory to anonymous sensor networks.
The document presents a compartmental model for characterizing the spread of malware in peer-to-peer (P2P) networks like Gnutella. The model partitions peers into compartments based on their state - those wishing to download (S), currently downloading (E), having downloaded (I), and no longer interested (R). Differential equations track changes between compartments over time. Simulation results show the model effectively captures the impact of parameters like peer online/offline switching rates and quarantine strategies on malware intensity. The model improves on prior work by incorporating user behavior dynamics and limiting malware spread to a node's time-to-live range.
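The compartmental dynamics described above can be sketched with a simple Euler integration of the S, E, I, R flows. The rate constants and population below are illustrative, not the paper's fitted parameters, and the paper's online/offline switching and quarantine terms are omitted.

```python
def simulate(s0=990.0, e0=10.0, i0=0.0, r0=0.0,
             beta=0.3, sigma=0.2, gamma=0.1, dt=0.1, steps=2000):
    """Euler integration of a toy S -> E -> I -> R compartment model.

    S: peers wishing to download the (infected) file
    E: peers currently downloading
    I: peers that have downloaded and now share the malware
    R: peers no longer interested"""
    n = s0 + e0 + i0 + r0
    s, e, i, r = s0, e0, i0, r0
    for _ in range(steps):
        new_e = beta * s * i / n   # S -> E: susceptible peers start downloading
        new_i = sigma * e          # E -> I: downloads complete
        new_r = gamma * i          # I -> R: peers lose interest / are cleaned
        s -= new_e * dt
        e += (new_e - new_i) * dt
        i += (new_i - new_r) * dt
        r += new_r * dt
    return s, e, i, r
```

Because every outflow from one compartment is an inflow to another, the total population is conserved, which is a useful sanity check on any such simulation.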
This document summarizes a research paper that proposes a new correlation approach based on watermarking to identify the source of attacks that occur through intermediary nodes. The approach embeds watermarks in encrypted network flows by slightly adjusting the timing of selected packets. It is designed to be robust against timing perturbations intentionally introduced by attackers. Experimental results show that the watermark-based approach achieves close to 100% true positives in identifying the source of attacks.
A novel signature based traffic classification engine to reduce false alarms ... (IJCNCJournal)
Pattern matching plays a significant role in ascertaining network attacks, and the foremost prerequisite for a trusted intrusion detection system (IDS) is accurate pattern matching. During the pattern-matching process, packets are scanned against a pre-defined rule set and then marked as alerts or benign by the detection system. Sometimes the detection system generates false alarms, i.e., good traffic is identified as bad traffic. The rate of false positives varies with the performance of the detection engine used to scan incoming packets. Intrusion detection systems deploy algorithmic procedures to reduce false positives, yet still produce a good number of false alarms. Accordingly, we have been working on optimizing these algorithms and procedures so that false positives can be reduced to a great extent. As part of this effort, we propose a signature-based traffic classification technique that categorizes incoming packets based on traffic characteristics and behaviour, which eventually reduces the rate of false alarms.
This document proposes using a linear prediction model to detect a wide range of flooding distributed denial of service (DDoS) attacks. It models the entropy of incoming network traffic over time using a linear prediction technique commonly applied to financial time series. The model is tested on simulated network data containing normal traffic and introduced attacks of varying rates. Results show the linear prediction model can successfully detect attacks with low rates and delays by identifying anomalies in the modeled entropy time series compared to normal traffic patterns. This approach aims to provide a fast and effective method for detecting different types of flooding DDoS attacks.
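The idea of predicting the entropy time series and flagging prediction errors can be sketched with an order-1 linear predictor. The model order, training length and threshold here are illustrative; the paper's actual linear-prediction setup is not reproduced.

```python
def ar1_fit(series):
    """Least-squares coefficient a for the order-1 predictor x[t] ~ a * x[t-1]."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

def detect(series, train_len, threshold):
    """Return indices where the entropy series departs from its prediction.

    The predictor is fit on an attack-free training prefix; a flooding
    attack typically collapses (or spikes) traffic entropy, producing a
    large one-step prediction error."""
    a = ar1_fit(series[:train_len])
    return [t for t in range(max(train_len, 1), len(series))
            if abs(series[t] - a * series[t - 1]) > threshold]
```

On a synthetic series where entropy holds steady and then drops sharply, only the step containing the change is flagged; afterwards the predictor tracks the new level, so a persistent alarm would need an additional mechanism.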
Security Event Analysis Through Correlation (Anton Chuvakin)
This paper covers several of the security event correlation methods, utilized by Security Information Management (SIM) solutions for better attack and misuse detection. We describe these correlation methods, show their corresponding advantages and disadvantages and explain how they work together for maximum security.
A RESOURCE-EFFICIENT COLLABORATIVE SYSTEM FOR DDOS ATTACK DETECTION AND VICTI... (IJNSA Journal)
Distributed Denial of Service (DDoS) attacks seriously threaten network security. Most countermeasures perceive attacks only after the damage has been done. This paper therefore focuses on the detection of DDoS attacks and, more importantly, on identifying victims as early as possible, so as to promote timely attack reaction. We present a resource-efficient collaborative DDoS detection system called F-LOW. Profiting from a bitwise hash function, split sketch, and lightweight IP reconstruction, F-LOW overcomes the shortcomings of principal component analysis (PCA) and regular sketches. With a certain number of distributed detection nodes, F-LOW can detect DDoS attacks and identify victim IPs before the attack traffic arrives at the victim network. Outperforming previous work, our system satisfies all four "LOW" properties of a promising DDoS countermeasure: low profile, low dimensionality, low overhead and low transmission. Through simulation and theoretical analysis, we demonstrate these properties and the remarkable efficacy of our approach in DDoS mitigation.
Machine Learning Techniques Used for the Detection and Analysis of Modern Typ... (IRJET Journal)
This document summarizes research on using machine learning techniques to detect distributed denial-of-service (DDoS) attacks. It discusses how DDoS attacks have become more sophisticated over time. The document examines using naïve Bayes, multilayer perceptron, and other machine learning classifiers on a new dataset containing modern DDoS attack types like HTTP floods and SQL injection DDoS. According to the experimental results, multilayer perceptron achieved the highest accuracy for detecting these modern DDoS attacks.
This document describes CrowdSource, a system that uses natural language processing to infer high-level malware capabilities based on low-level strings extracted from malware binaries. It trains a machine learning model on millions of technical documents from StackExchange to map low-level strings to high-level capabilities. The system was evaluated on 1,457 malware samples and shown to detect 14 capabilities with an average F1-score of 0.86 and can analyze tens of thousands of samples per day.
FEATURE EXTRACTION AND FEATURE SELECTION: REDUCING DATA COMPLEXITY WITH APACH... (IJNSA Journal)
Feature extraction and feature selection are the first pre-processing tasks on input logs when detecting cyber-security threats and attacks with machine learning. For heterogeneous data derived from different sources, these tasks are time-consuming and difficult to manage efficiently. In this paper, we present an approach for handling feature extraction and feature selection for security analytics of heterogeneous data derived from different network sensors. The approach is implemented in Apache Spark, using its Python API, named pyspark.
APPLICATION-LAYER DDOS DETECTION BASED ON A ONE-CLASS SUPPORT VECTOR MACHINE (IJNSA Journal)
Application-layer Distributed Denial-of-Service (DDoS) attacks take advantage of the complexity and diversity of network protocols and services, which makes them more difficult to prevent than other kinds of DDoS attacks. This paper introduces a novel detection mechanism for application-layer DDoS attacks based on a One-Class Support Vector Machine (OC-SVM). The support vector machine (SVM) is a relatively new machine learning technique grounded in statistics. The OC-SVM is a special variant of the SVM; because only normal data is required for training, it is well suited to detecting application-layer DDoS attacks. In this detection strategy, we first extract seven features from normal users' sessions, then build models of normal users' browsing behaviour using the OC-SVM, and finally use these models to detect application-layer DDoS attacks. Numerical results from simulation experiments demonstrate the efficacy of our detection method.
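The train-on-normal-only idea behind the OC-SVM strategy above can be sketched with a deliberately simplified stand-in: instead of an SVM, a hypersphere fitted around the centroid of normal session feature vectors. The two features, the slack factor, and the toy sessions below are all illustrative assumptions, not the paper's.

```python
import math

def train_normal_model(sessions):
    """Fit a centroid plus radius on normal-only session feature vectors.

    A simplified stand-in for the paper's OC-SVM: like the OC-SVM it is
    trained on normal data only, but the decision boundary here is just a
    hypersphere around the centroid of the training vectors.
    """
    dims = len(sessions[0])
    centroid = [sum(s[d] for s in sessions) / len(sessions) for d in range(dims)]
    radius = max(math.dist(s, centroid) for s in sessions)
    return centroid, radius

def is_attack(session, model, slack=1.1):
    """Flag a session as attack-like if it falls outside the learned boundary."""
    centroid, radius = model
    return math.dist(session, centroid) > slack * radius

# Toy 2-feature sessions: (requests per second, mean inter-request gap)
normal = [(2.0, 0.9), (3.0, 1.1), (2.5, 1.0), (3.5, 0.8)]
model = train_normal_model(normal)
print(is_attack((2.8, 1.0), model))     # browsing-like session -> False
print(is_attack((400.0, 0.01), model))  # flood-like session -> True
```

Only normal sessions were needed to fit the model, which is the property that makes the one-class formulation attractive for this problem.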
Limiting Self-Propagating Malware Based on Connection Failure Behavior csandit
Self-propagating malware (e.g., an Internet worm) exploits security loopholes in software to infect servers and then uses them to scan the Internet for more vulnerable servers. While the mechanisms of worm infection and their propagation models are well understood, defense against worms remains an open problem. One branch of defense research investigates behavioral differences between worm-infected hosts and normal hosts in order to set them apart. One particular observation is that a worm-infected host, which scans the Internet with randomly selected addresses, has a much higher connection-failure rate than a normal host. Rate-limit algorithms have been proposed to control the spread of worms by shaping traffic based on connection-failure rate. However, these algorithms can work properly only if the failure rates of individual hosts can be measured efficiently and accurately. This paper points out a serious problem in the prior method and proposes a new solution based on a highly efficient double-bitmap data structure, which leaves only a small memory footprint on routers while providing a good measurement of connection-failure rates whose accuracy can be tuned by system parameters.
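The bitmap idea is easy to illustrate. The sketch below is a single-host simplification, not the paper's exact double-bitmap structure: hash each connection attempt into one bitmap and each failure into another, then recover approximate distinct counts by linear counting. The bitmap size M and the toy flow keys are assumptions.

```python
import hashlib
import math

M = 256  # bits per bitmap; this size tunes the accuracy/memory trade-off

def _bit(key):
    """Deterministically map a flow key to a bit position."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % M

def record(bitmap, flow_key):
    bitmap[_bit(flow_key)] = 1

def estimate(bitmap):
    """Linear-counting estimate of distinct keys: n ~ -M * ln(z),
    where z is the fraction of zero bits."""
    zeros = M - sum(bitmap)
    if zeros == 0:
        return float("inf")
    return -M * math.log(zeros / M)

attempts, failures = [0] * M, [0] * M
# A scanning host: 120 connection attempts, 100 of them failing.
for i in range(120):
    record(attempts, f"10.0.0.9->{i}")
    if i < 100:
        record(failures, f"10.0.0.9->{i}")

rate = estimate(failures) / estimate(attempts)
print(round(rate, 2))  # close to the true ratio 100/120
```

Each measurement period costs only two small bitmaps per monitored host, which is the memory property the paper's router-resident structure exploits.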
The document describes a system for automatically classifying bug reports from cloud infrastructure software using natural language processing techniques. It preprocesses bug descriptions to extract keywords, encodes them as vectors, performs hierarchical clustering to group similar bugs, and assigns classifications to new unlabeled bugs based on the cluster labels. Preliminary results show the system can accurately predict categories and specific classes for bug reports, especially with a large dataset. Future work aims to improve performance and apply the system to analyze bugs in other cloud infrastructures.
COMPARISON BETWEEN DIVERGENCE MEASURES FOR ANOMALY DETECTION OF MOBILE AGENTS...ijwmn
This paper deals with the detection of SYN flooding attacks, the most common type of attack in a mobile agent world. We propose a new framework for the detection of flooding attacks that integrates divergence measures over a Sketch data structure. We compare three divergence measures (Hellinger distance, Chi-square divergence, and Power divergence) to analyze their detection accuracy. The performance of the proposed framework is investigated in terms of detection probability and false alarm ratio, and we focus on tuning the parameters of the divergence measures to optimize performance. We conduct the performance analysis over publicly available real IP traces from a mobile agent network, into which flooding attacks have been injected. Our experimental results show that Power divergence outperforms Chi-square divergence and Hellinger distance in detecting network anomalies, in terms of both detection and false alarm rates.
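Two of the three measures compared above are short closed-form formulas. Here is a minimal sketch over toy per-destination SYN distributions; the distributions are invented, and Power divergence (a parameterized family with Chi-square as a special case) is omitted for brevity.

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions."""
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))

def chi_square(p, q):
    """Pearson chi-square divergence of p from q."""
    return sum((a - b) ** 2 / b for a, b in zip(p, q) if b > 0)

# Per-destination share of SYN packets: a normal window vs. a flooded window.
normal = [0.25, 0.25, 0.25, 0.25]
flood  = [0.85, 0.05, 0.05, 0.05]
print(round(hellinger(normal, flood), 3))   # -> 0.451
print(round(chi_square(normal, flood), 3))  # -> 2.824
```

A flood concentrates traffic on one destination, so either divergence from the baseline distribution jumps well above its normal-traffic range, which is what the framework thresholds on.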
INFRINGEMENT PRECLUSION SYSTEM VIA SADEC: STEALTHY ATTACK DETECTION AND COUNT...ijp2p
In this paper we provide implementation details of a simulated solution to the stealthy packet drop attack. Stealthy packet drop is a suite of four attack types: colluding collision, packet misrouting, identity delegation, and power control. Through malicious behaviour, these attacks prevent packets from reaching their destination and can easily break down multi-hop wireless ad-hoc networks. The most widely preferred method for detecting attacks in a wireless network is behaviour-based detection, in which a normal node overhears communication from its neighbourhood. Here we implement the SADEC protocol, a proposed solution to stealthy packet drop attacks. SADEC extends baseline local monitoring: each neighbour maintains additional information about the routing path, and some checking responsibility is added to all neighbours. SADEC proves more efficient than baseline local monitoring at successfully mitigating all the stealthy attack types.
Preemptive modelling towards classifying vulnerability of DDoS attack in SDN ...IJECEIAES
Software-Defined Networking (SDN) has become an essential networking concept for escalating the networking capabilities demanded by the future internet system, which is immensely distributed in nature. As a novel concept in the field of networking, it is still shrouded in security problems, and the Distributed Denial-of-Service (DDoS) attack is one of the prominent problems in the SDN environment. A review of existing research solutions for resisting DDoS attacks in SDN shows that many open-ended issues remain. These issues are identified and addressed in this paper in the form of a preemptive security model. Unlike existing approaches, this model is capable of identifying any malicious activity that leads to a DDoS attack by correctly classifying the attack strategy using a machine learning approach. The paper also discusses which machine learning classifiers are most effective against DDoS attacks.
Current issues - International Journal of Network Security & Its Applications...IJNSA Journal
The International Journal of Network Security & Its Applications (IJNSA) is a bimonthly open-access, peer-reviewed journal that publishes articles contributing new results in all areas of computer network security and its applications. The journal focuses on all technical and practical aspects of security and its applications for wired and wireless networks. Its goal is to bring together researchers and practitioners from academia and industry to focus on understanding modern security threats and countermeasures, and on establishing new collaborations in these areas.
A novel algorithm to protect and manage memory locationsiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Internet Worm Classification and Detection using Data Mining Techniquesiosrjce
Outsourced kp abe with chosen ciphertext securitycsandit
Key-Policy Attribute-Based Encryption (KP-ABE) has long been criticized for its inefficiency. Building on cloud computing technology, computation outsourcing is one effective solution to this problem. Several papers have proposed outsourcing schemes; however, the adversaries in their attack models are divided into two categories that are assumed not to communicate with each other, which is obviously unrealistic. In this paper, we first prove that this assumption leaves severe security vulnerabilities in these schemes, and then propose a security-enhanced Chosen Ciphertext Attack (SE-CCA) model that eliminates the improper limitations. By utilizing Proxy Re-Encryption (PRE) and one-time signature technology, we also construct a concrete KP-ABE outsourcing scheme (O-KP-ABE) and prove its security under the SE-CCA model. Comparisons with existing schemes show that our construction has obvious overall advantages in security and efficiency.
Toward a statistical framework for source anonymity in sensor networksJPINFOTECH JAYAPRAKASH
This document proposes a new statistical framework for modeling, analyzing, and evaluating anonymity in sensor networks. The framework introduces the notion of "interval indistinguishability" to quantitatively measure anonymity. It maps the source anonymity problem to the statistical problem of binary hypothesis testing with nuisance parameters. Existing solutions are analyzed using this framework, showing how anonymity can be improved by finding an appropriate data transformation to remove nuisance information. Mapping the problem to binary hypothesis testing opens opportunities to apply coding theory to anonymous sensor networks.
The document presents a compartmental model for characterizing the spread of malware in peer-to-peer (P2P) networks like Gnutella. The model partitions peers into compartments based on their state - those wishing to download (S), currently downloading (E), having downloaded (I), and no longer interested (R). Differential equations track changes between compartments over time. Simulation results show the model effectively captures the impact of parameters like peer online/offline switching rates and quarantine strategies on malware intensity. The model improves on prior work by incorporating user behavior dynamics and limiting malware spread to a node's time-to-live range.
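The compartmental flow described above can be illustrated with a simple Euler integration of a generic four-compartment model. This is a sketch in the spirit of the summary, not the paper's exact differential equations: the rates beta, sigma, and gamma and the initial populations are invented, and effects like online/offline switching and TTL limits are omitted.

```python
def simulate(S0, E0, I0, R0, beta=0.3, sigma=0.2, gamma=0.1, dt=0.1, steps=1000):
    """Euler integration of a generic S/E/I/R flow for P2P malware spread:
    S: peers wishing to download, E: currently downloading,
    I: having downloaded (infected), R: no longer interested.
    All rate parameters here are illustrative, not fitted values."""
    S, E, I, R = float(S0), float(E0), float(I0), float(R0)
    N = S + E + I + R
    for _ in range(steps):
        new_E = beta * S * I / N   # susceptible peers start downloading infected files
        new_I = sigma * E          # downloads complete, peers become infected
        new_R = gamma * I          # infected peers lose interest
        S -= new_E * dt
        E += (new_E - new_I) * dt
        I += (new_I - new_R) * dt
        R += new_R * dt
    return S, E, I, R

S, E, I, R = simulate(990, 0, 10, 0)
print(round(S), round(E), round(I), round(R))  # infection spreads; S shrinks
```

Because every outflow of one compartment is an inflow of another, the total population is conserved at each step, which is a quick sanity check on any implementation of such a model.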
This document summarizes a research paper that proposes a new correlation approach based on watermarking to identify the source of attacks that occur through intermediary nodes. The approach embeds watermarks in encrypted network flows by slightly adjusting the timing of selected packets. It is designed to be robust against timing perturbations intentionally introduced by attackers. Experimental results show that the watermark-based approach achieves close to 100% true positives in identifying the source of attacks.
A novel signature based traffic classification engine to reduce false alarms ...IJCNCJournal
Pattern matching plays a significant role in ascertaining network attacks, and the foremost prerequisite for a trusted intrusion detection system (IDS) is accurate pattern matching. During the pattern-matching process, packets are scanned against pre-defined rule sets; after being scanned, packets are marked as alerts or benign by the detection system. Sometimes the detection system generates false alarms, i.e., good traffic being identified as bad traffic. The rate of false positives varies with the performance of the detection engines used to scan incoming packets. Intrusion detection systems deploy algorithmic procedures to reduce false positives, yet still produce a good number of false alarms. Out of this necessity, we have been working on optimizing these algorithms and procedures so that false positives can be greatly reduced. As an effort in this direction, we propose a signature-based traffic classification technique that categorizes incoming packets based on traffic characteristics and behaviour, which eventually reduces the rate of false alarms.
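The core idea, scan packets against rules but only the rules relevant to the packet's traffic class, can be sketched as follows. The rule set, traffic classes, and payloads below are all invented for illustration; they are not the paper's signatures.

```python
# Hypothetical rule set: (signature substring, alert name).
RULES = [
    (b"/etc/passwd", "path-traversal"),
    (b"' OR 1=1", "sql-injection"),
]

# Hypothetical mapping of each rule to the traffic class it is meaningful
# for; skipping irrelevant classes is what cuts false alarms here.
RULE_CLASS = {"path-traversal": "http", "sql-injection": "http"}

def classify(payload, traffic_class):
    """Mark a packet with an alert name or 'benign', consulting only the
    rules that apply to the packet's traffic class."""
    for sig, name in RULES:
        if RULE_CLASS[name] == traffic_class and sig in payload:
            return name
    return "benign"

print(classify(b"GET /../../etc/passwd", "http"))   # -> "path-traversal"
print(classify(b"binary blob /etc/passwd", "smtp"))  # -> "benign": rule not in scope
```

The second call shows the false-alarm reduction: the same byte pattern in an out-of-scope traffic class no longer raises an alert.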
This document proposes using a linear prediction model to detect a wide range of flooding distributed denial of service (DDoS) attacks. It models the entropy of incoming network traffic over time using a linear prediction technique commonly applied to financial time series. The model is tested on simulated network data containing normal traffic and introduced attacks of varying rates. Results show the linear prediction model can successfully detect attacks with low rates and delays by identifying anomalies in the modeled entropy time series compared to normal traffic patterns. This approach aims to provide a fast and effective method for detecting different types of flooding DDoS attacks.
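The pipeline this summary describes, an entropy value per traffic window plus a prediction residual, can be sketched as below. Note the predictor here is a crude moving-average stand-in for the paper's linear prediction model, and the 0.5 threshold and toy traffic data are assumptions.

```python
import math
from collections import Counter

def entropy(packets):
    """Shannon entropy of the source-IP distribution in one time window."""
    counts = Counter(packets)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def predict_next(series, order=3):
    """Stand-in for the paper's linear prediction: forecast the next entropy
    value as the mean of the last `order` observations."""
    return sum(series[-order:]) / order

history = [2.1, 2.0, 2.2, 2.1, 2.0]  # entropies of past normal windows
# A flooded window: traffic heavily skewed toward one destination pair.
observed = entropy(["10.0.0.5"] * 90 + ["10.0.0.6"] * 10)
residual = abs(observed - predict_next(history))
print(residual > 0.5)  # -> True; 0.5 is an assumed anomaly threshold
```

A real linear predictor fits coefficients to the series rather than averaging, but the detection logic is the same: a residual far outside the normal range flags an attack.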
Security Event Analysis Through CorrelationAnton Chuvakin
This paper covers several of the security event correlation methods, utilized by Security Information Management (SIM) solutions for better attack and misuse detection. We describe these correlation methods, show their corresponding advantages and disadvantages and explain how they work together for maximum security.
A RESOURCE-EFFICIENT COLLABORATIVE SYSTEM FOR DDOS ATTACK DETECTION AND VICTI...IJNSA Journal
Distributed Denial of Service (DDoS) attacks seriously threaten network security. Most countermeasures perceive attacks only after the damage has been done. This paper therefore focuses on the detection of DDoS attacks and, more importantly, on identifying victims as early as possible, so as to promote timely attack reaction. We present a resource-efficient collaborative DDoS detection system, called F-LOW. Profiting from a bitwise hash function, split sketch, and lightweight IP reconstruction, F-LOW overcomes the shortcomings of principal component analysis (PCA) and regular sketch approaches. With a certain number of distributed detection nodes, F-LOW can detect DDoS attacks and identify victim IPs before the attack traffic arrives at the victim network. Outperforming previous work, our system exhibits all four "LOW" properties of a promising DDoS countermeasure: low profile, low dimensionality, low overhead, and low transmission. Through simulation and theoretical analysis, we demonstrate these properties and the remarkable efficacy of our approach in DDoS mitigation.
Machine Learning Techniques Used for the Detection and Analysis of Modern Typ...IRJET Journal
COMPARISON OF MALWARE CLASSIFICATION METHODS USING CONVOLUTIONAL NEURAL NETWO...IJNSA Journal
Malicious software is constantly being developed and improved, so the detection and classification of malware is an ever-evolving problem. Since traditional malware detection techniques fail to detect new or unknown malware, machine learning algorithms have been used to overcome this disadvantage. We present a Convolutional Neural Network (CNN) for malware type classification based on API (Application Program Interface) calls. This research uses a database of 7107 instances of API call streams covering 8 different malware types: Adware, Backdoor, Downloader, Dropper, Spyware, Trojan, Virus, and Worm. We used a 1-dimensional CNN, mapping API calls as categorical and term frequency-inverse document frequency (TF-IDF) vectors, and compared the results to other classification techniques. The proposed 1-D CNN outperformed the other techniques with 91% overall accuracy for both categorical and TF-IDF vectors.
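The TF-IDF encoding of API call streams (with the CNN itself omitted) might look like the following sketch; the two toy streams and API call names are illustrative assumptions.

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Map API-call streams to TF-IDF vectors: a minimal sketch of the
    paper's input encoding, with tf = count / stream length and
    idf = ln(n_docs / doc_frequency)."""
    vocab = sorted({call for doc in docs for call in doc})
    n = len(docs)
    df = {t: sum(1 for doc in docs if t in doc) for t in vocab}
    idf = {t: math.log(n / df[t]) for t in vocab}
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append([tf[t] / len(doc) * idf[t] for t in vocab])
    return vocab, vectors

streams = [
    ["CreateFile", "WriteFile", "RegSetValue"],  # dropper-like stream
    ["CreateFile", "ReadFile"],                  # benign-like stream
]
vocab, vecs = tf_idf_vectors(streams)
print(vocab)  # -> ['CreateFile', 'ReadFile', 'RegSetValue', 'WriteFile']
```

Calls that appear in every stream (here "CreateFile") get zero IDF weight, so the vectors emphasize the calls that discriminate between malware types, which is the point of feeding TF-IDF rather than raw counts to the classifier.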
Layered approach using conditional random fields for intrusion detection (syn...Mumbai Academisc
The document proposes a layered approach using conditional random fields for intrusion detection. It aims to improve accuracy and efficiency. Each layer is trained separately to detect different attack types (probe, DoS, R2L, U2R) using relevant features. The layers act as filters to quickly detect and block attacks without passing connections to subsequent layers. Experimental results show the proposed system outperforms other methods with high improvements in detecting certain attack types. Hardware requirements include a Pentium IV PC with 256MB RAM and software requirements include Windows XP and development tools like Java and Eclipse.
Ransomware Attack Detection based on Pertinent System Calls Using Machine Lea...IJCNCJournal
In the last few years, the evolution of information technology has resulted in the development of several interesting and sensitive fields, such as the dark Web and cyber-criminality, especially ransomware attacks. This paper aims to identify only the critical features whose observation, or absence, in software behaviour is sufficient to decide whether it is ransomware or not. We therefore propose a new solution for ransomware detection based on machine learning algorithms and system calls. First, we introduce our dataset of system calls collected from both ransomware and benignware. Then, we apply deep pre-processing steps to efficiently reduce data dimensionality. After that, we introduce a new technique to select pertinent features. Next, we bring out the critical system calls, their importance, and their contribution to distinguishing between dataset elements. Finally, we present our model, which achieves an overall accuracy of 99.81% after K-fold cross-validation.
COPYRIGHTThis thesis is copyright materials protected under the .docxvoversbyobersby
COPYRIGHT
This thesis is copyright material protected under the Berne Convention, the Copyright Act 1999, and other international and national enactments in that behalf on intellectual property. It may not be reproduced by any means, in full or in part, except for short extracts in fair dealing for research or private study, critical scholarly review, or discourse with acknowledgment, or with the written permission of the Dean, School of Graduate Studies, on behalf of both the author and XXX XXX University.
ABSTRACT
With the fast-growing internet, the risk of intrusion has also increased; as a result, the Intrusion Detection System (IDS) is an admired key research field. IDSs are used to identify suspicious activity or patterns in a network or machine that threaten its security features or compromise the machine. IDSs typically use all the features of the data, yet it is a keen observation that not all features are of equal relevance for the detection of attacks, and not every feature contributes significantly to system performance. The main aim of this work is to develop an efficient denial-of-service network intrusion classification model. The specific objectives were: to analyse the existing literature on intrusion detection systems, including the techniques used to model IDSs, the types of network attacks, the performance of various machine learning tools, and how network intrusion detection systems are assessed; to find the top network traffic attributes that can be used to model denial-of-service intrusion detection; and to develop a machine learning model for the detection of denial-of-service network intrusion. Methods: The research design was experimental, and data was collected by simulation using the NSL-KDD dataset. By implementing a Correlation Feature Selection (CFS) mechanism with three search algorithms, the smallest set of features is selected from those selected most frequently. Findings: The smallest subset of features chosen is the most nominal among all the feature subsets found. Further, performance using Artificial Neural Network (ANN), decision tree, Support Vector Machine (SVM), and K-Nearest Neighbour (KNN) classifiers is compared for the 7 subsets found by the filter model and for all 41 attributes. Results: The outcome indicates a remarkable improvement in the performance metrics used for comparison of the classifiers.
The results show that using 17/18 selected features improves DoS-type classification accuracy compared to using the 41 features in the NSL-KDD dataset. It was further observed that an ensemble of three classifiers with decision fusion performs better than a single classifier for DoS-type classification. Among the machine learning tools tested, ANN achieved the best classification accuracies, followed by SVM and DT; KNN registered the lowest classification accuracies. Application: The proposed work, with such an improved detection rate and lesser classification time and lar.
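The CFS mechanism used in the methods above scores a candidate feature subset with a standard merit formula: subsets whose features correlate well with the class but poorly with each other score highest. A minimal sketch with invented correlation values follows.

```python
import math

def cfs_merit(k, avg_feature_class_corr, avg_feature_feature_corr):
    """Correlation-based Feature Selection merit of a k-feature subset:
    Merit = k * r_cf / sqrt(k + k*(k-1) * r_ff),
    where r_cf is the mean feature-class correlation and r_ff the mean
    feature-feature correlation. Prefers relevant, non-redundant subsets."""
    return (k * avg_feature_class_corr /
            math.sqrt(k + k * (k - 1) * avg_feature_feature_corr))

# Two hypothetical subsets of NSL-KDD attributes (correlations invented):
redundant = cfs_merit(k=10, avg_feature_class_corr=0.5, avg_feature_feature_corr=0.9)
compact   = cfs_merit(k=5,  avg_feature_class_corr=0.5, avg_feature_feature_corr=0.2)
print(round(redundant, 3), round(compact, 3))  # -> 0.524 0.833
```

The smaller, less redundant subset wins despite equal feature-class correlation, which is why CFS tends to produce the compact feature sets (such as the 17/18-feature subsets above) that a search algorithm then refines.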
Optimised Malware Detection in Digital Forensics IJNSA Journal
This summarizes a research paper that proposes developing a new framework to optimize malware detection in digital forensics investigations. The paper discusses challenges with existing detection methods, such as signature-based approaches requiring extensive manual analysis. Through a market research survey of forensics professionals, the paper finds weaknesses in current skills, tools, and accuracy rates. Most respondents agreed a new customized detection tool is needed that employs both dynamic and static analysis methods. The proposed framework aims to address these issues to more effectively detect and analyze malware.
DDOS DETECTION IN SOFTWARE-DEFINED NETWORK (SDN) USING MACHINE LEARNINGIJCI JOURNAL
In recent years, the concepts of cloud computing and the software-defined network (SDN) have spread widely. The services provided by many sectors, such as medicine, education, banking, and transportation, are gradually being replaced with cloud-based applications, so the availability of these services is critical. However, cloud infrastructure and services are vulnerable to attackers who aim to breach their availability. One of the major threats to any system's availability is the Denial-of-Service (DoS) attack, which is intended to deny legitimate users access to cloud resources. The Distributed Denial-of-Service (DDoS) attack is a type of DoS attack that is considerably more effective and dangerous. The research community has made many efforts to detect DDoS attacks; however, further work in this germane field is still needed. In this paper, machine learning techniques are utilized to build a model that can detect DDoS attacks in Software-Defined Networks (SDN). The chosen ML algorithms have shown high performance in earlier studies, and they are used here together with a feature selection technique to detect DDoS attacks in network traffic. The outcome of this experiment shows the impact of feature selection in improving model performance. Ultimately, the Random Forest classifier achieved the highest accuracy, 0.99, in detecting DDoS attacks.
STATISTICAL QUALITY CONTROL APPROACHES TO NETWORK INTRUSION DETECTIONIJNSA Journal
In the study of network intrusion, much attention has been paid to on-time detection of intrusions, both to safeguard public and private interests and to capture the law-breakers. Even though various methods can be found in the literature, some situations require us to detect network intrusions in real time, as they occur, to prevent further undue harm to the computer network. This approach helps detect the intrusion and has greater potential to apprehend the law-breaker. The purpose of this article is to formulate a method to this effect based on the statistical quality control techniques widely used in manufacturing and production processes.
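Applied to network data, the classic quality-control tool is a Shewhart-style control chart on a per-window traffic statistic. A minimal sketch follows; the baseline counts and the three-sigma choice are illustrative, not taken from the article.

```python
import math

def control_limits(baseline, sigmas=3):
    """Shewhart-style control limits from a baseline of per-minute
    connection counts: mean +/- sigmas * sample standard deviation."""
    n = len(baseline)
    mean = sum(baseline) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in baseline) / (n - 1))
    return mean - sigmas * sd, mean + sigmas * sd

# Per-minute connection counts observed during known-normal operation.
baseline = [98, 102, 100, 97, 103, 101, 99, 100]
lo, hi = control_limits(baseline)
print(lo, hi)          # -> 94.0 106.0
print(420 > hi)        # a 420-connection minute is out of control -> intrusion signal
```

Any window whose statistic falls outside the limits is treated as "out of control" and raised in real time, mirroring how a production line flags a defective batch.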
A MECHANISM FOR EARLY DETECTING DDOS ATTACKS BASED ON M/G/R PS QUEUEIJNSA Journal
1) The document proposes a method for early detection of DDoS attacks by modeling a service system as an M/G/R processor sharing queue.
2) It calculates monitoring parameters based on the queue model in order to detect symptoms of DDoS attacks early.
3) Experimental results show the proposed method detects DDoS attacks with high true positive and true negative rates compared to an entropy-based detection method.
A MECHANISM FOR EARLY DETECTING DDOS ATTACKS BASED ON M/G/R PS QUEUEIJNSA Journal
1) The document proposes a method for early detection of DDoS attacks based on modeling a service system as an M/G/R processor sharing queue.
2) It involves sampling system resources over time, calculating resource utilization and variance parameters, and comparing a degradation index to a threshold to detect anomalous degradation indicative of an attack.
3) Experimental results show the proposed method achieved higher rates of true positives and true negatives for DDoS detection compared to an entropy-based detection method, demonstrating its effectiveness at early detection.
A MECHANISM FOR EARLY DETECTING DDOS ATTACKS BASED ON M/G/R PS QUEUEIJNSA Journal
When a service system is under DDoS attack, it is important to detect the anomaly signature at the start of the attack so that prevention solutions can be applied in time. However, early DDoS detection is a difficult task because the velocity of DDoS attacks is very high. This paper proposes a DDoS attack detection method that models the service system as an M/G/R PS queue and calculates monitoring parameters based on the model in order to detect symptoms of DDoS attacks early. The proposed method is validated on an experimental system and gives good results.
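One common monitoring parameter derived from the M/G/R PS model is the delay factor fR, built from the Erlang C formula. The sketch below uses that textbook formulation with invented values for R and the load; the paper's exact parameters may differ in detail.

```python
import math

def erlang_c(R, a):
    """Erlang C probability that an arriving flow must queue/share,
    with R 'servers' and offered load a (requires a < R)."""
    inv_b = sum(a ** k / math.factorial(k) for k in range(R))
    top = a ** R / math.factorial(R) * (R / (R - a))
    return top / (inv_b + top)

def delay_factor(R, rho):
    """M/G/R PS delay factor fR = 1 + E2(R, R*rho) / (R * (1 - rho)).
    Comparing fR (expected transfer-time stretch) against a threshold is
    one way to flag the service degradation at the onset of a DDoS flood."""
    a = R * rho
    return 1 + erlang_c(R, a) / (R * (1 - rho))

print(round(delay_factor(R=10, rho=0.5), 3))   # light load: barely above 1
print(round(delay_factor(R=10, rho=0.95), 3))  # near saturation: noticeably inflated
```

Because fR rises sharply as utilization approaches 1, a monitored jump in fR can signal a flood while the raw traffic volume still looks plausible.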
Replay of Malicious Traffic in Network TestbedsDETER-Project
In this paper we present tools and methods to integrate attack measurements from the Internet with controlled experimentation on a network testbed. We show that this approach provides greater fidelity than synthetic models. We compare the statistical properties of real-world attacks with synthetically generated constant bit rate attacks on the testbed. Our results indicate that trace replay provides fine time-scale details that may be absent in constant bit rate attacks. Additionally, we demonstrate the effectiveness of our approach to study new and emerging attacks. We replay an Internet attack captured by the LANDER system on the DETERLab testbed within two hours.
Data and tools from the paper are available at: http://montage.deterlab.net/magi/hst2013tools
Also read the LANDER Blog entry at: http://ant.isi.edu/blog/?p=411
A system for denial of-service attack detection based on multivariate correla...IGEEKS TECHNOLOGIES
The document proposes a denial-of-service (DoS) attack detection system using multivariate correlation analysis (MCA) to accurately characterize network traffic. It extracts geometric correlations between network traffic features using triangle area maps. This anomaly-based system can detect both known and unknown DoS attacks by learning patterns in legitimate traffic. Evaluation on the KDD Cup 99 dataset shows it outperforms two previous state-of-the-art methods in detection accuracy.
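The triangle-area idea can be sketched as follows, using one common formulation: project the traffic record onto each pair of features and take the area of the triangle formed with the origin. The feature values below are invented, and the paper's exact construction may differ in detail.

```python
def triangle_area_map(record):
    """Lower-triangular map of pairwise 'triangle areas' for one traffic
    record: for each feature pair (i, j), the area of the triangle formed
    by the origin and the record's projections onto the two axes,
    0.5 * |x_i| * |x_j|. The resulting map captures pairwise geometric
    correlations between features."""
    n = len(record)
    return [[0.5 * abs(record[i]) * abs(record[j]) for j in range(i)]
            for i in range(n)]

# A record with three features, e.g. (SYN count, distinct ports, mean pkt size).
tam = triangle_area_map([4.0, 2.0, 10.0])
print(tam)  # -> [[], [4.0], [20.0, 10.0]]
```

An anomaly detector then compares a new record's map against the distribution of maps learned from legitimate traffic, which is what lets the system flag DoS attacks it has never seen.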
Sven will talk about Atlassian’s journey from a monolith to a multi-tenanted architecture and how it affected the way the engineering teams work. You will learn how we shifted to service ownership, moved to more autonomous teams (and its challenges), and established platform and enablement teams.
WWDC 2024 Keynote Review: For CocoaCoders AustinPatrick Weigel
Overview of WWDC 2024 Keynote Address.
Covers: Apple Intelligence, iOS18, macOS Sequoia, iPadOS, watchOS, visionOS, and Apple TV+.
Understandable dialogue on Apple TV+
On-device app controlling AI.
Access to ChatGPT with a guest appearance by Chief Data Thief Sam Altman!
App Locking! iPhone Mirroring! And a Calculator!!
Need for Speed: Removing speed bumps from your Symfony projects ⚡️Łukasz Chruściel
No one wants their application to drag like a car stuck in the slow lane! Yet it’s all too common to encounter bumpy, pothole-filled solutions that slow the speed of any application. Symfony apps are not an exception.
In this talk, I will take you for a spin around the performance racetrack. We’ll explore common pitfalls - those hidden potholes on your application that can cause unexpected slowdowns. Learn how to spot these performance bumps early, and more importantly, how to navigate around them to keep your application running at top speed.
We will focus in particular on tuning your engine at the application level, making the right adjustments to ensure that your system responds like a well-oiled, high-performance race car.
SOCRadar's Aviation Industry Q1 Incident Report is out now!
The aviation industry has always been a prime target for cybercriminals due to its critical infrastructure and high stakes. In the first quarter of 2024, the sector faced an alarming surge in cybersecurity threats, revealing its vulnerabilities and the relentless sophistication of cyber attackers.
SOCRadar’s Aviation Industry, Quarterly Incident Report, provides an in-depth analysis of these threats, detected and examined through our extensive monitoring of hacker forums, Telegram channels, and dark web platforms.
Transform Your Communication with Cloud-Based IVR SolutionsTheSMSPoint
Discover the power of Cloud-Based IVR Solutions to streamline communication processes. Embrace scalability and cost-efficiency while enhancing customer experiences with features like automated call routing and voice recognition. Accessible from anywhere, these solutions integrate seamlessly with existing systems, providing real-time analytics for continuous improvement. Revolutionize your communication strategy today with Cloud-Based IVR Solutions. Learn more at: https://thesmspoint.com/channel/cloud-telephony
Malibou Pitch Deck For Its €3M Seed Roundsjcobrien
French start-up Malibou raised a €3 million Seed Round to develop its payroll and human resources
management platform for VSEs and SMEs. The financing round was led by investors Breega, Y Combinator, and FCVC.
International Journal of Computer Science and Security (IJCSS), Volume (12) : Issue (1) : 2018
DoS Forensic Exemplar Comparison to a Known Sample
Paul Knight prk009@shsu.edu
Department of Computer Science
Sam Houston State University
Huntsville, TX 77341, USA
Narasimha Shashidhar nks001@shsu.edu
Department of Computer Science
Sam Houston State University
Huntsville, TX 77341, USA
Abstract
The investigation of any event or incident often involves the evaluation of physical evidence.
Occasionally, a comparison is conducted between an evidentiary sample of unknown origin and
that of an appropriate known sample. In a Denial of Service (DoS) attack, items of evidentiary
value may cross the spectrum from anecdotes to useful information in firewall logs or complete
packet captures. Because of the spoofed or reflective nature of DoS attacks, relevant information
leading to the direct identification of the perpetrator is rarely available. In many instances, this
underscores the significance of the investigator’s ability to accurately identify the tool utilized by
the suspect. For a DoS attack scenario, this would likely involve a commercially available stresser
or criminal bot infrastructure. In this paper, we propose the concept of a DoS exemplar and
determine if the comparison of evidentiary samples to an appropriate known sample of DoS
attributes could add value in the investigative process. We also provide a simple tool to compare
two DoS flows.
Keywords: Denial of Service Flow Comparison, DoS Similarity Score, DoS Exemplar, Stresser.
1. INTRODUCTION
1.1 The Need
DoS attacks can occasionally result in a loss to commerce, but they almost always result in
embarrassment to the victims. Investigators assigned to these cases may have a variety of
artifacts to evaluate, ranging from full-packet capture to basic firewall logs. DoS attacks range
from the simplest SYN flood to application specific resource depletion. Attacks can originate from
a single computer, a commercial stresser, or a criminal bot. Because of the spoofed nature of
DoS traffic, attribution to the source is not traditionally part of the investigative process. The
purpose of our work is to establish a repeatable and forensically sound methodology for the
investigator to compare DoS flows in an effort to make a determination if these streams originated
from the same source. The potential attribution to the tool used to facilitate the attack could then
result in legal process authorizing the seizure of the suspect server and the related databases.
1.2 Solution for Everyone
In this work, we present our method which includes a simple tool and the accompanying
algorithm along with the scoring of DoS floods from five different stressers to validate the
feasibility of including a “similarity score” into the investigative work flow. Reflective floods from
each of the five sample stressers were captured as *.pcap files. Floods of 30 seconds or less
were used, as they provide adequate traffic for analysis, yet are small enough to be manageable
with freely available tools. The weaknesses discovered through the evaluation of these samples
will be presented. Finally, the concept of flow cross-section will be introduced. Additional
limitations created by the possibility of shared infrastructure will be discussed as well.
2. PRIOR WORK
2.1 Complications of Attribution
There is an assortment of excellent prior research on both reflective DoS attacks and
attribution, but, to the best of our knowledge, most researchers have stayed away
from the concept of identifying a particular ‘booter’ used in an attack. This is primarily due to the
inherent spoofed nature of the DoS traffic and the complication of attribution when a zombie or
botnet is utilized.
In the research article, An Analysis of DDoS-as-a-Service Attacks [1], Santanna et al. analyzed
the attack characteristics of 14 distinct booters. The goal of their analysis was to add value to
attack victims in the mitigation and prevention of attacks, as well as to help other researchers in
understanding how stressers work, in general. They used Jaccard’s Similarity Coefficient as the
metric to compare flow characteristics and based on this metric concluded that, at the time,
stressers only share minimal infrastructures. However, it appeared as if the future of infrastructure
sharing was unpredictable.
Wheeler and Larsen, in their work titled Techniques for Cyber Attack Attribution [2], discussed 17
potential attack attribution techniques, but conceded the difficulty and inherent
limitations of such techniques, primarily because of the use of intermediaries, which both spoof
traffic and use reflective protocols [2].
Hunker et al. embarked on a research effort where the focus was not on the technical aspects of
attribution, but on a more general attribution framework, showing that the characteristics
associated with an entity have value in the overall attribution process. This report is titled ‘Report
on Attribution for GENI’ [3].
Kührer et al.’s research work ‘Exit from Hell? Reducing the Impact of Amplification DDoS
Attacks’ provided insight into amplification DDoS attacks through four methods: protocol-specific
fingerprinting, the creation of advisories, the analysis of future attack vectors, and the analysis of
the root cause of amplification attacks [4].
2.2 Building on Previous Work
Based on our literature survey of the vast body of work, we have identified Karami’s work as the
first to validate the concept of DoS attribution to a particular stresser. He
accomplished this through a collection of 30 second attacks from 23 different commercially
available stressers. Karami evaluated the reflective traffic of four DoS protocols (NTP, DNS,
CharGen, and SSDP) using Jaccard’s Similarity Coefficient, which provides a value of 0-1 for
the similarity of IP addresses shared by two flow collections. Through the information collected in
2,667 attacks, Karami was able to propose minimum threshold scores likely to indicate a correct
classification for each of the four protocols: CharGen t=.55, DNS t=.60, NTP t=.55, SSDP
t=.45. DNS did not provide the same classification accuracy as the other three protocols, so
Karami added an additional feature to the algorithm to improve performance, namely resolved
domain names [5].
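As a minimal worked example of the coefficient, consider two hypothetical three-address IP lists (the addresses and file names below are illustrative only, not taken from any collected flow):

```shell
# Jaccard's Similarity Coefficient on two tiny, made-up IP lists.
printf '10.0.0.1\n10.0.0.2\n10.0.0.3\n' > flowA.txt
printf '10.0.0.2\n10.0.0.3\n10.0.0.4\n' > flowB.txt
# intersection: addresses common to both lists (2 here)
intersection=$(comm -12 <(sort flowA.txt) <(sort flowB.txt) | wc -l)
# union: unique addresses across both lists (4 here)
union=$(cat flowA.txt flowB.txt | sort -u | wc -l)
# coefficient = intersection/union
bc <<< "scale=4;$intersection/$union"   # prints .5000
```

Two of the four unique addresses are shared, giving a coefficient of 2/4 = .5.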
2.3 Prior Work Meets Current Need
In our present work, similarity scores were compared to Karami’s findings for CharGen, NTP,
DNS, and SSDP attacks. Similarity scores were also obtained in the SNMP and TFTP protocols.
DNS findings were not as predictable, even though the samples shared characteristics like name
resolution.
3. APPLICATION
3.1 Implementing a Similarity Score
We first present our approach to computing a similarity score. We derive a similarity score
between two data flows using a three-step process. First, the relevant parts of the flow are
converted to text, so that they can be sorted and counted. Log files, which are already in text
format, do not require this conversion step. Second, duplicate information present in a single line
is removed to prevent the same value from being inappropriately included. The final step includes
the sorting, counting, and comparison of the values in the two groups. Linux is the perfect tool for
the final step of this data manipulation as it requires no special programming knowledge, just the
basic Linux skill set and the Bash shell. For this project, a collection of known origin is referred to
as an “exemplar”. A collection of unknown origin is referred to as a “suspect”.
We also created an application with a graphical user interface (GUI) to assist in the evaluation of
two flow collections and provide the similarity score. Several possible use cases were taken into
consideration. The first was a tool that automatically collects traffic, makes a comparison against
any other flow stored in the system, and reports any score above the predetermined threshold.
The second was a tool that would allow the analyst to upload evidence and make a comparison
against a known exemplar for a specific stresser. The compromise was a tool that allows the
analyst to upload either text or *.pcap captures from a known exemplar or a suspect and receive
the similarity score. The GUI application also compares all of the suspect files in the system to all
the exemplars in the system and provides the related similarity scores. The application was
developed using easily repeatable and portable Bash scripts accessed through a PHP based web
interface. As such, the GUI application is best deployed in a client-server environment and made
accessible to users over the Internet.
Lastly, a terminal version, which only provides a similarity score between two flow collections,
was also created as a more portable alternative.
3.2 Graphical User Interface – GUI Uploads
The user interface is constructed of familiar textboxes, radio buttons, and submit buttons.
Obtaining a score is a three-step process: upload, evaluate, and score. The steps were separated
to create maximum workflow clarity and minimize the load on the processor. Uploading large files
over the Internet can create hourglass moments, as can the sorting and counting of a million IP
addresses.
FIGURE 1: Snapshot of Tool GUI.
The first step is to upload a file into the system. The upload is the only action taken in this step.
The user indicates if the file is a suspect (unknown) or exemplar (known) and if the file is *.pcap
or text. The file upload limitation was set in the php.ini at less than 500MB. Since traffic captures
can be virtually unlimited in size, this reasonable, yet arbitrary, file size limitation was used. Some
problems with file size limitations are demonstrated later in the discussion of flow cross-sections.
During upload, a unique identifier, like a case number, is associated with the file.
3.3 GUI Evaluations
The second step is the evaluation, which is processed independently of the upload. The action is
initiated with the unique identifier. The browser’s autocomplete assists with accuracy. The
creation of comparable IP lists consumes processor cycles, as some of the files can contain one
million IP addresses. The files that are uploaded with the *.pcap radio button selected are
processed via tcpdump with the -nnq switch. The -nn switch eliminates the resolution of host or
port names. The -q switch displays just enough protocol information to eliminate the duplication
of IP addresses further into the packet. Text uploads are not processed by tcpdump. The
evaluation process results in a glimpse of the IP addresses collected, organized by number of
appearances. Because the top 100 addresses are displayed, analysts can determine for
themselves whether a similarity score would be relevant. The limit of 100 is another arbitrary
value, but it shows whether the traffic is reflected or spoofed.
Spoofed                        Reflected
count  address                 count    address
1      100.149.210.163         115145   63.74.109.2
1      100.149.228.44          114601   63.74.109.10
1      100.149.95.17           113789   63.74.109.6
1      100.149.96.231          107584   63.74.109.20
1      100.15.100.145          87234    13.66.51.105
1      100.15.108.251          55658    96.7.49.67
FIGURE 2: Comparison of Spoofed vs. Reflected Traffic.
The evaluate function will also report if the two files required for a similarity score are present in
the system.
FIGURE 3: Snapshot of File Evaluation.
The list created in this instance (CharGen traffic) demonstrates the view of a reflected attack.
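A count-per-address view of this kind can be sketched with standard tools. The pipeline below mirrors the evaluation step on already-extracted source addresses; in the real tool the input would come from tcpdump -nnq rather than printf, and the addresses shown are illustrative only:

```shell
# Sketch of the evaluation step: count appearances of each source address
# and list the most frequent first, capped at the top 100.
# (Real input would be extracted from a capture via "tcpdump -nnq -r file.pcap";
# these sample addresses are made up.)
printf '63.74.109.2\n63.74.109.10\n63.74.109.2\n63.74.109.2\n' \
  | sort | uniq -c | sort -rn | head -100
```

A heavily repeated address at the top of this list suggests reflected traffic; a long run of single-count addresses suggests spoofed traffic.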
FIGURE 4: Snapshot of Top CharGen Attack IP Addresses.
3.4 GUI Scores
The third step is to obtain a similarity score. This process is case specific. The unique number
added during upload is entered to initiate a comparison between the suspect and exemplar files
for any single case. The similarity score is simply Jaccard’s Similarity Coefficient of the two
lists. In this case, the two lists are the previously processed set of unique IP addresses for both
the exemplar and suspect in each case.
FIGURE 5: Snapshot of SynStresser CharGen Similarity Score.
3.5 Heavy Lifting with Bash
The Bash script performs the comparison and provides the similarity score. The $union is every
IP address identified in either of the two lists (exemplar or suspect). The $intersection is the IP
addresses the two lists have in common. The score is determined using the Linux bc command
line calculator. The application calculates the intersection/union to four decimal places.
The script is displayed below:
##$suspect and $exemplar hold the paths to the two IP lists; $1 is the case number
##output common to both columns
comm -12 <(sort "$suspect") <(sort "$exemplar") | uniq
##count of addresses in both columns & create the variable
intersection=$(comm -12 <(sort "$suspect") <(sort "$exemplar") | uniq | wc -l)
##count of unique values in both & create the variable
union=$(cat "$suspect" "$exemplar" | sort | uniq | wc -l)
##show the integers
echo "$intersection/$union"
##show the float score
bc <<< "scale=4;$intersection/$union"
##put the score into a variable
score=$(bc <<< "scale=4;$intersection/$union")
echo "Jaccard's Similarity Coefficient for case $1 is $score"
FIGURE 6: Bash Script for Similarity Score.
3.6 Auto-Scoring
The “Auto Score” is an additional feature that compares every exemplar to every suspect file in
the system. The process uses a lot of cycles to compare all the values in the system, so it is an
on-demand function. The results shown in the box on the GUI in Figure 1 are legitimate, but are
present mainly for appearance.
FIGURE 7: Snapshot of Auto Score Results.
3.7 Terminal Version
The terminal version of the tool is capable of providing the same similarity score as the GUI, but
does not auto-score against the other samples in the system. The syntax is:
jaccard file1 -p file2 -p
The -p switch is replaced with -t in the event the file is text instead of .pcap. A whitespace
separates the variables. The source code that accompanies this paper is available in the GitHub
repository [6].
4. INFORMATION COLLECTION
4.1 Stresser Selection
Reflective attack traffic was collected from Beta Booter, Critical Boot, SynStresser, and Poly
Stresser. Network Stresser was never functional. Beta Booter, Poly Stresser, and SynStresser
had intermittent periods of inoperability. Stressers were selected via Internet searches and based
on our literature survey. There were only three criteria: provide reflective attacks, accept Bitcoin, and
function. The more reliable stressers were used with greater frequency.
4.2 Fire The Stressers
There were 24 .pcap collections conducted between April 10 and April 23, 2017. The two files
collected during the proof of concept on February 20 were also included in the dataset. Pairs of
DNS, TFTP, SNMP, CharGen, NTP, and SSDP attacks were collected and scored with the tool.
The dataset was captured with the use of two Linux Ubuntu servers running Wireshark and
libpcap. The first server was located at Microsoft Azure in Dallas, Texas. The speedtest.com
python script measured the download speed at 101 Mbit/s and the upload speed at 298 Mbit/s.
A second server was deployed at Linode in Dallas following the Critical Boot Stresser’s refusal to
attack Microsoft IP addresses. The Linode download speed was significantly better at 343
Mbit/s, though the upload speed was lower at 196 Mbit/s. Because of the higher download speed,
Linode was used in the collection of the vast majority of traffic.
DoS attacks were initially set at 30 seconds. The packet capture was stopped manually instead of
preselecting a value (i.e. one million packets). Packet sizes were significantly different based on
the type of flood utilized. Average packet sizes ranged from 253 bytes in Critical Boot TFTP to
1177 bytes in SynStresser CharGen. Attacks were reduced to 15 seconds in many cases to avoid
over-collection, since the premise was to keep the files down to a manageable size of less than
500MB. Even with the reduction of attack duration, many files were manually reduced in size. For
consistency, any reduction contained the initial flow from the beginning to the desired length or
number of packets. It is imperative to compare like sections in the flow. This will be demonstrated
in detail in the subsequent sections. Flows of up to 125,000 packets per second were observed
for the Poly Stresser NTP attack.
5. LIMITATIONS
5.1 Flow Cross-Section
Two potential limitations to a successful comparison should be considered. The first will be
referred to as the concept of flow ‘cross-section’. Because of latency, it is a foregone conclusion
that attack traffic will arrive at the target at different times in any collection. Similarity scores will
be affected by the point in the flow in which the IP addresses are compared. A stresser that is
allowed to run through the entire list of compromised hosts would create a complete list of source
IP addresses. A flow cross-section is an incomplete portion of any attack flow. Investigators
hampered by the limitations of the evidence presented to them by their customers must figure out
a way to compare like portions.
The graph in Figure 8 demonstrates how a 30 second attack can continue for 148 seconds with
diminishing packet presentation. Though the flow diminishes, it can still skew the similarity score
because addresses that appear at the beginning are not present toward the end.
The significance of comparing a similar cross-section of traffic by time can be demonstrated by
the dissection of the 148 second DNS flow into three even parts consisting of 50 seconds each.
Different IP addresses appear at different points in time, as seen in the following:
In seconds 0-49, 675,056 packets were seen. The IP address 50.19.58.26
located at Amazon in Seattle, Washington was seen 38 times between 0
and .02 seconds of the flow. The address 91.109.0.201 at Mesh Digital in
Sarria, Spain was seen 96 times between 0.79 and 29.38 seconds. The
address 118.163.97.26 at HINET-NET in Taipei, Taiwan was seen 1435
times between 7.97 and 31.96 seconds.
In seconds 50-99, 193 packets were seen. The address 59.104.139.111
at SEEDNET-NET in Taipei, Taiwan was seen three times between 50
and 62 seconds. In seconds 100-148, 124 packets were seen. The
address 122.18.108.16 at NTT Communications in Tokyo, Japan was
seen six times between 110.70 and 122.62 seconds.
FIGURE 8: Beta Booter DNS (573 Mbps).
The graph in Figure 9 demonstrates how a 15 second attack from the Poly Stresser DNS trickled
in for 110 seconds.
FIGURE 9: Poly Stresser DNS (1.9 GB).
To further illustrate the cross-section concept, a 3 GB Poly Stresser NTP attack was divided into
three equal parts of one million packets each, irrespective of arrival time, to demonstrate the
inconsistency of scores based on the cross-section evaluated.
The similarity score of the comparison of the first million packets of the NTP suspect to its
exemplar counterpart was .73 (688/942). The score of the second million packets of the NTP
suspect to the original exemplar only decreased to .69 (671/965). However, the score of the third
million packets of the NTP suspect to the original exemplar decreased to .06 (58/872).
In a final illustration of cross-section limitations, the same NTP flow previously divided into three
equal portions of one million packets was compared section to section. Figure 10 demonstrates
the inconsistency. A similarity score of 1 is expected between like sections of the NTP flow. A
high similarity score of .88 is obtained between the 1st one million packets and the 2nd one
million packets (1-2). However, there is only a .12 similarity between the 2nd and 3rd million
packets (2-3), and a .08 similarity between the 1st and 3rd million packets (1-3). As illustrated,
the similarity scores involving the third section barely resemble those of the first two.
FIGURE 10: NTP Cross-Section Comparison.
Because of this cross-section issue, comparisons in our research project were made strictly from
the first packet of the attack to the predetermined limit: file sizes of less than 500MB or one
million packets. Comparing from the beginning of the flow provided the required consistency.
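The cross-section effect can be illustrated with a toy example (all addresses below are made up, not drawn from the collected flows): the start of a suspect flow scores well against the start of its exemplar, while a later section of the same flow barely overlaps at all.

```shell
# Toy cross-section illustration with fabricated addresses.
printf '1.1.1.1\n2.2.2.2\n3.3.3.3\n' > exemplar_start.txt
printf '1.1.1.1\n2.2.2.2\n9.9.9.9\n' > suspect_start.txt   # same cross-section
printf '7.7.7.7\n8.8.8.8\n9.9.9.9\n' > suspect_tail.txt    # later cross-section
for s in suspect_start.txt suspect_tail.txt; do
  i=$(comm -12 <(sort "$s") <(sort exemplar_start.txt) | wc -l)  # intersection
  u=$(cat "$s" exemplar_start.txt | sort -u | wc -l)             # union
  echo "$s $(bc <<< "scale=4;$i/$u")"
done
```

The matching cross-section scores 2/4 = .5, while the tail of the same flow scores 0/6 = 0 against the exemplar, mirroring the collapse seen between the first and third million-packet sections above.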
5.2 Shared Infrastructure
The second limitation is the possibility of shared infrastructure. Karami [5] observed that shared
infrastructure could impact the validity of similarity scores. However, comparison of like protocols
from different stressers in this project matched Santanna et al.’s observations [1] - limited
indication of shared infrastructure. Some overlap was expected. Because each comparison
includes an exemplar and a suspect file from each stresser, the higher of the two scores is
shown.
SynStresser NTP and Poly Stresser NTP .03
SynStresser NTP and Critical Boot NTP 0
Poly Stresser NTP and Critical Boot NTP 0
SynStresser CharGen and Poly Stresser CharGen .10
Beta Booter DNS and Poly Stresser DNS .02
SynStresser DNS and Poly Stresser DNS .19
Beta Booter DNS and SynStresser DNS .28
FIGURE 11: Similarity Scores for Different Stressers (like protocols).
As indicated, the DNS attacks had the highest probability of sharing compromised hosts, but
there is insufficient evidence to substantiate that they share anything other than some number of
compromised hosts. These findings in no way eliminate the possibility of the type of infrastructure
sharing that could result in high similarity scores among different stressers. The possibility must
be taken into consideration.
6. RESULTS
The results of the project’s sample collection and tool comparisons are as follows:
NTP - Critical Boot, SynStresser, and Poly Stresser provided NTP attacks. Similarity scores
between two flows of the same stresser were greater than those accepted by Karami’s [5]
minimum threshold score for the NTP protocol (.55). The similarity scores were as follows:
Critical Boot NTP .64, SynStresser NTP .79, and Poly Stresser NTP .73.
CharGen – SynStresser and Poly Stresser both provided CharGen attacks. Similarity scores
between two flows of the same stresser were greater than those accepted by Karami’s [5]
minimum threshold score for the CharGen protocol (.55). The similarity scores were as follows:
SynStresser CharGen .96 and Poly Stresser CharGen .75.
SNMP – Critical Boot was the sole provider of SNMP attacks of the stressers evaluated. Karami
did not evaluate the SNMP protocol [5]. The similarity score was:
Critical Boot SNMP .98.
TFTP – Critical Boot was the sole provider of TFTP attacks for the stressers evaluated. Karami
did not evaluate the TFTP protocol [5]. The similarity score was:
Critical Boot TFTP .88.
SSDP - Critical Boot was the sole provider of SSDP attacks of the stressers evaluated. Similarity
scores between two flows were greater than those accepted by Karami’s [5] minimum threshold
score for the SSDP protocol (.45). The similarity score was:
SynStresser SSDP .71
DNS – DNS attack traffic was compared in Beta Booter, Poly Stresser, and SynStresser. Only
one of the three similarity scores between flows of the same stresser was greater than those
accepted by Karami’s [5] minimum threshold score (.60). The similarity scores were as follows:
Beta Booter DNS .03, Poly Stresser DNS .51, and SynStresser DNS .77.
7. CONCLUSION
By collecting the traffic in a consistent manner and comparing like cross-sections of the attack
flow, similarity scores greater than those accepted as correct classifications by Karami [5] can be
achieved. In most of the tests, the flows compared were far less than 30 seconds, creating the
potential for success with only a limited amount of information.
The reflective attacks NTP, TFTP, SNMP, CharGen, and SSDP provided consistent comparison
results. Of course, the ideal comparison would be against every source IP address participating
in every reflective attack. In the absence of that much data, consideration must be given to the
cross-section of the flow; to such a degree that scores may fall below an acceptable level, even
when known to originate from the same source.
In a real-life scenario, the investigator may have no control over the evidence provided by the
victim. They may not know if the information was derived from a firewall capable of writing to a log
at speeds sufficient for an accurate collection of the evidence or if the victim’s network
configuration somehow limited the volume of evidence collected. Finally, the possibility of shared
infrastructure must be taken into consideration when making a final determination of the identity
of a stresser, even though our research project’s sample data provided little indication of the
likelihood of infrastructure sharing.
Neither intentionally limiting the size of the collection nor using a purpose-built tool is a
requirement for success. Separating a list of source IP addresses from a packet capture can be
accomplished with tcpdump from the terminal. Sorting, counting, and basic math tools like bc are
also available in most Linux distributions.
8. REFERENCES
[1] J. Santanna, R. van Rijswijk-Deij, R. Hofstede, A. Sperotto, M. Wierbosch, L. Z. Granville, &
A. Pras. (2015) “Booters—An analysis of DDoS-as-a-service attacks”. In Integrated Network
Management (IM), 2015 IFIP/IEEE International Symposium on (pp. 243-251). IEEE.
[2] D. A. Wheeler, & G. N. Larsen (2003). “Techniques for cyber-attack attribution” (No. IDA-P-
3792). Institute for Defense Analyses, Alexandria, VA.
[3] J. Hunker, M. Bishop, & C. Gates. (2010). “Report on Attribution for GENI”. In National
Science Foundation Project 1776, 2010.
[4] M. Kührer, T. Hupperich, C. Rossow, & T. Holz. (2014, Aug). “Exit from Hell? Reducing the
Impact of Amplification DDoS Attacks”. In USENIX Security Symposium (pp. 111-125).
[5] M. Karami, “Understanding and Undermining the Business of DDoS Booter Services,” Ph.D.
dissertation, Dept. Comp. Sci., George Mason Univ., Fairfax, VA, 2016.
[6] GitHub – prknight/Sam_Project [Online]. Available: https://github.com/prknight/Sam_Project.