This document proposes a new intrusion detection system called PAIDS (Proximity-Assisted Intrusion Detection System) to identify unknown worms. Existing signature-based and anomaly-based detection systems are ineffective against new worms that spread quickly. PAIDS takes advantage of the clustered spread of worms among nearby hosts, especially in the early stages, rather than relying on signatures. It aims to detect worm outbreaks when they first begin spreading to limit their propagation. Preliminary simulations show PAIDS has a high detection rate and low false positive rate.
Enhancing Intrusion Detection System with Proximity Information - Zhenyun Zhuang
This document proposes PAIDS, a Proximity-Assisted Intrusion Detection System that identifies unknown worm outbreaks by leveraging proximity information of compromised hosts. PAIDS operates independently from existing signature-based and anomaly-based IDS approaches. It observes that compromised hosts tend to cluster geographically and remain active for long periods, allowing proximity to infected machines to indicate higher infection risk. The document motivates PAIDS based on limitations of other IDSes and clustered/long-term nature of worm spread. It then outlines PAIDS design, deployment model, software architecture, and key components for detecting outbreaks using proximity information.
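The core PAIDS observation — that proximity to known-infected hosts indicates higher infection risk — can be illustrated with a toy risk score. This sketch is not the paper's actual algorithm; the coordinate representation, decay function, and parameter names are all illustrative assumptions.

```python
import math

def proximity_risk(host, infected_hosts, radius=50.0):
    """Toy risk score: hosts near known-infected machines score higher.

    Each host is an (x, y) coordinate standing in for network or
    geographic proximity; the exponential decay is illustrative only.
    """
    risk = 0.0
    for ix, iy in infected_hosts:
        dist = math.hypot(host[0] - ix, host[1] - iy)
        risk += math.exp(-dist / radius)  # closer infections contribute more
    return risk

infected = [(10.0, 10.0), (12.0, 8.0)]
near = proximity_risk((11.0, 9.0), infected)   # close to the cluster
far = proximity_risk((500.0, 500.0), infected)  # far from the cluster
assert near > far
```

A deployed system would of course derive proximity from network topology or IP geolocation rather than raw coordinates, but the ranking principle is the same.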
This document summarizes a research paper that proposes new schemes called Power Spectral Density (PSD) and Spectral Flatness Measure (SFM) to detect camouflaging worms (C-worms). C-worms are a new type of worm that can hide their traffic patterns to avoid detection by existing anti-worm software. The proposed schemes aim to differentiate C-worm traffic from normal background traffic and normal worm traffic in the frequency domain, since their traffic patterns cannot be differentiated in the time domain. The results of applying PSD and SFM showed they were effective in detecting C-worms while existing detection systems could not distinguish C-worm and normal worm traffic.
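The Spectral Flatness Measure at the heart of the frequency-domain approach is straightforward to compute: it is the ratio of the geometric mean to the arithmetic mean of the power spectral density. The sketch below shows the measure itself, not the paper's full C-worm detection pipeline, and the signal parameters are made up for illustration.

```python
import numpy as np

def spectral_flatness(signal):
    """Spectral Flatness Measure: geometric mean / arithmetic mean of the
    power spectral density. Values near 1 indicate a flat (noise-like)
    spectrum; values near 0 indicate power concentrated at few frequencies."""
    psd = np.abs(np.fft.rfft(signal)) ** 2
    psd = psd[1:]       # drop the DC component
    psd = psd + 1e-12   # avoid log(0) in the geometric mean
    geo_mean = np.exp(np.mean(np.log(psd)))
    return geo_mean / np.mean(psd)

rng = np.random.default_rng(0)
noise = rng.normal(size=1024)            # flat, noise-like spectrum
t = np.arange(1024)
periodic = np.sin(2 * np.pi * t / 64)    # power at a single frequency
assert spectral_flatness(noise) > spectral_flatness(periodic)
```

The intuition for detection is that a C-worm's camouflaged scan traffic still leaves a recognizably structured (low-flatness) spectral signature even when its time-domain volume looks like background traffic.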
Autonomic Anomaly Detection System in Computer Networks - ijsrd.com
This paper describes how to protect a system from intrusion through the methods of intrusion prevention and intrusion detection. The underlying premise of the intrusion detection system is to describe each attack as an instance of an ontology, which first requires detecting the attack. The paper proposes a novel framework for autonomic intrusion detection that performs online, adaptive detection over unlabeled HTTP traffic streams in computer networks. The framework is self-governing: self-labeling, self-updating, and self-adapting. It employs the Affinity Propagation (AP) algorithm to learn a subject's behavior by dynamically clustering the streaming data, automatically labeling the data and adapting to changes in normal behavior while identifying anomalies.
This document summarizes a research paper analyzing a layered defense system in a virtual lab environment. The paper discusses using tools like honeypots, pfSense firewall, and an intrusion detection system together to form a layered defense model. The researchers used various tools in Kali Linux to simulate attacks and analyze vulnerabilities in the defensive systems. Literature on topics like honeypots, Nmap, pfSense, firewalls, and penetration testing was also reviewed to support the research. The virtual lab experiment tested the layered defense approach against simulated attacks.
Modeling and Containment of Uniform Scanning Worms - IOSR Journals
This document presents a branching-process model for characterizing the propagation of uniform scanning worms on the Internet. The model captures both the inter-host and intra-host spreading of worms. It then describes an automatic worm containment strategy that aims to contain uniform scanning worms by detecting infected machines through scanning and deleting worm files. The model and containment strategy are validated through simulations. The document concludes by discussing the modeling of topology-aware worms and the design of containment mechanisms for them.
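The dynamics of a uniform scanning worm — each infected host probing uniformly random addresses — are easy to simulate directly. The sketch below is an illustrative toy, not the paper's branching-process model; the address-space size, vulnerable population, and scan rate are arbitrary assumptions.

```python
import random

def simulate_uniform_scan(address_space=100_000, vulnerable=1_000,
                          scans_per_tick=10, ticks=200, seed=42):
    """Toy uniform-scanning worm: every infected host probes random
    addresses each tick; a probe that hits an uninfected vulnerable
    host infects it. Returns the infected count per tick."""
    rng = random.Random(seed)
    vulnerable_hosts = set(rng.sample(range(address_space), vulnerable))
    infected = {next(iter(vulnerable_hosts))}   # patient zero
    history = [len(infected)]
    for _ in range(ticks):
        new = set()
        for _ in range(len(infected) * scans_per_tick):
            target = rng.randrange(address_space)
            if target in vulnerable_hosts and target not in infected:
                new.add(target)
        infected |= new
        history.append(len(infected))
    return history

h = simulate_uniform_scan()
assert h == sorted(h)   # infections never decrease
assert h[-1] > h[0]     # the worm spreads
```

Plotting `history` produces the familiar sigmoid epidemic curve; a containment strategy succeeds to the extent that it cuts the curve off during the slow early phase.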
This document discusses cyber crime trends in 2013 and cyber security solutions. It begins with an introduction of the author and their background. It then defines various types of cyber crimes like online scams, identity theft, fraud, and embezzlement. International cyber crime trends are discussed along with increasing sophistication of attacks. Solutions discussed include integrated threat protection through application control, intrusion prevention, web filtering, vulnerability management, antispam, and antivirus technologies. The document concludes with information about the author's company and resources.
The presentation describes the idea of heuristic scanning, a method used for malware detection and recognition by almost every modern antivirus product. I explain how heuristic scanning works, why it is better than conventional solutions like signature scanning, and how it bypasses anti-heuristic techniques used by malware. Finally, I present modern and even future solutions such as Nereus, a genetic heuristic engine developed by Panda Security.
Clustering Categorical Data for Internet Security Applications - IJSTA
This document summarizes research on clustering categorical data for internet security applications. It discusses using clustering techniques for malware categorization, phishing website detection, and detecting secure emails. Feature extraction and categorization are generally used to automatically group file samples or websites. The document also reviews several related works applying clustering and other techniques for malware analysis, phishing detection, and analyzing privacy-breaching malware behavior.
Current Studies On Intrusion Detection System, Genetic Algorithm And Fuzzy Logic - ijdpsjournal
This document summarizes a research paper on current studies of intrusion detection systems using genetic algorithms and fuzzy logic. The paper presents an overview of intrusion detection systems, including different techniques like misuse detection and anomaly detection. It discusses using genetic algorithms to generate fuzzy rules to characterize normal and abnormal network behavior in order to reduce false alarms. The paper also outlines the dataset, genetic algorithm approach, and use of fuzzy logic that are proposed for the intrusion detection system.
Intrusion detection systems aim to detect unauthorized access to computer systems and networks. There are three main types: anomaly-based detection identifies deviations from normal behavior profiles; signature-based detection looks for known threat patterns; and hybrid detection combines the two approaches. Intrusion detection systems are also classified based on their monitoring scope, including network-based systems that monitor network traffic and host-based systems that monitor logs and activities on individual computers. Recent research focuses on developing more effective hybrid systems and methods that can detect both known and unknown threats.
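The three detection types above can be sketched in a few lines: a signature detector matches known patterns, an anomaly detector flags deviations from a learned baseline, and a hybrid detector combines the two. The signatures, baseline statistics, and threshold below are hypothetical values chosen purely for illustration.

```python
# Hypothetical signatures and baseline, for illustration only.
SIGNATURES = {"' OR 1=1", "../../etc/passwd"}

def signature_alert(payload):
    """Signature-based: flag payloads containing a known attack pattern."""
    return any(sig in payload for sig in SIGNATURES)

def anomaly_alert(request_rate, baseline_mean=20.0, baseline_std=5.0, k=3.0):
    """Anomaly-based: flag rates more than k standard deviations above
    the learned baseline (a simple z-score model)."""
    return request_rate > baseline_mean + k * baseline_std

def hybrid_alert(payload, request_rate):
    """Hybrid: raise an alert if either detector fires."""
    return signature_alert(payload) or anomaly_alert(request_rate)

assert hybrid_alert("id=' OR 1=1 --", 10.0)   # known signature, normal rate
assert hybrid_alert("id=42", 90.0)            # benign payload, anomalous rate
assert not hybrid_alert("id=42", 21.0)        # benign payload, normal rate
```

The example makes the complementarity concrete: the signature branch catches known attacks at any traffic level, while the anomaly branch catches unknown attacks that perturb behavior.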
A review of anomaly based intrusions detection in multi tier web applications - iaemedu
This document provides a review of anomaly-based intrusion detection techniques for multi-tier web applications. It begins with an introduction to intrusion detection systems and the differences between misuse detection and anomaly detection. It then reviews several existing anomaly detection approaches including rule-based systems, multimodal approaches, state transition analysis, profiling of internal application states, and combined approaches that analyze both web requests and database queries. The key advantages and disadvantages of each technique are discussed. Overall, the document analyzes different methods for building behavior models and detecting anomalies to identify intrusions in complex multi-tier web applications.
The document discusses Darktrace's Enterprise Immune System technology, which takes inspiration from the human immune system to provide cyber defense. It uses unsupervised machine learning and advanced mathematics to learn what normal network behavior looks like and detect anomalies indicating threats. This self-learning approach can identify new threats that traditional signature-based tools miss. The system also automatically responds to threats with targeted digital responses. Darktrace's technology represents a new approach to cybersecurity that is better suited to today's sophisticated and unpredictable threat landscape.
In recent years, wireless sensor networks (WSNs) have been used in many application areas, such as monitoring, tracking, and control in the IoT. For many WSN applications, security is a crucial requirement. However, security solutions for WSNs differ from those for traditional networks because of resource limitations and processing constraints. This paper analyzes the security solutions TinySec, IEEE 802.15.4, SPINS, MiniSEC, LSec, LLSP, LISA, and LISP in WSNs. It also presents WSN characteristics, security requirements, attacks, cryptographic algorithms, and modes of operation, and is intended to be useful for security designers in WSNs.
1) The document discusses different types of intruders including masqueraders, misfeasors, and clandestine users. Masqueraders are outsiders who penetrate access controls, misfeasors are legitimate users who access unauthorized data, and clandestine users seize control to evade detection.
2) Intruder attacks range from benign curiosity to serious attempts to access privileged data or disrupt systems. Common intrusion examples include password cracking, unauthorized data access, and packet sniffing.
3) Intrusion detection is important as a secondary line of defense when prevention fails. It can help identify intruders, collect information on techniques, and act as a deterrent. Behavior-based detection looks for deviations from established patterns of normal activity.
Intrusion Detection System - False Positive Alert Reduction Technique - IDES Editor
An Intrusion Detection System (IDS) handles intrusions in a computing environment by triggering alerts so that analysts can take action to stop them. However, an IDS raises an alert for every suspicious activity, producing thousands of alerts that analysts must examine. Most of these alerts are false positives, arising from partial attack patterns or a lack of environmental knowledge. The alerts vary in severity, and many do not deserve close attention because of the large number of false alerts among them. Monitoring and identifying the risky alerts is a major concern for security administrators. The need to delete false alerts, or to reduce the overall volume of alerts (false or real), has led researchers to design an operational model for minimizing false positive alarms, including alarms that recur for the security administrator. In this paper we propose a method that can reduce such false positive alarms.
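One common family of false-positive reduction techniques is alert aggregation: group raw alerts by source and signature and suppress groups that occur too rarely to be interesting. The grouping key, threshold, and field names below are illustrative assumptions, not the paper's actual model.

```python
from collections import defaultdict

def reduce_alerts(alerts, min_count=3):
    """Toy false-positive reduction: group alerts by (source, signature)
    and keep only groups that recur at least min_count times, on the
    assumption that isolated one-off alerts are likely noise."""
    groups = defaultdict(list)
    for alert in alerts:
        groups[(alert["src"], alert["sig"])].append(alert)
    return [a for g in groups.values() if len(g) >= min_count for a in g]

alerts = (
    [{"src": "10.0.0.5", "sig": "port-scan"}] * 4   # recurring: keep
    + [{"src": "10.0.0.9", "sig": "ping"}]          # one-off: drop
)
kept = reduce_alerts(alerts)
assert len(kept) == 4
assert all(a["src"] == "10.0.0.5" for a in kept)
```

The obvious trade-off is that a genuine low-and-slow attack can hide below the recurrence threshold, which is why aggregation is usually combined with severity weighting rather than used alone.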
A Secure Intrusion Detection System against DDOS Attack in Wireless Ad-Hoc Ne... - IJERA Editor
A MANET (mobile wireless ad-hoc network) is a technology used in daily life, in activities such as traffic surveillance and building construction, as well as on the battlefield. In a MANET there is no centralized controller, so each node has its own routing capability; every node acts as a device and changes its connections to other devices. The main problem in today's MANETs is security, precisely because there is no centralized controller. Our aim is to protect them from DDoS attacks in terms of message flooding, packet drops, end-to-end delay, and energy depletion. To that end we apply several techniques for saving node energy and for identifying malicious nodes and types of DDoS attack, which we discuss in this paper.
This document summarizes the key findings from an analysis of over 26,000 malware samples collected over 3 months from over 1,000 enterprise networks. The analysis found that 90% of unknown malware was delivered via web browsing, with an average of 20 days to detection compared to 5 days for email-delivered malware. The document provides recommendations to address unknown malware such as bringing anti-malware technologies into networks, enabling real-time detection and blocking, and enforcing user and application controls on files transfers.
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Artificial Intelligence in Virus Detection & Recognition - ahmadali999
Heuristic scanning is a method for virus detection and recognition that uses metaheuristics like pattern matching, automatic learning, and environment emulation. It examines program behaviors and characteristics to recognize potential threats without specific knowledge of their code or structure. While heuristic scanning can detect new threats, it risks false positives by flagging innocent programs. Continued development focuses on improving accuracy through techniques like automatic learning from non-infected machines.
Detecting Anomaly IDS in Network using Bayesian Network - IOSR Journals
In a hostile network environment, protecting the sink is a severe challenge, requiring flexible and adaptive security-oriented approaches against malicious activity. Intrusion detection is the act of detecting and monitoring unwanted activity and traffic on a network or device that violates security policy. This paper begins with a review of the best-known anomaly-based intrusion detection techniques. An anomaly-based intrusion detection system (AIDS) detects computer intrusions, the kind of misuse that falls outside normal operation, by monitoring system activity and classifying it as either normal or anomalous. It is based on machine-learning AIDS models that allow the analyzed attacks to be categorized and probabilistic relationships among attacks to be found using a Bayesian network.
This document discusses various topics related to intruders and network security. It covers intrusion techniques like password guessing and capture. It also discusses approaches to intrusion detection such as statistical anomaly detection, rule-based detection, and audit record analysis. Finally, it discusses password management strategies like education, computer-generated passwords, and proactive password checking.
Darktrace Antigena is an automated response capability that allows organizations to respond to cyber threats without disrupting normal business operations. As a "digital antibody", Antigena detects threats uniquely identified by Darktrace and automatically takes measured and targeted responses. This includes terminating abnormal connections while leaving normal activities unaffected. Antigena's dynamic boundary enforces each user and device's normal "pattern of life" to combat threats faster than any security team.
Intrusion Detection and Prevention System in an Enterprise Network - Okehie Collins
This document describes a project on intrusion detection and prevention systems in an enterprise network. It was submitted by Okehie Collins Obinna to the Department of Computer Science at the Federal University of Technology in partial fulfillment of a Bachelor of Technology degree in Computer Science. The project analyzes intrusion detection and prevention technologies used in enterprise networks and designs a desktop application to monitor a computer network system for possible intrusions and provide an interface for a network administrator.
This document provides an overview and introduction to various computer security threats. It explains that today's threats are more likely to be low-profile and targeted towards financial gain, such as encrypting files and demanding ransom, or hacking to steal banking or credit card details. Future threats may be difficult to predict but will likely continue to exploit opportunities for criminal profit. The document then provides definitions and descriptions of specific threat types from A to Z.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
A3: application-aware acceleration for wireless data networks - Zhenyun Zhuang
This document discusses application-aware acceleration (A3) for improving application performance over wireless networks. It presents results showing that while enhanced transport protocols improve performance for FTP, they provide little benefit for other popular applications like CIFS, SMTP, and HTTP. This is because the behavior of these applications, designed for reliable LANs, negatively impacts their performance over lossy wireless links. The document proposes A3 as a middleware solution that offsets these behavioral problems through application-specific design principles, while remaining transparent to applications.
AOTO: Adaptive overlay topology optimization in unstructured P2P systems - Zhenyun Zhuang
IEEE GLOBECOM 2003
Peer-to-Peer (P2P) systems are self-organized and decentralized. However, the mechanism of peers randomly joining and leaving a P2P network causes a topology mismatch between the P2P logical overlay network and the physical underlying network. The topology mismatch problem puts great stress on the Internet infrastructure and seriously limits the performance gain from various search and routing techniques. We propose Adaptive Overlay Topology Optimization (AOTO), an algorithm that builds an overlay multicast tree between each source node and its direct logical neighbors, alleviating the mismatch problem by choosing closer nodes as logical neighbors while providing a larger query coverage range. AOTO is scalable and completely distributed in the sense that it does not require global knowledge of the whole overlay network when each node optimizes the organization of its logical neighbors. Simulation shows that AOTO can effectively solve the mismatch problem and reduce more than 55% of the traffic generated by the P2P system itself.
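The "choose closer nodes as logical neighbors" idea can be sketched as a greedy neighbor-replacement step: measure round-trip times to current neighbors and discovered candidates, then keep the closest. This is an illustrative toy of the principle only; the function names, the RTT values, and the greedy policy are assumptions, not AOTO's exact distributed algorithm.

```python
def optimize_neighbors(neighbor_rtts, candidate_rtts, max_neighbors=4):
    """Greedily keep the max_neighbors lowest-RTT peers from the current
    logical neighbors plus newly discovered candidates, shrinking the
    mismatch between overlay links and the physical network."""
    pool = {**neighbor_rtts, **candidate_rtts}   # peer -> measured RTT (ms)
    return set(sorted(pool, key=pool.get)[:max_neighbors])

neighbors = {"A": 120.0, "B": 300.0, "C": 45.0, "D": 250.0}
candidates = {"E": 30.0, "F": 400.0}
chosen = optimize_neighbors(neighbors, candidates)
assert "E" in chosen and "C" in chosen   # nearby peers kept
assert "F" not in chosen                 # distant candidate rejected
```

In the actual distributed setting each node runs such a step using only local measurements, which is what makes the optimization scale without global topology knowledge.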
Hazard avoidance in wireless sensor and actor networks - Zhenyun Zhuang
This document discusses hazards that can occur in wireless sensor and actor networks due to out-of-order execution of queries and commands. It identifies three types of hazards:
1) Command-after-command (CAC) hazard occurs when the order of two sequential commands is reversed.
2) Query-after-command (QAC) hazard occurs when a query is executed before the corresponding command.
3) Command-after-query (CAQ) hazard is the reverse of QAC, where a command is executed before its preceding query.
The document uses an example of a fire detection and suppression system to illustrate these hazards and their undesirable consequences. It also discusses challenges in addressing hazards such as parallel
On the Impact of Mobile Hosts in Peer-to-Peer Data Networks - Zhenyun Zhuang
This document analyzes the performance issues faced by mobile hosts participating in peer-to-peer (P2P) data networks like BitTorrent. It finds that the design of P2P networks is incompatible with the characteristics of wireless networks, causing poor performance for mobile users. It then presents a solution called wireless P2P (wP2P) that addresses these issues through techniques only applied on mobile hosts, improving performance for both mobile and fixed peers. An evaluation shows wP2P provides significant gains over existing P2P applications on mobile networks.
Current Studies On Intrusion Detection System, Genetic Algorithm And Fuzzy Logicijdpsjournal
This document summarizes a research paper on current studies of intrusion detection systems using genetic algorithms and fuzzy logic. The paper presents an overview of intrusion detection systems, including different techniques like misuse detection and anomaly detection. It discusses using genetic algorithms to generate fuzzy rules to characterize normal and abnormal network behavior in order to reduce false alarms. The paper also outlines the dataset, genetic algorithm approach, and use of fuzzy logic that are proposed for the intrusion detection system.
Intrusion detection systems aim to detect unauthorized access to computer systems and networks. There are three main types: anomaly-based detection identifies deviations from normal behavior profiles; signature-based detection looks for known threat patterns; and hybrid detection combines the two approaches. Intrusion detection systems are also classified based on their monitoring scope, including network-based systems that monitor network traffic and host-based systems that monitor logs and activities on individual computers. Recent research focuses on developing more effective hybrid systems and methods that can detect both known and unknown threats.
A review of anomaly based intrusions detection in multi tier web applicationsiaemedu
This document provides a review of anomaly-based intrusion detection techniques for multi-tier web applications. It begins with an introduction to intrusion detection systems and the differences between misuse detection and anomaly detection. It then reviews several existing anomaly detection approaches including rule-based systems, multimodal approaches, state transition analysis, profiling of internal application states, and combined approaches that analyze both web requests and database queries. The key advantages and disadvantages of each technique are discussed. Overall, the document analyzes different methods for building behavior models and detecting anomalies to identify intrusions in complex multi-tier web applications.
The document discusses Darktrace's Enterprise Immune System technology, which takes inspiration from the human immune system to provide cyber defense. It uses unsupervised machine learning and advanced mathematics to learn what normal network behavior looks like and detect anomalies indicating threats. This self-learning approach can identify new threats that traditional signature-based tools miss. The system also automatically responds to threats with targeted digital responses. Darktrace's technology represents a new approach to cybersecurity that is better suited to today's sophisticated and unpredictable threat landscape.
In recent years, wireless sensor network (WSN) is used in several application areas resembling observance, tracking, and dominant in IoTs. for several applications of WSN, security is a crucial demand. However, security solutions in WSN disagree from ancient networks because of resource limitation and process constraints. This paper analyzes security solutions: TinySec, IEEE 802.15.4, SPINS, MiniSEC, LSec, LLSP, LISA, and LISP in WSN. This paper additionally presents characteristics, security needs, attacks, cryptography algorithms, and operation modes. This paper is taken into account to be helpful for security designers in WSNs.
company names mentioned herein are for identification and educational purposes only and are the property of, and may be trademarks of, their respective owners.
1) The document discusses different types of intruders including masqueraders, misfeasors, and clandestine users. Masqueraders are outsiders who penetrate access controls, misfeasors are legitimate users who access unauthorized data, and clandestine users seize control to evade detection.
2) Intruder attacks range from benign curiosity to serious attempts to access privileged data or disrupt systems. Common intrusion examples include password cracking, unauthorized data access, and packet sniffing.
3) Intrusion detection is important as a secondary line of defense when prevention fails. It can help identify intruders, collect information on techniques, and act as a deterrent. Behavior-based detection looks for
Intrusion Detection System - False Positive Alert Reduction TechniqueIDES Editor
Intrusion Detection System (IDS) is the most
powerful system that can handle the intrusions of the computer
environments by triggering alerts to make the analysts take
actions to stop this intrusion, but the IDS is triggering alerts
for any suspicious activity which means thousand alerts that
the analysts should take care of it. IDS generate a large
number of alerts and most of them are false positive as the
behavior construe for partial attack pattern or lack of
environment knowledge. These Alerts has different severities
and most of them don’t require big attention because of the
huge number of the false alerts among them. Monitoring and
identifying risky alerts is a major concern to security
administrator. Deleting the false alerts or reducing the
amount of the alerts (false alerts or real alerts) from the
entire amount alerts lead the researchers to design an
operational model for minimization of false positive alarms,
including recurring alarms by security administrator. In this
paper we are proposing a method, which can reduce such kind
of false positive alarms.
A Secure Intrusion Detection System against DDOS Attack in Wireless Ad-Hoc Ne...IJERA Editor
MANET (Wireless Mobile Ad-hoc Network) is a technology used in everyday activities such as traffic surveillance and building construction, as well as on the battlefield. A MANET has no centralized controller, so every node performs its own routing; each node acts as both an end device and a router, and its connections to other devices change over time. The main problem in today's MANETs is security, precisely because there is no centralized controller. Our aim is to protect MANETs from DDoS attacks, measured in terms of message flooding, packet drops, end-to-end delay, and energy depletion. To that end we apply several techniques for conserving node energy and for identifying malicious nodes and the type of DDoS attack; this paper discusses those techniques.
This document summarizes the key findings from an analysis of over 26,000 malware samples collected over 3 months from over 1,000 enterprise networks. The analysis found that 90% of unknown malware was delivered via web browsing, with an average of 20 days to detection compared to 5 days for email-delivered malware. The document provides recommendations to address unknown malware such as bringing anti-malware technologies into networks, enabling real-time detection and blocking, and enforcing user and application controls on files transfers.
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Artificial Intelligence in Virus Detection & Recognition (ahmadali999)
Heuristic scanning is a method for virus detection and recognition that uses metaheuristics like pattern matching, automatic learning, and environment emulation. It examines program behaviors and characteristics to recognize potential threats without specific knowledge of their code or structure. While heuristic scanning can detect new threats, it risks false positives by flagging innocent programs. Continued development focuses on improving accuracy through techniques like automatic learning from non-infected machines.
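The scoring idea behind heuristic scanning can be sketched as a weighted sum over observed behaviors. The behavior names, weights, and threshold below are invented for illustration, not taken from the summarized work:

```python
# Sketch of heuristic scanning as weighted scoring (illustrative only):
# each observed behavior contributes a weight, and a program whose total
# score crosses a threshold is flagged as suspicious. Weights and the
# threshold are made-up values.

HEURISTIC_WEIGHTS = {
    "writes_to_boot_sector": 5,
    "modifies_other_executables": 4,
    "hooks_keyboard": 3,
    "opens_network_socket": 1,
}

def heuristic_score(behaviors):
    """Sum the weights of all recognized suspicious behaviors."""
    return sum(HEURISTIC_WEIGHTS.get(b, 0) for b in behaviors)

def is_suspicious(behaviors, threshold=5):
    return heuristic_score(behaviors) >= threshold

# A program that only opens a socket stays below the threshold, while
# one that also patches executables and hooks input does not.
print(is_suspicious(["opens_network_socket"]))                     # False
print(is_suspicious(["opens_network_socket",
                     "modifies_other_executables",
                     "hooks_keyboard"]))                           # True
```

The false-positive risk mentioned above shows up directly here: any benign program whose legitimate behaviors happen to score past the threshold is flagged.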
Detecting Anomaly IDS in Network using Bayesian Network (IOSR Journals)
In a hostile network area, protecting the sink is a severe challenge, requiring flexible and adaptive security-oriented defenses against malicious activity. Intrusion detection is the act of detecting and monitoring unwanted activity and traffic on a network or device that violates security policy. This paper begins with a review of the best-known anomaly-based intrusion detection techniques. An anomaly-based IDS (AIDS) detects computer intrusions, misuse that falls outside normal operation, by monitoring system activity and classifying it as either normal or anomalous. The approach is based on a machine-learning AIDS scheme that categorizes the analyzed attacks and finds probabilistic relationships among them using a Bayesian network.
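The probabilistic-classification idea can be illustrated with a minimal naive Bayes classifier over discrete connection features. Note this is a simplification: a full Bayesian network, as used in the summarized paper, also models dependencies between features, which naive Bayes deliberately ignores. The features, values, and training data below are synthetic:

```python
from collections import defaultdict

# Toy naive Bayes over discrete connection features. Synthetic data;
# a real Bayesian-network IDS would also learn inter-feature structure.

def train(samples):
    """samples: list of (feature_dict, label).
    Returns counts for P(label) and P(feature=value | label)."""
    label_counts = defaultdict(int)
    feat_counts = defaultdict(int)   # (label, feature, value) -> count
    for feats, label in samples:
        label_counts[label] += 1
        for f, v in feats.items():
            feat_counts[(label, f, v)] += 1
    return label_counts, feat_counts

def classify(feats, label_counts, feat_counts):
    total = sum(label_counts.values())
    best, best_p = None, -1.0
    for label, n in label_counts.items():
        p = n / total
        for f, v in feats.items():
            # Laplace smoothing so unseen values don't zero out the product.
            p *= (feat_counts[(label, f, v)] + 1) / (n + 2)
        if p > best_p:
            best, best_p = label, p
    return best

data = [
    ({"proto": "tcp", "flag": "SYN"}, "normal"),
    ({"proto": "tcp", "flag": "SYN"}, "normal"),
    ({"proto": "icmp", "flag": "FLOOD"}, "anomalous"),
    ({"proto": "icmp", "flag": "FLOOD"}, "anomalous"),
]
lc, fc = train(data)
print(classify({"proto": "icmp", "flag": "FLOOD"}, lc, fc))  # anomalous
```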
This document discusses various topics related to intruders and network security. It covers intrusion techniques like password guessing and capture. It also discusses approaches to intrusion detection such as statistical anomaly detection, rule-based detection, and audit record analysis. Finally, it discusses password management strategies like education, computer-generated passwords, and proactive password checking.
Darktrace Antigena is an automated response capability that allows organizations to respond to cyber threats without disrupting normal business operations. As a "digital antibody", Antigena detects threats uniquely identified by Darktrace and automatically takes measured and targeted responses. This includes terminating abnormal connections while leaving normal activities unaffected. Antigena's dynamic boundary enforces each user and device's normal "pattern of life" to combat threats faster than any security team.
Intrusion Detection and Prevention System in an Enterprise Network (Okehie Collins)
This document describes a project on intrusion detection and prevention systems in an enterprise network. It was submitted by Okehie Collins Obinna to the Department of Computer Science at the Federal University of Technology in partial fulfillment of a Bachelor of Technology degree in Computer Science. The project analyzes intrusion detection and prevention technologies used in enterprise networks and designs a desktop application to monitor a computer network system for possible intrusions and provide an interface for a network administrator.
This document provides an overview and introduction to various computer security threats. It explains that today's threats are more likely to be low-profile and targeted towards financial gain, such as encrypting files and demanding ransom, or hacking to steal banking or credit card details. Future threats may be difficult to predict but will likely continue to exploit opportunities for criminal profit. The document then provides definitions and descriptions of specific threat types from A to Z.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
A3: application-aware acceleration for wireless data networks (Zhenyun Zhuang)
This document discusses application-aware acceleration (A3) for improving application performance over wireless networks. It presents results showing that while enhanced transport protocols improve performance for FTP, they provide little benefit for other popular applications like CIFS, SMTP, and HTTP. This is because the behavior of these applications, designed for reliable LANs, negatively impacts their performance over lossy wireless links. The document proposes A3 as a middleware solution that offsets these behavioral problems through application-specific design principles, while remaining transparent to applications.
AOTO: Adaptive overlay topology optimization in unstructured P2P systems (Zhenyun Zhuang)
IEEE GLOBECOM 2003
Peer-to-Peer (P2P) systems are self-organized and decentralized. However, the mechanism of peers randomly joining and leaving a P2P network causes topology mismatching between the P2P logical overlay network and the physical underlying network. The topology mismatching problem puts great stress on the Internet infrastructure and seriously limits the performance gain from various search or routing techniques. We propose the Adaptive Overlay Topology Optimization (AOTO) technique, an algorithm that builds an overlay multicast tree between each source node and its direct logical neighbors, alleviating the mismatching problem by choosing closer nodes as logical neighbors while providing a larger query coverage range. AOTO is scalable and completely distributed in the sense that no node requires global knowledge of the whole overlay network when optimizing the organization of its logical neighbors. Simulations show that AOTO can effectively solve the mismatching problem and reduce the traffic generated by the P2P system itself by more than 55%.
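The local optimization step can be sketched as a neighbor swap using only locally measured distances. The distance table and node names below are invented; a real implementation would measure latency to neighbors and neighbors-of-neighbors:

```python
# Local sketch of the AOTO idea: a node, using only measured distances
# to its logical neighbors and nearby candidates (no global knowledge),
# swaps its farthest logical neighbor for a closer candidate. The
# distance table is a made-up cost matrix, not real latency probes.

def optimize_once(me, neighbors, candidates, dist):
    """Replace the farthest current neighbor with the closest candidate
    if that reduces cost. Returns the new neighbor list."""
    if not neighbors or not candidates:
        return neighbors
    worst = max(neighbors, key=lambda n: dist[(me, n)])
    best = min(candidates, key=lambda c: dist[(me, c)])
    if dist[(me, best)] < dist[(me, worst)]:
        return [n for n in neighbors if n != worst] + [best]
    return neighbors

dist = {("A", "B"): 40, ("A", "C"): 10, ("A", "D"): 3}
# A keeps B (cost 40) as a logical neighbor, but learns via C that D
# (cost 3) is much closer in the physical network, so B is swapped out.
print(optimize_once("A", ["B", "C"], ["D"], dist))  # ['C', 'D']
```

Repeating this step at every node gradually aligns the logical overlay with the physical topology, which is the mismatch-reduction effect the abstract describes.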
Hazard avoidance in wireless sensor and actor networks (Zhenyun Zhuang)
This document discusses hazards that can occur in wireless sensor and actor networks due to out-of-order execution of queries and commands. It identifies three types of hazards:
1) Command-after-command (CAC) hazard occurs when the order of two sequential commands is reversed.
2) Query-after-command (QAC) hazard occurs when a query is executed before the corresponding command.
3) Command-after-query (CAQ) hazard is the reverse of QAC, where a command is executed before its preceding query.
The document uses an example of a fire detection and suppression system to illustrate these hazards and their undesirable consequences. It also discusses challenges in addressing hazards such as parallel
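All three hazard types stem from out-of-order execution, so one generic avoidance technique is a reorder buffer: every query and command carries a sequence number assigned at the source, and an actor executes operations strictly in sequence order, buffering any that arrive early. This is a standard technique sketched from the hazard definitions above, not the paper's actual protocol:

```python
import heapq

# Generic reorder buffer to avoid CAC/QAC/CAQ hazards: operations are
# executed strictly by source-assigned sequence number; early arrivals
# wait in a min-heap until their turn.

class OrderedExecutor:
    def __init__(self):
        self.next_seq = 0
        self.pending = []          # min-heap of (seq, op)
        self.executed = []

    def receive(self, seq, op):
        heapq.heappush(self.pending, (seq, op))
        # Drain every operation that is now in order.
        while self.pending and self.pending[0][0] == self.next_seq:
            _, ready = heapq.heappop(self.pending)
            self.executed.append(ready)
            self.next_seq += 1

ex = OrderedExecutor()
# The command "open valve" (seq 1) arrives before the query (seq 0)
# that should precede it -- the QAC situation described above.
ex.receive(1, "command: open valve")
ex.receive(0, "query: read temperature")
print(ex.executed)  # ['query: read temperature', 'command: open valve']
```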
On the Impact of Mobile Hosts in Peer-to-Peer Data Networks (Zhenyun Zhuang)
This document analyzes the performance issues faced by mobile hosts participating in peer-to-peer (P2P) data networks like BitTorrent. It finds that the design of P2P networks is incompatible with the characteristics of wireless networks, causing poor performance for mobile users. It then presents a solution called wireless P2P (wP2P) that addresses these issues through techniques only applied on mobile hosts, improving performance for both mobile and fixed peers. An evaluation shows wP2P provides significant gains over existing P2P applications on mobile networks.
Optimizing Streaming Server Selection for CDN-delivered Live Streaming (Zhenyun Zhuang)
LNCS 2012
Content Delivery Networks (CDNs) have been widely used to deliver
web contents on today’s Internet. Gaining tremendous popularity, live streaming
is also increasingly being delivered by CDNs. Compared to conventional static
or dynamic web contents, the new application type of live streaming exposes
unique characteristics that pose challenges to the underlying CDN infrastructure.
Unlike traditional web-objects fetching, which allows Edge Servers to cache contents
and thus typically only involves Edge Servers for delivering contents, live
streaming requires real-time full CDN-streaming paths that span across Ingest
Servers, Origin Servers and Edge Servers.
DNS is the standard mechanism for dynamically assigning servers to users. GeoDNS, a specialized DNS system, resolves names while taking into account the geographical locations of end-users and CDN servers. Though GeoDNS effectively redirects users to the nearest CDN Edge Servers, it may not select the optimal Origin Server for relaying a live stream to Edge Servers, due to the unique characteristics of live streaming. In this work, we consider the requirements of delivering live streaming over a CDN and propose an advanced design for selecting optimal Origin Streaming Servers, reducing network transit cost and improving viewers' experience. We further propose a live-streaming-specific GeoDNS design for selecting the optimal Origin Servers to serve Edge Servers.
Client-side web acceleration for low-bandwidth hosts (Zhenyun Zhuang)
This document proposes client-side optimizations to reduce web page load times for users on low-bandwidth networks. It analyzes problems with how current web browsers fetch entire pages greedily without prioritizing visible content. This wastes bandwidth and increases load times. The document proposes three browser-side mechanisms: 1) prioritizing the fetching of objects visible on the initial screen over other objects, 2) reordering object fetching to better utilize bandwidth, and 3) improving connection management. Simulations show these techniques can significantly reduce user-perceived response times compared to current browsers for low-bandwidth conditions.
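The first mechanism, visible-first prioritization, can be sketched as a simple ordering over a page's objects: everything on the initial screen is requested before off-screen objects, with a (hypothetical) tie-break by size so small visible objects render early. The page contents below are invented for illustration:

```python
# Sketch of "visible-first" fetch ordering: objects visible on the
# initial screen are requested before off-screen objects; ties broken
# by size so small objects render early. Page metadata is invented.

def fetch_order(objects):
    """objects: list of (name, visible_on_first_screen, size_bytes)."""
    return [name for name, visible, size in
            sorted(objects, key=lambda o: (not o[1], o[2]))]

page = [
    ("footer_banner.png", False, 90_000),
    ("hero_image.jpg",    True, 120_000),
    ("logo.png",          True,   4_000),
    ("analytics.js",      False,  20_000),
]
print(fetch_order(page))
# ['logo.png', 'hero_image.jpg', 'analytics.js', 'footer_banner.png']
```

A greedy browser would instead fetch in document order, spending scarce bandwidth on the footer banner before the hero image the user is actually looking at.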
Ensuring High-performance of Mission-critical Java Applications in Multi-tena... (Zhenyun Zhuang)
The document discusses problems with ensuring high performance of mission-critical Java applications in multi-tenant cloud environments. It identifies issues caused by resource sharing between applications on the same platform, such as memory pressure triggering page swapping and direct reclaiming, which can severely degrade Java application performance through increased garbage collection pauses and reduced throughput. The authors investigate two scenarios in a production environment and determine that transparent huge pages, memory pressure from other applications, and interactions between the JVM and Linux memory management are key factors impacting Java application performance in multi-tenant cloud setups.
Programmatic Right Here, Right Now (English Version) (Xavier Garrido)
Presentation given at the II Programmatic Forum on March 1st, 2016 in Madrid.
In this presentation, I tried to bring programmatic advertising closer to the audience in a simple way, emphasizing that we are facing a paradigm shift in the advertising industry.
Legal aspects of religion in the workplace (Ronald Brown)
1. Managing religion in the workplace is challenging as people have strong spiritual beliefs but companies must also avoid religious discrimination. While some firms welcome religious expressions, others keep religion out of the workplace.
2. A survey found 20% of workers reported experiencing religious prejudice or knew of others facing discriminatory treatment. This has led to lawsuits against companies over religious discrimination or failure to accommodate religious practices.
3. Employers must strive to balance religious expression with inclusion of all beliefs to avoid lawsuits. They should provide religious accommodations when possible without causing undue hardship. Anti-harassment policies and training can help prevent problems related to religion in the workplace.
The document discusses definitions of leadership and what makes a good leader. It provides several definitions:
1) "Leadership [is] creating the conditions in organizational systems so that people can do their best work" and "Leaders define or clarify goals for a group, which can be as small as a seminar or as large as a nation-state and mobilize the energies of members of the group to pursue those goals."
2) "Problem solving is the core of leadership" and "the art of accomplishing more than the science of management says is possible."
3) Leadership "is about coping with change" and about "motivating and inspiring: keeping people moving in the right direction, despite major obstacles."
What makes a leader and what is leadership (Ronald Brown)
Leadership is defined in multiple ways in the document. It involves creating conditions for people to do their best work, defining goals for a group, and motivating people to pursue those goals. It also involves problem solving and accomplishing more than what seems possible. Effective leadership requires vision, managing change, and having a clear sense of direction.
The document discusses biometrics, which uses physiological or behavioral human characteristics to identify individuals. It defines biometrics and describes a generic biometric system involving enrollment, sensors, feature extraction, and matching. The document outlines several types of biometrics including face recognition, fingerprints, hand geometry, iris/retina scans, DNA, keystrokes, and voice. It also discusses vulnerabilities in biometric systems such as spoofing attacks, template database leaks, and intrinsic limitations like false matches. The document proposes security approaches like feature transformations and cryptosystems to enhance biometric security.
Mutual Exclusion in Wireless Sensor and Actor Networks (Zhenyun Zhuang)
This document discusses mutual exclusion in wireless sensor and actor networks. It begins by introducing wireless sensor networks and how they have evolved into wireless sensor and actor networks which can both sense and act on their environments. This introduces new challenges around resource utilization that must be addressed. Specifically, the document identifies the problem of mutual exclusion - ensuring only a minimum necessary subset of actors take action for a given event to avoid issues like inefficient resource usage. It defines different types of mutual exclusion and proposes both a greedy centralized approach and a distributed localized approach to address this problem efficiently while meeting application-specific delay bounds and fully covering the event region.
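The "minimum necessary subset of actors" problem is essentially a set-cover instance, so the centralized greedy idea can be sketched with the textbook greedy heuristic: repeatedly pick the actor covering the most still-uncovered points of the event region. This illustrates only the coverage aspect; the paper's own algorithms also respect application-specific delay bounds. The region and actor ranges below are synthetic:

```python
# Greedy sketch of actor selection: choose a small subset of actors
# whose ranges together cover every point of the event region.
# Classic greedy set-cover heuristic; delay bounds are ignored here.

def greedy_actor_cover(event_points, actor_ranges):
    """actor_ranges: dict actor -> set of covered points.
    Returns the chosen actors, in selection order."""
    uncovered = set(event_points)
    chosen = []
    while uncovered:
        # Pick the actor covering the most still-uncovered points.
        actor = max(actor_ranges, key=lambda a: len(actor_ranges[a] & uncovered))
        gain = actor_ranges[actor] & uncovered
        if not gain:
            break  # region not fully coverable by the available actors
        chosen.append(actor)
        uncovered -= gain
    return chosen

region = {1, 2, 3, 4, 5}
ranges = {"a1": {1, 2, 3}, "a2": {3, 4}, "a3": {4, 5}, "a4": {2}}
print(greedy_actor_cover(region, ranges))  # ['a1', 'a3']
```

Here two actors suffice: a1 covers points 1-3 and a3 covers 4-5, so a2 and a4 stay idle, avoiding the redundant-actuation waste the abstract describes.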
Optimizing CDN Infrastructure for Live Streaming with Constrained Server Chai... (Zhenyun Zhuang)
This document proposes a method called Constrained Server Chaining (CSC) to optimize CDN infrastructure for live streaming. CSC allows CDN streaming servers to dynamically select upstream servers to pull live streams from, rather than only pulling from fixed ingest servers. This allows streaming servers to form constrained chains to minimize total transit costs for the CDN provider while ensuring end user experience is not compromised by capping delivery path lengths. The document outlines the problem definition, design overview, and software architecture of CSC and provides an example to motivate how CSC can reduce costs compared to traditional layered CDN structures.
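The trade-off CSC manages, minimizing transit cost while capping delivery-path length, can be illustrated with a tiny hop-capped shortest-path computation from the ingest server to an edge server. The topology and costs below are invented, and this Bellman-Ford-style dynamic program is a generic sketch of the constraint, not the paper's algorithm:

```python
# Sketch of the constrained-chain trade-off: find the cheapest
# ingest-to-edge delivery path using at most `cap` hops. Costs are
# invented numbers; real CSC decisions would use measured transit costs.

def cheapest_capped_path(edges, src, dst, cap):
    """edges: dict (u, v) -> cost. Returns (cost, path) within <= cap
    hops, or None if dst is unreachable within the cap."""
    best = {src: (0, [src])}            # node -> (cost, path) so far
    for _ in range(cap):
        nxt = dict(best)
        for (u, v), c in edges.items():
            if u in best:
                cand = best[u][0] + c
                if v not in nxt or cand < nxt[v][0]:
                    nxt[v] = (cand, best[u][1] + [v])
        best = nxt
    return best.get(dst)

costs = {("ingest", "s1"): 1, ("s1", "s2"): 1, ("s2", "edge"): 1,
         ("ingest", "edge"): 10}
# With a generous cap, a cheap 3-hop chain is allowed...
print(cheapest_capped_path(costs, "ingest", "edge", cap=3))
# ...but a tight cap forces the expensive direct path, protecting
# end-user latency at higher transit cost.
print(cheapest_capped_path(costs, "ingest", "edge", cap=2))
```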
Eliminating OS-caused Large JVM Pauses for Latency-sensitive Java-based Cloud... (Zhenyun Zhuang)
For PaaS-deployed (Platform as a Service) customer-facing applications (e.g., online gaming and online chatting), ensuring low latency is not just a preferred feature but a must-have. Given the popularity and power of Java platforms, a significant portion of today's PaaS platforms run Java. The JVM (Java Virtual Machine) manages a heap space to hold application objects. The heap can be frequently GC-ed (garbage collected), and applications can occasionally be stopped for a long time during some GC and JVM activities.
In this work, we investigated the JVM pause problem. We found that some large JVM STW (stop-the-world) pauses cannot be explained by application-level activities or by JVM activities during GC; instead, they are caused by OS mechanisms. We successfully reproduced such problems and root-caused them. The findings can be used to enhance JVM implementations. We also propose a set of solutions to mitigate and eliminate these large STW pauses, and we share our knowledge and experience in this writing.
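Pauses imposed from outside the application can be observed with a "hiccup meter" in the style of tools like jHiccup: a thread repeatedly sleeps for a short interval and records how much longer the sleep actually took. Large overshoots reveal stalls that no application code explains, e.g. the OS-level swapping and reclaim activity described above. This is a generic measurement sketch, not the paper's tooling:

```python
import time

# Minimal hiccup meter: measure how far actual sleeps overshoot the
# requested interval. Consistent large overshoots indicate stalls
# imposed by the platform (scheduler, memory reclaim, swapping)
# rather than by the application itself.

def measure_hiccups(interval_s=0.001, rounds=50):
    """Return the worst observed sleep overshoot in milliseconds."""
    worst_ms = 0.0
    for _ in range(rounds):
        start = time.perf_counter()
        time.sleep(interval_s)
        overshoot = (time.perf_counter() - start) - interval_s
        worst_ms = max(worst_ms, overshoot * 1000.0)
    return worst_ms

print(f"worst sleep overshoot: {measure_hiccups():.2f} ms")
```

Running such a probe alongside a Java service separates JVM/GC-attributable pauses from OS-caused ones, which is the distinction at the heart of the investigation above.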
OCPA: An Algorithm for Fast and Effective Virtual Machine Placement and Assig... (Zhenyun Zhuang)
This document proposes a method called Constrained Server Chaining (CSC) to optimize CDN infrastructure for live streaming. CSC allows CDN streaming servers to dynamically select upstream servers to pull live streams from, rather than only pulling from fixed ingest servers. This can reduce transit costs for CDN providers by creating more direct paths between servers. However, CSC also imposes a delivery length cap to avoid compromising end user experience with longer paths. The document describes the problem CSC addresses, an illustrative example of how CSC works, and the key components of CSC including cost determination, length cap determination, and server connection monitoring.
Mobile Hosts Participating in Peer-to-Peer Data Networks: Challenges and Solu... (Zhenyun Zhuang)
Wireless Networks (2010)
http://dl.acm.org/citation.cfm?id=1873504
Peer-to-peer (P2P) data networks dominate
Internet traffic, accounting for over 60% of the overall
traffic in a recent study. In this work, we study the
problems that arise when mobile hosts participate in
P2P networks. We primarily focus on the performance
issues as experienced by the mobile host, but also study
the impact on other fixed peers. Using BitTorrent as a
key example, we identify several unique problems that
arise due to the design aspects of P2P networks being
incompatible with typical characteristics of wireless
and mobile environments. Using the insights gained
through our study, we present a wireless P2P (wP2P)
client application that is backward compatible with existing
fixed-peer client applications, but when used on
mobile hosts can provide significant performance improvements.
Building Cloud-ready Video Transcoding System for Content Delivery Networks (... (Zhenyun Zhuang)
GLOBECOM 2012
Video streaming traffic of both VoD (Video on
Demand) and Live is exploding. Various types of businesses
and many people are relying on video streaming to attract
customers/users and for other purposes. Given the vast number of video stream formats (e.g., MP4, FLV) and transmission protocols (e.g., HTTP, RTMP, RTSP) needed to support varying types of playback terminals (particularly mobile devices such as iPhone/iPad and Android phones), video content providers often need to transcode videos into multiple formats in order to stream to different types of users.
Being time-sensitive and requiring high bandwidth, video
streaming exerts high pressure on underlying delivery networks.
Content Delivery Network (CDN) providers can help their
customers quickly and reliably distribute stream contents to end
users. In addition to distributing video streams, CDN providers
typically allow their customers to perform video transcoding on
CDN platforms. With the high volume of video streams and the
bursty transcoding workload, CDN providers are eager to deploy
elastic and optimized cloud-based transcoding platforms.
C-Worm Traffic Detection using Power Spectral Density and Spectral Flatness ... (IOSR Journals)
This document summarizes a research paper that proposes new schemes called Power Spectral Density (PSD) and Spectral Flatness Measure (SFM) to detect camouflaging worms (C-worms). C-worms can hide their scan traffic to avoid detection by traditional anti-worm software. The schemes are based on analyzing differences between normal worm traffic and C-worm traffic in the frequency domain, since they cannot be differentiated in the time domain. Experimental results showed that PSD and SFM were effective at detecting C-worms by identifying differences in their scan traffic patterns compared to normal worms when analyzed in the frequency domain. The document provides background on worms and on C-worm modeling and propagation, and evaluates the proposed schemes.
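The Spectral Flatness Measure is the ratio of the geometric mean to the arithmetic mean of the PSD values: a narrowly concentrated, tone-like spectrum (as reported for C-worm scan traffic) yields an SFM near 0, while noise-like traffic yields an SFM closer to 1. A minimal sketch on synthetic signals, using a direct DFT rather than any particular signal-processing library:

```python
import cmath
import math

# Sketch of SFM on a traffic time series: PSD via direct DFT, then
# SFM = geometric mean / arithmetic mean of the PSD. Signals are
# synthetic; real inputs would be scan-traffic volume samples.

def psd(x):
    n = len(x)
    spec = []
    for k in range(1, n // 2):          # skip the DC term
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        spec.append(abs(s) ** 2 / n)
    return spec

def sfm(x):
    p = psd(x)
    # Small floor keeps the geometric mean defined for near-zero bins.
    geo = math.exp(sum(math.log(v + 1e-12) for v in p) / len(p))
    return geo / (sum(p) / len(p))

n = 64
periodic = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]       # one dominant frequency
varied = [((1645 * t + 123) % 2048) / 2048 for t in range(n)]          # crude pseudo-noise
print(f"periodic SFM: {sfm(periodic):.4f}")   # close to 0 (concentrated spectrum)
print(f"varied   SFM: {sfm(varied):.4f}")     # noticeably larger (spread spectrum)
```

The detection rule in the summarized work amounts to thresholding such a flatness score: traffic whose spectrum is suspiciously concentrated is flagged as C-worm-like.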
This document discusses the detection of "smart worms", which are malicious software programs that can intelligently manipulate their scanning behavior to avoid detection. The authors propose a novel spectrum-based scheme to detect smart worms using power spectral density analysis of traffic volumes. Their scheme analyzes the spectral flatness measure of worm traffic compared to background traffic. Evaluation results demonstrate the scheme can effectively detect smart worm propagation and outperforms existing detection methods. The authors also show it can detect traditional worms.
Intrusion detection systems aim to detect unauthorized access or activity in a computer system or network. There are two main types: network-based systems monitor network traffic to detect intrusions, while host-based systems monitor operating system logs and files on individual computers. Effective intrusion detection requires an incident response team to assess damage from intrusions and prevent future vulnerabilities, as well as securely storing logs as potential evidence.
A cooperative immunization system for an untrusting internet (UltraUploader)
This document proposes a cooperative immunization system where nodes work together to defend against computer viruses and worms. It presents an algorithm called COVERAGE that has nodes share information about observed infection rates. Based on this shared information, each node probabilistically determines which viruses to respond to. Simulations show COVERAGE is more effective against viruses and more robust against malicious participants compared to existing approaches.
This document presents a taxonomy of computer worms that categorizes different types of worms based on their target discovery method, propagation mechanism, activation method, potential payloads, and plausible attackers. It describes common target discovery methods like scanning, pre-generated target lists, and externally generated target lists. It also discusses propagation mechanisms, activation methods, possible payloads, and types of attackers that might use different worms. The goal of the taxonomy is to help understand the threat of computer worms and potential defenses.
NETWORK INTRUSION DETECTION AND COUNTERMEASURE SELECTION IN VIRTUAL NETWORK (... (ijsptm)
Intrusion into a network or system is a growing problem as the trend of successful network attacks continues to rise. Intruders can exploit vulnerabilities of a network system to gain access and deploy viruses or malware, or mount a Denial of Service (DoS) attack. In this work, a frequency-based Intrusion Detection System (IDS) is proposed to detect DoS attacks. The frequency data is extracted from the time-series data created by the traffic flow using the Discrete Fourier Transform (DFT). An algorithm is developed for anomaly-based intrusion detection with fewer false alarms that further detects known and unknown attack signatures in a network. The frequency content of virus or malware traffic is inconsistent with that of legitimate traffic. A Centralized Traffic Analyzer Intrusion Detection System, called CTA-IDS, is introduced to further detect inside attackers in a network. The strategy is effective in detecting abnormal content in traffic passing from one node to another, and it detects both known attack signatures and unknown attacks. The approach is tested by running artificial network intrusion data in simulated networks using the Network Simulator 2 (NS2) software.
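The frequency-inconsistency check can be sketched as follows: take the DFT magnitude spectrum of a traffic-volume window, normalize it, and flag the window when it deviates too far from a baseline spectrum learned from legitimate traffic. The traces, threshold, and distance metric below are illustrative choices, not the paper's exact algorithm:

```python
import cmath
import math

# Hedged sketch of a frequency-domain anomaly check: compare a traffic
# window's normalized DFT magnitude spectrum against a legitimate
# baseline, flagging large deviations. All traffic is synthetic.

def spectrum(x):
    n = len(x)
    mags = [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(1, n // 2)]
    total = sum(mags) or 1.0
    return [m / total for m in mags]

def is_anomalous(window, baseline, threshold=0.5):
    # L1 distance between normalized spectra; threshold is illustrative.
    dist = sum(abs(a - b) for a, b in zip(spectrum(window), baseline))
    return dist > threshold

n = 32
legit = [10 + 2 * math.sin(2 * math.pi * t / n) for t in range(n)]       # slow background cycle
flood = [10 + 8 * math.sin(2 * math.pi * 10 * t / n) for t in range(n)]  # fast bursty flooding
base = spectrum(legit)
print(is_anomalous(legit, base))   # False
print(is_anomalous(flood, base))   # True
```

The flood traffic concentrates its energy at a much higher frequency than the legitimate baseline, so its normalized spectrum sits far from the baseline and the window is flagged.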
AN IMPROVED METHOD TO DETECT INTRUSION USING MACHINE LEARNING ALGORITHMS (ieijjournal)
An intrusion detection system detects malicious behaviors and abnormal activities that might harm the security and trust of a computer system. An IDS operates at either the host or the network level, using anomaly detection or misuse detection. The main problem is to correctly detect intruder attacks against a computer network, and the key to successful intrusion detection is the choice of proper features. To address the shortcomings of existing IDS schemes, this research proposes an improved method to detect intrusion using machine learning algorithms. We use the KDD Cup 99 dataset to analyze the efficiency of intrusion detection with different machine learning algorithms: Bayes, Naive Bayes, J48, J48Graft, and Random Forest. For network-based IDS on the KDD Cup 99 dataset, experimental results show that the three algorithms J48, J48Graft, and Random Forest give much better results than the other machine learning algorithms. We use WEKA to check the accuracy of the classified dataset under the proposed method, considering all result parameters: precision, recall, F-measure, and ROC.
Modeling & automated containment of worms (synopsis) (Mumbai Academisc)
This document proposes a model for characterizing the propagation of Internet worms using a branching process model. It develops this model for uniform scanning worms and extends it to preference scanning worms. This model leads to the development of an automatic worm containment strategy that limits the total number of IP addresses contacted per host. The strategy is shown to effectively contain uniform and preference scanning worms through simulations and analysis, while having minimal impact on normal network operations.
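The containment strategy, limiting the total number of IP addresses contacted per host, can be sketched directly: cap the number of distinct destinations each host may contact, so a scanning worm hits the cap quickly while normal hosts that revisit the same few servers do not. The cap value below is illustrative:

```python
# Sketch of per-host contact limiting: cap the number of DISTINCT
# destination IPs per host. Revisits to known destinations are always
# allowed; a scanning worm burning through fresh addresses is cut off.
# The cap of 3 is an illustrative toy value.

class ContactLimiter:
    def __init__(self, max_distinct=3):
        self.max_distinct = max_distinct
        self.contacted = {}   # host -> set of destination IPs seen

    def allow(self, host, dest_ip):
        seen = self.contacted.setdefault(host, set())
        if dest_ip in seen:
            return True                   # revisits are always fine
        if len(seen) >= self.max_distinct:
            return False                  # block further new contacts
        seen.add(dest_ip)
        return True

lim = ContactLimiter(max_distinct=3)
# A normal host keeps talking to the same few servers:
print(all(lim.allow("desktop", ip) for ip in
          ["1.1.1.1", "2.2.2.2", "1.1.1.1", "2.2.2.2"]))   # True
# A scanning host runs through fresh addresses and gets cut off:
scans = [lim.allow("infected", f"10.0.0.{i}") for i in range(10)]
print(scans.count(False))   # 7 of the 10 scan attempts are blocked
```

This captures why the strategy has minimal impact on normal operations: legitimate traffic is dominated by repeated contacts, which the limiter never touches.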
A Comprehensive Review On Intrusion Detection System And Techniques (Kelly Taylor)
This document discusses machine learning techniques for intrusion detection systems (IDS). It provides an overview of the research progress using machine learning to improve intrusion detection in networks. Machine learning and data mining techniques have been widely used to automatically detect network traffic anomalies. The goal is to summarize and compare research contributions of IDS using machine learning, define existing challenges, and discuss anticipated solutions. Commonly used machine learning techniques for IDS are reviewed along with some existing machine learning-based IDS proposed by researchers.
Intrusion Detection System using AI and Machine Learning Algorithm (IRJET Journal)
This document discusses using artificial intelligence and machine learning algorithms to develop an intrusion detection system (IDS). It begins with an abstract that outlines using AI to act as a virtual analyst to concurrently monitor network traffic and defend against threats. It then provides background on IDS and the need for more effective automated threat detection. The document discusses classifying attacks, different types of IDS (host-based and network-based), and detection methods like signature-based and anomaly-based. It aims to develop an IDS using machine learning algorithms that can learn patterns to provide automatic intrusion detection without extensive manual maintenance.
2011 modeling and detection of camouflaging worm (deepikareddy123)
This document summarizes a research article about detecting a new type of active worm called a Camouflaging Worm (C-Worm). The C-Worm aims to avoid detection by manipulating its scan traffic volume over time to camouflage its propagation. The researchers analyze characteristics of the C-Worm traffic in both time and frequency domains. They observe that while C-Worm traffic shows no noticeable trends over time, it demonstrates a distinct pattern in the frequency domain, with a narrow concentration of frequencies. Based on this, they develop a novel spectrum-based detection scheme using the power spectral density distribution and spectral flatness measure to distinguish C-Worm traffic from background traffic. Evaluation shows their scheme can effectively detect C-Worm propagation.
An effective architecture and algorithm for detecting worms with various scan... (UltraUploader)
The document proposes and evaluates an architecture and algorithm for detecting worm infections that use various scanning techniques. It analyzes different scan methods worms could use, such as random scanning, scanning only addresses in routing tables, and hitlist scanning. It then presents a generic worm detection architecture that monitors for malicious activities by analyzing statistics on scan traffic, such as the number of source addresses and traffic volume. The paper introduces an algorithm called the victim number based algorithm that relies solely on increases in the number of source addresses to detect infections. Simulation results show this algorithm can detect a Code Red-like worm when only 4% of machines are infected.
Malicious codes (malcodes) are self-replicating malware and a major security threat in a network environment. Timely detection and system alert flags are essential to prevent malcodes from spreading rapidly in the network. The difficulty in detecting malcodes is that they evolve over time. Although signature-based tools are generally used to secure systems, signature-based malcode detectors fail to recognize obfuscated and previously unseen malcode executables. Automatic signature generation systems have likewise been used to address the issue of malcodes, yet much work is still required for good detection. Given the behavioral nature of malcodes, a behavior-based approach is required for such detection. Specifically, we require a dynamic analysis and behavior rule-based system that distinguishes malcodes without erroneously blocking legitimate traffic or increasing false alarms. This paper proposes and discusses an approach using machine learning and Indicators of Compromise (IOC) to analyze intrusions in a network, to identify the cause of an attack, and to provide future detection. The paper proposes the use of a behavioral malware analysis framework to analyze intrusion data, apply a clustering algorithm to the analyzed data, and generate IOCs from the clustered data for an IOC rule, which will be implemented in the Snort Intrusion Detection System (IDS) for malicious code detection.
Intrusion Detection & Prevention Systems (IDPS) are crucial for protecting computers and detecting threats in real time. As threats have grown in the 21st century, IDPS have also evolved, with different types providing various protection functions. Effective IDPS not only detect and prevent attacks, but also log events, create reports on recent attacks, and provide detailed information. Detection methods include signature-based detection by comparing traffic to known attacks, anomaly-based detection by identifying deviations from normal behavior, and policy-based detection by enforcing allowed functions.
This paper proposes an automated approach called "content sifting" to quickly detect new worms/viruses based on common exploit sequences and spreading behavior. The approach analyzes network traffic to identify strings that recur frequently across many sources and destinations. The authors developed a prototype system called Earlybird that implemented this approach and was able to automatically detect and generate signatures for existing worms as well as new worms before public disclosure. Earlybird demonstrated the potential for fully automated defenses against even unknown "zero-day" outbreaks.
Similar to PAIDS: A Proximity-Assisted Intrusion Detection System for Unidentified Worms
Designing SSD-friendly Applications for Better Application Performance and Hi... (Zhenyun Zhuang)
This document discusses how applications can be designed to take advantage of the unique characteristics of solid state drives (SSDs) in order to improve application performance, storage input/output (IO) efficiency, and SSD lifespan. It proposes nine SSD-friendly application design changes and explains how they can result in better application performance by fully utilizing SSDs' internal parallelism, more efficient storage IO by reducing write amplification, and longer SSD lifespan by decreasing write amplification.
Optimized Selection of Streaming Servers with GeoDNS for CDN Delivered Live S... (Zhenyun Zhuang)
This document proposes a new DNS design called Sticky-DNS to optimize server selection for CDN-delivered live streaming. Sticky-DNS aims to minimize CDN transit costs while maintaining good viewer experience. Unlike traditional GeoDNS which selects the nearest origin server to an edge server, Sticky-DNS considers the full ingest-origin and origin-edge paths to potentially select a non-nearest origin server that results in lower overall transit costs. It does this by maintaining cost values for all server pairs and selecting origins to serve edges in a way that minimizes total path costs. For less popular streams, origins are chosen based on end-to-end path lengths, while for popular streams Sticky-DNS adapts to encourage reuse
Application-Aware Acceleration for Wireless Data Networks: Design Elements an... (Zhenyun Zhuang)
This document discusses an approach called Application-Aware Acceleration (A3) to improve application performance over wireless networks. It finds that while transport layer protocols improve performance for FTP, they provide little benefit for other applications like CIFS, SMTP, and HTTP due to the applications' behaviors. A3 addresses this by using principles like transaction prediction, prioritized fetching, and redundant transmissions to offset applications' typical problems when used over wireless networks. The document presents the motivation and design of A3, and evaluates its effectiveness through emulations and a proof-of-concept prototype using NetFilter.
WebAccel: Accelerating Web access for low-bandwidth hosts (Zhenyun Zhuang)
The document describes problems with how current web browsers access web pages in low-bandwidth environments. It analyzes factors that cause large response times, such as properties of typical web pages, interactions between HTTP and TCP protocols, and impact of server-side optimizations. It proposes a new solution called WebAccel that uses three browser-side mechanisms - prioritized fetching, object reordering, and connection management - to reduce user response time in an easy-to-deploy way. Simulation results and a prototype implementation show that WebAccel brings significant performance benefits over current browsers.
A Distributed Approach to Solving Overlay Mismatching Problem (Zhenyun Zhuang)
This document proposes an algorithm called Adaptive Connection Establishment (ACE) to address the topology mismatch problem between the logical overlay network and physical underlying network in unstructured peer-to-peer systems. ACE builds a minimum spanning tree among each source node and its neighbors within a certain diameter, optimizes connections not on the tree to reduce redundant traffic, while retaining search scope. It evaluates tradeoffs between topology optimization and information exchange overhead by changing the diameter. Simulation results show ACE can significantly reduce unnecessary P2P traffic by efficiently matching the overlay and physical network topologies.
Hybrid Periodical Flooding in Unstructured Peer-to-Peer Networks (Zhenyun Zhuang)
This document proposes a new search mechanism called Hybrid Periodical Flooding (HPF) for unstructured peer-to-peer networks. HPF aims to reduce unnecessary traffic like blind flooding while also addressing the "partial coverage problem" of some statistics-based search mechanisms. It introduces the concept of Periodical Flooding (PF), which controls the number of neighbors a query is forwarded to based on the time-to-live value. This allows the forwarding behavior to change periodically over the query's lifetime. HPF then combines PF with weighted selection of neighbors based on multiple metrics to guide queries towards potentially relevant results while exploring more of the network.
Guarding Fast Data Delivery in Cloud: an Effective Approach to Isolating Perf... (Zhenyun Zhuang)
LNCS 2015
Cloud-based products heavily rely on fast data delivery between data centers and remote users: when data delivery is slow, the products’ performance is crippled. When slow data delivery occurs, engineers need to investigate the issue and find the root cause. The investigation requires experience and time, as data delivery involves multiple moving parts, including the sender, the receiver, and the network.

To facilitate these investigations, we propose an algorithm to automatically identify the performance bottleneck. The algorithm aggregates information from multiple layers of the data sender and receiver. It helps to automatically isolate the problem type by identifying which component of the sender/receiver/network is the bottleneck. After isolation, successive efforts can be taken to root-cause the exact problem. We also build a prototype to demonstrate the effectiveness of the algorithm.
SLA-aware Dynamic CPU Scaling in Business Cloud Computing Environments (Zhenyun Zhuang)
IEEE CLOUD 2015
Modern cloud computing platforms (e.g., Linux on Intel CPUs) feature an ACPI-based (Advanced Configuration and Power Interface) mechanism, which dynamically scales CPU frequencies/voltages based on the workload intensity. With this feature, CPU frequency is reduced when the workload is relatively light in order to save energy, and increased when the workload intensity is relatively high.

In business cloud computing environments, software products/services often need to “scale out” to multiple machines to form a cluster that achieves a pre-defined aggregated performance goal (e.g., SLA-devised throughput). To reduce business operation cost, minimizing the provisioned cluster size is critical. However, as we show in this work, the working of ACPI in today’s modern OS may result in more machines being provisioned, and hence higher business operation cost.

To deal with this problem, we propose an SLA-aware (Service Level Agreement aware) CPU scaling algorithm. The proposed design rationale and algorithm are a fundamental rethinking of how ACPI mechanisms should be implemented in business cloud computing environments. Contrary to current forms of ACPI, which adapt CPU power levels based only on workload intensity, the proposed SLA-aware algorithm is primarily driven by current application performance relative to the pre-defined SLA. Specifically, the algorithm targets achieving the pre-defined SLA as the top-level goal, while saving energy as the second-level goal.
Optimizing JMS Performance for Cloud-based Application Servers (Zhenyun Zhuang)
IEEE CLOUD 2012
http://dl.acm.org/citation.cfm?id=2353798
Many business-oriented services will be gradually offered in the Cloud. Java Message Service (JMS) is a critical messaging technology in Java-based business applications, particularly those based on the Java Enterprise Edition (Java EE) open standard. Maintaining high performance in the horizontally scaled, elastic cloud environment is critical to the success of these business applications. In this paper, we present practical considerations in optimizing JMS performance for cloud deployment, where some of the findings may also serve to improve the design of JMS containers so they adapt well to cloud computing. Our work also includes performance evaluation of the proposed strategies.
Capacity Planning and Headroom Analysis for Taming Database Replication Latency (Zhenyun Zhuang)
ACM ICPE 2015
http://dl.acm.org/citation.cfm?id=2688054
Internet companies like LinkedIn handle a large amount of incoming web traffic. Events generated in response to user input or actions are stored in a source database. These database events feature the typical characteristics of Big Data: high volume, high velocity, and high variability. Database events are replicated to isolate the source database and form a consistent view across data centers. Ensuring a low replication latency of database events is critical to business values. Given the inherent characteristics of Big Data, minimizing the replication latency is a challenging task.

In this work we study the problem of taming the database replication latency through effective capacity planning. Based on our observations of LinkedIn’s production traffic and the various moving parts, we develop a practical and effective model to answer a set of business-critical questions related to capacity planning: future traffic rate forecasting, replication latency prediction, replication capacity determination, replication headroom determination, and SLA determination.
OS caused Large JVM pauses: Deep dive and solutions (Zhenyun Zhuang)
We have found many large JVM GC pauses are not caused by application itself, but by the interactions between JVM and OS. We characterize these issues into 3 scenarios: (1) application startup state; (2) application steady state with memory pressure; and (3) application steady state with heavy IO. The root causes are quite complicated, so we share our experiences about this.
This slide deck is for the QCon Beijing 2016 talk.
Wireless memory: Eliminating communication redundancy in Wi-Fi networks (Zhenyun Zhuang)
This document describes a proposed system called Wireless Memory (WM) to eliminate communication redundancy in Wi-Fi networks. The authors first analyze real Wi-Fi traces from multiple buildings and observe significant redundancy both between users and over time for individual users. Based on these insights, they propose WM, which equips access points and clients with memory to store transmitted data. When sending new data, the access point can retrieve stored data from the client's memory by sending a reference rather than the full data, reducing transmission size. The authors evaluate WM through simulations using the collected traces and find it can improve network throughput by up to 93% in some scenarios by eliminating redundancy.
Improving energy efficiency of location sensing on smartphones (Zhenyun Zhuang)
The document proposes an adaptive location-sensing framework to improve energy efficiency on smartphones running location-based applications. The framework uses four design principles: substitution replaces GPS with less power-intensive location services when possible; suppression avoids unnecessary GPS use through sensors like accelerometers; piggybacking synchronizes location requests from multiple apps; and adaptation adjusts location sensing based on battery level. An implementation on Android phones reduces GPS use by up to 98% and improves battery life by up to 75%.
Embedded machine learning-based road conditions and driving behavior monitoring (IJECEIAES)
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL (gerogepatton)
As digital technology becomes more deeply embedded in power systems, protecting the communication networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3) is a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities. Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because the interconnection of these networks makes them vulnerable to a variety of cyberattacks. To address this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion detection in smart grids. The proposed approach combines a Convolutional Neural Network (CNN) with the Long Short-Term Memory (LSTM) algorithm. We employed a recent intrusion detection dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to train and test our model. The results of our experiments show that our CNN-LSTM method detects smart grid intrusions markedly better than other deep learning algorithms used for classification. In addition, our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection accuracy rate of 99.50%.
International Conference on NLP, Artificial Intelligence, Machine Learning an... (gerogepatton)
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
6th International Conference on Machine Learning & Applications (CMLA 2024) (ClaraZara1)
6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of on Machine Learning & Applications.
Low power architecture of logic gates using adiabatic techniques (nooriasukmaningtyas)
The growing significance of portable systems and the need to limit power consumption in very-high-density ultra-large-scale-integration chips have recently led to rapid and inventive progress in low-power design. The most effective technique in energy-efficient hardware is adiabatic logic circuit design. This paper presents two adiabatic approaches for the design of low-power circuits: modified positive feedback adiabatic logic (modified PFAL) and direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design; by improving the performance of basic gates, one can improve the performance of the whole system. In this paper, circuit designs of the low-power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches, and their results are analyzed for power dissipation, delay, power-delay product, and rise time, and compared with other adiabatic techniques as well as conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with the DC-DB PFAL technique outperform the modified PFAL technique at 10 MHz, with improvements of 65% for the NOR gate, 7% for the NAND gate, and 34% for the XNOR gate.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
PAIDS: A Proximity-Assisted Intrusion Detection System for Unidentified Worms
PAIDS: A Proximity-Assisted Intrusion Detection System for Unidentified Worms
†Zhenyun Zhuang, †Ying Li, ‡Zesheng Chen
† College of Computing, Georgia Institute of Technology, Atlanta, Georgia 30332
‡ Department of Electrical and Computer Engineering, Florida International University, Miami, Florida 33174
†{zhenyun, yingli}@cc.gatech.edu, ‡zchen@fiu.edu
Abstract—The wide spread of worms poses serious challenges to today’s Internet. Various IDSes (Intrusion Detection Systems) have been proposed to identify or prevent such spread. These IDSes can be largely classified as signature-based or anomaly-based, depending on what type of knowledge the system has. Signature-based IDSes are unable to detect the outbreak of new and unidentified worms when the worms’ characteristic patterns are unknown. In addition, new worms are often sufficiently intelligent to hide their activities and evade anomaly detection. Moreover, modern worms tend to spread more quickly, and the outbreak period lasts on the order of hours or even minutes. Such characteristics render existing detection mechanisms less effective.

In this work, we consider the drawbacks of current detection approaches and propose PAIDS, a proximity-assisted IDS approach for identifying the outbreak of unknown worms. PAIDS does not rely on signatures. Instead, it takes advantage of the proximity information of compromised hosts. PAIDS operates on an orthogonal dimension from existing IDS approaches and can thus work collaboratively with existing IDSes to achieve better performance. We test the effectiveness of PAIDS with trace-driven simulations and show that PAIDS has a high detection rate and a low false positive rate.
Index Terms—Proximity; Intrusion Detection System; Worm
I. INTRODUCTION
Self-propagating worms have been posing serious threats to today’s Internet [29]. Such malicious programs are capable of propagating quickly and infecting a significant number of vulnerable machines in a short period of time (e.g., a few hours). Victims compromised by worms can form botnets and be used to launch large-scale attacks (e.g., DDoS and spamming).
To effectively prevent worms from propagating rapidly, an IDS (Intrusion Detection System) is needed. Over the last few decades, numerous intrusion detection mechanisms have been proposed. These various techniques can be largely classified into two categories based on the nature of the knowledge that an IDS has. If the characteristic patterns (i.e., signatures) of attacks in an IDS are known, the IDS is referred to as a signature-based IDS. Otherwise, if the patterns of normal activities are known, the attacks can be identified by monitoring the differences between the normal and anomalous activities. An IDS relying on identifying anomaly features is referred to as an anomaly-based IDS.
Existing techniques can detect and prevent the propagation of many worms, in particular the known ones. Nevertheless, they often fail to work with new and more intelligent worms, since the characteristic patterns of new worms have not been extracted and analyzed. Due to the unavailability of patterns of the worms, signature-based solutions simply do not work. Briefly, two limitations associated with signature-based IDSes prevent them from functioning effectively. First, it takes considerable time for detection entities (such as anti-virus software companies) to learn the attack pattern or the signature of the worms. Thus, before such signatures are available, the worms may have infected a significant number of machines. Second, it takes some time for a normal user to utilize such signatures in the form of updating or installing IDS software. On the other hand, modern worms tend to spread more quickly, and the outbreak period lasts on the order of hours or even minutes. In particular, flash worms are able to compromise a large number of hosts in several minutes [28]. Therefore, signature-based techniques fail to effectively detect the propagation of unknown worms.
Anomaly-based solutions also have their drawbacks. An anomaly-based IDS examines traffic, activities, or behaviors to find anomalies. The underlying principle is that “attack behaviors” may differ from “normal user behaviors.” By cataloging and identifying the differences involved, IDSes can detect a worm in many circumstances. However, anomaly-based IDSes are far from being effective. First, since normal behaviors can change easily and readily, anomaly-based IDSes are prone to false positives, i.e., many genuinely normal behaviors are falsely reported as attacks. Second, worms can potentially become increasingly intelligent, hiding their abnormal activities and thus rendering most anomaly-based approaches ineffective. For example, if a worm applies a polymorphic blending attack [10], it can hide its activities in normal network activities. Therefore, anomaly-based IDSes may fail to effectively detect the propagation of intelligent worms.
In this work, our goal is to design an IDS that is capable of identifying new and intelligent fast-propagating worms and thwarting their spread, particularly during the worm’s “start-up” stage. Since neither signature-based nor anomaly-based techniques can achieve such capabilities, we ask the question: Given the unknown nature of an impending worm, its high spreading speed, and its intelligence in hiding its abnormal activities, is it still possible to efficiently thwart its propagation?
We answer this question with a novel proximity-assisted approach. Our approach is referred to as PAIDS (Proximity-Assisted Intrusion Detection System) and is mainly based on the observations of the clustered pattern of worm propagation and the typically long active time of a compromised host. Since vulnerable hosts are highly unevenly distributed in the Internet [5], compromised computers are usually concentrated in certain outbreak areas, especially at the early stage of worm propagation. Thus, a host located in an area with more infected hosts is more likely to be compromised by the worm. The locations of the early infected hosts can be learned in certain ways, for instance, using honeypots. In other words, although the nature and signatures of an upcoming worm cannot be identified at the early stage, the very fact that it is infecting other machines can be detected. Thus, such information can be utilized to counteract the worm’s infection process at its early stage, when conventional signature-based methods fail to work. One important property of PAIDS is that it is fully complementary to existing approaches, particularly anomaly-based ones. That is, the mechanisms included in the proximity-assisted approach can be utilized in tandem with other approaches, since PAIDS works on a different dimension from existing IDSes. Moreover, our preliminary evaluation based on trace-driven simulations shows that PAIDS has a high detection rate and a low false positive rate.
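As a minimal illustration of this observation (not the paper's actual algorithm), the sketch below scores a host's infection risk by its proximity to known-infected hosts, e.g., locations reported by honeypots. The scoring formula, the decay factor, and the threshold are illustrative assumptions:

```python
import math

def risk_score(host, infected_hosts, decay=1.0):
    """Sum of distance-decayed contributions from known-infected hosts."""
    score = 0.0
    for inf in infected_hosts:
        d = math.dist(host, inf)       # geographic (or topological) distance
        score += math.exp(-decay * d)  # closer infections weigh more
    return score

def is_high_risk(host, infected_hosts, threshold=1.5):
    """Flag a host whose proximity-based risk exceeds a chosen threshold."""
    return risk_score(host, infected_hosts) >= threshold

# Early outbreak cluster near the origin (hypothetical coordinates)
infected = [(0.0, 0.0), (0.5, 0.2), (0.3, 0.4)]
print(is_high_risk((0.2, 0.1), infected))  # inside the cluster -> True
print(is_high_risk((9.0, 9.0), infected))  # far from the cluster -> False
```

A host inside the outbreak cluster accumulates several large distance-decayed contributions and is flagged, while a distant host is not; this is the sense in which proximity alone, without any signature, can serve as a risk signal.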
The rest of the paper is organized as follows. We first motivate our design in Section II. We present the detailed design in Section III. We then perform trace-driven simulations in Section IV. Finally, we conclude our work in Section V.
II. MOTIVATION
Our proposed approach, PAIDS, is motivated by three major observations: (i) limitations of existing IDSes; (ii) clustered pattern of worm spread; and (iii) long active time of compromised hosts.
A. Limitations of existing approaches
• Signature-based or anomaly-based. Intrusion detection efforts have been proposed for years and can be classified as signature-based and anomaly-based. Signature-based IDSes, such as anti-virus programs (e.g., Norton AntiVirus and McAfee), use signatures to recognize and block infected files, programs, or active Web content from entering a computer system. Such IDSes are the most widely used approach in commercial IDS technology today. Moreover, abundant research efforts such as [3], [6], [14], [16], [18], [24] also fall into this category.
Anomaly-based IDSes use rules or predefined concepts about “normal” and “abnormal” system activities (i.e., heuristics) to distinguish anomalies from normal system behaviors and to monitor or block anomalies. Anomaly-based IDSes include [1], [7], [13], [17], [27], [33], [34]. For an anomaly-based IDS, a training process is typically required to extract normal patterns. Afterwards, when some activities are observed to be sufficiently different from such patterns, they are flagged as abnormal, and the corresponding actions may be taken. A wide variety of techniques have been explored to approach the anomaly detection problem, such as neural networks [13], statistical modeling [27], temporal sequence learning [17], n-grams [34], and states of web applications [7].
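The train-then-flag workflow described above can be sketched with a simple statistical model. The monitored metric (connections per minute) and the three-sigma threshold are illustrative assumptions, not taken from any of the cited systems:

```python
import statistics

class AnomalyDetector:
    """Toy anomaly-based IDS: learn a normal profile, flag deviations."""

    def fit(self, normal_samples):
        # Training phase: extract the "normal pattern" as mean and stdev.
        self.mean = statistics.mean(normal_samples)
        self.stdev = statistics.stdev(normal_samples)

    def is_anomalous(self, value, k=3.0):
        # Flag observations more than k standard deviations from the mean.
        return abs(value - self.mean) > k * self.stdev

detector = AnomalyDetector()
# Training data: normal connections-per-minute rates (hypothetical)
detector.fit([20, 22, 19, 21, 20, 23, 18, 20])
print(detector.is_anomalous(21))  # typical rate -> False
print(detector.is_anomalous(90))  # scan burst   -> True
```

The sketch also hints at the two weaknesses discussed next: if normal behavior drifts, the learned mean goes stale and false positives rise; and a stealthy worm that keeps its rate within three standard deviations is never flagged.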
However, neither signature-based nor anomaly-based IDSes can effectively identify new and intelligent worms. On one hand, identifying a worm using signature-based IDSes requires the characteristic patterns of the worm. Such mechanisms will not work with new worms since the patterns of these worms are not known a priori. Furthermore, even if the signature of a new worm is extracted after a certain period of time, it takes considerable time for a normal user to adopt the signature in the form of updating his anti-worm software. On the other hand, modern worms are becoming increasingly intelligent, and many are capable of hiding their activities to avoid detection by anomaly-based IDSes [10]. Worms achieve such stealthy goals either by learning the rules used by the IDS or by acting far less aggressively.
• Network-based or host-based. Various IDSes can also be classified into network-based [9], [11], [19], [22], [26], [31], [32] and host-based ones [8], [12], [20], [25], [30]. The difference between these two categories lies in where the intelligence used for detection resides. In host-based IDSes, the intelligence resides only on local hosts, whereas network-based IDSes rely on the information exchanged among many hosts. Both categories have their own advantages. For example, network-based IDSes can monitor an entire network with only a few well-situated nodes and impose little overhead on a network. Host-based IDSes can analyze activities on the host at a high level of detail and determine which processes and/or users are involved in malicious activities.
Both types of approaches, however, have disadvantages
and may fail to work under certain scenarios. For example,
although network-based IDSes have the advantage of knowing
aggregate traffic patterns, they may not know other crucial
information such as the process environments and the exact
protocol of a connection. Most of the time normal users are
reluctant to give such information to other entities including
IDSes due to privacy or security concerns. By contrast, host-
based IDSes have the full information of the connections of a
specific host, but they lack the aggregate view of the network.
The disadvantages associated with either type of IDS may
greatly affect the effectiveness of worm detection. Our PAIDS
design intends to incorporate the advantages of both.
B. Clustered pattern of worm spread
Once a worm is released to the Internet, it typically starts
from a few hosts. It then guesses the addresses of targets and
attempts to infect them. Typically, worms use random scanning
or localized scanning to spread [29], [35]. Random scanning
selects targets randomly and has been used by Code Red
v2, Slammer, and Witty worms, whereas localized scanning
preferentially searches for targets in the local network (i.e.,
sharing the same prefix of IP addresses) and has been exploited
by Code Red II and Nimda worms. Localized scanning has
some advantages over random scanning. For example, hosts
Fig. 1. Top K grids (out of 1600 grids) of the U.S. map: coverage of top clusters (%) over time (hours) on a 40 × 40 grid, shown for K = 8, 12, and 16.
within the same subnet often expose similar vulnerabilities. In
addition, infecting close-by hosts generally takes less time due
to smaller round-trip times. Furthermore, many networks are
protected by firewalls, and infecting hosts within the same
network is relatively easy.
Worm spread demonstrates a highly clustered pattern, es-
pecially for localized scanning. The main reason is that
vulnerable hosts are highly unevenly distributed in the In-
ternet [4]. Therefore, at any time of the worm outbreak, the
compromised hosts typically form clusters, e.g., they reside
in close-by geographical locations. To show this, we study
the outbreak of the Code Red v2 worm in July 2001 based
on the traces from CAIDA [2]. To examine the clustered
pattern of geographical locations of infected hosts, we equally
divide the US continent map into a number of grids and count
the number of compromised hosts covered by each grid. We
then choose the top K grids and show the percentage of
compromised hosts covered by these grids. Intuitively, if the
cluster degree is higher, more hosts would reside in the top
K grids. Specifically, we divide the map into 40 × 40 grids
(1,600 in total) and measure the percentage of compromised
hosts covered by the top 8, 12, and 16 grids (i.e., K = 8, 12,
16). Figure 1 shows our measurements of the coverage of the
top K grids over a period of up to 20 hours. We observe highly
clustered patterns. For example, as shown in the figure, with
only 8 grids (0.5% of all grids), the coverage at the 2-hour
mark is about 60%, whereas 16 grids (1% of all grids) cover
more than 72% of all compromised hosts. More interestingly,
such clustered behavior does not fade quickly over time. As
shown in the figure, even after 20 hours, the top 16 grids still
account for more than 53% of all compromised hosts.
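The grid-coverage measurement above can be sketched in a few lines of Python; this is a minimal illustration, and the bounding box and host coordinates below are made up for the example, not taken from the CAIDA trace:

```python
from collections import Counter

def top_k_coverage(hosts, bbox, n=40, k=8):
    """Fraction of hosts falling into the k most populated cells
    of an n x n grid laid over the bounding box."""
    (lat_min, lat_max), (lon_min, lon_max) = bbox
    cells = Counter()
    for lat, lon in hosts:
        row = min(int((lat - lat_min) / (lat_max - lat_min) * n), n - 1)
        col = min(int((lon - lon_min) / (lon_max - lon_min) * n), n - 1)
        cells[(row, col)] += 1
    top = sum(count for _, count in cells.most_common(k))
    return top / len(hosts)

# Illustrative data: 10 hosts clustered at one spot, 2 spread out.
hosts = [(33.7, -84.4)] * 10 + [(40.7, -74.0), (47.6, -122.3)]
bbox = ((24.0, 50.0), (-125.0, -66.0))  # rough continental U.S. box
print(top_k_coverage(hosts, bbox, n=40, k=1))
```

With real trace data, running this for increasing k yields the coverage curves of Figure 1.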
Interestingly, the clustering behavior of worm spreading
occurs not only in the geographical dimension but also in the
network address dimension. Works such as [5] have identified
the clustered pattern in the network address space. Specifically,
[5] finds, based on DShield data, that more than 80% of
malicious sources are clustered in the same 20% of the IPv4
address space over time. Our PAIDS design can work in both
the geographical and network address dimensions. In this
paper, we mainly present results for the geographical dimension.
C. Long active time of compromised hosts
Another observation is that compromised hosts are typically
engaged in infecting other hosts for a considerable amount of
time. Taking the Code Red v2 worm as an example, we plot the
distribution of active time of all compromised hosts (more than
359,000) in Figure 2. It can be seen that 80% of compromised
hosts have active times longer than 30 minutes, while only
12% have active times shorter than 5 minutes. In other
words, the figure shows that most compromised hosts attempt
to infect other hosts for a considerably long time (e.g., dozens
of minutes) before they are stopped.
This characteristic of compromised hosts can potentially be
utilized to assist detection. For example, if a compromised
host can be identified as a suspicious one, then hosts that are
communicating with this host should be alerted.
III. DESIGN
We now present the design of PAIDS. After giving an
overview, we will present the deployment model, the software
architecture, and three major components of PAIDS.
A. Overview
The key idea of PAIDS is to take advantage of the clustered
pattern of worm outbreak and the long active time of compro-
mised hosts. Briefly, PAIDS works as follows. PAIDS runs on
a local host as a daemon and keeps monitoring the ongoing
connections. It records the IP addresses and port numbers
of the hosts with which the local host is communicating.
The recorded information is sent to a processing center that
determines the danger level of the corresponding hosts. If the
processing center determines that a recorded host is suspicious,
then it reports back to the local host so that PAIDS can raise
alerts and notify the user. The processing center makes the
decision by collecting the information of suspicious hosts on
the Internet and performing clustering operations to model
the danger level of each area. The clustering is based on
proximity information associated with the suspicious hosts.
Note that proximity is not necessarily limited to geographical
and network address dimensions and can also occur in other
dimensions such as DNS. In this work, we present the design
of PAIDS only in the context of the geographical dimension
for the purpose of simplicity. However, PAIDS can be easily
extended to utilize other dimensions.
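As a rough sketch of this client-side workflow, the following Python fragment records the remote endpoints of ongoing connections, asks the processing center for their danger levels, and alerts the user on suspicious ones. All names, message formats, and numeric values here are hypothetical illustrations, not part of the PAIDS specification:

```python
def paids_client_step(ongoing_connections, query_center, alert_user,
                      danger_threshold):
    """One monitoring pass of a hypothetical PAIDS client daemon."""
    # Each connection is recorded as the remote (ip, port) pair.
    neighbors = [(c["remote_ip"], c["remote_port"])
                 for c in ongoing_connections]
    # The processing center maps each neighbor to a danger level.
    levels = query_center(neighbors)
    for neighbor, level in zip(neighbors, levels):
        # Lower level = closer to an outbreak area = more dangerous.
        if level <= danger_threshold:
            alert_user(neighbor, level)
    return neighbors

alerts = []
conns = [{"remote_ip": "198.51.100.7", "remote_port": 80},
         {"remote_ip": "203.0.113.9", "remote_port": 445}]
paids_client_step(conns,
                  query_center=lambda ns: [42.0, 3.5],  # stub SLPE reply
                  alert_user=lambda n, lvl: alerts.append(n),
                  danger_threshold=15)
print(alerts)  # only the second neighbor is flagged
```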
The operations of PAIDS involve the cooperation of several
entities (shown in Figure 3). Firstly, it requires the local host
Fig. 2. Active time distribution of the Code Red v2 worm: 0−5 min. (12%), 5−10 min. (2%), 10−15 min. (2%), 15−20 min. (2%), 20−25 min. (1%), 25−30 min. (1%), >30 min. (80%).

Fig. 3. PAIDS architecture: on the local host, the RCM (Remote Connection Monitor) and the PAIDS frontend; on the server, the RCA (Remote Connection Attendant) and the SLPE (Security Level Processing Engine), which exchanges neighbor IP lists, suspicious IP information, and danger level information with the SHS (Suspicious Hosts Sniffing) service and geo-location services.
to report its communicating hosts (referred to as “communi-
cation neighbors”) to a processing center. The communica-
tion components of this reporting service are referred to as
Remote Connection Monitor (RCM) and Remote Connection
Attendant (RCA). Secondly, the processing center is referred
to as Security-Level Processing Engine (SLPE). SLPE is the
entity that determines the danger level of each of the reported
neighbors from the local host. The determination is based
on proximity correlations, i.e., the closer a communicating
host is to an outbreak area, the greater the chance that it is
conducting malicious activities. The proximity correlations can
be computed using various geographic mapping techniques
[21]. One such technique is the IP-to-location service
[15]. The SLPE can be implemented at a place convenient
to the local hosts, e.g., a dedicated machine of a group or an
institute. Thirdly, it requires a suspicious-activity monitoring
service, which typically runs a honeynet to attract worms and
collects related information. In this paper we refer to this
monitoring service as Suspicious Hosts Sniffing (SHS). SHS
serves no other purpose than recording suspicious connections
reaching the honeynet. The recorded information includes at
least the IP addresses and port numbers of the suspicious
connections. Such information will expose the outbreak of
new worms because they must propagate to other hosts after
they are released.
PAIDS addresses the shortcomings of signature-based and
anomaly-based IDSes with a mechanism based on proximity
information. It deals with the unavailability of worm signatures
by using the notion of suspicious hosts, i.e., hosts that are
captured by the honeynet when they attempt to connect to
it. Specifically, if a host is compromised, it will attempt
to infect other hosts. If it falls into the view of the monitoring
service, it is recorded. Since compromised hosts typically remain
active for a considerably long time, such a “black-list” style
treatment will bring benefits. Moreover, since PAIDS is not
based on the known patterns of normal behaviors, it does not
have the drawbacks of the anomaly-based approach.
PAIDS addresses the drawbacks of network-based and host-
based approaches with a processing center, which acts as a
third layer between a global information center and a local
host. Introducing such a layer strikes a balance between the
advantages and disadvantages associated with these two types
of IDSes. Since hosts are more likely to trust a local processing
center and present detailed information to it, such an
undertaking can bridge the trust gap between
these two types of IDSes. In other words, the processing
center can serve as the intermediate proxy for the information
exchanges between the global information center and local
hosts. The functioning of the processing center works in two
directions. On one hand, it collects the information from local
hosts and reports filtered information to the global center. On
the other hand, it obtains information from the global center
and determines the danger level of reported connections on
behalf of local hosts. Other than the trust benefit, introducing
the processing center can also help relieve the scalability
concern embedded in alternative two-layer approaches.
B. Deployment model of PAIDS
The design of PAIDS is not intended to replace any other
type of IDSes. Thus, it is inappropriate to compare PAIDS
directly with other IDSes. Instead, it complements other IDSes
by working orthogonally with them. For instance, PAIDS can
be combined with an anomaly-based IDS. One way for such
a combination is to let PAIDS adjust the threshold values
usually used in anomaly-based IDSes. An anomaly-based IDS
typically sets certain “threshold” values to determine whether
a connection is suspicious. These values are difficult to
determine in many cases because the choice has to strike
a balance between false positives and false negatives. With
PAIDS, the threshold values can be adjusted according to
the danger levels measured by PAIDS. For example, if a
connection involves a host that is within an outbreak area
and thus has a higher danger level, the threshold values can
be adjusted to reflect such information. Such treatments are
possible since PAIDS utilizes the proximity information that
other IDSes typically do not consider.
We now use an example scenario to elaborate how PAIDS
can work cooperatively with other existing IDSes. We choose
an anomaly-based IDS simply for ease of explanation. Note
that this scenario should not be treated as the only way
in which PAIDS can work with other IDSes. The anomaly-
based IDS we consider is Swaddler [7]. Swaddler supports two modes:
training and detection. During the training mode,
suitable thresholds are derived to represent the anomalous
scores. The system then switches to the detection mode,
and anomalous states can be reported based on the derived
thresholds. PAIDS can assist the setting of the threshold
values by replacing each threshold by a pair of threshold
values, one for safe neighbors and the other for suspicious
neighbors. The suspicious neighbors are the hosts causing
PAIDS to raise alerts, whereas the safe neighbors are those
not triggering alerts. Intuitively, the threshold value for safe
neighbors should be higher than the value for suspicious
neighbors since the suspicious ones require stricter screening.
Alternatively, PAIDS may help set the thresholds in a finer
way by assigning a range of values. In other words, PAIDS
may define a threshold function Th = g(x), where x is
the danger level output by PAIDS. With such a function,
the threshold value is not fixed across all hosts but is adjusted
according to the danger level of the corresponding host.
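One possible shape for such a function g(x), offered here purely as our own illustration since the paper does not fix a form, is a threshold interpolated linearly between a lenient value for safe neighbors and a strict value for highly dangerous ones (the parameter values are arbitrary):

```python
def anomaly_threshold(danger_level, th_safe=0.9, th_suspicious=0.5,
                      max_danger=100.0):
    """Interpolate the anomaly-score threshold between a lenient value
    for safe neighbors and a strict value for dangerous ones.
    Here a higher danger_level means a riskier neighbor."""
    x = min(max(danger_level, 0.0), max_danger) / max_danger
    return th_safe - (th_safe - th_suspicious) * x

print(anomaly_threshold(0.0))    # safe host: lenient threshold 0.9
print(anomaly_threshold(100.0))  # maximal danger: strict threshold 0.5
```

Any monotone mapping with the same endpoints would serve equally well; the point is only that the anomaly detector screens suspicious neighbors more strictly.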
C. Software Architecture and Components
The software architecture of PAIDS is shown in Figure
3. PAIDS consists of two main entities: a PAIDS client
and a PAIDS server. It also assumes the existence of a
third entity, namely the SHS (Suspicious Hosts Sniffing). The
PAIDS client runs two software components: an RCM (Remote
Connection Monitor) and a Danger Level Frontend. The first
component, RCM, keeps track of on-going connections on the
local host and reports the connection list to the PAIDS server.
The second component can be as simple as a typical browser
such as Internet Explorer or Firefox, but may also include more
functionalities such as providing the user ways to disconnect
certain connections.
The PAIDS server consists of two parts: an RCA (Remote
Connection Attendant) and an SLPE (Security-Level Processing
Engine). The purpose of the RCA is simply to receive the reported
connection list from RCM. SLPE is the core part of PAIDS.
It collects information from both RCA and SHS, determines
the danger level of each reported connection using geographic
mapping techniques, and then sends the relevant information
back to the PAIDS client.
We now elaborate on the three major components of PAIDS:
• RCM and RCA. RCM and RCA are used to report the lo-
cal host’s communicating neighbors to the SLPE. Specifically,
RCM runs on the local host side, whereas RCA runs on the
server side. The basic functionalities of RCM are to monitor
and report. RCM keeps track of all the ongoing connections of
the local host and is capable of reporting to RCA periodically
(i.e., pushing) or in an on-demand fashion (i.e., pulling). The
major advantage of pushing information to RCA is timeliness;
however, it incurs higher communication and processing
overhead. In contrast, pulling information from RCM requires
less overhead but inflates the information gathering time.
RCA collects connection information from RCM. Under the
pushing model, RCA is simply a server daemon collecting the
reports of RCM. Under the pulling model, RCA pro-actively
requests the information from RCM.
• Security Level Processing Engine (SLPE). The SLPE is
the key component of PAIDS, determining the danger level of
the reported neighbors using the IP-to-location service. The
determination is based on the distance between a neighbor
and the outbreak areas. Formally, given a set of outbreak
areas S = {S1, S2, ..., SI} and a list of neighbor hosts
H = {H1, H2, ..., HJ }, SLPE will output a danger level
vector of L = {L1, L2, ..., LJ }, where Li is the danger level
of neighbor host Hi. Specifically, let di,j denote the distance
between Hi and Sj. The danger level of Hi is given by a
function f(di,j, S). The design of f(di,j, S) by itself is an
important and interesting problem. Due to space limitations,
we only provide two simple forms. The first form is the
average value of di,j over all outbreak areas Sj:
Li = (1/I) Σ_{j=1}^{I} di,j.  (1)
The second form simply chooses the minimum value of all
di,j, i.e., Li = min_j {di,j}. Both forms have advantages and
disadvantages. Unless explicitly noted, in the following we only
use the first form. If SHS also reports the intensity of each
outbreak area, the above equation becomes

Li = (1/I) Σ_{j=1}^{I} di,j tj,  (2)

where tj is the intensity of Sj.
After computing the danger level of a neighboring host,
SLPE compares its danger level value to a pre-defined thresh-
old value Td. If the danger level value is below the threshold,
which implies that the host is sufficiently close to the outbreak
area, the host will be flagged as suspicious.
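Both forms, plus the thresholding step, can be sketched directly. Distances below are plain Euclidean over illustrative 2-D coordinates; a real SLPE would use geographic distances obtained from an IP-to-location service:

```python
import math

def danger_level(host, areas, intensities=None, form="average"):
    """Danger level of a host given outbreak areas S1..SI.
    Lower values mean closer to an outbreak, hence more dangerous."""
    d = [math.dist(host, s) for s in areas]
    if intensities is not None:                     # weighted as in Eq. (2)
        d = [di * t for di, t in zip(d, intensities)]
    return min(d) if form == "minimum" else sum(d) / len(d)

def is_suspicious(host, areas, td, **kw):
    """Flag the host if its danger level is at or below threshold Td."""
    return danger_level(host, areas, **kw) <= td

areas = [(0.0, 0.0), (10.0, 10.0)]     # two illustrative outbreak centers
print(danger_level((1.0, 0.0), areas))              # average form
print(is_suspicious((1.0, 0.0), areas, td=15))      # flagged as suspicious
```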
One important design issue of the SLPE is scalability. As
the number of compromised hosts increases, the processing
overhead on the SLPE also increases. Thus, the SLPE should
appropriately limit the number of outbreak areas when
scalability is a concern. There are many ways to achieve this,
and one popular way is to use K-means. With K-means, the
number of outbreak areas is fixed to K irrespective of the
number of compromised hosts.
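A minimal K-means (Lloyd's algorithm) over host coordinates can perform this aggregation; this stdlib-only sketch is our own illustration, and a production SLPE would likely use a library implementation:

```python
import math, random

def kmeans(points, k, iters=20, seed=0):
    """Aggregate host locations into k outbreak centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: math.dist(p, centers[i]))
            groups[i].append(p)
        # Move each center to the mean of its group (keep it if empty).
        centers = [
            tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers

# Two illustrative clusters of compromised-host coordinates.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers = sorted(kmeans(pts, 2))
print(centers)
```

Each resulting center could then seed one outbreak-area record, regardless of how many hosts it aggregates.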
• Suspicious Hosts Sniffing (SHS). SHS is responsible
for collecting the activities of suspicious hosts and reporting
to SLPE. The implementation of SHS can be as simple as an
individual honeypot or as a more complicated form such as
honeynet [23]. Since typically SHS does not have legitimate
hosts, SHS records suspicious hosts whenever it receives
connection requests from other hosts. The information that
SHS records can be simply a list of IP addresses or both IP
addresses and port numbers. To take full advantage of the
suspicious activities, SHS may also record the intensity of
the threat by counting the occurrence frequencies of specific
connections. Since there might be a significant number of
suspicious activities, SHS may aggregate individual hosts into
suspicious outbreak areas. In other words, each outbreak area
may contain many hosts and can be represented in the format
of < C, r, t >, namely, the center C, the radius r, and the
outbreak intensity t.
Fig. 4. Effectiveness of PAIDS (Sniffing Latency = 100 seconds): detection rate and false positive rate (%) versus number of compromised hosts (0–10,000), for (a) Threshold = 10, (b) Threshold = 15, and (c) Threshold = 20.
IV. EVALUATION
A. Trace-Driven Simulations
We evaluate the effectiveness of PAIDS by performing
trace-driven simulations. The simulation uses the outbreak
trace of the Code Red v2 worm from CAIDA [2]. We focus
on two performance metrics: detection rate and false positive
rate.
The simulation is performed as follows. First, we sort all the
recorded compromised hosts according to their starting times,
i.e., the times at which they are observed to begin infecting
other hosts. The method is illustrated in Figure 5, where all
hosts are ranked along the time line. For each host, we
assume that its infectious activities can be sniffed by SHS
and that its information is sent to SLPE after a certain latency.
This latency is referred to as the Sniffing Latency, denoted
by SL. Let Ci denote the set of all compromised hosts when
host Hi is compromised, and we have Ci = {Hj, j ≤ i}. Let
Hm denote the last host that is at least SL time ahead of the
infection of Hi. We use Ri to denote the set of all recorded
hosts before host Hi, i.e., Ri = {Hj, j ≤ m}. We assume
that honeynet can provide Ri immediately, but cannot provide
the signatures and the patterns in a short time. With these
notations, Ci contains all hosts that are actually compromised
when Hi is compromised, whereas Ri contains all hosts in
Ci that are sniffed or recorded by SHS. In other words, Ri is
only a subset of Ci.
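The construction of Ci and Ri from the sorted trace amounts to the following; the host names and timestamps are illustrative:

```python
def sets_at_infection(hosts, times, i, sl):
    """For the i-th infected host (0-based), return the compromised
    set Ci and the SHS-recorded set Ri under sniffing latency sl.
    `times` must be sorted in increasing infection order."""
    ci = hosts[: i + 1]
    # Ri: hosts infected at least `sl` seconds before host i,
    # i.e., those SHS has already sniffed and reported to SLPE.
    m = sum(1 for t in times[: i + 1] if t <= times[i] - sl)
    ri = hosts[:m]
    return ci, ri

hosts = ["H1", "H2", "H3", "H4", "H5"]
times = [0, 60, 130, 180, 200]          # infection times in seconds
ci, ri = sets_at_infection(hosts, times, 4, sl=100)
print(ci)  # all five hosts are compromised when H5 is infected
print(ri)  # only those infected at least 100 s earlier: H1, H2
```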
Since the trace from CAIDA does not provide the informa-
tion of “who infects whom”, we make several assumptions to
evaluate the performance of PAIDS. For a host Hv, we assume
that PAIDS is running on this host and reporting suspicious
hosts with which the host is communicating. Moreover, we
assume that Hv is communicating with Hi. We then measure
the effectiveness of PAIDS by monitoring how well PAIDS can
Fig. 5. Illustration of the trace-driven evaluation: hosts H1, H2, H3, ..., Hm, ..., Hi, ..., H20000 are ranked along the time line (starting July 13th, 2001); the Sniffing Latency separates the SHS set (Ri) from the compromised set (Ci).
detect the infected host Hi. That is, if PAIDS was running on
host Hv when host Hi was infected and began infecting
Hv, then the PAIDS instance on Hv may be able to determine
the danger level of Hi based on the information in Ri. If
PAIDS is able to raise the alert, it is considered effective;
otherwise, it is considered ineffective. Since Ri is only a
subset of Ci (i.e., Ri ⊂ Ci), PAIDS running on Hv may or
may not be able to raise an alert depending on the current
threshold value. Specifically, we compute the average
distance of Hi to Ri, i.e.,
Li = (1/|Ri|) Σ_j dHi,Hj, where Hj ∈ Ri.
Note that this scenario represents a worst case for PAIDS,
since we consider the time slot right after host Hi is infected,
when the size of the SHS set Ri is smallest.
After Li is obtained, it is then compared to Td, a pre-
configured threshold value. If Li is smaller than or equal
to Td, then host Hi is considered to be in danger, and the
corresponding effectiveness value is set to 1. Otherwise, if Li
is larger, then the effectiveness value is set to 0, as illustrated
below: Alerti = 1, if Li ≤ Td; Alerti = 0, otherwise.
We measure the effectiveness of PAIDS for the first 10,000
infected hosts. For every 500 hosts, we output a detection
rate calculated from the number of alerts raised. Specifically,
for each of the 500 hosts, if PAIDS raises an alert, we count
1; otherwise, 0. We then obtain the detection rate by dividing
the sum of alerts by 500. Hence, denoting the number of
alerts as Alert, the detection rate is calculated as Alert/500.
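The per-bin detection rate computation described above can be sketched as:

```python
def detection_rates(danger_levels, td, bin_size=500):
    """Fraction of hosts flagged (Li <= Td) in each consecutive bin."""
    rates = []
    for start in range(0, len(danger_levels), bin_size):
        chunk = danger_levels[start:start + bin_size]
        alerts = sum(1 for li in chunk if li <= td)
        rates.append(alerts / len(chunk))
    return rates

# Illustrative input: 1000 hosts, the first half close to outbreak
# areas (Li = 5), the second half far away (Li = 30), with Td = 15.
levels = [5.0] * 500 + [30.0] * 500
print(detection_rates(levels, td=15))  # [1.0, 0.0]
```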
The false positive rate is obtained by considering a random-
ized scenario. Specifically, we generate a scenario by randomly
Fig. 6. Impact of thresholds: detection rate (%) versus threshold Td (10–40).
Fig. 7. Effectiveness of PAIDS (Threshold = 15): detection rate and false positive rate (%) versus number of compromised hosts (0–10,000), for Sniffing Latency = (a) 25, (b) 50, and (c) 150 seconds.
choosing a host that is not compromised and is communicating
with host Hv. That is, we introduce a host with a random IP
address and use this host to replace the host Hi in each step.
As a result, any alert raised for this randomly selected
neighbor counts as a false positive. Using the same method as
described for calculating the detection rate, we obtain the
false positive rate.
B. Results
• Impact of Td. We study the impact of Td by varying its
value. We set SL = 100 seconds and test three cases: Td = 10,
15, and 20. We consider 10,000 hosts in total, and Figure 4
shows the results. In the figure, we observe two trends. Firstly, for
randomly introduced hosts, only very few false positives are
raised, whereas for compromised hosts, PAIDS can raise a
very high percentage of alerts. For example, with Td = 15,
about 75% of compromised hosts are detected. Secondly, the
effectiveness increases with higher values of Td. These results
are expected since higher Td means that more neighbor hosts
are treated as suspicious, thus reducing the false negatives.
To study the trend more closely, we vary Td from 10 to 40
and show the results in Figure 6. We can see that Td = 10
gives a detection rate of only 51%, and the rate quickly reaches
91% when Td = 20. When Td = 40, PAIDS detects up to
96% of compromised hosts. The false positive rates for all
results shown in the figure are almost zero.
• Impact of SL. We also vary the Sniffing Latency
value SL from 25 seconds to 150 seconds and evaluate the
impact. The results are shown in Figure 7. Note that in these
experiments, the threshold is 15 (i.e., Td = 15). We observe
that as SL increases, the detection rate of PAIDS decreases.
For instance, with SL = 25, the rate is about 79%, while
Fig. 8. Impact of Sniffing Latency: detection rate (%) versus SL (20–200 seconds).
with SL = 150, the detection rate drops to about 74%. This
is because a larger SL value means that it takes a longer time
for SHS to learn the information about compromised hosts. We
further test the impact of SL with varying values and show
the results in Figure 8. We can see that as SL increases from
25 to 200, the effectiveness only drops from about 79% to
74%. These results imply that the effectiveness of PAIDS is
relatively insensitive to changes in SL, which is an advantage
of PAIDS.
• Impact of f(di,j, S). As elaborated in Section III, the
choice of f(di,j, S), the danger level function for a host Hi,
also requires careful design. We have proposed two forms
of this function. In all the above evaluations we use the average
form, which calculates the danger level by averaging over all
di,j. We also preliminarily evaluate the impact of the other
form, which takes the minimum value of all di,j.
Since the second form of f(di,j, S) takes the minimum value
of all di,j, its value is much smaller than under the first form.
We therefore choose three smaller threshold values for Td,
namely, 0.01, 0.05, and 0.1. The results are shown in Figure 9.
We see that as Td increases, the detection rates gradually
increase, and with Td = 0.1 the rate almost reaches 100%.
The false positive rates remain very low; specifically, our
results indicate that the highest false positive rate is only
1.3%. An interesting observation regarding the false positive
rate is that it increases as more hosts are compromised. Part of
the reason is that with an increasing number of compromised
hosts, a randomly selected host has a higher probability of
being close to an outbreak area.
It would be interesting to study which form of f(di,j, S)
should be applied under what conditions. Although
Fig. 9. Impact of the danger level function: detection rate and false positive rate (%) versus number of compromised hosts (0–10,000), for Threshold = 0.01, 0.05, and 0.1.
the page limit does not allow us to present the details, in
short, our results suggest that the average form has a better
false positive rate (all zero) but a relatively lower detection
rate, whereas the minimum form has a non-zero (though very
small) false positive rate but a higher detection rate.
V. CONCLUSIONS
In this work, we have studied the problem of intrusion
detection and focused on the detection of unidentified worms.
We have observed the failures and the drawbacks of existing
IDSes and approached the problem in a novel way. Our design
is based on the typical behaviors of worm outbreaks. By utiliz-
ing the proximity and activity information of worm outbreaks,
we have proposed a detection system, called PAIDS, that is
complementary to existing IDSes. Our trace-driven simulations
show that PAIDS can detect an unidentified worm with a high
detection rate and a low false positive rate.
VI. ACKNOWLEDGEMENTS
We thank Dr. Jonathon Giffin of Georgia Tech and the
anonymous reviewers for their insightful comments that helped
us improve the quality of this research.
REFERENCES
[1] S. Bhatkar, A. Chaturvedi, and R. Sekar. Dataflow anomaly detection.
In SP ’06: Proceedings of the 2006 IEEE Symposium on Security and
Privacy, pages 48–62, Washington, DC, USA, 2006. IEEE Computer
Society.
[2] CAIDA. The Cooperative Association for Internet Data Analysis.
http://www.caida.org/home/.
[3] R. Cathey, L. Ma, N. Goharian, and D. Grossman. Misuse detection for
information retrieval systems. In CIKM ’03: Proceedings of the twelfth
international conference on Information and knowledge management,
2003.
[4] Z. Chen and C. Ji. Measuring network-aware worm spreading ability.
In Proceedings of IEEE INFOCOM, pages 116–124, Anchorage, AK,
USA, 2007.
[5] Z. Chen, C. Ji, and B. Paul. Spatial-temporal characteristics of in-
ternet malicious sources. In Proceedings of IEEE INFOCOM (Mini-
Conference), Phoenix, AZ, USA, 2008.
[6] C. Y. Chung, M. Gertz, and K. Levitt. Demids: A misuse detection
system for database systems. In Third International IFIP TC-11
WG11.5 Working Conference on Integrity and Internal Control in
Information Systems, pages 159–178. Kluwer Academic Publishers,
1999.
[7] M. Cova, D. Balzarotti, V. Felmetsger, and G. Vigna. Swaddler: An
approach for the anomaly-based detection of state violations in web
applications. In Proceedings of the 10th International Symposium on
Recent Advances in Intrusion Detection (RAID), pages 63–86, Queens-
land, Australia, September 5–7, 2007.
[8] H. Debar, M. Dacier, and A. Wespi. A revised taxonomy for intrusion
detection systems. Technical report, IBM Research, October 1999.
[9] H. Dreger, A. Feldmann, M. Mai, V. Paxson, and R. Sommer. Dynamic
application-layer protocol analysis for network intrusion detection. In
USENIX-SS’06, Berkeley, CA, USA, 2006. USENIX Association.
[10] P. Fogla and W. Lee. Evading network anomaly detection systems:
formal reasoning and practical techniques. In CCS ’06: Proceedings of
the 13th ACM conference on Computer and communications security,
pages 59–68, New York, NY, USA, 2006. ACM.
[11] J. M. González and V. Paxson. Enhancing network intrusion detection
with integrated sampling and filtering. In Proceedings of RAID, pages
272–289, 2006.
[12] R. Gopalakrishna, E. H. Spafford, and J. Vitek. Efficient intrusion
detection using automaton inlining. In SP ’05: Proceedings of the 2005
IEEE Symposium on Security and Privacy, pages 18–31, Washington,
DC, USA, 2005. IEEE Computer Society.
[13] A. K. Gosh, J. Wanken, and F. Charron. Detecting anomalous and
unknown intrusions against programs. In ACSAC ’98: Proceedings of
the 14th Annual Computer Security Applications Conference, page 259,
Washington, DC, USA, 1998. IEEE Computer Society.
[14] H. Bos and K. Huang. Towards software-based signature detection
for intrusion prevention on the network card. In Proc. of Eighth
International Symposium on Recent Advances in Intrusion Detection
(RAID2005), Seattle, WA, USA, 2005. ACM.
[15] IP2Location. Geolocation IP address to country, city, region, latitude,
longitude. http://www.ip2location.com/.
[16] C. Krügel and T. Toth. Using decision trees to improve signature-based
intrusion detection. In Proceedings of RAID, pages 173–191, 2003.
[17] T. Lane and C. E. Brodley. Temporal sequence learning and data
reduction for anomaly detection. ACM Trans. Inf. Syst. Secur., 2(3):295–
331, 1999.
[18] U. Lindqvist and P. A. Porras. Detecting computer and network misuse
through the production-based expert system toolset (p-BEST). In IEEE
Symposium on Security and Privacy, pages 146–161, 1999.
[19] C. Muelder, K.-L. Ma, and T. Bartoletti. Interactive visualization for
network and port scan detection. In Proceedings of RAID, pages 265–
283, 2005.
[20] D. Mutz, W. K. Robertson, G. Vigna, and R. A. Kemmerer. Exploiting
execution context for the detection of anomalous system calls. In
Proceedings of RAID, pages 1–20, 2007.
[21] V. N. Padmanabhan and L. Subramanian. An investigation of geographic
mapping techniques for internet hosts. In SIGCOMM ’01, pages 173–
185, New York, NY, USA, 2001. ACM.
[22] M. Polychronakis, K. G. Anagnostakis, and E. P. Markatos. Emulation-
based detection of non-self-contained polymorphic shellcode. In Pro-
ceedings of RAID, volume 4637, pages 87–106. Springer, 2007.
[23] M. A. Rajab, F. Monrose, and A. Terzis. On the effectiveness of
distributed worm monitoring. In SSYM’05: Proceedings of the 14th
conference on USENIX Security Symposium, pages 15–15, Baltimore,
MD, USA, 2005. USENIX Association.
[24] S. Rubin, S. Jha, and B. P. Miller. Language-based generation and
evaluation of nids signatures. In SP ’05: Proceedings of the 2005 IEEE
Symposium on Security and Privacy, pages 3–17, Washington, DC, USA,
2005. IEEE Computer Society.
[25] M. I. Sharif, K. Singh, J. T. Giffin, and W. Lee. Understanding precision
in host based intrusion detection. In Proceedings of RAID, pages 21–41,
2007.
[26] S. Sinha, F. Jahanian, and J. M. Patel. WIND: Workload-aware intrusion
detection. In Proceedings of RAID, pages 290–310, 2006.
[27] A. Soule, K. Salamatian, and N. Taft. Combining filtering and statistical
methods for anomaly detection. In IMC’05: Proceedings of the Internet
Measurement Conference 2005 on Internet Measurement Conference,
pages 31–31, Berkeley, CA, USA, 2005. USENIX Association.
[28] S. Staniford, D. Moore, V. Paxson, and N. Weaver. The top speed of
flash worms. In WORM ’04: Proceedings of the 2004 ACM workshop
on Rapid malcode, pages 33–42, New York, NY, USA, 2004. ACM.
[29] S. Staniford, V. Paxson, and N. Weaver. How to 0wn the internet in your
spare time. In Proceedings of the 11th USENIX Security, San Francisco,
CA, USA, 2002.
[30] Sufatrio and R. H. C. Yap. Improving host-based ids with argument
abstraction to prevent mimicry attacks. In Proceedings of RAID, pages
146–164, 2005.
[31] M. Vallentin, R. Sommer, J. Lee, C. Leres, V. Paxson, and B. Tierney.
The NIDS cluster: Scalable, stateful network intrusion detection on
commodity hardware. In Proceedings of RAID, pages 107–126, 2007.
[32] G. Vasiliadis, S. Antonatos, M. Polychronakis, E. P. Markatos, and
S. Ioannidis. Gnort: High performance network intrusion detection using
graphics processors. In R. Lippmann, E. Kirda, and A. Trachtenberg, ed-
itors, Proceedings of RAID, volume 5230 of Lecture Notes in Computer
Science, pages 116–134. Springer, 2008.
[33] K. Wang, G. F. Cretu, and S. J. Stolfo. Anomalous payload-based worm
detection and signature generation. In Proceedings of RAID, pages 227–
246, Seattle, Washington, USA, 2005.
[34] K. Wang, J. J. Parekh, and S. J. Stolfo. Anagram: A content anomaly
detector resistant to mimicry attack. In Proceedings of the 9th
International Symposium on Recent Advances in Intrusion Detection
(RAID), pages 226–248, Hamburg, Germany, 2006.
[35] C. C. Zou, D. Towsley, and W. Gong. On the performance of internet
worm scanning strategies. Perform. Eval., 63(7):700–723, 2006.