The document describes a network worm vaccine architecture that aims to automatically patch vulnerable software in response to worm infections. The key components are sensors to detect infection attempts, an analysis engine to identify vulnerabilities and generate patches, and a sandboxed environment to test patches. Patches are generated using techniques like buffer relocation, code randomization, or removing unused functionality. The goal is to quickly react to attacks without human intervention through automated vulnerability identification and "good enough" patch generation until comprehensive security updates can be deployed. The approach seeks to counter known attack classes like buffer overflows and could be deployed across multiple cooperating organizations.
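The reaction loop this summary describes can be sketched in a few lines. The function names (`detect_infection_vector`, `generate_candidate_patches`, `survives_in_sandbox`) and the detection and testing logic are illustrative placeholders, not components from the paper:

```python
# Minimal sketch of the automated worm-vaccine reaction loop:
# sensor -> analysis -> candidate patch -> sandbox test -> deploy.
# All functions are illustrative stand-ins for the paper's components.

def detect_infection_vector(traffic):
    """Sensor: flag a request that looks like an infection attempt."""
    return traffic if b"\x90" * 4 in traffic else None  # crude NOP-sled check

def generate_candidate_patches(vector):
    """Analysis engine: propose 'good enough' fixes for the weakness."""
    return ["relocate_buffers", "add_bounds_check", "disable_feature"]

def survives_in_sandbox(patch, vector):
    """Sandbox: does the patched app resist the vector and keep working?"""
    return patch == "add_bounds_check"  # stand-in for real testing

def react(traffic):
    vector = detect_infection_vector(traffic)
    if vector is None:
        return None
    for patch in generate_candidate_patches(vector):
        if survives_in_sandbox(patch, vector):
            return patch  # deploy this patch to the production server
    return None

print(react(b"GET /" + b"\x90" * 64 + b"evil"))
```

The point of the sketch is the control flow: nothing is deployed until a candidate patch has survived testing against the captured infection vector.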
Traditional host intrusion detection systems usually bring an attack to an operator's attention, but this asynchronous attack-response paradigm may not be sufficient to stop an attack before it damages a system. The solution, Amr Ali and Zach Dexter explain, is reactive security: shutting down attacks in real time via collaborative attack vector closure.
To familiarize ourselves with vulnerability databases, their terminology, standards, and procedures for sharing vulnerability data. To understand how these data sources are integrated into commercial security software tools that help organizations manage their vulnerabilities; these tools are generally grouped under the term "vulnerability scanners" (or similar terms). To examine a few "classic" vulnerabilities in depth to get a sense of just how vulnerabilities expose systems to exploitation.
Vulnerability Scanners: A Proactive Approach to Assess Web Application Security (IJCSA)
With the increasing concern for network security, many approaches have been laid out to protect the network from unauthorised access, and new methods have been adopted to find the potential weaknesses that may damage it. The most commonly used approach is vulnerability assessment. By vulnerability we mean a potential flaw in the system that makes it prone to attack; assessing these vulnerabilities provides a means to identify them and to develop new strategies to protect the system from the risk of damage. This paper focuses on the use of various vulnerability scanners and their methodology for detecting vulnerabilities in web applications or in remote hosts across the network, and tries to identify new mechanisms that can be deployed to secure the network.
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science, and technology, including new teaching methods, assessment, validation, and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
International Journal of Engineering Research and Applications (IJERA) is an open access, online, peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
Advanced Threats in the Enterprise: Finding an Evil in the Haystack (EMC)
This white paper describes the current advanced threat landscape, shortcomings of anti-virus, and how RSA ECAT fills the gap and helps organizations detect advanced malware.
A Survey on Hidden Markov Model (HMM) Based Intention Prediction Techniques (IJERA Editor)
The extensive use of virtualization in cloud infrastructure raises serious security concerns for cloud tenants and introduces an additional layer that must itself be configured and secured. Intruders can exploit the large pool of cloud resources for their attacks. This paper discusses two approaches. In the first, three features, namely early warnings about ongoing attacks, autonomic prevention actions, and a risk measure, are integrated into our Autonomic Cloud Intrusion Detection Framework (ACIDF), since most current security technologies do not provide these essential capabilities for cloud systems. The early warnings are signaled through a new finite-state Hidden Markov prediction model that captures the interaction between attackers and cloud assets. The risk assessment model measures the potential impact of a threat on assets given its occurrence probability, and the estimated risk of each security alert is updated dynamically as the alert is correlated to prior ones. This enables the adaptive risk metric to evaluate the cloud's overall security state. The prediction system raises early warnings about potential attacks to the autonomic controller, which can then take proactive corrective actions before the attacks pose a serious security risk to the system. The second approach, Attack Sequence Detection (ASD), addresses the fact that tasks from different users may be performed on the same machine, so one primary concern is whether user data is secure in the cloud. On the other hand, an attacker may use cloud computing to launch a wider range of attacks, such as port scans executed from multiple virtual machines.
In addition, an attacker may perform a sequence of attacks to compromise a target system in the cloud, for example first compromising an easy-to-exploit machine and then using it to attack the real target. Such an attack plan may be stealthy or originate inside the computing environment, making it difficult for an intrusion detection system or firewall to identify.
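A multi-step attack plan of the kind described above (compromise a weak host, then pivot to the target) can be flagged by scoring observed event sequences against known attack chains. The sketch below uses a toy Markov transition table rather than the paper's trained Hidden Markov model; all states and probabilities are invented for illustration:

```python
# Toy Markov-chain scoring of an observed event sequence against a known
# attack chain (scan -> exploit -> pivot). Probabilities are illustrative.
TRANSITIONS = {
    ("scan", "exploit"): 0.6,
    ("exploit", "pivot"): 0.5,
    ("scan", "scan"): 0.3,
}

def sequence_score(events, base=0.01):
    """Multiply transition probabilities; unseen transitions get `base`."""
    score = 1.0
    for prev, cur in zip(events, events[1:]):
        score *= TRANSITIONS.get((prev, cur), base)
    return score

benign = ["scan", "scan", "scan"]
attack = ["scan", "exploit", "pivot"]
print(sequence_score(attack) > sequence_score(benign))  # attack chain scores higher
```

A real system would learn the transition (and emission) probabilities from labeled traces rather than hard-coding them.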
Recent studies have shown that 90% of security breaches involve a software vulnerability caused by a missing patch, even when the patch has been made available to the public.
Many organizations do not realize that a vulnerable system connected to the enterprise network potentially puts the entire organization at risk by being an easy target for cyber-attacks. Many service providers scan the network and provide a comprehensive report of the vulnerabilities in endpoint systems, but they do not take the next step of removing those vulnerabilities.
Read this whitepaper to learn how SecPod's Saner ensures enterprise security by remediating vulnerabilities on endpoints. Saner is a lightweight, enterprise-grade, scalable solution that hardens your systems, protecting them from malware and security threats.
Security Event Analysis Through Correlation (Anton Chuvakin)
This paper covers several of the security event correlation methods utilized by Security Information Management (SIM) solutions for better attack and misuse detection. We describe these correlation methods, show their respective advantages and disadvantages, and explain how they work together for maximum security.
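As a concrete example of one simple correlation method, events from the same source that fall within a short time window can be collapsed into a single incident. The event field layout and the 60-second window below are hypothetical, not taken from any particular SIM product:

```python
# Sketch of rule-based event correlation: collapse events that share a
# source IP and fall within a time window into one correlated incident.
from collections import defaultdict

WINDOW = 60  # seconds; illustrative threshold

def correlate(events):
    """events: list of (timestamp, source_ip, event_type) tuples."""
    by_source = defaultdict(list)
    for ts, src, kind in sorted(events):
        by_source[src].append((ts, kind))
    incidents = []
    for src, evs in by_source.items():
        group = [evs[0]]
        for ts, kind in evs[1:]:
            if ts - group[-1][0] <= WINDOW:
                group.append((ts, kind))
            else:
                incidents.append((src, [k for _, k in group]))
                group = [(ts, kind)]
        incidents.append((src, [k for _, k in group]))
    return incidents

events = [(0, "10.0.0.5", "portscan"), (30, "10.0.0.5", "login_fail"),
          (500, "10.0.0.5", "login_fail"), (10, "10.0.0.9", "portscan")]
print(correlate(events))
```

Here the port scan and the first failed login from 10.0.0.5 merge into one incident, while the much later failed login starts a new one.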
Overwhelming alert volume is a challenge for any general security system. Fundamentally diverse alert and threat-analysis techniques are being researched in order to reduce deceptive warnings. Threat detection systems generate huge numbers of alerts, which makes it challenging to triage them and prepare a response. The detection system checks inbound and outbound network activity and looks for suspicious patterns that indicate the steps of an ongoing attack. Because the flood of alerts may contain false alarms, alert analysis mechanisms are needed to provide high-level information about how serious a threat is, how endangered each device is, and which devices the administrator should pay the most attention to. To address this, we use a time- and space-based alert analysis technique that produces an attack graph whose evaluation conveys the severity of an attack to the administrator.
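The time- and space-based alert analysis described above can be approximated by weighting each device's alerts by severity and recency; the severity table and half-life below are invented for illustration, not taken from the paper:

```python
# Sketch: rank devices by alert severity so the admin knows where to look
# first. Recent alerts weigh more (time); each device is scored separately
# (space). Severity weights and the half-life are illustrative.
SEVERITY = {"portscan": 1, "bruteforce": 3, "exploit": 5}

def device_scores(alerts, now, half_life=300.0):
    """alerts: list of (timestamp, device, alert_type). Returns {device: score}."""
    scores = {}
    for ts, dev, kind in alerts:
        age = now - ts
        weight = 0.5 ** (age / half_life)   # exponential time decay
        scores[dev] = scores.get(dev, 0.0) + SEVERITY.get(kind, 1) * weight
    return scores

alerts = [(0, "web01", "portscan"), (900, "db01", "exploit"), (950, "db01", "bruteforce")]
print(device_scores(alerts, now=1000))  # db01 scores far above web01
```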
A Security Analysis Framework Powered by an Expert System (CSCJournals)
Today's IT systems face a major challenge in confronting the fast rate of emerging security threats. Although many security tools are employed within organizations to stand up to these threats, the information they reveal falls far short of providing a rich understanding of the consequences of the discovered vulnerabilities. We believe expert systems can play an important role in capturing security expertise from various sources in order to provide the informative deductions we are looking for from the supplied inputs. Throughout this research effort, we have built the Open Security Knowledge Engineered (OpenSKE) framework (http://code.google.com/p/openske), a security analysis framework built around an expert system that reasons over security information collected from external sources. Our implementation has been published online in order to facilitate and encourage online collaboration and to increase practical research within the field of security analysis.
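The kind of reasoning an expert system performs over collected security facts can be sketched as a minimal forward-chaining rule engine; the facts and rules below are made up for illustration and are not OpenSKE's actual knowledge base:

```python
# Minimal forward-chaining sketch: derive consequences from security facts
# until no rule fires. Facts and rules are illustrative, not OpenSKE's.
RULES = [
    ({"runs_openssh_7.2", "port_22_open"}, "ssh_remotely_reachable"),
    ({"ssh_remotely_reachable", "weak_passwords"}, "host_compromisable"),
]

def infer(facts):
    """Apply every rule whose premises hold, repeating until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer({"runs_openssh_7.2", "port_22_open", "weak_passwords"})
print("host_compromisable" in derived)  # the chained consequence is derived
```

The value of the expert-system approach is exactly this chaining: individual scanner findings are bland on their own, but their derived consequences are informative.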
Self-Evolving Antivirus Based on Neuro-Fuzzy Inference System (IJRES Journal)
In today's world, filled with information and data, it is very important to know which data is harmless and which is harmful. Everything from cellular phones to large multinationals and server companies requires a security system as competent and adaptive as the ever-updating, ever-evolving viruses and malware it faces. This paper describes the development and implementation of a new idea: an adaptive antivirus based on ANFIS logic, an adaptive antivirus system that can keep up with the speed at which viruses update and evolve.
Zero-Day Vulnerability and Heuristic Analysis (Ahmed Banafa)
A zero-day vulnerability is a hole in software that is unknown to the vendor. The hole is exploited by hackers before the vendor becomes aware of it and fixes it. Zero-day attacks can be used to plant malware or spyware, or to gain unwanted access to user information.
The term "zero day" refers to the fact that the hole is unknown to everyone outside the attackers, in particular the developers. Once the vulnerability becomes known, a race begins for the developers, who must protect their users.
Secure intrusion detection and countermeasure selection in virtual system usi... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international, peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of engineering and technology.
AN ISP BASED NOTIFICATION AND DETECTION SYSTEM TO MAXIMIZE EFFICIENCY OF CLIE... (IJNSA Journal)
End users are increasingly vulnerable to attacks directed at web browsers, which exploit the popularity of today's web services. While organizations deploy several layers of security to protect their systems and data against unauthorised access, surveys reveal that a large fraction of end users do not utilize, or are not familiar with, any security tools. End users' hesitation and unfamiliarity with security products contribute vastly to the volume of online DDoS attacks, malware, and spam distribution. This work-in-progress paper proposes a design focused on the notion of increased participation of internet service providers in protecting end users. The proposed design takes advantage of three different detection tools to identify the maliciousness of a website's content and alerts users through the Internet Content Adaptation Protocol (ICAP) via an in-browser cross-platform messaging system. The system also incorporates analysis of users' online behaviour to minimize the scanning intervals of the malicious-websites database by client honeypots. Findings from our proof-of-concept design and other research indicate that such a design can provide a reliable hybrid detection mechanism while introducing low delay into the user's browsing experience.
Autonomic Anomaly Detection System in Computer Networks (ijsrd.com)
This paper describes how to protect a system from intrusion through intrusion prevention and intrusion detection. The underlying premise of our intrusion detection system is to describe an attack as an instance of an ontology, and its first need is to detect the attack. We propose a novel framework for autonomic intrusion detection that performs online, adaptive intrusion detection over unlabeled HTTP traffic streams in computer networks. The framework holds potential for self-governance: self-labeling, self-updating, and self-adapting. Our structure employs the Affinity Propagation (AP) algorithm to learn a subject's behavior through dynamic clustering of the streaming data. It automatically labels the data and adapts to changes in normal behavior while identifying anomalies.
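The self-labeling idea can be illustrated with a much simpler stand-in for Affinity Propagation: cluster feature vectors greedily by distance and treat very small clusters as anomalies. The clustering method, the radius, and the minimum cluster size below are all arbitrary illustrative choices, not the framework's actual algorithm:

```python
# Sketch of the self-labeling idea: cluster request feature vectors, treat
# small clusters as anomalies. A greedy threshold clustering stands in for
# the Affinity Propagation algorithm the framework actually uses.
def cluster(points, radius=2.0):
    """Assign each point to the first center within `radius`, else start a new one."""
    centers, labels = [], []
    for p in points:
        for i, c in enumerate(centers):
            if sum((a - b) ** 2 for a, b in zip(p, c)) ** 0.5 <= radius:
                labels.append(i)
                break
        else:
            centers.append(p)
            labels.append(len(centers) - 1)
    return labels

def label_anomalies(points, min_cluster=2):
    """Self-label: points in clusters smaller than `min_cluster` are anomalous."""
    labels = cluster(points)
    sizes = {l: labels.count(l) for l in set(labels)}
    return [sizes[l] < min_cluster for l in labels]

# Normal requests cluster together; the outlier is self-labeled anomalous.
traffic = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (9.0, 9.0)]
print(label_anomalies(traffic))  # [False, False, False, True]
```

No labels are supplied up front; the clustering itself produces them, which is the "self-labeling" property the paper claims.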
A Network Worm Vaccine Architecture
Stelios Sidiroglou
Columbia University
stelios@cs.columbia.edu
Angelos D. Keromytis
Columbia University
angelos@cs.columbia.edu
Abstract
The ability of worms to spread at rates that effectively preclude human-directed reaction has elevated them to a first-class security threat to distributed systems. We present the first reaction mechanism that seeks to automatically patch vulnerable software. Our system employs a collection of sensors that detect and capture potential worm infection vectors. We automatically test the effects of these vectors on appropriately-instrumented sandboxed instances of the targeted application, trying to identify the exploited software weakness. Our heuristics allow us to automatically generate patches that can protect against certain classes of attack, and test the resistance of the patched application against the infection vector. We describe our system architecture, discuss the various components, and propose directions for future research.
1 Introduction
Recent incidents [4, 5] have demonstrated the ability of self-propagating code, also known as "network worms" [31, 9], to infect large numbers of hosts, exploiting vulnerabilities in the largely homogeneous deployed software base [6, 41]. Even when the worm carries no malicious payload, the direct cost of recovering from the side effects of an infection epidemic can be tremendous [1]. Thus, countering worms has recently become the focus of increased research. Analysis of the Code Red worm [25, 32] has shown that even modest levels of immunization and dynamic counter-measures can considerably limit the infection rate of a worm. Other simulations confirm this result [37]. However, these epidemics have also demonstrated the ability of worms to achieve extremely high infection rates. This implies that a hypothetical reaction system must not, and in some cases cannot, depend on human intervention for containment [34, 26].
Most of the research for countering worms has focused on prevention, with little attention paid to reaction systems. Work on application protection (prevention) has produced solutions that either have considerable impact on performance or are inadequate. We propose a first-reaction mechanism that tries to automatically patch vulnerable software, thus circumventing the need for human intervention in time-critical infection containment. In our system, the issue of performance is of diminished importance since the primary use of the protection mechanisms is to identify the source of weakness in the application.
The approach presented here employs a combination of techniques such as the use of honeypots, dynamic code analysis, auto-patching, sandboxing, and software updates. The ability to use these techniques is contingent upon a number of (we believe) realistic assumptions. Dynamic analysis relies on the assumption that we expect to tackle known classes of attacks. The generation of patches depends on source availability, although techniques such as binary rewriting may be applicable. The ability to provide a fix implies a trusted computing base (TCB) of some form, which we envision would be manifest in the form of a virtual machine [3] or an application sandbox [28].
Our architecture can be deployed on a per-network or per-organization basis, reflecting the trust relationships among entities. This can be further dissected to distributed sensors, anomaly detection, and the sandboxed environment. For example, an enterprise network may use a complete instantiation of our architecture while also providing remote-sensor functionality to other organizations.
The benefits presented by our system are the quick reaction to attacks by the automated creation of 'good enough' fixes without any sort of dependence on a central authority, such as a hypothetical Cyber-CDC [34]. Comprehensive security measures (CERT, vendor updates, etc.) can be administered at a later time. A limitation of the system is that it is able to counter only known classes of attacks (e.g., buffer-overflow attacks). We feel that this does not severely hinder functionality, as most attacks use already-known techniques. Furthermore, our architecture is easily extensible to accommodate detection and reactive measures against new types of attacks as they become known.
Figure 1. Worm vaccination architecture: sensors deployed at various locations in the network detect a potential worm (1), notify an analysis engine (2), which forwards the infection vector and relevant information to a protected environment (3). The potential infection vector is tested against an appropriately-instrumented version of the targeted application, identifying the vulnerability (4). Several software patches are generated and tested using several different heuristics (5). If one of them is not susceptible to the infection and does not impact functionality, the main application server is updated (6).
2 System Architecture
Our architecture, shown in Figure 1, makes use of five types of components: a set of worm-detection sensors, an anomaly-detection engine, a sandboxed environment running appropriately-instrumented versions of the applications used in the enterprise network (e.g., Apache web server, SQL database server, etc.), an analysis and patch-generation engine, and a software update component. We describe each of these components in turn.
The worm-detection sensors are responsible for detect-
ing potential worm probes and, more importantly, infection
attempts. Several types of sensors may be employed con-
currently:
• Host-based sensors, monitoring the behavior of de-
ployed applications and servers.
• Passive sensors on the corporate firewall or on inde-
pendent boxes, eavesdropping on traffic to and from
the servers.
• Special-purpose honeypots, that simulate the behavior
of the target application and capture any communica-
tion.
• Other types of sensors, including sensors run by other
entities (more on this in Section 3).
Any combination of the above sensors can be used si-
multaneously, although we believe honeypot servers are the
most promising, since worms cannot distinguish between
real and fake servers in their probes. These sensors can
communicate with each other and with a central server,
which can correlate events from independent sensors and
determine potential infection vectors (e.g., the data on an
HTTP request, as seen by a honeypot). In general, we as-
sume that a worm can somehow be detected. We believe
our assumption to be pragmatic, in that any likely solution
to the worm problem is predicated upon this.
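The correlation step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the event format, the promotion rule (a payload seen by any honeypot, or by a threshold of distinct sensors, becomes a candidate vector), and all names are assumptions made here.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MAX_EVENTS 64

struct event {
    unsigned sensor_id;
    int from_honeypot;        /* 1 if reported by a honeypot sensor */
    char payload[128];        /* observed request bytes */
};

static struct event log_[MAX_EVENTS];
static size_t nlog;

/* A sensor reports one observed probe/infection attempt. */
void report(unsigned sensor_id, int from_honeypot, const char *payload) {
    if (nlog < MAX_EVENTS) {
        log_[nlog].sensor_id = sensor_id;
        log_[nlog].from_honeypot = from_honeypot;
        strncpy(log_[nlog].payload, payload, sizeof log_[nlog].payload - 1);
        nlog++;
    }
}

/* Is `payload` a potential infection vector under the illustrative rule? */
int is_candidate_vector(const char *payload, unsigned threshold) {
    unsigned distinct = 0;
    for (size_t i = 0; i < nlog; i++) {
        if (strcmp(log_[i].payload, payload) != 0) continue;
        if (log_[i].from_honeypot)
            return 1;   /* honeypots receive no legitimate traffic */
        int seen = 0;   /* count each sensor only once */
        for (size_t j = 0; j < i; j++)
            if (strcmp(log_[j].payload, payload) == 0 &&
                log_[j].sensor_id == log_[i].sensor_id)
                seen = 1;
        if (!seen) distinct++;
    }
    return distinct >= threshold;
}
```

A real correlator would hash payloads and weight sensor types differently; the key property shown is that a single honeypot observation suffices, since worms cannot tell real from fake servers.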
The potential infection vector (i.e., the byte stream
which, when “fed” to the target application, will cause
an instance of the worm to appear on the target sys-
tem) is forwarded to a sandboxed environment, which runs
appropriately-instrumented instances of the applications we
are interested in protecting (e.g., Apache or IIS server). The
instrumentation can be fairly invasive in terms of perfor-
mance, since we only use the server for clean-room test-
ing. In its most powerful form, a full-blown machine em-
ulator [30] can be used to determine whether the applica-
tion has been subverted. Other potential instrumentation includes light-weight virtual machines [12, 16, 38], dynamic
analysis/sandboxing tools [8, 20, 19], or mechanisms such
as MemGuard [11]. These mechanisms are generally not
used for application protection due to their considerable im-
pact on performance. In our system, this drawback is not
of particular importance because we only use these mech-
anisms to identify as accurately as possible the source of
weakness in the application. For example, MemGuard [11]
or libverify [8] can both identify the specific buffer and
function that is exploited in a buffer-overflow attack. Al-
ternatively, when running under a simulator, we can detect
when execution has shifted to the stack, heap, or some other
unexpected location, such as an unused library function.
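The check a machine emulator could apply on every control transfer might look like the following sketch; the region bounds, names, and the decision to treat any transfer outside the text segment as suspicious are illustrative assumptions, not the paper's design.

```c
#include <assert.h>
#include <stdint.h>

/* A memory region known to the emulator, [lo, hi). */
struct region { uintptr_t lo, hi; };

/* Returns 1 if a control transfer to `target` indicates likely code
 * injection: execution landing in the stack, the heap, or any region
 * outside the legitimate text segment. */
int subverted(uintptr_t target,
              struct region text, struct region stack, struct region heap) {
    if (target >= text.lo && target < text.hi)
        return 0;  /* legitimate code */
    if (target >= stack.lo && target < stack.hi)
        return 1;  /* execution has shifted to the stack */
    if (target >= heap.lo && target < heap.hi)
        return 1;  /* execution has shifted to the heap */
    return 1;      /* anything else (e.g., unmapped) is also suspicious */
}
```

Detecting transfers into unused library functions, as the text mentions, would additionally require a profile of which text-segment ranges are exercised in normal operation.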
The more invasive the instrumentation, the higher the
likelihood of detecting subversion and identifying the
source of the vulnerability. The analysis step can pinpoint only known classes of attack (e.g., a stack-based buffer overflow [7]), even if it detects anomalous behavior more broadly; fortunately, new classes of attack (e.g., heap-based overflows [23]) appear far less often than exploits of known attack types do.
Armed with knowledge of the vulnerability, we can auto-
matically patch the program. In general, automated program repair is undecidable (it reduces to the Halting Problem). However, there are a few heuristic fixes that may mitigate the effects of an attack.
Some potential fixes to buffer-overflow attacks include:
• Moving the offending buffer to the heap, by dynam-
ically allocating the buffer upon entering the function
and freeing it at all exit points. Furthermore, we can in-
crease the size of the allocated buffer to be larger than
the size of the infection vector, thus protecting the ap-
plication from even crashing. Finally, we can use a
version of malloc() that allocates two additional write-
protected pages that bracket the target buffer. Any
buffer overflow or underflow will cause the process
to receive a Segmentation Violation signal, which we
catch in our own signal handler. The signal handler
can then longjmp() to the code immediately after the
routine that caused the buffer overflow.
• For heap-based attacks that overflow buffers in
dynamically-allocated objects (in object-oriented lan-
guages), we can reorder fields in the object.
• More generally, we can use some minor code-
randomization techniques [14] that could shift the vul-
nerability such that the infection vector no longer
works.
• We can add code that recognizes either the attack it-
self or specific conditions in the stack trace (e.g., a
specific sequence of stack records), and returns from
the function if it detects these conditions. The for-
mer is in some sense equivalent to content filtering,
and least likely to work against even mildly polymor-
phic worms. Generally, this approach appears to be the
least promising.
• Finally, we can attempt to “slice-off” some function-
ality, by immediately returning from mostly-unused
code that contains the vulnerability. Especially for
large software systems that contain numerous, often
untested, features that are not regularly used, this may
be the solution with the least impact. We can deter-
mine whether a piece of functionality is unused by pro-
filing the real application; if the vulnerability is in an
unused section of the application, we can logically re-
move that part of the functionality (e.g., by an early
function-return).
Our architecture makes it possible to easily add new anal-
ysis techniques and patch-generation components. To gen-
erate the patches, we are experimenting with Stratego [36]
and TXL [24], two code-transformation tools.
We can test several patches (potentially even simultane-
ously, if enough resources are available), until we are satis-
fied that the application is no longer vulnerable to the spe-
cific exploit. To ensure that the patched version will con-
tinue to function, regression testing can be used to deter-
mine what functionality (if any) has been lost. The test suite
is generated by the administrator in advance, and should re-
flect a typical workload of the application, exercising all
critical aspects (e.g., performing purchasing transactions).
Naturally, one possibility is that no heuristic will work, in
which case it is not possible to automatically fix the appli-
cation and other measures have to be used.
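The patch-selection loop described above can be sketched with stand-in function pointers for the patch generators, the exploit replay, and the regression suite; all names and the stub behaviors are hypothetical.

```c
#include <stddef.h>

typedef int (*heuristic_fn)(void);          /* 1 = patch built successfully */
typedef int (*still_vulnerable_fn)(int);    /* replay the infection vector  */
typedef int (*passes_regression_fn)(int);   /* run the admin's test suite   */

/* Returns the index of the first acceptable patch, or -1 if no heuristic
 * works (in which case other measures must be used). */
int select_patch(heuristic_fn *hs, size_t n,
                 still_vulnerable_fn vuln, passes_regression_fn reg) {
    for (size_t i = 0; i < n; i++) {
        if (!hs[i]()) continue;          /* heuristic not applicable     */
        if (vuln((int)i)) continue;      /* exploit still succeeds       */
        if (!reg((int)i)) continue;      /* functionality has been lost  */
        return (int)i;                   /* deploy this patched version  */
    }
    return -1;
}

/* Illustrative stubs: patch 0 is still vulnerable, patch 1 breaks the
 * regression suite, patch 2 is acceptable. */
static int gen_ok(void)      { return 1; }
static int replay(int i)     { return i == 0; }
static int run_suite(int i)  { return i != 1; }

int demo_select(void) {
    heuristic_fn hs[] = { gen_ok, gen_ok, gen_ok };
    return select_patch(hs, 3, replay, run_suite);
}
```

With enough resources the candidates could instead be evaluated in parallel, as the text notes; the sequential loop is the simplest form of the same policy.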
Once we have a worm-resistant version of the applica-
tion, we must instantiate it on the server. Thus, the last
component of our architecture is a server-based monitor. To
achieve this, we can either use a virtual-machine approach
or assume that the target application is somehow sandboxed
and implement the monitor as a regular process residing
outside that sandbox. The monitor receives the new ver-
sion of the application, terminates the running instance (first
attempting a graceful termination), replaces the executable
with the new one, and restarts the server.
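The graceful-then-forceful termination step of the monitor might look like this POSIX sketch; the function names and the polling interval are assumptions, and replacing the executable and restarting it (e.g., rename() followed by fork()/execv()) are omitted.

```c
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Ask the running instance to exit with SIGTERM; if it has not exited
 * within the grace period, fall back to SIGKILL. Returns 1 once the
 * instance has been reaped, 0 on error. */
int terminate_instance(pid_t pid, unsigned grace_seconds) {
    if (kill(pid, SIGTERM) != 0) return 0;       /* graceful request */
    for (unsigned i = 0; i < grace_seconds * 10; i++) {
        if (waitpid(pid, NULL, WNOHANG) == pid) return 1;
        usleep(100000);                          /* poll every 100 ms */
    }
    kill(pid, SIGKILL);                          /* forceful fallback */
    return waitpid(pid, NULL, 0) == pid;
}

/* Demonstration: fork a stand-in for the old server and terminate it. */
int demo_terminate(void) {
    pid_t pid = fork();
    if (pid == 0) {          /* child: idle until signaled */
        pause();
        _exit(0);
    }
    return terminate_instance(pid, 2);
}
```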
3 Discussion
Following the description of our architecture, there are
several other issues that need to be discussed.
The main challenges in our approach are (1) determining the nature of the attack (e.g., buffer overflow) and identifying the likely software flaws that permit the exploit, and (2) reliably repairing the software. Obviously, our approach
can only fix attacks it already “knows” about, e.g., stack or
heap-based buffer overflows. This knowledge manifests it-
self through the debugging and instrumentation of the sandboxed version of the application. Currently, we use ProPo-
lice [13] to identify the likely functions and buffers that
lead to the overflow condition. More powerful analysis
tools [30, 19, 8] can be easily employed in our architec-
ture to catch more sophisticated code-injection attacks. One
advantage of our approach is that the performance impli-
cations of such mechanisms are not relevant: an order of
magnitude or more slow-down of the instrumented appli-
cation is acceptable, since it does not impact the common-
case usage. Furthermore, our architecture should be gen-
eral enough that other classes of attack can be detected, e.g.,
email worms, although we have not yet investigated this.
As we mentioned already, repairability is impossible to
guarantee, as the general problem can be reduced to the
Halting Problem. Our heuristics allow us to generate poten-
tial fixes for several classes of buffer overflows using code-
to-code transformations [24], and test them in a clean-room
environment. Further research is necessary in the direction
of automated software recovery in order to develop better
repair mechanisms. Interestingly, our architecture could be
used to automatically fix any type of software fault, such as
invalid memory dereference, by plugging in the appropriate
repair module. When it is impossible to automatically ob-
tain a software fix, we can use content-filtering as in [29] to
temporarily protect the service. The possibility of combin-
ing the two techniques is a topic of future research.
Our system assumes that the source code of the instru-
mented application is available, so patches can be eas-
ily generated and tested. When that is not the case,
binary-rewriting techniques may be applicable, at consider-
ably higher complexity. Instrumentation of the application
also becomes correspondingly more difficult under some
schemes. One intriguing possibility is that vendors ship
two versions of their applications, a “regular” and an “in-
strumented” one; the latter would provide a standardized
set of hooks that would allow a general monitoring module
to exercise oversight.
The authors of [34] envision a Cyber “Center for Disease
Control” (CCDC) for identifying outbreaks, rapidly analyz-
ing pathogens, fighting the infection, and proactively devis-
ing methods of detecting and resisting future attacks. How-
ever, it seems unlikely that there would ever be wide accep-
tance of an entity trusted to arbitrarily patch software run-
ning on any user’s system.¹ Furthermore, fixes would still need to be handcrafted by humans and thus arrive too late to help in worm containment. In our scheme, such a CCDC
would play the role of a real-time alert-coordination and
distribution system. Individual enterprises would be able
to independently confirm the validity of a reported weak-
ness and create their own fixes in a decentralized manner,
thereby minimizing the trust they would have to place in the CCDC.
¹Although certain software vendors are perhaps near that point.
Note that although we envision the deployment of such a system in every medium- to large-size enterprise network,
there is nothing to preclude pooling of resources across
multiple, mutually trusted, organizations. In particular, a
managed-security company could provide a quick-fix ser-
vice to its clients, by using sensors in every client’s location
and generating patches in a centralized facility. The fixes
would then be pushed to all clients.
One concern in our system is the possibility of “gaming”
by attackers, causing instability and unnecessary software
updates. One interesting attack would be to cause oscilla-
tion between versions of the software that are alternately
vulnerable to different attacks. Although this may be the-
oretically possible, we cannot think of a suitable example.
Such attack capabilities are limited by the fact that the sys-
tem can test the patching results against both current and
previous (but still pending, i.e., not “officially” fixed by an
administrator-applied patch) attacks. Furthermore, we as-
sume that the various system components are appropriately
protected against subversion, i.e., the clean-room environ-
ment is firewalled and the communication between the various components is integrity-protected using TLS/SSL or IPsec.
If a sensor is subverted and used to generate false alarms,
event correlation will reveal the anomalous behavior. In any
case, the sensor can at best only mount a denial of service
attack against the patching mechanism, by causing many
hypotheses to be tested. Again, such anomalous behavior is
easy to detect and take into consideration without impacting
either the protected services or the patching mechanism.
Another way to attack our architecture involves deny-
ing the communication between the correlation engine, the
sensors, and the sandbox through a denial of service at-
tack. Such an attack may in fact be a by-product of a
worm’s aggressive propagation, as was the case with the
SQL worm [6]. Fortunately, it should be possible to ingress-
filter the ports used for these communications, making it
very difficult to mount such an attack from an external net-
work. To protect communication with remote sensors, we
can use the SOS architecture [18].
4 Related Work
In [34], the authors describe the risk to the Internet due
to the ability of attackers to quickly gain control of vast
numbers of hosts. They argue that controlling a million
hosts can have catastrophic results because of the poten-
tial to launch distributed denial of service (DDoS) attacks
and potential access to sensitive information that is present
on those hosts. Their analysis shows how quickly attackers
can compromise hosts using “dumb” worms and how “bet-
ter” worms can spread even faster.
Since the first Internet-wide worm [33], considerable ef-
fort has gone into preventing worms from exploiting common software vulnerabilities by using the compiler to inject run-time safety checks into applications (e.g., [11]),
safe languages and APIs, and static or dynamic [20] anal-
ysis tools. Several shortcomings are associated with these
tools: some are difficult to use, have a steep learning curve
(especially for new languages), or impose significant per-
formance overheads. Furthermore, they are not always suc-
cessful in protecting applications against old [10, 39] or
new [23] classes of attacks. Finally, they require proactive-
ness from deadline-driven application developers.
Another approach has been that of containment of
infected applications, exemplified by the “sandboxing”
paradigm (e.g., [17]). Unfortunately, even when such sys-
tems are successful in containing the virus [15], they do
not prevent further propagation or ensure continued service
availability [21]. Furthermore, there is often a significant
performance overhead associated with their use.
Most of the existing anti-virus techniques use a sim-
ple signature-scanning approach to locate threats. As new viruses are created, so are new signatures. Smarter virus writers use more creative techniques (e.g., polymorphic viruses) to avoid detection. In response, detection mechanisms become ever more elaborate. This has led to co-
evolution [27], an ever-escalating arms race between virus
writers and anti-virus developers.
The HACQIT architecture [29] uses various sensors to
detect new types of attacks against secure servers, access to
which is limited to small numbers of users at a time. Any
deviation from expected or known behavior results in the
possibly subverted server being taken off-line. Similar to
our approach, a sandboxed instance of the server is used to
conduct “clean room” analysis, comparing the outputs from
two different implementations of the service (in their proto-
type, the Microsoft IIS and Apache web servers were used
to provide application diversity). Machine-learning tech-
niques are used to generalize attack features from observed
instances of the attack. Content-based filtering is then used,
either at the firewall or the end host, to block inputs that
may have resulted in attacks, and the infected servers are
restarted. Due to the feature-generalization approach, triv-
ial variants of the attack will also be caught by the filter.
[35] takes a roughly similar approach, although filtering is
done based on port numbers, which can affect service avail-
ability. Cisco’s Network-Based Application Recognition
(NBAR) [2] allows routers to block TCP sessions based on
the presence of specific strings in the TCP stream. This fea-
ture was used to block Code-Red probes, without affecting
regular web-server access.
Code-Red inspired several countermeasure technologies.
La Brea [22] attempts to slow the growth of TCP-based
worms by accepting connections and then blocking on
them indefinitely, causing the corresponding worm thread
to block. Unfortunately, worms can avoid this mechanism by probing and infecting asynchronously. Under the
connection-throttling approach [40], each host restricts the
rate at which connections may be initiated. If universally
adopted, such an approach would reduce the spreading rate
of a worm by up to an order of magnitude, without affecting
legitimate communications.
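The throttling idea can be sketched as a per-second budget on new outbound connections. This is a simplification of the scheme in [40], which delays (queues) excess connections rather than refusing them; the names and the bookkeeping are illustrative.

```c
struct throttle {
    unsigned rate;        /* new connections allowed per second */
    unsigned used;        /* connections spent in this second   */
    long current_second;  /* abstract timestamp of the interval */
};

/* Returns 1 if the connection may proceed now, 0 if it must wait.
 * `now` is the current time in whole seconds, supplied by the caller. */
int allow_connection(struct throttle *t, long now) {
    if (now != t->current_second) {   /* new interval: refill the budget */
        t->current_second = now;
        t->used = 0;
    }
    if (t->used < t->rate) {
        t->used++;
        return 1;
    }
    return 0;
}
```

A legitimate host rarely initiates more than a handful of new connections per second, so a small budget leaves normal traffic untouched while capping a worm's scan rate.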
5 Conclusion
We presented an architecture for countering self-
propagating code (worms) through automatic software-
patch generation. Our architecture uses a set of sensors
to detect potential infection vectors, and uses a clean-
room (sandboxed) environment running appropriately-
instrumented instances of the applications used in the enter-
prise network to test potential fixes. To generate the fixes,
we use code-transformation tools to implement several
heuristics that counter specific buffer-overflow instances.
We iterate until we create a version of the application that
is both resistant to the worm and meets certain minimal-
functionality criteria, embodied in a regression test suite
created in advance by the system administrator.
Our proposed system allows quick, automated reac-
tion to worms, thereby simultaneously increasing service
availability and potentially decreasing the worm’s infection
rate [41]. The emphasis is on generating a quick fix; a more
comprehensive patch can be applied at a later point in time,
allowing for secondary reaction at a human time-scale. Al-
though our system cannot fix all types of attacks, experi-
mentation with our preliminary heuristics is promising.
Future research is needed in better analysis and patch-
generation tools. We believe that our architecture, com-
bined with other approaches such as content-filtering, can
significantly reduce the impact of worms on service avail-
ability and network stability.
References
[1] 2001 Economic Impact of Malicious Code Attacks.
http://www.computereconomics.com/cei/
press/pr92101.html.
[2] Using Network-Based Application Recognition and Access
Control Lists for Blocking the "Code Red" Worm at Net-
work Ingress Points. Technical report, Cisco Systems, Inc.
[3] VMWare Emulator. http://www.vmware.com/.
[4] CERT Advisory CA-2001-19: ‘Code Red’ Worm Exploiting
Buffer Overflow in IIS Indexing Service DLL. http://
www.cert.org/advisories/CA-2001-19.html,
July 2001.
[5] CERT Advisory CA-2003-04: MS-SQL Server
Worm. http://www.cert.org/advisories/
CA-2003-04.html, January 2003.
[6] The Spread of the Sapphire/Slammer Worm.
http://www.silicondefense.com/research/
worms/slammer.php, February 2003.
[7] Aleph One. Smashing the stack for fun and profit. Phrack,
7(49), 1996.
[8] A. Baratloo, N. Singh, and T. Tsai. Transparent Run-Time
Defense Against Stack Smashing Attacks. In Proceedings
of the USENIX Annual Technical Conference, June 2000.
[9] J. Brunner. The Shockwave Rider. Del Rey Books, Canada,
1975.
[10] Bulba and Kil3r. Bypassing StackGuard and StackShield.
Phrack, 5(56), May 2000.
[11] C. Cowan, C. Pu, D. Maier, H. Hinton, J. Walpole, P. Bakke,
S. Beattie, A. Grier, P. Wagle, and Q. Zhang. Stack-
guard: Automatic adaptive detection and prevention of
buffer-overflow attacks. In Proceedings of the 7th USENIX
Security Symposium, January 1998.
[12] G. W. Dunlap, S. T. King, S. Cinar, M. A. Basrai, and P. M.
Chen. ReVirt: Enabling Intrusion Analysis through Virtual-
Machine Logging and Replay. In Proceedings of the 5th
Symposium on Operating Systems Design and Implementa-
tion (OSDI), December 2002.
[13] J. Etoh. GCC extension for protecting applications from
stack-smashing attacks. http://www.trl.ibm.com/
projects/security/ssp/, June 2000.
[14] S. Forrest, A. Somayaji, and D. Ackley. Building Diverse
Computer Systems. In Proceedings of the 6th HotOS Work-
shop, 1997.
[15] T. Garfinkel. Traps and Pitfalls: Practical Problems in Sys-
tem Call Interposition Based Security Tools. In Proceedings
of the Symposium on Network and Distributed Systems Se-
curity (SNDSS), pages 163–176, February 2003.
[16] T. Garfinkel and M. Rosenblum. A Virtual Machine Intro-
spection Based Architecture for Intrusion Detection. In Pro-
ceedings of the Symposium on Network and Distributed Sys-
tems Security (SNDSS), pages 191–206, February 2003.
[17] I. Goldberg, D. Wagner, R. Thomas, and E. A. Brewer. A
Secure Environment for Untrusted Helper Applications. In
Procedings of the 1996 USENIX Annual Technical Confer-
ence, 1996.
[18] A. Keromytis, V. Misra, and D. Rubenstein. SOS: Se-
cure Overlay Services. In Proceedings of ACM SIGCOMM,
pages 61–72, August 2002.
[19] V. Kiriansky, D. Bruening, and S. Amarasinghe. Secure Ex-
ecution Via Program Shepherding. In Proceedings of the
11th USENIX Security Symposium, pages 191–205, August
2002.
[20] K. Lhee and S. J. Chapin. Type-Assisted Dynamic Buffer
Overflow Detection. In Proceedings of the 11th USENIX
Security Symposium, pages 81–90, August 2002.
[21] M.-J. Lin, A. Ricciardi, and K. Marzullo. A New Model for
Availability in the Face of Self-Propagating Attacks. In Pro-
ceedings of the New Security Paradigms Workshop, Novem-
ber 1998.
[22] T. Liston. Welcome To My Tarpit: The Tactical and Strategic
Use of LaBrea. Technical report, 2001.
[23] M. Conover and w00w00 Security Team. w00w00 on
heap overflows. http://www.w00w00.org/files/
articles/heaptut.txt, January 1999.
[24] A. J. Malton. The Denotational Semantics of a Func-
tional Tree-Manipulation Language. Computer Languages,
19(3):157–168, 1993.
[25] D. Moore, C. Shannon, and K. Claffy. Code-Red: a case
study on the spread and victims of an Internet worm. In Pro-
ceedings of the 2nd Internet Measurement Workshop (IMW),
pages 273–284, November 2002.
[26] D. Moore, C. Shannon, G. Voelker, and S. Savage. Internet
Quarantine: Requirements for Containing Self-Propagating
Code. In Proceedings of the IEEE Infocom Conference,
April 2003.
[27] C. Nachenberg. Computer Virus - Coevolution. Communi-
cations of the ACM, 50(1):46–51, 1997.
[28] N. Provos. Improving Host Security with System Call Poli-
cies. In Proceedings of the 12th USENIX Security Sympo-
sium, August 2003.
[29] J. Reynolds, J. Just, E. Lawson, L. Clough, and R. Maglich.
The Design and Implementation of an Intrusion Tolerant
System. In Proceedings of the International Conference on
Dependable Systems and Networks (DSN), June 2002.
[30] M. Rosenblum, E. Bugnion, S. Devine, and S. A. Herrod.
Using the SimOS Machine Simulator to Study Complex
Computer Systems. Modeling and Computer Simulation,
7(1):78–103, 1997.
[31] J. Shoch and J. Hupp. The “worm” programs – early exper-
iments with a distributed computation. Communications of
the ACM, 22(3):172–180, March 1982.
[32] D. Song, R. Malan, and R. Stone. A Snapshot of Global
Internet Worm Activity. Technical report, Arbor Networks,
November 2001.
[33] E. H. Spafford. The Internet Worm Program: An Analysis.
Technical Report CSD-TR-823, Purdue University, 1988.
[34] S. Staniford, V. Paxson, and N. Weaver. How to Own the
Internet in Your Spare Time. In Proceedings of the 11th
USENIX Security Symposium, pages 149–167, August 2002.
[35] T. Toth and C. Kruegel. Connection-history Based Anomaly
Detection. In Proceedings of the IEEE Workshop on Infor-
mation Assurance and Security, June 2002.
[36] E. Visser. Stratego: A Language for Program Transforma-
tion Based on Rewriting Strategies. Lecture Notes in Com-
puter Science, 2051, 2001.
[37] C. Wang, J. C. Knight, and M. C. Elder. On Computer Viral
Infection and the Effect of Immunization. In Proceedings of
the 16th Annual Computer Security Applications Conference
(ACSAC), pages 246–256, 2000.
[38] A. Whitaker, M. Shaw, and S. D. Gribble. Scale and Perfor-
mance in the Denali Isolation Kernel. In Proceedings of the
Fifth Symposium on Operating Systems Design and Imple-
mentation (OSDI), December 2002.
[39] J. Wilander and M. Kamkar. A Comparison of Publicly
Available Tools for Dynamic Intrusion Prevention. In Pro-
ceedings of the Symposium on Network and Distributed Sys-
tems Security (SNDSS), pages 123–130, February 2003.
[40] M. Williamson. Throttling Viruses: Restricting Propagation
to Defeat Malicious Mobile Code. Technical Report HPL-
2002-172, HP Laboratories Bristol, 2002.
[41] C. C. Zou, W. Gong, and D. Towsley. Code Red Worm Prop-
agation Modeling and Analysis. In Proceedings of the 9th
ACM Conference on Computer and Communications Secu-
rity (CCS), pages 138–147, November 2002.