CHAPTER 1

INTRODUCTION

1.1 THREE TIER ARCHITECTURE

WebSphere Application Server provides the application logic layer in a three tier architecture, enabling client components to interact with data resources and legacy applications. Collectively, three tier architectures are programming models that enable the distribution of application functionality across three independent systems:

- Client components running on local workstations (tier one)
- Processes running on remote servers (tier two)
- A discrete collection of databases, resource managers, and mainframe applications (tier three)

These tiers are logical tiers; they might or might not be running on the same physical server.

1.1.1 FIRST TIER

Responsibility for presentation and user interaction resides with the first tier components. These client components enable the user to interact with the second tier processes in a secure and intuitive manner. WebSphere Application Server supports several client types. Clients do not access the third tier services directly. For example, a client component provides a form on which a customer orders products. The client component submits this order to the second tier processes, which check the product databases and perform the tasks needed for billing and shipping.

1.1.2 SECOND TIER

The second tier processes are commonly referred to as the application logic layer. These processes manage the business logic of the application and are permitted access to the third tier services. The
application logic layer is where most of the processing work occurs. Multiple client components can access the second tier processes simultaneously, so this application logic layer must manage its own transactions.

1.1.3 THIRD TIER

The third tier services are protected from direct access by the client components by residing within a secure network. Interaction must occur through the second tier processes.

1.2 INTRODUCTION ABOUT THE SYSTEM

Web based attacks have recently become more diverse, as attention has shifted from attacking the front end to exploiting vulnerabilities of web applications in order to corrupt the back-end database system (e.g., SQL injection attacks). A plethora of Intrusion Detection Systems (IDS) currently examine network packets individually within both the web server and the database system. However, there is very little work being performed on multi-tiered Anomaly Detection (AD) systems that generate models of network behavior for both web and database network interactions. In such multi-tiered architectures, the back-end database server is often protected behind a firewall while the web servers are remotely accessible over the Internet. Unfortunately, though they are protected from direct remote attacks, the back-end systems are susceptible to attacks that use web requests as a means to exploit the back end. To protect multi-tiered web services, Intrusion Detection Systems (IDS) have been widely used to detect known attacks by matching misused traffic patterns or signatures. A class of IDS that leverages machine learning can also detect unknown attacks by identifying abnormal network traffic that deviates from the so-called normal behavior previously profiled during the
IDS training phase. Individually, the web IDS and the database IDS can detect abnormal network traffic sent to either of them. However, these IDS cannot detect cases wherein normal traffic is used to attack the web server and the database server. For example, if an attacker with non-admin privileges can log in to a web server using normal-user access credentials, he or she can find a way to issue a privileged database query by exploiting vulnerabilities in the web server. Neither the web IDS nor the database IDS would detect this type of attack, since the web IDS would merely see typical user login traffic and the database IDS would see only the normal traffic of a privileged user. This type of attack can be readily detected if the database IDS can identify that a privileged request from the web server is not associated with user-privileged access. Unfortunately, within the current multi-threaded web server architecture, it is not feasible to detect or profile such a causal mapping between web server traffic and DB server traffic, since traffic cannot be clearly attributed to user sessions.

1.3 DOUBLE GUARD DETECTION

This project presents Double Guard, a system used to detect attacks in multi-tiered web services. Our approach can create normality models of isolated user sessions that include both the web front-end (HTTP) and back-end (File or SQL) network transactions. To achieve this, we employ a lightweight virtualization technique to assign each user's web session to a dedicated container, an isolated virtual computing environment. We use the container ID to accurately associate each web request with the subsequent DB queries. Thus, Double Guard can build a causal mapping profile by taking both the web server and DB traffic into account.
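The causal-mapping profile described above can be sketched in a few lines. The following Python is an illustrative toy, not the authors' implementation: the class name `CausalMappingProfile` and the normalized request/query strings are invented for the example, and real traffic would first have to be normalized into such patterns.

```python
from collections import defaultdict

class CausalMappingProfile:
    """Toy sketch of DoubleGuard training: per-session (per-container)
    traffic teaches us which SQL query patterns normally follow each
    web request pattern."""

    def __init__(self):
        # web request pattern -> set of DB query patterns seen with it
        self.mapping = defaultdict(set)

    def train(self, session_events):
        # session_events: (http_pattern, [sql_patterns]) pairs, each
        # taken from one isolated container session
        for http_pattern, sql_patterns in session_events:
            self.mapping[http_pattern].update(sql_patterns)

    def is_abnormal(self, http_pattern, sql_pattern):
        # A DB query with no learned causal link to the triggering
        # web request is treated as suspicious.
        return sql_pattern not in self.mapping.get(http_pattern, set())

profile = CausalMappingProfile()
profile.train([
    ("GET /product?id=*", ["SELECT * FROM products WHERE id = ?"]),
    ("POST /login",       ["SELECT * FROM users WHERE name = ?"]),
])
print(profile.is_abnormal("GET /product?id=*", "DROP TABLE users"))  # True
```

The container ID is what makes the training pairs reliable: without it, concurrent sessions' requests and queries would be interleaved and could not be paired.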
We have implemented our Double Guard container architecture using OpenVZ, and performance testing shows that it has reasonable performance overhead and is practical for most web applications. It also provides an isolation that prevents future session-hijacking attacks. Within a lightweight virtualization environment we ran
many copies of the web server instances in different containers so that each one was isolated from the rest. Because containers can be easily instantiated and destroyed, we assigned each client session a dedicated container so that, even when an attacker is able to compromise a single session, the damage is confined to the compromised session; other user sessions remain unaffected by it. Using our prototype we show that, for websites that do not permit content modification from users, there is a direct causal relationship between the requests received by the front-end web server and those generated for the database back end. In fact, we show that this causality mapping model can be generated accurately and without prior knowledge of web application functionality.

1.4 CONTAINERS AND LIGHTWEIGHT VIRTUALIZATION

Virtualization is the act of making a set of processes believe that it has a dedicated system to itself. There are a number of approaches being taken to the virtualization problem, with Xen, VMware, and User-mode Linux being some of the better known options. Those are relatively heavyweight solutions, however, with a separate kernel being run for each virtual machine. Often, that is exactly the right solution to the problem; running independent kernels gives strong separation between environments and enables the running of multiple operating systems on the same hardware.

Full virtualization and paravirtualization are not the only approaches being taken, however. An alternative is lightweight virtualization, generally based on some sort of container concept. With containers, a group of processes still appears to have its own dedicated system, but it is really running in a specially isolated environment. All containers run on top of the same kernel. With containers, the ability to run different operating systems is lost, as is the strong separation between virtual systems.
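The per-session container discipline described above can be illustrated abstractly. This sketch only models the bookkeeping (assign a fresh container per session, destroy it when the session ends); it does not drive OpenVZ or any real virtualization tool, and all names in it are hypothetical.

```python
import itertools

class ContainerPool:
    """Illustrative bookkeeping for per-session containers: each client
    session gets a dedicated, disposable environment instantiated from
    a clean read-only template."""

    _ids = itertools.count(1)  # stand-in for container creation

    def __init__(self):
        self.active = {}  # session_id -> container_id

    def open_session(self, session_id):
        cid = next(self._ids)  # fresh container from the clean template
        self.active[session_id] = cid
        return cid

    def close_session(self, session_id):
        # Destroying the container discards any damage an attacker did
        # inside it; other sessions' containers are untouched.
        self.active.pop(session_id, None)

pool = ContainerPool()
a = pool.open_session("alice")
b = pool.open_session("bob")
assert a != b               # each session runs in a distinct container
pool.close_session("alice")  # a compromised session dies with its container
assert "bob" in pool.active
```

The point of the sketch is the confinement property: state never crosses from one container to another, so compromising one session cannot taint the rest.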
Because this separation is weaker, one might not want to give root access to processes running within a container environment. On the other hand, containers can have considerable performance advantages, enabling large numbers of them to run on the same physical host.

1.5 OBJECTIVE

This project proposes an efficient IDS called the Double Guard system, which models the network behavior of user sessions in multi-tiered web applications across both front-end web (HTTP) requests and back-end database (SQL) queries.

1.6 EXISTING SYSTEM

Intrusion detection systems currently examine network packets individually within both the web server and the database system. However, there is very little work being performed on multi-tiered Anomaly Detection systems that generate models of network behavior for both web and database interactions. In such multi-tiered architectures, the back-end database server is often protected behind a firewall while the web servers are remotely accessible over the Internet. Unfortunately, though they are protected from direct remote attacks, the back-end systems are susceptible to attacks that use web requests as a means to exploit the back end. The web IDS would merely see typical user login traffic, and the database IDS would see only the normal traffic of a privileged user. Existing approaches detect intrusions or vulnerabilities by statically analyzing the source code or executables.

1.6.1 CLASSIC 3 TIER MODEL

In the existing system, the communication between the web server and the database server is not separated, and it is hard to understand the relationships among them. If Client 2 is malicious and takes over the web
server, all subsequent database transactions become suspect, as well as the responses to the client.

Figure 1.6.1 Classic 3 tier architecture

1.6.2 LIMITATIONS OF EXISTING SYSTEM

In the existing system, hostile traffic can arrive over multiple paths, and both the web and the database servers are vulnerable. Attacks come from the web clients: they launch application layer attacks to compromise the web servers they are connecting to, and attackers can also bypass the web server to directly attack the database server. Attackers may take over the web server after the attack, and afterwards they can obtain full control of the web server to launch subsequent attacks. Attackers could modify the application logic of the web applications, eavesdrop on or hijack other users' web requests, or intercept and modify the database queries to steal sensitive data beyond their privileges.
1.7 PROPOSED SYSTEM

Double Guard performs detection using both the front end and the back end. Some previous approaches have detected intrusions or vulnerabilities by statically analyzing the source code or executables; others dynamically track the information flow to understand taint propagation and detect intrusions. In Double Guard, the new container based web server architecture enables us to separate the information flows of different sessions by using lightweight virtualization. Within a lightweight virtualization environment we ran many copies of web server instances in different containers so that each one is isolated from the rest. This separates the information flow of each session and provides a means of tracking the information flow from the web server to the database server for each session. It is possible to initialize thousands of containers on a single machine.

Double Guard detects SQL injection attacks by taking into account the structure of the web requests and database queries without looking into the values of input parameters. In Double Guard, we utilize the container ID to separate session traffic as a way of extracting and identifying the causal relationship between web server requests and database query events. Our approach dynamically generates new containers and recycles used ones. As a result, a single physical server can run continuously and serve all web requests. However, from a logical perspective, each session is assigned to a dedicated web server and isolated from other sessions. Since we initialize each virtualized container from a read-only clean template, we can guarantee that each session will be served with a clean web server instance at initialization.

This system chooses to separate communications at the session level so that a single user always deals with the same web server.
Sessions can represent different users to some extent, and we expect the communication of a single user to go to the same dedicated web server, thereby allowing us to identify suspect behavior by both session and user. If we detect abnormal behavior in a session, we treat all traffic within this session as tainted.

1.7.1 ADVANTAGES

The proposed system is a well correlated model that provides an effective mechanism against different types of attacks and is also more accurate. The proposed system creates a causal mapping profile by taking both the web server and DB traffic into account. It also provides a better characterization for anomaly detection with the correlation of input streams, because the intrusion sensor has a more precise normality model that detects a wider range of threats. Containers are easily created and destroyed, and each client session is assigned to a separate container. If an attacker compromises a single session, the damage is confined to that session; other sessions are unaffected. If an attack occurs, only the particular user session is affected, and it is also easy to identify the attacker.
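Section 1.7 states that Double Guard matches the structure of database queries while ignoring parameter values. A naive way to obtain such a structure, assuming only simple single-quoted strings and integer literals (a real implementation would use a proper SQL tokenizer), is to mask the literals and compare the remaining skeleton:

```python
import re

def sql_skeleton(query):
    """Reduce a query to its structure by masking literal values; a
    simplified stand-in for the structure-only comparison above."""
    q = re.sub(r"'[^']*'", "?", query)  # mask string literals
    q = re.sub(r"\b\d+\b", "?", q)      # mask numeric literals
    return re.sub(r"\s+", " ", q).strip().upper()

# Skeletons observed during training:
trained = {sql_skeleton("SELECT * FROM users WHERE id = 42")}

benign   = "SELECT * FROM users WHERE id = 7"
injected = "SELECT * FROM users WHERE id = 7 OR 1 = 1"

print(sql_skeleton(benign) in trained)    # True: same structure
print(sql_skeleton(injected) in trained)  # False: injection changed structure
```

Because only the structure is compared, any choice of parameter values matches the trained skeleton, while an injected `OR 1 = 1` clause changes the skeleton itself and is flagged.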
CHAPTER 2

LITERATURE SURVEY

2.1 Toward Automated Detection of Logic Vulnerabilities in Web Applications, V. Felmetsger, L. Cavedon, C. Kruegel, and G. Vigna, Proc. USENIX Security Symp., 2010

Web applications are the most common way to make services and data available on the Internet. Unfortunately, with the increase in the number and complexity of these applications, there has also been an increase in the number and complexity of vulnerabilities. Current techniques to identify security problems in web applications have mostly focused on input validation flaws. This paper focuses on logic vulnerabilities. Application logic vulnerabilities are an important class of defects that are the result of faulty application logic. These vulnerabilities are specific to the functionality of particular web applications, and, thus, they are extremely difficult to characterize and identify. In this paper, we propose a first step toward the automated detection of application logic vulnerabilities. Our approach uses a composition of dynamic analysis and symbolic model checking to identify invariants that are part of the intended program specification but are not enforced on all paths in the code of a web application. We implemented the proposed approach in a tool, called Waler, that analyzes servlet based web applications, and used it to identify a number of previously unknown application logic vulnerabilities in several real world applications and in a number of senior undergraduate projects. To the best of our knowledge, Waler is the first tool that is able to automatically detect complex web application logic
flaws without the need for a substantial human (annotation) effort or the use of ad hoc, manually specified heuristics. The paper builds on the JPF and Daikon tools. We evaluated the effectiveness of our system in detecting logic vulnerabilities on four real world applications, namely, Easy JSP Forum, JspCart, GIMS, and JaCoB. The first application that we analyzed is the Easy JSP Forum application, a community forum written in JSP. The second application that we analyzed is the Global Internship Management System (GIMS); using Waler, we found that many of the pages in the application do not have sufficient protection from unauthorized access. The third application is JaCoB, a community message board application that supports posting and viewing of messages by registered users. For this program, our tool neither found any vulnerabilities nor did it generate any false alert. The fourth test application is JspCart, which is a typical online store. Waler identified a number of pages in its administrative interface that can be accessed by unauthorized users. In JspCart, all pages available to an admin user are protected by checks that examine the User object in the session.

2.2 Anomaly Detection of Web-Based Attacks, C. Kruegel and G. Vigna, Proc. 10th ACM Conf. Computer and Comm. Security (CCS '03), Oct. 2003

Web servers and web based applications are popular attack targets. In order to detect known web based attacks, misuse detection systems are equipped with a large number of signatures. Unfortunately, it is difficult to keep up with the daily disclosure of web related vulnerabilities, and, in addition, vulnerabilities may be introduced by installation-specific web based applications. Therefore, misuse detection systems should be complemented with anomaly detection systems. This paper presents an intrusion detection system that uses a number of
different anomaly detection techniques to detect attacks against web servers and web based applications. The system performs focused analysis and produces a reduced number of false positives. Two modes are used in this paper. The first is the training mode: based on user behavior analysis (attack behavior varies from normal user behavior), this mode generates a normal user profile. The second, called the detection mode, calculates a malicious score and reports anomalous queries based upon that score (queries whose anomaly score is less than 1 are treated as normal); the detection mode is also responsible for checking the structure of the queries. This paper introduces a novel approach to perform anomaly detection, using as input HTTP queries containing parameters. The work presented here is novel in several ways. First of all, to the best of our knowledge, this is the first anomaly detection system specifically tailored to the detection of web based attacks. Second, the system takes advantage of the correlation between server side programs and the parameters used in their invocation. Third, the parameter characteristics (e.g., length and structure) are learned from input data. The ultimate goal is to be able to perform anomaly detection in real time for web sites that process millions of queries per day with virtually no false alarms.

2.3 Database Intrusion Detection Using Weighted Sequence Mining, A. Srivastava, S. Sural, and A.K. Majumdar, J. Computers, vol. 1, no. 4, pp. 8-17, 2006

Data mining has attracted a lot of attention due to the increased generation, transmission, and storage of high volume data and an imminent need for extracting useful information and knowledge from them. Because a database stores the valuable information of an application, its security has started getting attention. An intrusion detection system (IDS) is used to detect potential violations in database security. In every database,
some of the attributes are considered more sensitive to malicious modifications than others. We propose an algorithm for finding dependencies among important data items in a relational database management system. Any transaction that does not follow these dependency rules is identified as malicious. We show that this algorithm can detect the modification of sensitive attributes quite accurately.

In this paper, we propose a new approach for database intrusion detection using a data mining technique which takes the sensitivity of the attributes into consideration in the form of weights. The paper proposes two techniques. The first is weighted data dependency rule mining: sensitive data are classified into three types (high, medium, and low sensitivity), and the approach allocates weights to each operation, such as read and write; the level of security is compared, the weights of the operations are added, and based upon that score it is decided how well the data are protected. The second method is an E-R model for representing attribute sensitivities. The model consists of a collection of basic objects called entities, and relationships among these entities. A new symbol '*' is added to the existing set of symbols to represent the sensitivity of an attribute; repeated use of this symbol (more symbols on an attribute) makes an attribute more sensitive compared to others. The main aim of this work is to minimize the loss suffered by the database owner. Currently, sensitive attributes are identified at the design level. In future, we plan to use Role Based Access Control (RBAC) administered databases for finding out sensitive attributes dynamically.
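The weighted-rule idea above can be made concrete with a toy scoring function. The attribute names, weights, and the single dependency rule below are invented for illustration; the paper's actual approach mines such rules from transaction logs rather than hard-coding them.

```python
# Illustrative weights for the three sensitivity classes mentioned above
WEIGHTS = {"high": 3, "medium": 2, "low": 1}
SENSITIVITY = {"salary": "high", "address": "medium", "login_count": "low"}

# Hypothetical mined dependency rule: a write to `salary` must be
# preceded by a read of `audit_log` within the same transaction.
RULES = {"salary": {"audit_log"}}

def malicious_score(transaction):
    """Sum the weights of sensitive attributes written without their
    required prior reads (a simplified weighted-rule check)."""
    score = 0
    reads = set()
    for op, attr in transaction:  # operations in execution order
        if op == "read":
            reads.add(attr)
        elif op == "write":
            required = RULES.get(attr, set())
            if not required <= reads:  # dependency rule violated
                score += WEIGHTS[SENSITIVITY.get(attr, "low")]
    return score

good = [("read", "audit_log"), ("write", "salary")]
bad  = [("write", "salary")]
print(malicious_score(good))  # 0
print(malicious_score(bad))   # 3
```

Weighting by sensitivity means a rule violation on a high-sensitivity attribute contributes more to the transaction's malicious score than the same violation on a low-sensitivity one.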
The proposed database intrusion detection system generates more rules than a non weighted approach. There is a need for a mechanism to find out which of these new rules are useful for detecting malicious transactions; such a mechanism helps in discarding redundant rules. We plan to use a learning mechanism to filter out the extra rules generated by our approach.

2.4 Efficiently Tracking Application Interactions Using Lightweight Virtualization, Y. Huang, A. Stavrou, A.K. Ghosh, and S. Jajodia, Proc. First ACM Workshop on Virtual Machine Security, 2008

The prevailing use of computers in our everyday life gives rise to many new challenges, including system security, data protection, and intrusion analysis. In that direction, recent research has provided two new powerful tools: 1) virtualized execution environments and 2) the monitoring, inspection, and/or logging of system activities via system call monitoring. In this paper, we introduce a novel general-purpose framework that unifies the two mechanisms and models the interactions of applications as transactions, harnessing the resource efficiency and isolation provided by lightweight virtualization environments (VEs). Our approach maintains VE and system integrity, focusing primarily on the interactions among VEs and system resources, including the file system, memory, and network.

Only events that are pertinent to the integrity of an application and its interactions with the operating system are recorded. We attempt to minimize the system overhead both in terms of the system events we have to store and the resources required. Even though we cannot provide application replay, we keep enough information for a wide range of other uses, including system recovery and information tracking among others. As a proof of concept, we have implemented a prototype based on
OpenVZ, a lightweight virtualization tool. Our preliminary results show that, compared to state of the art event recording systems, we can reduce the number of events recorded per application by almost an order of magnitude. This paper presented a framework for tracking application interactions using a lightweight virtualization tool combined with a stackable file system. Our goal was to reduce the number of system calls recorded for each application while maintaining full tracking information of the application interactions. To that end, we introduced an architecture that can scale with the number of applications and record data over a large period of time. Our approach maintains application integrity and offers the ability to utilize the recorded information for a wide range of other uses, including system recovery and information tracking. Although this is only a first step, we believe our approach to be the first that can provide a practical and efficient means of application tracking and a potential mechanism to record all system events as database transactions without prohibitive storage and monitoring overhead.

2.5 Fast and Automated Generation of Attack Signatures: A Basis for Building Self-Protecting Servers, Z. Liang and R. Sekar, Proc. 12th ACM Conf. Computer and Comm. Security (CCS), 2005

The past few years have witnessed an alarming increase in large scale attacks: automated attacks carried out by large numbers of hosts on the Internet. These may be the work of worms, zombies, or large numbers of hackers running attack scripts. Such attacks have the following characteristics: they originate from many different hosts, target a non-negligible fraction of vulnerable hosts on the Internet, and are repetitive. Buffer overflow attacks (or, more generally, attacks that exploit memory errors in C/C++ programs) are the most popular choice in these attacks, as
they provide substantial control over a victim host. For instance, virtually every worm known to date has been based on this class of attacks. Our approach, called COVERS, uses a forensic analysis of a victim server's memory to correlate attacks to inputs received over the network, and automatically develops a signature that characterizes inputs that carry attacks. The signatures tend to capture characteristics of the underlying vulnerability (e.g., a message field being too long) rather than the characteristics of an attack, which makes them effective against variants of attacks. Our approach introduces low overheads (under 10%), does not require access to the source code of the protected server, and has successfully generated signatures for the attacks studied in our experiments, without producing false positives. Since the signatures are generated in tens of milliseconds, they can potentially be distributed quickly over the Internet to filter out (and thus stop) fast spreading worms. Another interesting aspect of our approach is that it can defeat guessing attacks reported against address-space randomization and instruction set randomization techniques. Finally, it increases the capacity of servers to withstand repeated attacks by a factor of 10 or more. In this paper we presented this new approach, COVERS, for protecting servers from repetitive buffer overflow attacks, such as those due to worms and zombies. Our approach combines off the shelf attack detection techniques with a forensic analysis of the victim server's memory to correlate attacks to inputs received from the network, and automatically generates signatures to filter out future attack instances. This improves the ability of victim applications to withstand attacks by one to two orders of magnitude as compared to previous techniques.
COVERS introduces minimal overheads of under 10%. We showed that COVERS signatures capture the characteristics of underlying vulnerabilities, thereby providing protection against attack variants that exploit the same vulnerability. In contrast with previous approaches, which required many attack samples to
produce signatures that are sufficiently general to stop polymorphic attacks, our approach is able to generate signatures from just a single attack instance. This is because of the use of a 4-step approach that allows COVERS to localize the source of an attack to a small part of an input message. We believe that other signature generation techniques, which often employ more powerful algorithms at the signature generation step, can derive significant benefit by incorporating our correlation and input context identification steps. The signatures generated by COVERS can be distributed and deployed at other sites over the Internet. This deployment can take the form of an inline filter at the network layer. As a proof of this concept, we recently implemented a Snort plug-in that filters out network flows that match the signatures generated by COVERS.

2.6 Polygraph: Automatically Generating Signatures for Polymorphic Worms, J. Newsome, B. Karp, and D.X. Song, Proc. IEEE Symp. Security and Privacy, 2005

Content signature based intrusion detection systems (IDSes) are easily evaded by polymorphic worms, which vary their payload on every infection attempt. In this paper, we present Polygraph, a signature generation system that successfully produces signatures that match polymorphic worms. Polygraph generates signatures that consist of multiple disjoint content substrings. In doing so, Polygraph leverages our insight that, for a real world exploit to function properly, multiple invariant substrings must often be present in all variants of a payload. We contribute a definition of the polymorphic signature generation problem; propose classes of signatures suited for matching polymorphic worm payloads; and present algorithms for automatic generation of signatures in these classes. Our evaluation of these algorithms on a range of polymorphic worms demonstrates that Polygraph produces signatures for polymorphic worms that exhibit low false negatives
and false positives. The signature classes for polymorphic worms are of three types: conjunction signatures, token-subsequence signatures, and Bayes signatures. Polygraph must meet several design goals for producing signatures: signature quality, efficient signature generation, efficient signature matching, generation of small signature sets, robustness against noise and multiple worms, and robustness against evasion and subversion. The growing consensus in the security community is that polymorphic worms portend the death of content based worm quarantine, that is, that no sensitive and specific signatures may exist for worms that vary their content significantly on every infection. We have shown, on the contrary, that content-based filtering holds great promise for quarantine of polymorphic worms and, moreover, that Polygraph can derive sensitive and specific signatures for polymorphic worms automatically. We defined the signature generation problem for polymorphic worms; provided examples that illustrate the presence of invariant content in polymorphic worms; proposed the conjunction, token-subsequence, and Bayes signature classes that specifically target matching polymorphic worms' invariant content; and offered and evaluated Polygraph, a suite of algorithms that automatically derive signatures in these classes. Our results indicate that, even in the presence of misclassified noise flows and multiple worms, Polygraph generates high-quality signatures. We conclude that the rumors of the demise of content-based filtering for worm quarantine are decidedly exaggerated.

2.7 CLAMP: Practical Prevention of Large-Scale Data Leaks, B. Parno, J.M. McCune, D. Wendlandt, D.G. Andersen, and A. Perrig, Proc. IEEE Symp. Security and Privacy, 2009

Providing online access to sensitive data makes web servers lucrative targets for attackers. A compromise of any of the web server's scripts,
applications, or operating system can leak the sensitive data of millions of customers. Unfortunately, systems for stopping data leaks require considerable effort from application developers, hindering their adoption. In this work, we investigate how such leaks can be prevented with minimal developer effort. We propose CLAMP, an architecture for preventing data leaks even in the presence of web server compromises or SQL injection attacks. CLAMP protects sensitive data by enforcing strong access control on user data and by isolating code running on behalf of different users. By focusing on minimizing developer effort, we arrive at an architecture that allows developers to use familiar operating systems, servers, and scripting languages, while making relatively few changes to application code: fewer than 50 lines in our applications. CLAMP uses five techniques: identifying a user's sensitive data, data access policies, read restrictions, write restrictions, and authentication classes. We developed the CLAMP architecture to isolate the large and complex web server and scripting environments from a small trusted computing base that provides user authentication and data access control. In this work, we have investigated techniques to secure LAMP applications against a large number of threats while requiring minimal changes to existing applications. CLAMP occupies a point in the web security design space notable for its simplicity for the web developer. Our proof of concept implementation indicates that porting existing LAMP applications to CLAMP requires a minimal number of changes, and the prototype can handle millions of SSL sessions per day.
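The read-restriction technique mentioned above can be illustrated with a toy query filter. The table contents and user names are made up, and a real CLAMP deployment enforces this predicate in a small trusted base outside the web server, so that compromised web code cannot widen the result set:

```python
# Hypothetical orders table; in CLAMP the restriction would be enforced
# by the trusted base at the database boundary, not by application code.
ORDERS = [
    {"user": "alice", "item": "book"},
    {"user": "bob",   "item": "phone"},
]

def restricted_select(rows, authenticated_user):
    """Return only the rows belonging to the authenticated user: code
    running on behalf of one user never sees another user's data."""
    return [r for r in rows if r["user"] == authenticated_user]

print(restricted_select(ORDERS, "alice"))  # [{'user': 'alice', 'item': 'book'}]
```

Even if an attacker injects arbitrary queries through the web tier, the appended user predicate caps the damage at that one user's rows.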
2.8 A Stateful Intrusion Detection System for World-Wide Web Servers, G. Vigna, W.K. Robertson, V. Kher, and R.A. Kemmerer, Proc. Ann. Computer Security Applications Conf. (ACSAC '03), 2003

Web servers are ubiquitous, remotely accessible, and often misconfigured. In addition, custom web based applications may introduce vulnerabilities that are overlooked even by the most security-conscious server administrators. Consequently, web servers are a popular target for hackers. To mitigate the security exposure associated with web servers, intrusion detection systems are deployed to analyze and screen incoming requests. The goal is to perform early detection of malicious activity and possibly prevent more serious damage to the protected site. Even though intrusion detection is critical for the security of web servers, the intrusion detection systems available today only perform very simple analyses and are often vulnerable to simple evasion techniques. This paper presents WebSTAT, an intrusion detection system that analyzes web requests looking for evidence of malicious behavior. The system is novel in several ways. First of all, it provides a sophisticated language to describe multistep attacks in terms of states and transitions. In addition, the modular nature of the system supports the integrated analysis of network traffic sent to the server host, operating system level audit data produced by the server host, and the access logs produced by the web server. By correlating different streams of events, it is possible to achieve more effective detection of web based attacks. This paper presented an approach for stateful intrusion detection, called WebSTAT. The approach is implemented by extending the STAT framework to create a sensor that performs detection of web based attacks. WebSTAT is novel in that it provides a sophisticated language for describing multistep attacks in terms of states and transitions, and these
descriptions are automatically compiled into dynamically linked libraries, which has a number of advantages in terms of flexibility and extensibility. WebSTAT also operates on multiple event streams and is able to correlate both network-level and operating system-level events with entries contained in server logs. This supports more effective detection of web-based attacks and generates a reduced number of false positives.

2.9 An Efficient Black-Box Technique for Defeating Web Application Attacks

R. Sekar, Proc. Network and Distributed System Security Symp. (NDSS), 2009.

Over the past few years, injection vulnerabilities have become the primary target for remote exploits. SQL injection, command injection, and cross-site scripting are some of the popular attacks that exploit these vulnerabilities. Taint tracking has emerged as one of the most promising approaches for defending against these exploits, as it supports accurate detection (and prevention) of popular injection attacks. However, practical deployment of taint-tracking defenses has been hampered by a number of factors, including: (a) high performance overheads (often over 100%), (b) the need for deep instrumentation, which has the potential to impact application robustness and stability, and (c) specificity to the language in which an application is written.

In order to overcome these limitations, we present a new technique in this paper called taint inference. This technique does not require any source code or binary instrumentation of the application to be protected; instead, it operates by intercepting requests and responses from this application. For most web applications, this interception may be achieved using network-layer interposition or library interposition. We
then develop a class of policies called syntax- and taint-aware policies that can accurately detect and/or block most injection attacks. An experimental evaluation shows that our techniques are effective in detecting a broad range of attacks on applications written in multiple languages (including PHP, Java, and C), and impose low performance overheads (below 5%).

In this paper, we presented a new approach for accurate detection of common types of attacks launched on web applications. Our approach relies on a new technique for inferring taint propagation by passively observing inputs and outputs of a protected application. We then presented a new policy framework that enables policies to be specified in a language-neutral manner. As compared to previous works, our approach does not require extensive instrumentation of the applications to be protected. It is robust, and can easily support applications written in many different languages (Java/C/C++/PHP) and on many platforms (Apache/IIS/Tomcat). It is able to detect many types of command injection attacks, as well as cross-site scripting, within a single framework, and using very few policies. It introduces significantly lower overheads (typically less than 5%) as compared to previous approaches.

2.10 Intrusion Detection via Static Analysis

D. Wagner and D. Dean, Proc. Symp. Security and Privacy (SSP '01), May 2001.

One of the primary challenges in intrusion detection is modelling typical application behavior, so that we can recognize attacks by their atypical effects without raising too many false alarms. We show how static analysis may be used to automatically derive a model of application behavior. The result is a host-based intrusion detection system with three advantages: a high degree of automation, protection against a broad class
of attacks based on corrupted code, and the elimination of false alarms. We report on our experience with a prototype implementation of this technique.

Our approach constrains the system call trace of a program's execution to be consistent with the program's source code. We assume that the program was written with benign intent. This approach deals with attacks (such as buffer overflows) that cause a program to behave in a manner inconsistent with its author's intent. These are the most prevalent security problems. We have successfully applied static program analysis to intrusion detection. Our approach is automatic: the programmer or system administrator merely needs to run our tools on the program at hand. All other automatic approaches to intrusion detection to date have been based on statistical inference, leading to many false alarms; in contrast, our approach is provably sound: when the alarm goes off, something has definitely gone wrong. We can immediately detect if a program behaves in an impossible (according to its source) way, thus detecting intrusions that other systems miss. This is a strategic combination of static analysis and dynamic monitoring; the combination yields better results than either method alone and presents a promising new approach to the intrusion detection problem.
CHAPTER 3

REQUIREMENT SPECIFICATION

3.1 HARDWARE SPECIFICATION

• System : Pentium IV 2.4 GHz
• Hard Disk : 160 GB
• Monitor : 15-inch VGA color
• Keyboard : 110 keys enhanced
• RAM : 1 GB

3.2 SOFTWARE SPECIFICATION

• O/S : Windows XP
• Language : Java
• IDE : NetBeans 6.9.1
• Database : MySQL

3.2.1 JAVA

Java is a programming language originally developed by James Gosling at Sun Microsystems (now a subsidiary of Oracle Corporation) and released in 1995 as a core component of Sun Microsystems' Java platform. The language derives much of its syntax from C and C++ but has a simpler object model and fewer low-level facilities. Java applications are typically compiled to bytecode (class files) that can run on any Java Virtual Machine (JVM) regardless of computer architecture. Java is a general-purpose, concurrent, class-based, object-oriented language that is specifically designed to have as few implementation dependencies as possible. It is intended to let application developers "write once, run anywhere." Java is currently one of the most
popular programming languages in use, particularly for client-server web applications.

The original and reference implementation Java compilers, virtual machines, and class libraries were developed by Sun from 1995. As of May 2007, in compliance with the specifications of the Java Community Process, Sun relicensed most of its Java technologies under the GNU General Public License. Others have also developed alternative implementations of these Sun technologies, such as the GNU Compiler for Java and GNU Classpath.

JAVA PLATFORM

One characteristic of Java is portability, which means that computer programs written in the Java language must run similarly on any hardware/operating-system platform. This is achieved by compiling the Java language code to an intermediate representation called Java bytecode, instead of directly to platform-specific machine code. Java bytecode instructions are analogous to machine code, but are intended to be interpreted by a virtual machine (VM) written specifically for the host hardware. End users commonly use a Java Runtime Environment (JRE) installed on their own machine for standalone Java applications, or in a web browser for Java applets. Standardized libraries provide a generic way to access host-specific features such as graphics, threading, and networking.

Registration of various objects, files, and hints into the layer is central to the way NetBeans-based applications handle communication between modules. This page summarizes the list of such extension points defined by modules with an API. Context menu actions are read from the layer folder Loaders/text/x-ant+xml/Actions. The Keymaps folder contains subfolders for individual keymaps (Emacs, JBuilder, and NetBeans). The name of a keymap can be localized; use the "SystemFileSystem.localizingBundle" attribute of your folder for this purpose. An individual keymap folder contains shadows to actions.
The shortcut is mapped to the name of the file. The Emacs shortcut format is used; multi-key shortcuts are separated by space characters ("C-X P" means Ctrl+X followed by P). The "Current Keymap" property of the "Keymaps" folder contains the original (not localized) name of the current keymap.

The Shortcuts folder contains the registration of shortcuts. It is supported for backward-compatibility purposes only; all new shortcuts should be registered in the "Keymaps/NetBeans" folder. Shortcuts installed in the Shortcuts folder will be added to all keymaps, if there is no conflict. This means that if the same shortcut is mapped to different actions in the Shortcuts folder and the current keymap folder (like Keymaps/NetBeans), the Shortcuts folder mapping will be ignored.

There is an XML layer contract for the registration of server plug-ins and instances that implement optional capabilities of a server plug-in. A plug-in with server-specific deployment descriptor files should declare the full list in the XML layer as specified in the document plugin-layer-file.html from the above link.

Source files must be named after the public class they contain, appending the suffix .java; for example, HelloWorldApp.java. A source file must first be compiled into bytecode, using a Java compiler, producing a file named HelloWorldApp.class. Only then can it be executed, or launched. A Java source file may contain only one public class, but it can contain multiple classes with less-than-public access and any number of public inner classes.

A class that is not declared public may be stored in any .java file. The compiler will generate a class file for each class defined in the source file. The name of the class file is the name of the class, with .class appended. For class-file generation, anonymous classes are treated as if their name were the concatenation of the name of their enclosing class and an integer. The keyword public denotes that a method can be called from code in other classes, or that a class may be used by classes outside the class hierarchy.
The class hierarchy isrelated to the name of the directory in which the .java file is located.
The keyword static in front of a method indicates a static method, which is associated only with the class and not with any specific instance of that class. Only static methods can be invoked without a reference to an object. Static methods cannot access any class members that are not also static. The keyword void indicates that the main method does not return any value to the caller.

The method name "main" is not a keyword in the Java language. It is simply the name of the method the Java launcher calls to pass control to the program. Java classes that run in managed environments such as applets and Enterprise JavaBeans do not use or need a main() method. A Java program may contain multiple classes that have main methods, which means that the VM needs to be explicitly told which class to launch from.

The main method must accept an array of String objects. By convention, it is referenced as args, although any other legal identifier name can be used. Since Java 5, the main method can also use variable arguments, in the form of public static void main(String... args), allowing the main method to be invoked with an arbitrary number of String arguments. The effect of this alternate declaration is semantically identical (the args parameter is still an array of String objects), but it allows an alternative syntax for creating and passing the array.

The Java launcher launches Java by loading a given class (specified on the command line or as an attribute in a JAR) and starting its public static void main(String[]) method. Stand-alone programs must declare this method explicitly. The String[] args parameter is an array of String objects containing any arguments passed to the class. The parameters to main are often passed by means of a command line. Printing is part of the Java standard library: the System class defines a public static field called out.
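The conventions above can be illustrated with a minimal program (the class name HelloWorldApp follows the source-file naming rule described earlier; the helper method is added here purely for illustration):

```java
// HelloWorldApp.java -- the public class name must match the file name,
// and compiling it produces HelloWorldApp.class.
public class HelloWorldApp {

    // Helper kept separate so the text printed by main is easy to inspect.
    static String greeting() {
        return "Hello, World!";
    }

    // The launcher looks for exactly this signature; since Java 5 the
    // parameter may also be declared as varargs: (String... args).
    public static void main(String... args) {
        // System.out is the public static field "out" of the System class.
        System.out.println(greeting());
        // args holds any command-line arguments passed to the class.
        System.out.println("Arguments received: " + args.length);
    }
}
```

Because main is declared with varargs, it can be invoked either with a String[] or with individual String arguments; both forms are semantically identical.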
3.2.2 NETBEANS

The NetBeans Platform is a reusable framework for simplifying the development of Java Swing desktop applications. The NetBeans IDE bundle for Java SE contains what is needed to start developing NetBeans plug-ins and NetBeans Platform based applications; no additional SDK is required. Applications can install modules dynamically. Any application can include the Update Center module to allow users of the application to download digitally signed upgrades and new features directly into the running application. Reinstalling an upgrade or a new release does not force users to download the entire application again.

The platform offers reusable services common to desktop applications, allowing developers to focus on the logic specific to their application. Among the features of the platform are:

 User interface management (e.g., menus and toolbars)
 User settings management
 Storage management (saving and loading any kind of data)
 Window management
 Wizard framework (supports step-by-step dialogs)
 NetBeans Visual Library
 Integrated development tools
CHAPTER 4

METHODOLOGY

4.1 CREATE CONTAINER MODEL

All network traffic, from both legitimate users and adversaries, is received intermixed at the same web server. If an attacker compromises the web server, he or she can potentially affect all future sessions (i.e., session hijacking). Assigning each session to a dedicated web server is not a realistic option, as it would deplete the web server's resources. To achieve similar confinement while maintaining low performance and resource overhead, we use lightweight virtualization. In our design, we make use of lightweight process containers, referred to as "containers," as ephemeral, disposable servers for client sessions. It is possible to initialize thousands of containers on a single physical machine, and these virtualized containers can be discarded, reverted, or quickly reinitialized to serve new sessions. A single physical web server runs many containers, each one an exact copy of the original web server.

Our approach dynamically generates new containers and recycles used ones. As a result, a single physical server can run continuously and serve all web requests. However, from a logical perspective, each session is assigned to a dedicated web server and isolated from other sessions. Since we initialize each virtualized container using a read-only clean template, we can guarantee that each session will be served by a clean web server instance at initialization. We choose to separate communications at the session level so that a single user always deals with the same web server. Sessions can represent different users to some extent, and we expect the communication of a single user to go to the same dedicated web server, thereby allowing us to identify suspect behavior by both session and user. If we detect abnormal behavior in a session, we will treat all traffic within this session as tainted.
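The assign-and-recycle lifecycle above can be sketched as follows. This is a minimal illustration only: the class and method names (ContainerPool, Container, fromCleanTemplate) are hypothetical, and a real implementation would manage actual virtualized environments rather than plain objects.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of per-session container assignment (names are illustrative,
// not part of the DoubleGuard source).
public class ContainerPool {

    // One lightweight container per active session, as described above.
    private final Map<String, Container> active = new ConcurrentHashMap<>();

    // Each session is served by its own container, cloned from a
    // read-only clean template so it starts in a known-good state.
    public Container assign(String sessionId) {
        return active.computeIfAbsent(sessionId, Container::fromCleanTemplate);
    }

    // When a session ends (or is flagged as tainted), its container is
    // simply discarded; other sessions are unaffected.
    public void recycle(String sessionId) {
        Container c = active.remove(sessionId);
        if (c != null) {
            c.destroy();
        }
    }

    public int activeCount() {
        return active.size();
    }
}

class Container {
    final String sessionId;

    private Container(String sessionId) {
        this.sessionId = sessionId;
    }

    static Container fromCleanTemplate(String sessionId) {
        return new Container(sessionId);
    }

    void destroy() {
        // tear down the virtualized environment
    }
}
```

The key property captured here is confinement: a compromised container is discarded on recycle, and sessions never share a container instance.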
If an attacker compromises a vanilla web server, other sessions' communications can also be hijacked. In our system, an attacker can only stay within the web server container that he or she is connected to, with no knowledge of the existence of other sessions' communications. We can thus ensure that legitimate sessions will not be compromised directly by an attacker.

4.2 BUILDING NORMALITY MODEL

The normality model depicts how communications are categorized as sessions and how database transactions can be related to a corresponding session. For example, if Client 2 compromises virtual environment VE 2, the corresponding database transaction set T2 will be the only affected section of data within the database. This container-based and session-separated web server architecture not only enhances the security performance but also provides us with isolated information flows that are separated in each container session. It allows us to identify the mapping between web server requests and the subsequent DB queries, and to utilize such a mapping model to detect abnormal behaviors on a session/client level. In a typical three-tiered web server architecture, the web server receives HTTP requests from user clients and then issues SQL queries to the database server to retrieve and update data. These SQL queries are causally dependent on the web request hitting the web server.

Even if we knew the application logic of the web server and were to build a correct model, it would be impossible to use such a model to detect attacks within huge amounts of concurrent real traffic unless we had a mechanism to identify the pairs of HTTP requests and SQL queries that are causally generated by each HTTP request. However, within our container-based web servers, it is a straightforward matter to identify the causal pairs of web requests and resulting SQL queries in a given session.
Moreover, as traffic can easily be separated by session, it becomes possible for us to compare and analyze the requests and queries across different sessions.

Figure 4.2 Web server instances running in containers.

The DoubleGuard system puts sensors at both sides of the servers. At the web server, our sensors are deployed on the host system and cannot be attacked directly, since only the virtualized containers are exposed to attackers. Our sensors will not be attacked at the database server either, as we assume that the attacker cannot completely take control of the database server. In fact, we assume that our sensors cannot be attacked and can always capture correct traffic information at both ends. Once we build the mapping model, it can be used to detect abnormal behaviors. If there exists any request or query that violates the normality model within a session, then the session will be treated as a possible attack.
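This detection rule can be sketched as a simple check against the trained mapping model. The class below is illustrative only (the names NormalityModel and isNormal are hypothetical, and the real sensors operate on captured traffic rather than in-memory sets):

```java
import java.util.Map;
import java.util.Set;

// Minimal sketch of normality-model checking: each request maps to the
// query set it is expected to generate; a session violating any mapping
// is treated as a possible attack.
public class NormalityModel {

    private final Map<String, Set<String>> requestToQueries;

    public NormalityModel(Map<String, Set<String>> requestToQueries) {
        this.requestToQueries = requestToQueries;
    }

    // A session is normal only if every request it contains is known to
    // the model and all of its mapped queries were actually observed.
    public boolean isNormal(Set<String> sessionRequests, Set<String> sessionQueries) {
        for (String r : sessionRequests) {
            Set<String> expected = requestToQueries.get(r);
            if (expected == null || !sessionQueries.containsAll(expected)) {
                return false; // violates the normality model -> tainted session
            }
        }
        return true;
    }
}
```

Because the container isolates each session's traffic, the two sets passed to isNormal can be collected unambiguously per session.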
In the DoubleGuard prototype, we chose to assign each user session to a different container; however, this was a design decision. For instance, we could instead assign a new container to each new client IP address. Our prototype used a 15-minute session timeout due to the resource constraints of our test server.
CHAPTER 5

SYSTEM DESIGN

5.1 STATIC MODEL

For a static website, we can build an accurate model of the mapping relationships between web requests and database queries, since the links are static and clicking on the same link always returns the same information. However, some websites (e.g., blogs, forums) allow regular users with non-administrative privileges to update the contents of the served data. This creates tremendous challenges for IDS training because the HTTP requests can contain variables in the passed parameters. For example, instead of a one-to-one mapping, one web request to the web server usually invokes a number of SQL queries that can vary depending on the type of the request and the state of the system. Some requests will only retrieve data from the web server instead of invoking database queries, meaning that no queries will be generated by these web requests. In other cases, one request will invoke a number of database queries.

All communications from the clients to the database are separated by session, and each session is assigned a unique session ID. DoubleGuard normalizes the variable values in both HTTP requests and database queries, preserving the structures of the requests and queries. To achieve this, DoubleGuard substitutes the actual values of the variables with symbolic values.
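The substitution step can be sketched with simple pattern replacement. The regular expressions below are simplified assumptions for illustration, not the DoubleGuard implementation:

```java
import java.util.regex.Pattern;

// Illustrative normalizer: replaces concrete parameter values with a
// symbolic placeholder ("?") so that structurally identical requests
// and queries compare equal.
public final class Normalizer {

    private static final Pattern SQL_STRING = Pattern.compile("'[^']*'");
    private static final Pattern NUMBER = Pattern.compile("\\b\\d+\\b");
    private static final Pattern PARAM_VALUE = Pattern.compile("=([^&\\s]+)");

    // Normalize a SQL query: 'alice' and 'bob' both become '?'.
    public static String sql(String query) {
        String s = SQL_STRING.matcher(query).replaceAll("'?'");
        return NUMBER.matcher(s).replaceAll("?");
    }

    // Normalize an HTTP request line: keep the path, drop the values.
    public static String http(String requestLine) {
        return PARAM_VALUE.matcher(requestLine).replaceAll("=?");
    }
}
```

After normalization, two requests for the same link with different parameter values map to the same symbolic form, which is what allows the mapping model to generalize across sessions.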
5.2 MAPPING RELATIONS

In DoubleGuard, we classify the four possible mapping patterns. Since the request is at the origin of the data flow, we treat each request as the mapping source. In other words, the mappings in the model are always in the form of one request to a query set: rm → Qn.

5.2.1 DETERMINISTIC MAPPING

This is the most common and perfectly matched pattern. That is to say that web request rm appears in all traffic with the SQL query set Qn. For any session in the testing phase with the request rm, the absence of a query set Qn matching the request indicates a possible intrusion. On the other hand, if Qn is present in the session traffic without the corresponding rm, this may also be the sign of an intrusion. In static websites, this type of mapping comprises the majority of cases, since the same results should be returned each time a user visits the same link.

Figure 5.2.1 Deterministic mapping using the session ID of the container (VE).
5.2.2 EMPTY QUERY SET

In special cases, the SQL query set may be the empty set. This implies that the web request neither causes nor generates any database queries. For example, when a web request for retrieving an image (GIF) file from the same web server is made, a mapping relationship does not exist because only the web requests are observed. This type of mapping is written rm → ∅. During the testing phase, we keep these web requests together in the set EQS.

5.2.3 NO MATCHED REQUEST

In some cases, the web server may periodically submit queries to the database server in order to conduct scheduled tasks, such as cron jobs for archiving or backup. These queries are not driven by any web request, similar to the reverse case of the Empty Query Set mapping pattern. They cannot match up with any web requests, and we keep these unmatched queries in a set NMR. During the testing phase, any query within set NMR is considered legitimate. The size of NMR depends on the web server logic, but it is typically small.

5.2.4 NONDETERMINISTIC MAPPING

The same web request may result in different SQL query sets based on input parameters or the status of the webpage at the time the web request is received. In fact, these different SQL query sets do not appear randomly, and there exists a candidate pool of query sets. Each time that the same type of web request arrives, it always matches up with one (and only one) of the query sets in the pool. Therefore, it is difficult to identify traffic that matches this pattern. This happens only within dynamic websites, such as blogs or forum sites.
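The candidate-pool idea behind the nondeterministic pattern can be sketched as follows (the class and method names are hypothetical, and for simplicity this sketch only checks that at least one candidate set appears, rather than enforcing exactly one):

```java
import java.util.List;
import java.util.Set;

// Sketch of the nondeterministic pattern: one request maps to a pool of
// candidate query sets, and the observed traffic should contain one of
// them. (Illustrative only, not DoubleGuard code.)
public class NondeterministicMapping {

    private final List<Set<String>> candidatePool;

    public NondeterministicMapping(List<Set<String>> candidatePool) {
        this.candidatePool = candidatePool;
    }

    // A session's queries are consistent with this mapping if they cover
    // at least one candidate query set from the pool.
    public boolean matches(Set<String> sessionQueries) {
        return candidatePool.stream().anyMatch(sessionQueries::containsAll);
    }
}
```

A session whose queries cover none of the candidate sets would be flagged for the corresponding request.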
5.3 STATIC MODEL BUILDING ALGORITHM

In the case of a static website, the nondeterministic mapping does not exist, as there are no available input variables or states for static content. We can easily classify the traffic collected by the sensors into three patterns in order to build the mapping model. As the traffic is already separated by session, we begin by iterating over all of the sessions from 1 to N.

Require: training data set, threshold t
Ensure: the mapping model for a static website
1: for each session-separated traffic Ti do
2:   Get the distinct HTTP requests r and DB queries q in this session
3:   for each distinct r do
4:     if r is a request for a static file then
5:       Add r to set EQS
6:     else
7:       if r is not in set REQ then
8:         Add r to REQ
9:       Append session ID i to the set ARr, with r as the key
10:  for each distinct q do
11:    if q is not in set SQL then
12:      Add q to SQL
13:    Append session ID i to the set AQq, with q as the key
14: for each distinct HTTP request r in REQ do
15:   for each distinct DB query q in SQL do
16:     Compare the set ARr with the set AQq
17:     if ARr = AQq and Cardinality(ARr) ≥ t then
18:       Found a deterministic mapping from r to q
19:       Add q to the mapping model set MSr of r
20:       Mark q in set SQL
21:     else
22:       Need more training sessions
23:       return False
24: for each DB query q in SQL do
25:   if q is not marked then
26:     Add q to set NMR
27: for each HTTP request r in REQ do
28:   if r has no deterministic mapping model then
29:     Add r to set EQS
30: return True

Some web requests that could appear separately are still present as a unit. During the training phase, we treat them as a single instance of web requests bundled together, unless we observe a case where either of them appears separately. Our next step is to decide the other two mapping patterns by assembling a white list for static file requests, including JPG, GIF, CSS, etc. HTTP requests for static files are placed in the EQS set. The remaining requests are placed in REQ. If we cannot find any matched queries for them, they will also be placed in the EQS set. In addition, all remaining queries in SQL will be considered No Matched Request cases and placed into NMR.

This algorithm takes a training data set as input and builds the mapping model for static websites. For each unique HTTP request and database query, the algorithm assigns a hash table entry; the key of the entry is the request or query itself, and the value of the hash entry is ARr for the request or AQq for the query, respectively.
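The training algorithm above can be turned into a runnable sketch. This is an illustrative translation under simplifying assumptions (a session is reduced to its sets of normalized requests and queries, and the class names StaticModelBuilder and Session are hypothetical):

```java
import java.util.*;

// Sketch of the static-model building algorithm: deterministic mappings
// are pairs (r, q) that occur in exactly the same sessions, with enough
// supporting sessions (threshold t) to trust the evidence.
public class StaticModelBuilder {

    public final Set<String> eqs = new HashSet<>();                 // Empty Query Set requests
    public final Set<String> nmr = new HashSet<>();                 // No Matched Request queries
    public final Map<String, Set<String>> model = new HashMap<>();  // r -> mapped query set MSr

    public boolean train(List<Session> sessions, int threshold, Set<String> staticFiles) {
        Map<String, Set<Integer>> ar = new HashMap<>(); // ARr: request -> session IDs
        Map<String, Set<Integer>> aq = new HashMap<>(); // AQq: query   -> session IDs

        for (int i = 0; i < sessions.size(); i++) {
            for (String r : sessions.get(i).requests) {
                if (staticFiles.contains(r)) { eqs.add(r); continue; }
                ar.computeIfAbsent(r, k -> new HashSet<>()).add(i);
            }
            for (String q : sessions.get(i).queries) {
                aq.computeIfAbsent(q, k -> new HashSet<>()).add(i);
            }
        }

        Set<String> marked = new HashSet<>();
        for (Map.Entry<String, Set<Integer>> req : ar.entrySet()) {
            for (Map.Entry<String, Set<Integer>> qry : aq.entrySet()) {
                if (req.getValue().equals(qry.getValue())) {
                    if (req.getValue().size() < threshold) {
                        return false; // need more training sessions
                    }
                    model.computeIfAbsent(req.getKey(), k -> new HashSet<>()).add(qry.getKey());
                    marked.add(qry.getKey()); // mark q in SQL
                }
            }
        }
        // Unmarked queries are No Matched Request cases.
        for (String q : aq.keySet()) if (!marked.contains(q)) nmr.add(q);
        // Requests without a deterministic mapping fall into EQS.
        for (String r : ar.keySet()) if (!model.containsKey(r)) eqs.add(r);
        return true;
    }

    public static class Session {
        final Set<String> requests;
        final Set<String> queries;

        public Session(Set<String> requests, Set<String> queries) {
            this.requests = requests;
            this.queries = queries;
        }
    }
}
```

The session-ID sets ARr and AQq play the role of the hash table values described above; equality of the two sets is exactly the deterministic-mapping test of line 17.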
The algorithm generates the mapping model by considering all three mapping patterns that can occur in static websites. It uses the session ID provided by the container (VE) in order to build the deterministic mapping between the HTTP requests and the database queries.

5.4 TESTING FOR STATIC WEBSITES

Once the normality model is generated, it can be employed for the detection of abnormal behavior. During the testing phase, each session is compared to the normality model. We begin with each distinct web request in the session and, since each request will have only one mapping rule in the model, we simply compare the request with that rule. The testing phase proceeds as follows. If the rule for the request is a Deterministic Mapping, we test whether Q is a subset of the query set of the session. If so, this request is valid, and we mark the queries in Q; otherwise, the request is abnormal. If the rule is Empty Query Set, then the request is not considered to be abnormal. For the remaining unmarked database queries, we check to see if they are in the set NMR; if so, we mark the query as such. Any untested web request or unmarked database query is considered to be abnormal.

5.5 MODELING OF DYNAMIC PATTERNS

Dynamic web pages allow users to generate the same web query with different parameters. Additionally, dynamic pages often use POST rather than GET methods to commit user inputs. Based on the web server's application logic, different inputs will cause different database queries. For example, to post a comment on a blog article, the web server would first query the database to see the existing comments. If the user's comment differs from previous comments, then the web server would automatically
generate a set of new queries to insert the new post into the back-end database. Otherwise, the web server would reject the input in order to prevent duplicate comments from being posted (i.e., no corresponding SQL query would be issued). In such cases, even supplying the same parameter values could cause a different set of queries, depending on the previous state of the website. Likewise, this nondeterministic mapping case (i.e., one-to-many mapping) occurs even after we normalize all parameter values to extract the structures of the web requests and queries. Since the mapping can appear differently in different cases, it becomes difficult to identify all of the one-to-many mapping patterns for each web request. Moreover, when different operations occasionally overlap in their possible query sets, it becomes even harder for us to extract the one-to-many mapping for each operation by comparing matched requests and queries across the sessions. Since the algorithm for extracting mapping patterns in static pages no longer works for dynamic pages, we created another training method to build the model.

First, we tried to categorize all of the potential single (atomic) operations on the web pages. For instance, the common possible operations for users on a blog website may include reading an article, posting a new article, leaving a comment, visiting the next page, etc. All of the operations that appear within one session are permutations of these operations. If we could build a mapping model for each of these basic operations, then we could compare web requests to determine the basic operations of the session and obtain the most likely set of queries mapped from these operations. If these single-operation models could not cover all of the requests and queries in a session, then this would indicate a possible intrusion.
CHAPTER 6

RESULTS AND DISCUSSION

6.1 SCREENSHOTS

Figure 6.1 Home page

Figure 6.2 Login page
Figure 6.3 Allocating containers

Figure 6.4 User login status
Figure 6.5 User registration information

Figure 6.6 Status of the model
CHAPTER 7

CONCLUSION AND FUTURE WORK

7.1 CONCLUSION

We presented DoubleGuard, an intrusion detection system that builds models of normal behavior for multi-tiered web applications from both front-end web (HTTP) requests and back-end database (SQL) queries. Unlike previous approaches that correlated or summarized alerts generated by independent IDSs, DoubleGuard forms a container-based IDS with multiple input streams to produce alerts. We have shown that such correlation of input streams provides a better characterization of the system for anomaly detection, because the intrusion sensor has a more precise normality model that detects a wider range of threats.

The system achieves this by isolating the flow of information from each web server session with lightweight virtualization. Furthermore, we quantified the detection accuracy of our approach when we attempted to model static and dynamic web requests with the back-end file system and database queries. For static websites, we built a well-correlated model, which our experiments proved to be effective at detecting different types of attacks.
7.2 FUTURE WORK

In the future, we propose to extend our system by developing efficient methods of detecting SQL query poisoning attacks and buffer overflow attacks. In DoubleGuard, all of the user input values are normalized so as to build a mapping model based on the structures of HTTP requests and DB queries. Once the malicious user inputs are normalized, DoubleGuard cannot detect attacks hidden in the values. These attacks can occur even without the databases. DoubleGuard offers a complementary approach to those research approaches that detect web attacks based on the characterization of input values. Our assumption is that an attacker can obtain "full control" of the web server thread that he or she connects to. That is, the attacker can only take over the web server instance running in its isolated container. Our architecture ensures that every client is defined by the IP address and container port pair, which is unique for each session. DoubleGuard is not designed to mitigate DDoS attacks; these attacks can also occur in the server architecture without the back-end database. In the future, we will also address vulnerabilities due to improper input processing and distributed DoS attacks.
CHAPTER 8

REFERENCES

V. Felmetsger, L. Cavedon, C. Kruegel, and G. Vigna, "Toward Automated Detection of Logic Vulnerabilities in Web Applications," Proc. USENIX Security Symp., 2010.

G. Vigna, W.K. Robertson, V. Kher, and R.A. Kemmerer, "A Stateful Intrusion Detection System for World-Wide Web Servers," Proc. Ann. Computer Security Applications Conf. (ACSAC '03), Oct. 2003.

C. Kruegel and G. Vigna, "Anomaly Detection of Web-Based Attacks," Proc. 10th ACM Conf. Computer and Comm. Security (CCS '03), Oct. 2003.

Z. Liang and R. Sekar, "Fast and Automated Generation of Attack Signatures: A Basis for Building Self-Protecting Servers," Proc. 12th ACM Conf. Computer and Comm. Security (CCS '05), 2005.

Y. Hu and B. Panda, "A Data Mining Approach for Database Intrusion Detection," Proc. ACM Symp. Applied Computing (SAC), H. Haddad, A. Omicini, R.L. Wainwright, and L.M. Liebrock, eds., 2004.

Y. Huang, A. Stavrou, A.K. Ghosh, and S. Jajodia, "Efficiently Tracking Application Interactions Using Lightweight Virtualization," Proc. First ACM Workshop Virtual Machine Security, 2008.

H.-A. Kim and B. Karp, "Autograph: Toward Automated, Distributed Worm Signature Detection," Proc. USENIX Security Symp., 2004.

R. Sekar, "An Efficient Black-Box Technique for Defeating Web Application Attacks," Proc. Network and Distributed System Security Symp. (NDSS), 2009.
A. Srivastava, S. Sural, and A.K. Majumdar, "Database Intrusion Detection Using Weighted Sequence Mining," J. Computers, vol. 1, no. 4, pp. 8-17, 2006.

D. Wagner and D. Dean, "Intrusion Detection via Static Analysis," Proc. Symp. Security and Privacy (SSP '01), May 2001.