Packet Marking Report


  • 1. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 1 CHAPTER 1 INTRODUCTION Cloud computing is a new computing model in which resources are pooled to provide software, platform and infrastructure to as many users as possible by sharing the available resources. In this model “customers” plug into the “cloud” to access IT resources which are priced and provided “on-demand”. The NIST (US National Institute of Standards and Technology) definition of cloud computing is “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” 1.1 Hallmarks of the Cloud On-demand self-service, broad network access, resource pooling and rapid elasticity are some of the essential characteristics of the cloud model. The cloud can be deployed for private, public, community or hybrid use. A private cloud is used by one organization and its customers, whereas a public cloud is made available for public use. The community model is for a community of users having the same mission or goal. The hybrid model of cloud shares the properties of any of the above models. Shabeeb et al (2012) discussed cloud services. The cloud delivers its services in the form of software, platform and infrastructure. Costly applications like ERP and CRM are offloaded onto the cloud by the provider and run at the provider's cost. Platform includes the languages, libraries etc., and the database, operating system and network bandwidth come under infrastructure. 1.2 Security Issues Trustworthiness of the cloud service provider is the key concern. Organizations deliberately offload their sensitive as well as insensitive data to the cloud to get its services. The cloud works on a pay-per-use basis. If numerous requests are sent to a server on the cloud by a DoS attacker, the owner of that particular cloud has more requests to process. Moreover, other users will be denied the service they request, as the server on the cloud is expending all its resources
  • 2. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 2 serving the malicious DoS requests. The situation becomes more drastic if the attacker compromises some more hosts for sending the flood of requests, which is called DDoS. Chonka et al (2011) discussed the variant forms of DDoS attack tools like Agobot, Mstream and Trinoo, which are still used by attackers today. However, most attackers are more inclined to use less complicated web-based attack tools like XML-based Denial of Service (X-DoS) and HTTP-based Denial of Service (H-DoS) attacks due to their simple implementation and the lack of any real defenses against them. 1.3 Denial-of-service attack In computing, a denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a machine or network resource unavailable to its intended users. Although the means to carry out, motives for, and targets of a DoS attack may vary, it generally consists of efforts to temporarily or indefinitely interrupt or suspend services of a host connected to the Internet.
  • 3. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 3 Methods of attack A denial-of-service attack is characterized by an explicit attempt by attackers to prevent legitimate users of a service from using that service. There are two general forms of DoS attacks: those that crash services and those that flood services. A DoS attack can be perpetrated in a number of ways. The five basic types of attack are: 1. Consumption of computational resources, such as bandwidth, memory, disk space, or processor time. 2. Disruption of configuration information, such as routing information. 3. Disruption of state information, such as unsolicited resetting of TCP sessions. 4. Disruption of physical network components. 5. Obstructing the communication media between the intended users and the victim so that they can no longer communicate adequately.
  • 4. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 4 CHAPTER 2 LITERATURE SURVEY Literature survey is the most important step in the software development process. Before developing the tool it is necessary to determine the time factor, economy and company strength. Once these things are satisfied, the next steps are to determine which operating system and language can be used for developing the tool. Once the programmers start building the tool they need a lot of external support. This support can be obtained from senior programmers, from books or from websites. Before building the system the above considerations are taken into account for developing the proposed system. A DoS attack is designed to prevent legitimate access to a resource. In the context of the Internet, an attacker can “flood” a victim's connection with random packets to prevent legitimate packets from getting through. These Internet denial-of-service attacks have become more prevalent recently due to their near untraceability and relative ease of execution. DoS attacks are so difficult to trace because the only hint a victim has is the source address of a given packet, which can be easily forged. Dean et al (2001) presented a solution to the problem of determining the path a packet traversed over the Internet (called the traceback problem). Their scheme reframes the traceback problem as polynomial reconstruction and uses algebraic techniques from coding theory and learning theory to provide robust methods of transmission and reconstruction. Savage et al (2001) presented an approach to the traceback problem that addresses the needs of both victims and network operators. They explored the possibility of tracing flooding attacks by “marking” packets, either probabilistically or deterministically, with the addresses of the routers they traverse. The victim uses the information in the marked packets to trace an attack back to its source. A router “marks” one or more packets by augmenting them with additional information about the path they are travelling. The victim attempts to reconstruct the attack path using only the information in the marked packets. It allows a victim to identify the network path(s) traversed by attack traffic without requiring interactive operational support from Internet Service Providers (ISPs).
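The probabilistic variant of this marking step can be sketched roughly as follows. This is a minimal illustration only, assuming a simplified packet model; the class names, fields and the marking probability are assumptions for the example, not details taken from the report or from Savage et al.

```java
import java.util.concurrent.ThreadLocalRandom;

// Simplified model of a packet that carries a single router mark plus a hop count.
class Packet {
    String markAddress;   // address of the router that last marked this packet
    int distance;         // hops travelled since the mark was written
}

class MarkingRouter {
    private final String address;
    private final double markingProbability;   // e.g. 0.04, an illustrative value

    MarkingRouter(String address, double markingProbability) {
        this.address = address;
        this.markingProbability = markingProbability;
    }

    // Called for every forwarded packet: with probability p overwrite the mark and
    // reset the distance; otherwise just age the existing mark by one hop.
    void mark(Packet p) {
        if (ThreadLocalRandom.current().nextDouble() < markingProbability) {
            p.markAddress = address;
            p.distance = 0;
        } else if (p.markAddress != null) {
            p.distance++;
        }
    }
}
```

Given enough attack packets, the victim can sort the collected (markAddress, distance) pairs by distance and rebuild an approximate attack path without any help from ISPs.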
  • 5. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 5 Belenky et al (2003) proposed Deterministic Packet Marking (DPM), a new approach to IP traceback. The 16-bit Packet ID field and the 1-bit Reserved Flag (RF) in the IP header are used to mark packets. The packet is marked by the interface closest to the source of the packet. A general principle in handling DDoS attacks is to rely only on the information transferred in the DPM mark. The DPM mark can be used to transfer not only the bits of the ingress address but also some other information. This additional information should enable the destination to determine which ingress address segments belong to which ingress address. At the victim, a table matching the source addresses to the ingress addresses is maintained. The reconstruction procedure utilizes a data structure called the Reconstruction Table (RecTbl), in which the destination first puts the address segments. After the segments corresponding to the same ingress address have arrived at the destination, the ingress address for a given source address becomes available to the victim. Xiang et al (2009) presented Flexible Deterministic Packet Marking (FDPM), which provides a defense system with the ability to find out the real sources of attacking packets that traverse the network. The FDPM scheme utilizes various bits (called marks) in the IP header. The mark has a flexible length depending on the network protocols used, which is called the flexible mark length strategy. The flexibility of FDPM is twofold. First, it can use a flexible mark length according to the network protocols that are used in the network. This characteristic of FDPM gives it much adaptability to current heterogeneous networks. Second, FDPM can adaptively adjust its marking process to obtain a flexible marking rate. This characteristic prevents a traceback router from overload problems. It has been used not only to trace DDoS attack packets but also to enhance the filtering of attack traffic. Choi and Dai (2004) presented a marking scheme (with marking and traceback algorithms) in which a router marks a packet with the link that the packet came through. Links of a router are represented by Huffman codes according to the traffic distribution among the links. When a router marks a packet with address information, the information is not of the router that is marking but of the router that sent the packet to the current router, and it uses a special table called the link table, which shows all the links between the router and its adjacent routers.
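Returning to the DPM scheme described above, the basic mechanics can be sketched as follows: the ingress interface splits its 32-bit address into two 16-bit segments, writes one segment into the 16-bit ID field and uses the 1-bit flag to say which half was sent, and the victim later reassembles the address. The class and field names below are invented for illustration, and the sketch omits DPM details such as digests and per-flow bookkeeping.

```java
// Hedged sketch of Deterministic Packet Marking: half of the ingress interface's
// 32-bit IP address goes into the reused 16-bit ID field on every outgoing packet.
class IpHeaderMark {
    int idField;        // 16-bit Identification field reused as the mark
    boolean rfFlag;     // reserved flag: false = lower half, true = upper half
}

class DpmIngressRouter {
    private final int ingressAddress;   // 32-bit IPv4 address of the ingress interface
    private boolean sendUpperHalf;      // alternate halves across successive packets

    DpmIngressRouter(int ingressAddress) {
        this.ingressAddress = ingressAddress;
    }

    void mark(IpHeaderMark header) {
        if (sendUpperHalf) {
            header.idField = (ingressAddress >>> 16) & 0xFFFF;
            header.rfFlag = true;
        } else {
            header.idField = ingressAddress & 0xFFFF;
            header.rfFlag = false;
        }
        sendUpperHalf = !sendUpperHalf;   // deterministic: every packet carries a segment
    }
}
```

Once both halves for a flow have been seen, the victim can store (upperHalf << 16) | lowerHalf against the packet's source address in the reconstruction table (RecTbl).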
  • 7. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 7 The XML-Based Detector (X-Detector) is distributed throughout the grid in order to properly defend it. The X-Detector is a trained back-propagation neural network used to detect and filter out XML-Based Denial of Service (X-DoS) messages. X-Detector is located before the web server in order to provide the greatest resource efficiency and protection. Chonka et al (2011) offered a solution for DDoS attacks by the use of a service oriented traceback architecture in the area of cloud computing. Cloud TraceBack (CTB) is used to find the source of the attacks, and they introduced the use of a back propagation neural network, called Cloud Protector (XDetector), which was trained to detect and filter attack traffic. In an attack scenario, the attack client will request a web service from CTB, which in turn will pass the request to the web server. The attack client will then formulate a SOAP request message based on the service description formulated by WSDL. Upon receipt of the SOAP request message, CTB will place a Cloud TraceBack mark (CTBM) within the header. Once the CTBM has been placed, the SOAP message will be sent to the web server. Upon discovery of an attack, the victim will ask for reconstruction to extract the mark and inform them of the origin of the message. The reconstruction will also begin to filter out the attack traffic. This helps to detect and filter most of the attack messages and identify the source of the attack within a short period of time. In 2012 (IEEE Transactions): Survey on DDoS Attacks and its Detection and Defence Approaches in Cloud Environment. In a cloud environment, cloud servers providing requested cloud services may sometimes crash after receiving a huge number of requests. This situation is called a denial-of-service attack. Cloud computing is one of today's most exciting technologies due to its ability to reduce costs associated with computing while increasing flexibility and scalability for computer processes. Cloud computing is changing the IT delivery model to provide on-demand self-service access to a shared pool of computing resources (physical and virtual) via broad network access to offer reduced costs, capacity utilization, higher efficiencies and mobility. Recently, Distributed Denial
  • 8. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 8 of Service (DDoS) attacks on clouds have become one of the most serious threats to this buzzing technology. Distributed Denial of Service (DDoS) attacks continue to plague the Internet. DDoS attacks are a significant problem because they are very hard to detect, there is no comprehensive solution, and they can shut an organization off from the Internet. The primary goal of an attack is to deny the victim's access to a particular resource. In this paper, the authors review current DoS and DDoS detection and defence mechanisms. In 2011 (IEEE Transactions): An Innovative Approach to Provide Security in Cloud by Prevention of XML and HTTP DDoS Attacks. The main problem faced in a cloud environment is the distributed denial of service (DDoS) attack. During such a DDoS attack all consumers get affected at the same time and are not able to access the resources on the cloud. All client users send their requests in the form of XML messages and they generally make use of the HTTP protocol. So the threat coming from distributed REST attacks is greater, and such attacks are easy for the attacker to implement but generally difficult for the administrator to detect and resolve. To resolve these attacks, the authors introduce a specific approach to providing security based on various filters. They make use of five different filters which are used to detect and resolve XML and HTTP DDoS attacks. This allows the security expert to detect the attack before it occurs and block or remove the suspicious client. In 2010 (IEEE Transactions): Implementing Pushback: Router-Based Defense Against DDoS Attacks. Pushback is a mechanism for defending against distributed denial-of-service (DDoS) attacks. DDoS attacks are treated as a congestion-control problem, but because most such congestion is caused by malicious hosts not obeying traditional end-to-end congestion control, the problem must be handled by the routers. Functionality is added to each router to detect and preferentially drop packets that probably belong to an attack. Upstream routers are also notified to drop such packets (hence the term Pushback) in order that the router's resources be used to route legitimate traffic. The paper presents an architecture
  • 9. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 9 for Pushback, its implementation under FreeBSD, and suggestions for how such a system can be implemented in core routers. In 2009 (IEEE Transactions): Impact of DDoS Attacks on Cloud Environment. Cloud computing is an emerging area nowadays. Researchers are working on all aspects of the cloud, viz. cloud network architecture, scheduling policies, virtualization, hypervisor performance scalability, I/O efficiency, and data integrity and data confidentiality of data-intensive applications. The dynamic nature of the cloud presents researchers with a new area of research, namely cloud forensics. Cloud forensics is the branch of forensics that applies computer science knowledge to prove digital artifacts. DDoS is a widely used attack in the cloud environment. To perform forensics on DDoS, if it is identified, possible detection and prevention mechanisms would aid in cloud forensics solutions and evidence collection and segregation. The paper presents different types of DDoS attacks at the different layers of the OSI model with increasing complexity in performing the attack, and focuses more on prevention and detection of DDoS at different layers of the OSI model and the effect of DDoS in cloud computing. In 2008 (IEEE Transactions): Cloud Security Defence to Protect Cloud Computing against HTTP-DoS and XML-DoS Attacks. Cloud computing is still in its infancy in regards to its software as a service (SaaS), web services, utility computing and platform as a service (PaaS). All of these have remained individualized systems that you still need to plug into, even though these systems are heading towards full integration. One of the most serious threats to cloud computing itself comes from HTTP Denial of Service or XML-Based Denial of Service attacks. These types of attacks are simple and easy to implement by the attacker, but to security experts they are twice as difficult to stop. The authors recreate some of the current attacks that attackers may initiate as HTTP and XML. They also offer a solution to trace back through their Cloud TraceBack (CTB) to find the source of these attacks, and introduce the use of a back propagation neural network, called Cloud Protector, which was trained to detect and filter such attack traffic. Their results show that they were able to detect and filter most of the attack messages and were able to identify the source of the attack within a short period of time.
  • 10. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 10 CHAPTER 3 EXISTING SYSTEM Cloud computing suffers from a major security threat in the form of HTTP and XML Denial of Service (DoS) attacks. An HX-DoS attack is a combination of HTTP and XML messages that are intentionally sent to flood and destroy the communication channel of the cloud service provider. To address the problem of HX-DoS attacks against cloud web services there is a need to distinguish between legitimate and illegitimate messages. Disadvantage: the communication channel of the cloud service provider can be flooded and destroyed by such intentionally sent messages.
  • 11. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 11 CHAPTER 4 PROPOSED SYSTEM In an HX-DoS attack scenario, an attacker has compromised a client who has an account to access the cloud service provider's server. This way they have a direct connection through the system. The attacker then installs the HX-DoS attack program at the user end and initiates it. To distinguish between legitimate and illegitimate messages, the first method adopts an Intrusion Detection System (IDS) using a decision tree classification system called CLASSIE. CLASSIE is located one hop away from the host. CLASSIE's rule set has been built up over time to identify known H-DoS and X-DoS messages. With known HX-DoS attacks like XML injection or XML payload overload, CLASSIE is able to be trained and tested to identify these known attributes. Upon detection of an HX-DoS message, CLASSIE drops the packet which matches the rule set. After being examined by CLASSIE, the packets are subjected to marking. Advantages of Proposed System 1. The packet marking overhead and the false positive rate of DoS attacks are greatly reduced.
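A minimal sketch of the rule-set check that CLASSIE applies before packets are passed on for marking; the request fields, thresholds and class names are illustrative assumptions, not the actual CLASSIE decision tree or its trained rules.

```java
import java.util.List;
import java.util.function.Predicate;

// Simplified view of an incoming request, examined one hop away from the host.
record Request(String sourceAddress, int xmlElementCount, int payloadBytes, int headerCount) {}

class Classie {
    // Each rule flags one known HX-DoS attribute (e.g. XML injection, XML payload overload).
    private final List<Predicate<Request>> ruleSet = List.of(
            r -> r.xmlElementCount() > 10_000,   // deeply nested / injected XML
            r -> r.payloadBytes() > 1_000_000,   // XML payload overload
            r -> r.headerCount() > 100           // suspiciously many SOAP/HTTP headers
    );

    // Returns true if the packet matches the rule set and should be dropped;
    // otherwise it is forwarded to the modulo packet marking stage.
    boolean matchesRuleSet(Request request) {
        return ruleSet.stream().anyMatch(rule -> rule.test(request));
    }
}
```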
  • 12. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 12 CHAPTER 5 DESIGN CONSIDERATIONS 5.1 Assumptions We consider two legitimate users and an attacker. Users send data to the server through CLASSIE, modulo packet marking, and reconstruct and drop (RAD). The message will be identified, and if it is from an attacker then that data will be dropped before reaching the server. Modulo packet marking consists of two routers: 1. Edge router 2. Core router Assumptions about the Victim On the victim side, we assume that by the time the victim starts collecting marked packets, all routers in the network have already invoked the packet marking procedure. In addition, we assume that the victim does not have any knowledge about the real network or the attack graph. However, the victim knows the marking probability that the routers are using. We assume that every router is equipped with the ability to mark packets as in the original PPM algorithm. We also assume that each router shares the same marking probability. Specifically, a router can either be a transit router or a leaf router. A transit router is a router that forwards traffic from upstream routers to its downstream routers (or the victim), whereas a leaf router is a router whose upstream router is connected to client computers (not routers) and forwards the clients' traffic to its downstream routers (or the victim). Certainly, the clients are a mix of honest and malicious parties. Furthermore, we assume that every router has only one outgoing route toward the victim. For ease of presentation, we name the “outgoing route toward the victim” the victim route. The assumption can be justified by the fact that modern routing algorithms favor the construction of routing trees.
  • 13. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 13 5.2 Goals This project addresses distributed denial-of-service (DDoS) attacks, in which an attacker intends to damage the network by exhausting its resources. The main goal of this project is to filter the legitimate messages from the incoming traffic and pass those legitimate messages to the server, so that legitimate users can get the resources of the cloud server.
  • 14. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 14 CHAPTER 6 SYSTEM DESIGN System design is the process of defining the architecture, components, modules, interfaces and data for a system to satisfy specified requirements. One could see it as the application of systems theory to product development. There is some overlap with the disciplines of systems analysis, systems architecture and systems engineering. If the broader topic of product development "blends the perspective of marketing, design, and manufacturing into a single approach to product development," then design is the act of taking the marketing information and creating the design of the product to be manufactured. Systems design is therefore the process of defining and developing systems to satisfy specified requirements of the user. 6.1.1 Use Case Diagram Fig. 6.1.1: Use-case diagram (the Client accesses the Cloud Server over a remote connection)
  • 15. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 15 6.1.2 Data Flow Diagram A data flow diagram is a graphical representation of the "flow" of data through an information system, modeling its process aspects. Often it is a preliminary step used to create an overview of the system which can later be elaborated. DFDs can also be used for the visualization of data processing (structured design). The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system. Fig. 6.1.2: Dataflow diagram (Client → CLASSIE → Modulo Packet Marking → Reconstruct & Drop → Cloud Server) 6.1.3 Sequence Diagram A sequence diagram in UML is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a Message Sequence Chart. A sequence diagram shows object interactions arranged in time sequence. It depicts the objects and classes involved in the scenario and the sequence of messages exchanged between the objects
  • 16. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 16 needed to carry out the functionality of the scenario. Sequence diagrams typically are associated with use case realizations in the Logical View of the system under development. 6.2 Flowcharts A flow chart is a graphical or symbolic representation of a process. Each step in the process is represented by a different symbol and contains a short description of the process step. The flow chart symbols are linked together with arrows showing the process flow direction. Fig. 6.2 a) CLASSIE flowchart (Start → Receive packet → Check number of headers → Check unique value → Forward packet)
  • 17. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 17 Fig. 6.2 b) Client flowchart (Start → Registered? No: Sign up / Yes: Sign in → Make packet → Select file → Send packet → End)
  • 18. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 18 Fig. 6.2 c) Modulo packet marking flowchart (Start → Receive packets → Send to edge router → Mark packet → Send to core router → If marked? → Calculate marking value → Forward packet → End)
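Since the report does not spell out the marking arithmetic, the sketch below only illustrates the flow in the flowchart above: the edge router stamps an initial mark and the core router derives a marking value using a modulo operation. The modulus, field names and formula are assumptions made purely for illustration.

```java
// Hedged sketch of the modulo packet marking flow: the edge router stamps the packet,
// and the core router turns that stamp into a small marking value with a modulo operation.
class MarkedPacket {
    int edgeRouterId;        // written by the edge router closest to the sender
    Integer markingValue;    // computed by the core router; null until marked
}

class EdgeRouter {
    private final int routerId;
    EdgeRouter(int routerId) { this.routerId = routerId; }

    void mark(MarkedPacket p) {
        p.edgeRouterId = routerId;   // initial mark placed at the network edge
    }
}

class CoreRouter {
    private static final int MODULUS = 16;   // illustrative modulus, not from the report

    void forward(MarkedPacket p) {
        if (p.markingValue == null) {                  // "If marked?" branch of the flowchart
            p.markingValue = p.edgeRouterId % MODULUS; // calculate marking value
        }
        // ...forward the packet towards the victim / RAD module
    }
}
```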
  • 19. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 19 Fig. 6.2 d) RAD flowchart (Start → Receive packet → Match marking value → Yes: Forward packet to cloud / No: Drop packet → End)
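A minimal sketch of the reconstruct and drop (RAD) check implied by the flowchart above, reusing the MarkedPacket class from the previous sketch: packets whose marking value does not match the value recorded for their edge router are dropped before reaching the cloud server. The storage scheme and names are assumptions for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of Reconstruct and Drop (RAD): forward only packets whose marking
// value matches the value recorded for the corresponding edge router.
class ReconstructAndDrop {
    // edge router id -> expected marking value, built up from known-good traffic
    private final Map<Integer, Integer> expectedMarks = new ConcurrentHashMap<>();

    void learn(int edgeRouterId, int markingValue) {
        expectedMarks.put(edgeRouterId, markingValue);
    }

    boolean accept(MarkedPacket p) {
        Integer expected = expectedMarks.get(p.edgeRouterId);
        // "Match marking value" decision: forward to the cloud only when the stored value matches
        return expected != null && expected.equals(p.markingValue);
    }
}
```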
  • 20. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 20 CHAPTER 7 SYSTEM REQUIREMENT SPECIFICATION To be used efficiently, all computer software needs certain hardware components or other software resources to be present on a computer. These prerequisites are known as (computer) system requirements and are often used as a guideline as opposed to an absolute rule. Most software defines two sets of system requirements: minimum and recommended. With increasing demand for higher processing power and resources in newer versions of software, system requirements tend to increase over time. Industry analysts suggest that this trend plays a bigger part in driving upgrades to existing computer systems than technological advancements. 7.2 Non-functional requirements Non-functional requirements describe qualities and constraints of the system rather than specific functions. They include time constraints and constraints on the development process and standards. The non-functional requirements are as follows: • Speed: The system should process the given input into output within an appropriate time. • Ease of use: The software should be user friendly, so that customers can use it easily and it does not require much training time. • Reliability: The rate of failures should be low; only then is the system reliable. • Portability: It should be easy to implement on any system. 7.2.1 Specific Requirements The specific requirements are: • User Interfaces: The external users are the clients. All the clients can use this software for indexing and searching.
  • 21. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 21 • Hardware Interfaces: The external hardware interface used for indexing and searching is the personal computers of the clients. The PCs may be laptops with wireless LAN, as the internet connections provided will be wireless. • Software Interfaces: The operating system can be any version of Windows. • Performance Requirements: The PCs used must be at least Pentium 4 machines so that they can give optimum performance of the product. 7.3 Software requirements Software requirements deal with defining software resource requirements and prerequisites that need to be installed on a computer to provide optimal functioning of an application. These requirements or prerequisites are generally not included in the software installation package and need to be installed separately before the software is installed. • Java 1.4 or higher – Java Swing – front end – JDBC – database connectivity – UDP – User Datagram Protocol – TCP – Transmission Control Protocol – Networking – socket programming • Oracle – back end • Windows 98 or higher – operating system 7.4 Hardware requirements The most common set of requirements defined by any operating system or software application is the physical computer resources, also known as hardware. A hardware requirements list is often accompanied by a hardware compatibility list, especially in the case of operating systems. An HCL lists tested, compatible, and sometimes incompatible hardware devices for a particular operating system or application. The following sub-sections discuss the various aspects of hardware requirements.
  • 22. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 22 All computer operating systems are designed for a particular computer architecture. Most software applications are limited to particular operating systems running on particular architectures. Although architecture-independent operating systems and applications exist, most need to be recompiled to run on a new architecture. The power of the central processing unit (CPU) is a fundamental system requirement for any software. Most software running on the x86 architecture defines processing power as the model and the clock speed of the CPU. Many other features of a CPU that influence its speed and power, like bus speed, cache, and MIPS, are often ignored. This definition of power is often erroneous, as AMD Athlon and Intel Pentium CPUs at similar clock speeds often have different throughput speeds. 10 GB HDD (minimum), 128 MB RAM (minimum), Pentium 4 processor at 2.8 GHz (minimum). 7.5 Overview of technologies The technologies used in the proposed system are described below: 7.5.1 History of Java The Java language was developed by James Gosling and his team at Sun Microsystems and released formally in 1995. Its former name is Oak. Java Development Kit 1.0 was released in 1996 to popularize Java and is freely available on the Internet. 7.5.2 Overview of Java Java is loosely based on C++ syntax, and is meant to be object-oriented. The structure of Java is midway between an interpreted and a compiled language. The Java compiler compiles Java programs into bytecodes, which are secure and portable across different platforms. These bytecodes are essentially instructions, encapsulated in a single type, for what is known as the Java Virtual Machine (JVM), which resides in a standard browser.
  • 23. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 23 The JVM is available for almost all operating systems. The JVM converts these bytecodes into machine-specific instructions at runtime. Java is actually a platform consisting of three components: the Java programming language, the Java library of classes and interfaces, and the Java Virtual Machine. 7.5.3 Features of Java Java is a simple language. It does not make use of pointers, operator overloading, etc. Java is an object-oriented language and supports encapsulation, inheritance, polymorphism and dynamic binding, but does not support multiple inheritance. Everything in Java is an object except some primitive data types. Java is portable. It is architecture neutral, that is, Java programs once compiled can be executed on any machine that is Java enabled. Java is distributed in its approach and used for Internet programming. Java is robust, secure, high performing and dynamic in nature. Java supports multithreading, therefore different parts of a program can be executed at the same time. 7.6 Java Database Connectivity (JDBC) In an effort to set an independent database standard API for Java, Sun Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface is achieved through the use of “plug-in” database connectivity modules, or drivers. If a database vendor wishes to have JDBC support, he or she must provide the driver for each platform that the database and Java run on.
  • 24. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 24 To gain wider acceptance of JDBC, Sun based JDBC's framework on ODBC. As you discovered earlier in this chapter, ODBC has widespread support on a variety of platforms. Basing JDBC on ODBC allows vendors to bring JDBC drivers to market much faster than developing a completely new connectivity solution. 7.6.1 Result set enhancements The JDBC 1.0 API provided result sets that could only scroll in a forward direction. Scrollable result sets allow for more flexibility in the processing of results by providing both forward and backward movement through their contents. In addition, scrollable result sets allow for relative and absolute positioning. For example, it's possible to move to the fourth row in a scrollable result set directly, or to move directly to the third row following the current row, provided the row exists. The JDBC API allows result sets to be directly updateable as well. 7.6.2 Batch updates The batch update feature allows an application to submit multiple update statements (insert/update/delete) in a single request to the database, which can provide a dramatic increase in performance when a large number of update statements need to be executed. 7.6.3 Prepared Statements An element in a batch consists of a parameterized command and an associated set of parameters when a PreparedStatement is used. The batch update facility is used with a PreparedStatement to associate multiple sets of input parameter values with a single PreparedStatement object. The sets of parameter values can then be sent together to the underlying DBMS engine for execution as a single unit.
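The batch update facility described above can be used roughly as follows. This is a hedged example: the connection URL, credentials, table and column names are placeholders invented for illustration (the report only states that Oracle is used as the back end).

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchInsertExample {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details for an assumed Oracle back end.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:XE", "user", "password")) {
            conn.setAutoCommit(false);

            String sql = "INSERT INTO users (username, password) VALUES (?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                String[][] rows = { {"alice", "pw1"}, {"bob", "pw2"}, {"carol", "pw3"} };
                for (String[] row : rows) {
                    ps.setString(1, row[0]);   // bind one set of parameters
                    ps.setString(2, row[1]);
                    ps.addBatch();             // add the parameter set to the batch
                }
                ps.executeBatch();             // all updates sent to the DBMS as a single unit
            }
            conn.commit();
        }
    }
}
```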
  • 25. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 25 7.7 JDBC drivers There are four types of JDBC drivers. They are: • JDBC-ODBC bridge plus ODBC driver • JDBC-Net all-Java driver • Native-API partly-Java driver • Native-protocol all-Java driver Figure 7.7: JDBC driver types. Each of the JDBC drivers is explained in detail below.
  • 26. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 26 7.7.1 JDBC-ODBC bridge plus ODBC driver The JavaSoft bridge product provides JDBC access via ODBC drivers. The ODBC binary code, and in many cases database client code, must be loaded on each client machine that uses this driver. As a result, this kind of driver is most appropriate on a corporate network where client installations are not a major problem, or for application server code written in Java in a three-tier architecture. Fig. 7.7.1: JDBC-ODBC bridge plus ODBC driver 7.7.2 JDBC-Net all-Java driver This driver translates JDBC calls into a DBMS-independent net protocol, which is then translated to a DBMS protocol by a server. This net server middleware is able to connect its all-Java clients to many different databases. The specific protocol used depends on the vendor. In general this is the most flexible JDBC alternative.
  • 27. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 27 It is likely that all vendors of this solution will provide products that also support Internet access through the firewalls, etc., that the web imposes. Several vendors are adding JDBC drivers to their existing database middleware products. 7.7.3 Native-API partly-Java driver This kind of driver converts JDBC calls into calls on the client API for Oracle, Sybase, Informix, DB2, or other DBMSs. Note that, like the bridge driver, this style of driver requires that some binary code be loaded on each client machine. Fig 7.7.3 shows the Native-API partly-Java driver, where the application program requires a driver to connect to the database. Usually the sun.jdbc.odbc.JdbcOdbcDriver is used; the driver is requested from the DriverManager using DriverManager.getConnection(). Fig. 7.7.3: Native-API partly-Java driver
  • 28. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 28 7.7.4 Native-protocol all-Java driver This kind of driver converts JDBC calls into the network protocol used by the DBMS directly. This allows a direct call from the client machine to the DBMS server and is a practical solution for Internet access. Since many of these protocols are proprietary, database vendors themselves will be the primary source. Several database vendors have these in progress. 7.8 Java RMI Java Remote Method Invocation (Java RMI) enables the programmer to create distributed Java technology-based to Java technology-based applications, in which the methods of remote Java objects can be invoked from other Java virtual machines, possibly on different hosts. RMI uses object serialization to marshal and unmarshal parameters and does not truncate types, supporting true object-oriented polymorphism. 7.9 Java Socket Programming URLs and URLConnections provide a relatively high-level mechanism for accessing resources on the Internet. Sometimes your programs require lower-level network communication, for example, when you want to write a client-server application. In client-server applications, the server provides some service, such as processing database queries or sending out current stock prices. The client uses the service provided by the server, either displaying database query results to the user or making stock purchase recommendations to an investor. The communication that occurs between the client and the server must be reliable. That is, no data can be dropped and it must arrive on the client side in the same order in which the server sent it. TCP provides a reliable, point-to-point communication channel that client-server applications on the Internet use to communicate with each other. To communicate over TCP, a client program and a server program establish a connection to one another. Each program binds a socket to its end of the connection. To communicate, the client and the server each reads from and writes to the socket bound to the connection.
  • 29. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 29 7.9.1 What Is a Socket? Normally, a server runs on a specific computer and has a socket that is bound to a specific port number. The server just waits, listening to the socket for a client to make a connection request. On the client side: the client knows the hostname of the machine on which the server is running and the port number on which the server is listening. To make a connection request, the client tries to rendezvous with the server on the server's machine and port. The client also needs to identify itself to the server, so it binds to a local port number that it will use during this connection. This is usually assigned by the system. Fig. 7.9.1: Socket connection request If everything goes well, the server accepts the connection. Upon acceptance, the server gets a new socket bound to the same local port and also has its remote endpoint set to the address and port of the client. It needs a new socket so that it can continue to listen to the original socket for connection requests while tending to the needs of the connected client. Fig. 7.9.2: Socket connection
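A minimal client/server sketch of this rendezvous using java.net.ServerSocket and java.net.Socket; the port, host and echoed message are illustrative values, not taken from the report's implementation.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoDemo {
    // Server side: bind to a port, accept one connection, echo a single line back.
    static void runServer(int port) throws Exception {
        try (ServerSocket server = new ServerSocket(port);
             Socket client = server.accept();   // blocks until a client connects
             BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            out.println("echo: " + in.readLine());
        }
    }

    // Client side: connect to the server's host and port, send a line, read the reply.
    static String runClient(String host, int port, String message) throws Exception {
        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            out.println(message);
            return in.readLine();
        }
    }
}
```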
  • 30. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 30 On the client side, if the connection is accepted, a socket is successfully created and the client can use the socket to communicate with the server. The client and server can now communicate by writing to or reading from their sockets. A socket is one endpoint of a two-way communication link between two programs running on the network. A socket is bound to a port number so that the TCP layer can identify the application to which data is destined to be sent. An endpoint is a combination of an IP address and a port number. Every TCP connection can be uniquely identified by its two endpoints. That way you can have multiple connections between your host and the server. The java.net package in the Java platform provides a class, Socket, that implements one side of a two-way connection between your Java program and another program on the network. The Socket class sits on top of a platform-dependent implementation, hiding the details of any particular system from your Java program. By using the java.net.Socket class instead of relying on native code, your Java programs can communicate over the network in a platform-independent fashion. Additionally, java.net includes the ServerSocket class, which implements a socket that servers can use to listen for and accept connections from clients. This shows how to use the Socket and ServerSocket classes. If you are trying to connect to the Web, the URL class and related classes (URLConnection, URLEncoder) are probably more appropriate than the socket classes. In fact, URLs are a relatively high-level connection to the Web and use sockets as part of the underlying implementation. See Working with URLs for information about connecting to the Web via URLs. 7.10 Packages One of the most innovative features of Java is packages. Packages provide both a naming and a visibility control mechanism. We can define classes inside a package that are not accessible by code outside the package. We can also define class members that are only exposed to other members of the same package. Java uses file system directories to store packages. For example, the .class files for any classes you declare to be part of MyPackage must be stored in a directory called MyPackage; remember that case is significant and the directory name must match the package name exactly.
  • 31. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 31 A package hierarchy must be reflected in the file system of your Java development system. For example, a package declared as package java.awt.image; needs to be stored in java\awt\image in a Windows environment. 7.10.1 The java.lang package The package java.lang contains fundamental classes and interfaces closely tied to the language and run-time system, including the root classes that form the class hierarchy, types tied to the language definition, basic exceptions, math functions, threading, security functions, as well as some information on the underlying native system. 7.10.2 java.util Data structures that aggregate objects are the focus of the java.util package. Included in the package is the Collections API, an organized data structure hierarchy influenced heavily by design pattern considerations. 7.10.3 java.security It provides the classes and interfaces for the security framework. It includes classes that implement an easily configurable, fine-grained access control security architecture. The package also supports the generation and storage of cryptographic public key pairs. Finally, this package provides classes that support signed/guarded objects and secure random number generation. 7.11 Swing Swing is a widget toolkit for Java. It is a part of Sun Microsystems' Java Foundation Classes, an API for providing a graphical user interface for Java programs. Swing was developed to provide a more sophisticated set of GUI components than the earlier Abstract Window Toolkit. Swing provides a pluggable look and feel that can emulate the look and feel of several platforms, as well as look and feels unrelated to the underlying platform. Swing introduced a mechanism that allows the look and feel of every component in an application to be altered without making substantial changes to the application code. The introduction of support for a pluggable look and feel allows Swing components to emulate the appearance of native components while still retaining the benefits
  • 32. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 32 of platform independence. This feature also makes it easy to make an application written in Swing look very different from native programs, if desired. Look and feel In software design, look and feel is used in respect of a GUI and comprises its design, including elements such as colors, shapes, layout and typefaces (the "look"), as well as the behavior of dynamic elements such as buttons, boxes and menus (the "feel"). The term look and feel is used in reference to both software and websites.
  • 33. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 33 CHAPTER 9 TESTING Testing is a critical element which assures the quality and effectiveness of the proposed system in meeting its objectives. Testing is done at various stages in the system design and implementation process with the objective of developing a transparent, flexible and secure system. Testing is an integral part of software development. The testing process, in a way, certifies whether the product that is developed complies with the standards it was designed to meet. The testing process involves building test cases against which the product has to be tested. 9.1 Test objectives • Testing is a process of executing a program with the intent of finding an error. • A good test case is one that has a high probability of finding an undiscovered error. • A successful test is one that uncovers a yet undiscovered error. If testing is conducted successfully (according to the objectives) it will uncover errors in the software. Testing cannot show the absence of defects; it can only show that software defects are present. 9.2 Testing principles Before applying methods to design effective test cases, a software engineer must understand the basic principles that guide software testing. All tests should be traceable to customer requirements. 9.3 Testing design Any engineering product can be tested in one of two ways:
  • 34. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 34 9.3.1 White box Testing This testing is also called glass box testing. In this testing, knowing the internal operation of a product, tests can be conducted to ensure that "all gears mesh", that is, the internal operation performs according to specification and all internal components have been adequately exercised. It is a test case design method that uses the control structure of the procedural design to derive test cases. 9.3.2 Black box Testing In this testing, knowing the specified functions that a product has been designed to perform, tests can be conducted that demonstrate each function is fully operational while at the same time searching for errors in each function. It fundamentally focuses on the functional requirements of the software. The steps involved in black box test case design are: • Graph based testing methods • Equivalence partitioning • Boundary value analysis • Comparison testing 9.4 Testing strategies A software testing strategy provides a road map for the software developer. Testing is a set of activities that can be planned in advance and conducted systematically. For this reason a template for software testing, a set of steps into which we can place specific test case design methods, should be defined for the software engineering process. Any software testing strategy should have the following characteristics: a. Testing begins at the module level and works outward toward the integration of the entire computer based system. b. Different testing techniques are appropriate at different points in time. c. The developer of the software and an independent test group conduct testing.
  • 35. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 35 d. Testing and debugging are different activities, but debugging must be accommodated in any testing strategy. 9.5 Levels of Testing Testing can be done at different levels of the SDLC. They are: 9.5.1 Unit Testing The first level of testing is called unit testing. Unit testing verifies the smallest unit of software design, the module. The unit test is always white box oriented. In this, different modules are tested against the specifications produced during design for the modules. Unit testing is essentially for verification of the code produced during the coding phase, and hence the goal is to test the internal logic of the modules. It is typically done by the programmer of the module. Due to its close association with coding, the coding phase is frequently called “coding and unit testing.” The unit test can be conducted in parallel for multiple modules. The test cases in unit testing are as follows: Table I: Unit Test Case 1 – Test Case ID: Unit Test Case 1; Description: Check whether data is inserted; Input: User details; Expected output: Data inserted into table; Actual Result/Remarks: Got the expected output; Passed(?): Yes
  • 36. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 36 Table II: Unit Test Case 2 – Test Case ID: Unit Test Case 2; Description: Sign in for client; Input: User name and password; Expected output: Sign in; Actual Result/Remarks: Got the expected output; Passed(?): Yes Table III: Unit Test Case 3 – Test Case ID: Unit Test Case 3; Description: Match rule set in CLASSIE; Input: Packet; Expected output: If it matches any rule in the rule set then drop, else forward; Actual Result/Remarks: Got the expected output; Passed(?): Yes Table IV: Unit Test Case 4 – Test Case ID: Unit Test Case 4; Description: Calculate marking value in MPM; Input: Packet; Expected output: Marking value appended to packet; Actual Result/Remarks: Working as required; Passed(?): Yes
  • 37. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 37 Table V: Unit Test Case 5 – Test Case ID: Unit Test Case 5; Description: Forwarding packet; Input: Sensed event (event occurrence); Expected output: Receive packet at receiver; Actual Result/Remarks: Working as required; Passed(?): Yes Table VI: Unit Test Case 6 – Test Case ID: Unit Test Case 6; Description: Compare marking value with the stored value; Input: Packet; Expected output: Drop or forward packet; Actual Result/Remarks: Working as required; Passed(?): Yes Table VII: Unit Test Case 7 – Test Case ID: Unit Test Case 7; Description: Server receiving packets; Input: Packets; Expected output: Receive packets; Actual Result/Remarks: Working as required; Passed(?): Yes
  • 38. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 38 Table VIII: Unit Test Case 8 – Test Case ID: Unit Test Case 8; Description: Receiving data from the RAD; Input: Packets; Expected output: Create file; Actual Result/Remarks: Working as required; Passed(?): Yes 9.5.2 Integration Testing The second level of testing is called integration testing. Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing. In this, many tested modules are combined into subsystems, which are then tested. The goal here is to see if all the modules can be integrated properly. There are three types of integration testing: • Top-Down Integration: Top-down integration is an incremental approach to construction of program structures. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module. • Bottom-Up Integration: Bottom-up integration, as its name implies, begins construction and testing with atomic modules. • Regression Testing: In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.
  • 39. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 39 Table IX: Integration Test Case – Test Case ID: Integration Test Case 1; Description: All servers are running properly; Input: Packets are passed from one module to another; Expected output: Packets are received at the server; Actual Result/Remarks: Working as required; Passed(?): Yes 9.5.3 Functional test Functional tests provide systematic demonstrations that the functions tested are available as specified by the business and technical requirements, system documentation, and user manuals. Functional testing is centered on the following items: Table X: Functional testing items – Valid Input: Identified classes of valid input must be accepted. Invalid Input: Identified classes of invalid input must be rejected. Functions: Identified functions must be exercised. Output: Identified classes of application outputs must be exercised. Systems/Procedures: Interfacing systems or procedures must be invoked.
  • 40. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 40 Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identifying business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined. 9.6 Validation testing At the culmination of integration testing, the software is completely assembled as a package, interfacing errors have been uncovered and corrected, and a final series of software tests, validation testing, may begin. Validation can be defined in many ways, but a simple definition is that validation succeeds when the software functions in a manner that can be reasonably expected by customers. Reasonable expectation is defined in the software requirement specification, a document that describes all user-visible attributes of the software. The specification contains a section titled "validation criteria". Information contained in that section forms the basis for the validation testing approach. 9.7 Alpha testing It is virtually impossible for a software developer to foresee how the customer will really use a program. Instructions for use may be misinterpreted; strange combinations of data may be regularly used; and output that seemed clear to the tester may be unintelligible to a user in the field. When custom software is built for one customer, a series of acceptance tests are conducted to enable the customer to validate all requirements. Acceptance testing is done by the end user rather than the system developer, and an acceptance test can range from an informal "test drive" to a planned and systematically executed series of tests. In fact, acceptance testing can be conducted over a period of weeks or months, thereby uncovering cumulative errors that might degrade the system over time. If software is developed as a product to be used by many customers, it is impractical to perform formal acceptance tests with each one. Most software product builders use a process called alpha and beta testing to uncover errors that only the end user seems able to find.
  • 41. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 41 A customer conducts the alpha test at the developer's site. The software is used in a natural setting with the developer "looking over the shoulder" of the user and recording errors and usage problems. Alpha tests are conducted in a controlled environment. 9.8 Beta testing The beta test is conducted at one or more customer sites by the end user of the software. Unlike alpha testing, the developer is generally not present. Therefore, the beta test is a "live" application of the software in an environment that cannot be controlled by the developer. The customer records all problems that are encountered during beta testing and reports these to the developer at regular intervals. As a result of the problems reported during the beta test, the software developer makes modifications and then prepares for release of the software product to the entire customer base. 9.9 System Testing and Acceptance Testing System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. It includes recovery testing during crashes, security testing against unauthorized users, etc. Acceptance testing is sometimes performed with realistic data of the client to demonstrate that the software is working satisfactorily. This testing focuses on the external behavior of the system.
  • 42. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 42 CHAPTER 9 SCREENSHOTS Fig 9.1 Sign in page
  • 43. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 43 Fig 9.1 Sign up form
  • 44. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 44 Fig 9.1 Client
  • 45. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 45 Fig 9.1 Attacker
  • 46. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 46 Fig 9.1 Classie
  • 47. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 47 Fig 9.1 Modulo packet marking
  • 48. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 48 Fig 9.1 Reconstruct and drop
  • 49. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 49 Fig 9.1 Resources available in server
  • 50. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 50 Fig 9.1 Server information
  • 51. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 51 Fig 9.1 Server
  • 52. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 52 CONCLUSIONS One of the most serious threats to cloud computing comes from HTTP- or XML-based DoS attacks. These attacks can be efficiently detected by using a packet-marking approach on the attacker side, and the detected packets are filtered by dropping the marked packets on the victim side. Thus, the packet marking overhead and the false positive rate of DoS attacks are greatly reduced. The detection of DDoS attacks is improved by replacing the Cloud Protector with RAD on the victim side and introducing CLASSIE and modulo marking at the source side. This improves the reduction of the false positive rate and increases the detection and filtering of DDoS attacks. The future work can be extended by integrating the proposed system with source-end defensive systems to detect MAC spoofing.
  • 53. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 53 BIBLIOGRAPHY [1] A. Belenky and N. Ansari (2003), 'Tracing Multiple Attackers with Deterministic Packet Marking (DPM)', Proceedings of the IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, Vol. 1, pp. 49-52. [2] A. Chonka, W. Zhou and Y. Xiang (2008a), 'Protecting Web Services with Service Oriented Traceback Architecture', Proceedings of the IEEE Eighth International Conference on Computer and Information Technology, pp. 706-711. [3] A. Chonka, W. Zhou and Y. Xiang (2008b), 'Protecting Web Services from DDoS Attacks by SOTA', Proceedings of the IEEE Fifth International Conference on Information Technology and Applications, pp. 1-6. [4] A. Chonka, W. Zhou, J. Singh and Y. Xiang (2008c), 'Detecting and Tracing DDoS Attacks by Intelligent Decision Prototype', Proceedings of the IEEE International Conference on Pervasive Computing and Communications, pp. 578-583. [5] A. Chonka, W. Zhou and Y. Xiang (2009a), 'Defending Grid Web Services from X-DoS Attacks by SOTA', Proceedings of the Third IEEE International Workshop on Web and Pervasive Security (WPS 2009), pp. 1-6. [6] A. Chonka, W. Zhou and J. Singh (2009b), 'Chaos Theory Based Detection against Network Mimicking DDoS Attacks', IEEE Communications Letters, Vol. 13, No. 9, pp. 717-719. [7] A. Chonka, Y. Xiang, W. Zhou and A. Bonti (2011), 'Cloud Security Defence to Protect Cloud Computing against HTTP-DoS and XML-DoS Attacks', Journal of Network and Computer Applications, Vol. 34, No. 4, pp. 1097-1107.
  • 54. A Packet Marking Approach To Protect Cloud Environment Against DDoS Attacks KNSIT Page 54 [8] D. Dean (2002), 'An Algebraic Approach to IP Traceback', ACM Transactions on Information and System Security, Vol. 5, No. 2, pp. 119-137. [9] S. Savage, D. Wetherall, A. Karlin and T. Anderson (2000), 'Practical Network Support for IP Traceback', Proceedings of the Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication, pp. 295-306. [10] H. Shabeeb, N. Jeyanthi and S. N. Iyengar (2012), 'A Study on Security Threats in Clouds', Journal of Cloud Computing and Services Science, Vol. 1, No. 3, pp. 84-88. [11] X. Xiang, W. Zhou and M. Guo (2009), 'Flexible Deterministic Packet Marking: An IP Traceback System to Find the Real Source of Attacks', IEEE Transactions on Parallel and Distributed Systems, Vol. 20, No. 4, pp. 567-580. [12] K. H. Choi and H. K. Dai (2004), 'A Marking Scheme Using Huffman Codes for IP Traceback', Proceedings of the 7th International Symposium on Parallel Architectures, Algorithms and Networks (SPAN'04).