Replay of Malicious Traffic in Network Testbeds
Alefiya Hussain, Yuri Pradkin and John Heidemann
University of Southern California / Information Sciences Institute

Abstract: In this paper we present tools and methods to integrate attack measurements from the Internet with controlled experimentation on a network testbed. We show that this approach provides greater fidelity than synthetic models. We compare the statistical properties of real-world attacks with synthetically generated constant bit rate attacks on the testbed. Our results indicate that trace replay provides fine time-scale details that may be absent in constant bit rate attacks. Additionally, we demonstrate the effectiveness of our approach for studying new and emerging attacks: we replay, on the DETERLab testbed, an Internet attack captured by the LANDER system within two hours of capture.

I. Introduction
Testbeds offer a platform for testing and evaluation of networked systems in controlled and repeatable environments. They facilitate studying the impact of network conditions on the security of real network systems and applications [3]. Such study depends on approaches to providing representative network traffic.

Many approaches have been proposed to create controlled malicious and non-malicious traffic. Creating traffic in testbeds always involves some level of modeling. Some models may be completely abstract, such as constant-bit-rate attacks. Others create synthetic traffic models, possibly parameterized from real network traffic (such as [10], [17]), or involve tools specific to malicious traffic generation (such as Metasploit [12]). Synthetic traffic models are attractive because they allow control and generation of any amount of traffic with few or no external constraints. Additionally, such models adapt well to testbeds and they raise no privacy concerns.

The risk of synthetic models is that one gets out only what one puts in: they may miss the dynamics and subtleties of real-world traffic. Often malicious and regular network traffic reflects not only end hosts, but also aspects of the network path [6], [9]. In our previous work, we have shown that attack dynamics are inherently dependent on characteristics such as the number of attackers and the attacker host environment, including the type of operating system, system load, and hardware characteristics [7]. These details are often omitted in synthetic models of malicious traffic.

Another challenge of synthetic models is that they always lag current malicious traffic. There is a delay in deploying new models, since models are designed only once malware is understood, yet attacks change and mutate rapidly. In this paper, we import denial-of-service (DoS) attacks that we have captured with LANDER, a network traffic analysis system [5], to provide malicious traffic models in the DETERLab cyber security testbed [15]. Our approach recreates source and network path conditions by replaying the attacks on the DETERLab testbed [15].

The contribution of this paper is to describe an approach for attack capture and immediate replay on a networking testbed. The attack replay tools provide a collective representation of one or more attackers located downstream from the capture point, mapped onto a specified node in the testbed (Section II). In Section III, we show that this approach provides greater fidelity than synthetic models. We also show that our approach can quickly feed new attacks into a testbed for evaluation (Section IV), where each attack source may be isolated and modelled individually, as required by some cyber defense strategies.

We expect our tools will provide timely and realistic traffic with which to test cyber-defenses such as intrusion detection systems and firewalls. The data sets, our tools, and the DETERLab testbed are all available for use by network security researchers.
II. Approach
In this paper we discuss a methodology to detect and capture attacks using LANDER, and then replay this traffic in a scenario constructed on the DETER testbed. We discuss our approach in this section and present an analysis of the fidelity this approach may offer in the following section.

A. Capturing Network Traffic with LANDER
We capture network traffic with the LANDER system [5]. LANDER supports capture and anonymization of network traffic, using parallel processing to support high data rates and significant and varying amounts of analyst-driven processing. It is currently in operation, with small deployments running on single (but multi-core) systems with commodity network cards, and large deployments capturing traffic at 10 Gb/s with back-end anonymization and analysis on shared 1000-node compute clusters.

For this paper, the important aspects of LANDER are that it provides the policies that enable us to get permission to collect packets, the flexibility to plug in different detection modules to detect interesting events, and the framework to extract anonymized traces in near real time. LANDER is built around multiple queues of data, each of which triggers a callback to a system- or user-provided function. This structure supports differing policies for data anonymization. By default, data is placed into a raw queue, then anonymized by changing IP addresses using prefix-preserving anonymization [18] and removing all application payloads. This range of policies is important to enable deployments ranging from environments where high expectations of privacy must be enforced, to laboratories, where it can serve as a tool to audit consenting but
not-fully-trusted parties. It allows us to experiment with different analysis tools, such as our DoS-detection methods [6], as well as off-the-shelf tools like SNORT [2], Bro [13], and Suricata [1]. Once an attack is identified, we extract the packet headers for replay.
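The queue-and-callback structure described above can be illustrated with a small sketch. The following Python fragment is only a conceptual illustration of that design, not LANDER code; the Segment type, the anonymize and detect callbacks, and the queue layout are all hypothetical.

# Conceptual sketch of a queue-with-callback capture pipeline in the style
# described above. All names (Segment, anonymize, detect) are hypothetical;
# this is not the LANDER implementation.
import queue
import threading
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One chunk of captured traffic (stand-in for a trace segment)."""
    packets: list = field(default_factory=list)

def anonymize(seg: Segment) -> Segment:
    # Placeholder for prefix-preserving address anonymization and payload
    # removal; here we simply drop a hypothetical 'payload' field.
    seg.packets = [{k: v for k, v in p.items() if k != "payload"} for p in seg.packets]
    return seg

def detect(seg: Segment) -> None:
    # Placeholder detection callback (e.g., a DoS detector or a SNORT wrapper).
    if len(seg.packets) > 1000:
        print("possible attack segment:", len(seg.packets), "packets")

def run_stage(in_q: queue.Queue, out_q, callback):
    """Drain one queue, apply its callback, and pass results downstream."""
    while True:
        seg = in_q.get()
        if seg is None:                 # sentinel: shut the stage down
            if out_q is not None:
                out_q.put(None)
            return
        result = callback(seg)
        if out_q is not None:
            out_q.put(result if result is not None else seg)

raw_q, anon_q = queue.Queue(), queue.Queue()
threading.Thread(target=run_stage, args=(raw_q, anon_q, anonymize)).start()
threading.Thread(target=run_stage, args=(anon_q, None, detect)).start()

raw_q.put(Segment(packets=[{"src": "10.0.0.1", "payload": b"x"}] * 2000))
raw_q.put(None)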
B. DETERLab: A Playground for Attacks
DETERLab provides infrastructure, tools, and methodologies for networking and cyber security experimentation. The DETERLab testbed includes 570 PCs and specialized hardware located at two sites, USC/ISI in Marina del Rey, CA and UC Berkeley in Berkeley, CA. The two sites are interconnected with a shared 10 Gb/s link. The DETERLab containerization tools enable researchers to allocate Internet-scale topologies using virtualization of nodes. DETERLab also provides the Montage AGent Infrastructure (MAGI) tools for orchestrating a wide range of networking and cyber security experimentation scenarios [15]. The MAGI tools provide an event-based control overlay for deploying and controlling agent modules on the topology. Each agent enacts a particular behavior, for example an attacker behavior, and the behavior is configured and controlled through an Agent Activation Language (AAL) [15].
C. Replaying an Attack Scenario in DETERLab
DETERLab provides a playground for attack experiments, and the MAGI tools allow researchers to systematically explore the experimentation design space. The contribution of this paper is to take traces from the real world with LANDER, and recreate the attack in DETERLab using the MAGI framework. To make this transformation, we developed a new agent for MAGI, and a scenario to replay the attack.
To replay the trace with MAGI we developed an attack-replay agent. It uses the Tcpreplay toolset [14] to adapt the real-world trace to the testbed. It uses tcprewrite to remap the source and destination MAC and IP addresses in the LANDER attack trace. If the original attack makes use of IP address spoofing, the source IP addresses are not modified. It then uses tcpreplay to regenerate the attack with proper timing and replay it from one or more locations on a topology in the testbed.
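As a rough illustration of this rewrite-then-replay step, the sketch below drives the Tcpreplay command-line tools from Python. It is not the MAGI attack-replay agent; the interface name, MAC addresses, file paths, and the particular tcprewrite options are assumptions for the example and should be checked against the tcprewrite/tcpreplay documentation for your version.

# Minimal sketch of adapting a captured trace to a testbed and replaying it.
# Not the MAGI agent; names, addresses, and option choices are illustrative.
import subprocess

RAW_TRACE = "lander-2002-attack.pcap"       # hypothetical input trace
TESTBED_TRACE = "attack-testbed.pcap"       # rewritten output
REPLAY_IFACE = "eth1"                       # testbed-facing interface (assumed)

def rewrite_trace() -> None:
    """Remap MAC addresses (and optionally the victim IP) for the testbed."""
    subprocess.run(
        [
            "tcprewrite",
            f"--infile={RAW_TRACE}",
            f"--outfile={TESTBED_TRACE}",
            "--enet-smac=00:11:22:33:44:55",   # attacker node's MAC on the testbed
            "--enet-dmac=66:77:88:99:aa:bb",   # next-hop router's MAC
            # Destination (victim) IP remapping is also possible with
            # tcprewrite's IP-map options; spoofed source addresses are left
            # untouched, as described above.
        ],
        check=True,
    )

def replay_trace() -> None:
    """Replay the rewritten trace with its original packet timing."""
    subprocess.run(["tcpreplay", f"--intf1={REPLAY_IFACE}", TESTBED_TRACE], check=True)

if __name__ == "__main__":
    rewrite_trace()
    replay_trace()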
We generate two types of traffic inside DETER: attack traffic and non-malicious (or background) traffic. We generate attack traffic using the attack-replay agent, and we deploy additional agents to generate non-malicious traffic. We mix the two, breaking the scenario into three phases: (i) the pre-attack phase, during which we generate only non-malicious network traffic; (ii) the attack phase, during which we deploy the attack into the topology and observe how it interacts with continuing non-malicious traffic; and (iii) the post-attack phase, when non-malicious traffic recovers from the attack.
We generate web traffic to create non-malicious network traffic throughout the duration of the experiment. We use the MAGI webserver and webclient agents. The MAGI webserver agent deploys an Apache web server. The MAGI webclient agent uses Curl to request objects from the web server. The webclient agent can be configured to choose a webserver in sequence or at random from a list of one or more web servers. The size of the web page can be defined as a probabilistic distribution.
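The following sketch mimics the client behavior just described: pick a random server, fetch a page, pause, and repeat. It shells out to curl rather than using the MAGI webclient agent, and the server list, page path, and request pacing are invented for the example.

# Toy background-traffic generator in the spirit of the webclient agent above.
# Not MAGI code: server list, page path, and timing are illustrative only.
import random
import subprocess
import time

SERVERS = ["http://10.1.1.10", "http://10.1.2.10"]   # hypothetical testbed servers
PAGE = "/index.html"                                  # hypothetical object
REQUESTS = 100

def fetch_once() -> None:
    url = random.choice(SERVERS) + PAGE
    # -s: silent, -o /dev/null: discard the body; we only want to create traffic.
    subprocess.run(["curl", "-s", "-o", "/dev/null", url], check=False)

if __name__ == "__main__":
    for _ in range(REQUESTS):
        fetch_once()
        # Exponential inter-request gaps give a simple randomized request rate.
        time.sleep(random.expovariate(2.0))   # mean gap of 0.5 s (assumed)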
We compare our replay of a real attack to an alternative attack, in which we generate synthetic attack traffic with a MAGI attack-CBR agent. The attack-CBR agent generates a constant-rate packet stream on the network that can be used to represent an attack. The agent can be parameterized with the protocol, packet size distribution, port number distribution, ramp-up and ramp-down rates, and low and high attack rates.
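A constant-rate packet stream of this kind can be approximated with a short paced-send loop. The sketch below sends fixed-size UDP packets at a configurable packet rate toward a target; it is a stand-in for the attack-CBR agent, and the target address, port, packet size, and rate are example values only.

# Simple constant-packet-rate UDP generator (stand-in for the attack-CBR agent).
# Target address, port, packet size, and rate below are illustrative values.
import socket
import time

TARGET = ("10.1.3.2", 9999)   # hypothetical victim address and port
PKT_SIZE = 64                 # bytes of payload per packet
RATE_PPS = 4200               # packets per second (matches the CBR experiments)
DURATION = 10                 # seconds to run

def send_cbr() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * PKT_SIZE
    interval = 1.0 / RATE_PPS
    next_send = time.monotonic()
    end = next_send + DURATION
    while time.monotonic() < end:
        sock.sendto(payload, TARGET)
        # Pace by absolute deadlines so timing drift does not accumulate.
        next_send += interval
        delay = next_send - time.monotonic()
        if delay > 0:
            time.sleep(delay)

if __name__ == "__main__":
    send_cbr()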
Lastly, we deploy a packetCapture agent to capture all the packets seen at the observation point in the topology. The packetCapture agent uses tcpdump to capture packets. It can be configured with a packet filter, capture packet size, and the capture file storage location.
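For completeness, a capture step along these lines can be scripted directly around tcpdump, as sketched below; the interface, snap length, filter, and output path are placeholders, and this is not the packetCapture agent itself.

# Minimal capture sketch in the spirit of the packetCapture agent.
# Interface, snap length, filter, and output file are placeholder values.
import subprocess

def capture(interface: str = "eth2",
            snaplen: int = 96,
            out_file: str = "/tmp/observation.pcap",
            bpf_filter: str = "ip") -> subprocess.Popen:
    """Start tcpdump on the observation point and return the process handle."""
    return subprocess.Popen(
        ["tcpdump", "-i", interface, "-s", str(snaplen), "-w", out_file, bpf_filter]
    )

if __name__ == "__main__":
    proc = capture()
    # ... run the pre-attack, attack, and post-attack phases here ...
    proc.terminate()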
III. Richness in Replayed Attacks
In this section we present an analysis of the additional fidelity that real-world attacks offer. First, we discuss how the tools from the last section come together to allow us to systematically explore and compare the different attacks in DETERLab. We then provide a qualitative and quantitative analysis of the attack replay as compared to synthetically generated CBR attacks.

A. Traces to Experiments
We now describe how we create three types of scenarios to explore the richness of the replayed LANDER attacks and compare them to synthetically generated CBR attacks on the DETER testbed.

We conduct three controlled experiments on the DETER testbed: a replay of a real-world attack [16]; a single-attacker, constant-bit-rate, synthetic attack with a rate of 4200 packets/s; and a ten-attacker, constant-bit-rate, synthetic attack with an aggregate rate of 4200 packets/s (each attacker at 420 packets/s).
Here we use a dataset from LANDER with three separate denial-of-service attacks captured in 2002 and 2003 [16]. In this paper, we use the first attack, a reflector attack that sends ICMP echo reply packets to a victim in Los Nettos. We refer to this attack as LANDER-2002. The mix of attack and background traffic is shown in Figure 1, both as packets per second and bits per second. The attack consists of echo reply packets from approximately 140 attackers, with each packet carrying a 28 byte payload. With such small packets, we see the attack most clearly in Figure 1a; it hardly shows up when one measures bitrate (Figure 1b), since roughly 4200 small packets per second amounts to only a few Mbit/s. We chose this attack from the dataset since it had the lowest variance in the attack rate. Even though visually this attack looks similar to a constant bit rate attack, in the next section we systematically explore the richness in this attack as compared to synthetically generated CBR attacks.

We first process this trace to filter out attack traffic targeted to the victim. This attack was then transferred to
the DETER testbed and processed by the MAGI attack-replay agent to generate an attack on the experiment topology. Additionally, we used the MAGI attack-CBR agent to generate the two constant bit rate attacks as discussed above.

Fig. 1: Real-world captured DoS attack LANDER-2002. (a) Attack and background traffic in packets/s. (b) Attack and background traffic in Mbits/s. [Time series over 0-450 seconds.]
We generate non-malicious background traffic using four webclient agents and two webserver agents on a dumbbell topology. The web clients request a fixed 10 MB web page from a randomly chosen webserver. The attack is launched from a bank of attackers attached to one webserver. We deploy the packetCapture agent at an observation point on the bottleneck link. It captures all packets in the experiment with zero packet loss. The complete experiment description along with the traces will be made available on publication [4].
B. Attack Traffic
As discussed in the previous section, we compare two types of synthetic attacks with attack trace replay to study the dynamics.

Figure 2 shows the attack rate in packets/second and the empirical probability distributions for all three types of attack. The top row shows the attack packet rates; the x-axis shows time in seconds and the y-axis shows the rate in packets/second. We have parameterized the constant bit rate attacks based on the LANDER-2002 attack rate, and we observe an attack rate of approximately 4200 packets/s in all three cases. The bottom row shows both the empirical probability distribution function and the cumulative distribution function of the attack rates. The x-axis is the packet rate sampled at 1 millisecond to expose fine-timescale dynamics of the packet rate. Bars in the plot and the left y-axis show the probability distribution function as the normalized frequency of the observed packet rate. The line and the right y-axis show the packet rates plotted as a cumulative distribution function.

Fig. 2: The packet rates and empirical distributions of attack packet arrival times for the three types of attacks used in the experiments in DETERLab: (a) CBR, single attacker; (b) CBR, ten attackers; (c) replay of the LANDER-2002 trace.
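The bottom-row distributions can be reproduced from any packet capture by binning arrival timestamps into 1 ms bins and normalizing the histogram of per-bin counts. The sketch below shows one way to do this with NumPy; the timestamp array is a placeholder for packet arrival times extracted from a trace.

# Empirical PMF and CDF of per-millisecond packet counts, as in Figure 2.
# `timestamps` stands in for packet arrival times (seconds) read from a trace.
import numpy as np

def per_ms_counts(timestamps: np.ndarray, bin_ms: float = 1.0) -> np.ndarray:
    """Count packets in consecutive bins of `bin_ms` milliseconds."""
    width = bin_ms / 1000.0
    edges = np.arange(timestamps.min(), timestamps.max() + width, width)
    counts, _ = np.histogram(timestamps, bins=edges)
    return counts

def pmf_and_cdf(counts: np.ndarray):
    """Normalized frequency of each observed per-bin count, plus its CDF."""
    values, freq = np.unique(counts, return_counts=True)
    pmf = freq / freq.sum()
    return values, pmf, np.cumsum(pmf)

if __name__ == "__main__":
    # Placeholder data: uniform arrivals at ~4200 packets/s for 10 seconds.
    rng = np.random.default_rng(0)
    ts = np.sort(rng.uniform(0.0, 10.0, size=42000))
    values, pmf, cdf = pmf_and_cdf(per_ms_counts(ts))
    for v, p, c in zip(values, pmf, cdf):
        print(f"{v:3d} pkts/ms  pmf={p:.3f}  cdf={c:.3f}")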
Qualitative differences in testbed attacks: We first look at overall features in the different methods of generating testbed attack traffic. We begin with the trace replay in Figure 2c. In the top plot, we see the attack traffic is "noisy" with lots of fine variation. The bottom graph shows that the exact attack rate varies quite a bit in each millisecond with a wide range of possible values, since both the PDF and the CDF show a smooth variation over a continuous range. We argue that these variations are not unusual, but are an inherent part of any real-world distributed denial-of-service attack. These variations arise because attacks originate on different types of hardware, sometimes with other software competing for compute cycles. They also traverse different networks en route to their target, experiencing different kinds of cross-traffic, including delay and sometimes loss. We have seen these effects before in wide-area traces and explored them through controlled experiments [6], [7]. In this experiment we show that, through replay, we can recreate these subtleties on the testbed.

By contrast, Figures 2a and 2b show much less richness and variation. The synthetic, single-source CBR attacker in Figure 2a shows very regular traffic occasionally interrupted by bursts of variation (top graph, with bursts at about 45 s, 85 s, 125 s, etc.). The distribution of arrivals, however, is strongly modal, with 4 or 5 packets per millisecond 80% of the time. There is far less variability than in the real-world attack. This lack of variability presents a much different attack to an intrusion detection system: an IDS that sees this periodic traffic would easily find the synthetic attack while missing the more complex, real-world one.

Figure 2b shows that more synthetic attackers provide a richer attack. Each attacker sends at a lower rate, but the aggregate rate is the same. We see more variation in the CDF (Figure 2b, bottom graph), because each of the 10 attackers is slightly different. However, the aggregate traffic (top graph) is much smoother, because the jitter of the 10 attackers largely cancels out.

We conclude that real-world traffic shows variation and jitter at timescales of milliseconds (Figure 2c). This richness is lost with synthetic attackers, even when run as teams on multiple machines. If a testbed is to reproduce this kind of real-world complexity, it must employ more sophisticated tools and methodologies than just simple synthetic attacks.

Quantifying differences in testbed attacks: To quantify the difference in spread and variance we compute the average absolute deviation (AAD). For a given sampling bin of p seconds, we define the arrival process x(t) as the number of packets that arrive in the bin [t, t + p).
Thus, a T-second-long packet trace will have N = T/p samples. The AAD is then defined as

AAD = \frac{1}{N} \sum_{i=1}^{N} \left| x(i) - \bar{X} \right|

where \bar{X} is the mean of the attack rate. Since this measure does not square the distance from the mean, it is less affected by extreme observations than metrics such as variance and standard deviation. Table I summarizes the mean, median, standard deviation, and average absolute deviation for the three types of attack replay.
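The statistics in Table I can be computed directly from per-millisecond counts like those produced in the earlier binning sketch. The fragment below is a minimal illustration; the placeholder Poisson counts stand in for the measured per-bin packet counts of an attack.

# Mean, median, standard deviation, and average absolute deviation (AAD)
# of per-bin packet counts, matching the metrics reported in Table I.
import numpy as np

def attack_rate_stats(counts: np.ndarray) -> dict:
    """Summary statistics of an arrival process sampled in fixed bins."""
    mean = counts.mean()
    return {
        "mean": mean,
        "median": float(np.median(counts)),
        "std. deviation": counts.std(),
        "avg. abs. dev.": np.abs(counts - mean).mean(),   # the AAD defined above
    }

if __name__ == "__main__":
    # Placeholder: per-millisecond counts for a synthetic ~4.2 pkts/ms stream.
    rng = np.random.default_rng(1)
    counts = rng.poisson(4.2, size=250_000)
    for name, value in attack_rate_stats(counts).items():
        print(f"{name:15s} {value:.2f} pkts/ms")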
The constant bit rate attacks have similar mean and median values, but for the LANDER-2002 attack the mean is almost twice the median. These results indicate that there is a wide range of attack packet arrival rates for real-world attacks when considering fine time scales. The arrival process is averaged out as the size of the sampling bin p increases: as seen in the top row of Figure 2, all three attacks look qualitatively similar when sampled at 1 s. However, as reported in Table I, the statistical properties of the attacks are very different when the attacks are sampled at 1 ms. We see that the standard deviation is much higher as more attackers participate, and the LANDER-2002 trace has the largest standard deviation of all.

The AAD for the single-source attack is the smallest, indicating that this experiment closely resembles a constant bit rate of 4200 packets/s. The AAD for the other two experiments is significantly higher, indicating high variability in the arrival rate of the attack packets.

These measures demonstrate the large difference between simpler synthetic attacks and trace replay. Taken with the qualitative comparison, we argue that it is essential to use trace replay if fine-timescale details of attacks matter to whatever mechanism is being studied in a testbed.

metric (pkts/ms)    synthetic CBR, single   synthetic CBR, multiple   LANDER-2002 trace
mean                4.20                    4.20                      4.11
median              4.00                    4.00                      2.00
std. deviation      1.77                    2.46                      4.78
avg. abs. dev.      0.84                    2.18                      3.62

TABLE I: Statistical properties of two kinds of synthetic attack and captured traces.

IV. Rapid Attack Turn-around: Today's Attacks Today
Trace replay promises to rapidly turn attacks into reusable models. This speed is possible because replay of traces does not require understanding of their contents. They can therefore be used to quickly test new defenses, in parallel with analysis of the underlying attack that can lead to more parsimonious, parameterized models that will eventually explore a wider range of scenarios.

To demonstrate the ability to rapidly take an observation to a replayable tool in a testbed, we next carry out an experiment to demonstrate the process. Our target is a DNS amplification attack [8], such as those that took place in March 2013 [11]. In principle, we could watch our network for a new attack to appear. However, because of publishing deadlines, we instead decided to stage our own mini-attack.

We deployed six DNS servers at ISI in Marina del Rey, California, U.S.A. A trigger computer, also at ISI, then sent a stream of 400 DNS queries per second at these servers with a spoofed source address of a machine loaned to us by Christos Papadopoulos at Colorado State University (CSU) in Ft. Collins, Colorado. We recorded traffic as it left ISI and as it entered CSU. We detected the attack with a custom SNORT rule. We then extracted the
traces and handed them to MAGI for exploration in DETERLab.

TABLE II: Timing (in minutes:seconds) of our experiment for rapid attack turn-around.

event                                      start    duration
Start of 1st segment with attack start     -0:45     1:13
Attack begins                               0:00    15:23
End of 1st segment with attack start        1:13     —
SNORT detection under LANDER                6:52    19:57
Start of last segment with attack end      14:28     1:46
Attack ends                                15:23     —
End of last segment with attack end        16:14     1:46
Processing trace files                     22:09     6:58
Moving trace files to DETER                35:22     2:20
Swap in a topology in DETER                80:40     9:13
Process and deploy traces by MAGI          85:40    22:15

Fig. 3: A DNS amplification attack generated at ISI and captured by the ISI LANDER system: (a) DNS reflection attack in packets/s; (b) DNS reflection attack in Mbits/s; (c) all traffic at LANDER in packets/s. [Time series over 0-1800 seconds.]

Figure 3 shows the attack and background traffic captured by LANDER during the attack. The attack
starts at 403 seconds and lasts for about 900 seconds. Each attack packet is a 3700 B DNS query response, which is fragmented by the network into three packets. The rate changes seen at the start of the attack are due to us exploring different attack parameters for the experiment.

Table II shows the events from trace detection through replay. LANDER splits data into 512 MB segments as it arrives, and processes segments concurrently with collection. The attack lasts for only about 15 minutes and spans 10 segments. We see that although the attack lasts for more than 16 minutes, the first segment containing the start of the attack completes about 1 minute into the attack. The attack continues, but this segment then becomes available for analysis. The change in traffic rates triggers SNORT about 7 minutes into the experiment. SNORT runs quickly (about 5 s per segment), but needs to examine 1 minute of traffic to establish that the attack traffic has changed the baseline from regular operation. Due to this parallelism, we detect the attack while it is still in progress.
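The segment-by-segment detection logic described above can be approximated by comparing each segment's packet rate against a baseline learned from earlier segments. The sketch below is a generic threshold detector, not the custom SNORT rule used in the experiment; the threshold factor and the example segment rates are illustrative.

# Toy per-segment rate-change detector, in the spirit of the baseline check
# described above. This is not the custom SNORT rule; the threshold factor
# and the example segment rates are illustrative values.
from statistics import mean

def detect_rate_change(segment_rates: list[float], factor: float = 3.0) -> int | None:
    """Return the index of the first segment whose packet rate exceeds
    `factor` times the baseline learned from the segments before it."""
    for i in range(1, len(segment_rates)):
        baseline = mean(segment_rates[:i])
        if segment_rates[i] > factor * baseline:
            return i
    return None

if __name__ == "__main__":
    # Hypothetical per-segment packet rates (packets/s): attack begins at index 4.
    rates = [900.0, 950.0, 880.0, 910.0, 4100.0, 4200.0]
    print("attack detected in segment", detect_rate_change(rates))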
We do not begin processing the trace files until the attack completes. When it completes, we convert the traces to pcap format, compress them, and move them from the LANDER system onto shared network storage. The 17 minutes of traces are 1278 MB of data compressed. This processing takes about 7 minutes. If our system were automated, processing and transfer could run concurrently with the attack, reducing this time greatly.

Once LANDER detects and provides the attack trace files, we copy the data to the DETER testbed's shared file system, from where it can be accessed by the MAGI attack-replay agent during the experiment. Transferring the data to DETER takes about 2.5 minutes. We now swap in an experiment and orchestrate it using the MAGI toolkit. Once the attack-replay agent is deployed on the attacker nodes in the topology, the agent makes a copy of the trace file on the local disk for processing. The agent then installs the tcpreplay tools, uncompresses the trace
ļ¬le, and changes the victim IP address and source and
destination MAC addresses in the trace. Processing the
trace ļ¬le for replay and deploying it into the testbed takes
about 22 minutes. The attack-replay agent signals the orchestrator that it is ready to start the experiment. For
the scenarios explored in the paper, the attack is replayed
100 seconds after the start of the non-malicious traļ¬ƒc.
This exercise shows that we are able to turn around an
attack into a testbed experiment in less than two hours.
The main uncertainty in the process is attack detection,
moving data from the locations of collection to replay, and
human decision making. Our current approach is far from
optimized. With better pipelining, we believe we could
6

replay attacks while they are still underway. Also with
greater automation, some steps that currently are humandriven can be removed. We believe this simple exercise
shows the potential of our approach for researchers requiring fresh data.
V. Conclusions
This paper has outlined the advantages and the potential of trace replay as a means of improving network security research. We show that replaying real-world attacks captures fine time-scale details that are absent in synthetically generated constant bit rate attacks. Integrating LANDER data sets into DETERLab provides higher quality data for testbed experiments. The methodology presented in this paper provides a way to rapidly incorporate new attacks into DETERLab experiments. Our tools and datasets are available for public use [4]. Please contact the authors if they could be of use to you.
Acknowledgments
We thank Christos Papadopoulos and Kaustubh Gadkari of Colorado State University for helping us to carry out the experiment in Section IV as the cooperative victim of the attack.

The research done by Alefiya Hussain is supported by the Department of Homeland Security and Space and Naval Warfare Systems Center, San Diego, under Contract No. N66001-10-C-2018.

Research done by Yuri Pradkin and John Heidemann in this paper is partially supported by the U.S. Department of Homeland Security Science and Technology Directorate, Cyber Security Division, via SPAWAR Systems Center Pacific under Contract No. N66001-13-C-3001. John Heidemann is also partially supported by DHS BAA 11-01-RIKA and the Air Force Research Laboratory, Information Directorate, under agreement number FA8750-12-2-0344.

The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of SSC-Pacific.
References
[1] Eugene Albin and Neil C. Rowe. A realistic experimental comparison of the Suricata and Snort intrusion-detection systems. In Proceedings of the 26th International Conference on Advanced Information Networking and Applications Workshops, pages 122-127, Fukuoka, Japan, March 2012. IEEE.
[2] Jay Beale. Snort 2.1 Intrusion Detection. Syngress, second edition, May 2004.
[3] Terry Benzel. The science of cyber security experimentation: the DETER project. In Proceedings of the 27th Annual Computer Security Applications Conference, ACSAC '11, pages 137-148, New York, NY, USA, 2011. ACM.
[4] Alefiya Hussain. MAGI replay agent and LANDER-2002 attack scenario. Tools and data available at http://montage.isi.deterlab.net/magi/hst2013tools, July 2013.
[5] Alefiya Hussain, Genevieve Bartlett, Yuri Pryadkin, John Heidemann, Christos Papadopoulos, and Joseph Bannister. Experiences with a continuous network tracing infrastructure. In Proceedings of the ACM SIGCOMM MineNet Workshop, pages 185-190, Philadelphia, PA, USA, August 2005. ACM.
[6] Alefiya Hussain, John Heidemann, and Christos Papadopoulos. A framework for classifying denial of service attacks. In Proceedings of the ACM SIGCOMM Conference, pages 99-110, Karlsruhe, Germany, August 2003. ACM.
[7] Alefiya Hussain, John Heidemann, and Christos Papadopoulos. Identification of repeated denial of service attacks. In Proceedings of IEEE INFOCOM, Barcelona, Spain, April 2006. IEEE.
[8] Georgios Kambourakis, Tassos Moschos, Dimitris Geneiatakis, and Stefanos Gritzalis. A fair solution to DNS amplification attacks. In Proceedings of the Second IEEE International Workshop on Digital Forensics and Incident Analysis (WDFIA), pages 38-47. IEEE, August 2007.
[9] Dina Katabi and Charles Blake. Inferring congestion sharing and path characteristics from packet interarrival times. Technical Report MIT-LCS-TR-828, Massachusetts Institute of Technology, Laboratory for Computer Science, 2001.
[10] Kun-Chan Lan and John Heidemann. A tool for RApid Model Parameterization and its applications. In Proceedings of the Workshop on Models, Methods and Tools for Reproducible Network Research (MoMeTools), Karlsruhe, Germany, August 2003. ACM.
[11] John Markoff and Nicole Perlroth. Attacks used the Internet against itself to clog traffic. New York Times, page B1, March 28, 2013.
[12] Metasploit Framework website. http://www.metasploit.com/.
[13] Vern Paxson. Bro: A system for detecting network intruders in real-time. Computer Networks, 31(23-24):2435-2463, December 1999.
[14] Tcpreplay toolkit. http://tcpreplay.synfin.net.
[15] The DETER Project. http://www.deter-project.org.
[16] USC/LANDER project. Scrambled Internet trace measurement dataset, PREDICT ID usc-lander/dostraces-20020629. Provided by the USC/LANDER project, http://www.isi.edu/ant/lander, June 2002. Traces taken 2002-06-29 to 2003-11-30.
[17] Michele C. Weigle, Prashanth Adurthi, FĆ©lix HernĆ”ndez-Campos, Kevin Jeffay, and F. Donelson Smith. Tmix: a tool for generating realistic TCP application workloads in ns-2. SIGCOMM Computer Communication Review, 36(3):65-76, July 2006.
[18] Jun Xu, Jinliang Fan, Mostafa H. Ammar, and Sue B. Moon. Prefix-preserving IP address anonymization: Measurement-based security evaluation and a new cryptography-based scheme. In Proceedings of the 10th IEEE International Conference on Network Protocols, pages 280-289, Washington, DC, USA, November 2002. IEEE.

More Related Content

What's hot

Auto sign an automatic signature generator for high-speed malware filtering d...
Auto sign an automatic signature generator for high-speed malware filtering d...Auto sign an automatic signature generator for high-speed malware filtering d...
Auto sign an automatic signature generator for high-speed malware filtering d...
UltraUploader
Ā 
Mas based framework to protect cloud computing against ddos attack
Mas based framework to protect cloud computing against ddos attackMas based framework to protect cloud computing against ddos attack
Mas based framework to protect cloud computing against ddos attack
eSAT Journals
Ā 

What's hot (19)

Qualifying exam-2015-final
Qualifying exam-2015-finalQualifying exam-2015-final
Qualifying exam-2015-final
Ā 
Ananth1
Ananth1Ananth1
Ananth1
Ā 
Syed Ubaid Ali Jafri - Black Box Penetration testing for Associates
Syed Ubaid Ali Jafri - Black Box Penetration testing for AssociatesSyed Ubaid Ali Jafri - Black Box Penetration testing for Associates
Syed Ubaid Ali Jafri - Black Box Penetration testing for Associates
Ā 
ADRISYA: A FLOW BASED ANOMALY DETECTION SYSTEM FOR SLOW AND FAST SCAN
ADRISYA: A FLOW BASED ANOMALY DETECTION SYSTEM FOR SLOW AND FAST SCANADRISYA: A FLOW BASED ANOMALY DETECTION SYSTEM FOR SLOW AND FAST SCAN
ADRISYA: A FLOW BASED ANOMALY DETECTION SYSTEM FOR SLOW AND FAST SCAN
Ā 
Testbed For Ids
Testbed For IdsTestbed For Ids
Testbed For Ids
Ā 
DoS Forensic Exemplar Comparison to a Known Sample
DoS Forensic Exemplar Comparison to a Known SampleDoS Forensic Exemplar Comparison to a Known Sample
DoS Forensic Exemplar Comparison to a Known Sample
Ā 
A SYSTEM FOR VALIDATING AND COMPARING HOST-BASED DDOS DETECTION MECHANISMS
A SYSTEM FOR VALIDATING AND COMPARING HOST-BASED DDOS DETECTION MECHANISMSA SYSTEM FOR VALIDATING AND COMPARING HOST-BASED DDOS DETECTION MECHANISMS
A SYSTEM FOR VALIDATING AND COMPARING HOST-BASED DDOS DETECTION MECHANISMS
Ā 
AN ACTIVE HOST-BASED INTRUSION DETECTION SYSTEM FOR ARP-RELATED ATTACKS AND I...
AN ACTIVE HOST-BASED INTRUSION DETECTION SYSTEM FOR ARP-RELATED ATTACKS AND I...AN ACTIVE HOST-BASED INTRUSION DETECTION SYSTEM FOR ARP-RELATED ATTACKS AND I...
AN ACTIVE HOST-BASED INTRUSION DETECTION SYSTEM FOR ARP-RELATED ATTACKS AND I...
Ā 
Resource Allocation for Antivirus Cloud Appliances
Resource Allocation for Antivirus Cloud AppliancesResource Allocation for Antivirus Cloud Appliances
Resource Allocation for Antivirus Cloud Appliances
Ā 
A Back Propagation Neural Network Intrusion Detection System Based on KVM
A Back Propagation Neural Network Intrusion Detection System Based on KVMA Back Propagation Neural Network Intrusion Detection System Based on KVM
A Back Propagation Neural Network Intrusion Detection System Based on KVM
Ā 
Mas based framework to protect cloud computing
Mas based framework to protect cloud computingMas based framework to protect cloud computing
Mas based framework to protect cloud computing
Ā 
Auto sign an automatic signature generator for high-speed malware filtering d...
Auto sign an automatic signature generator for high-speed malware filtering d...Auto sign an automatic signature generator for high-speed malware filtering d...
Auto sign an automatic signature generator for high-speed malware filtering d...
Ā 
Aw36294299
Aw36294299Aw36294299
Aw36294299
Ā 
Mas based framework to protect cloud computing against ddos attack
Mas based framework to protect cloud computing against ddos attackMas based framework to protect cloud computing against ddos attack
Mas based framework to protect cloud computing against ddos attack
Ā 
AI for Cybersecurity Innovation
AI for Cybersecurity InnovationAI for Cybersecurity Innovation
AI for Cybersecurity Innovation
Ā 
M43057580
M43057580M43057580
M43057580
Ā 
How the CC Harmonizes with Secure Software Development Lifecycle
How the CC Harmonizes with Secure Software Development LifecycleHow the CC Harmonizes with Secure Software Development Lifecycle
How the CC Harmonizes with Secure Software Development Lifecycle
Ā 
B.tech Final year Cryptography Project
B.tech Final year Cryptography ProjectB.tech Final year Cryptography Project
B.tech Final year Cryptography Project
Ā 
HARDWARE SECURITY IN CASE OF SCAN-BASED ATTACK ON CRYPTO-HARDWARE
HARDWARE SECURITY IN CASE OF SCAN-BASED ATTACK ON CRYPTO-HARDWAREHARDWARE SECURITY IN CASE OF SCAN-BASED ATTACK ON CRYPTO-HARDWARE
HARDWARE SECURITY IN CASE OF SCAN-BASED ATTACK ON CRYPTO-HARDWARE
Ā 

Similar to Replay of Malicious Traffic in Network Testbeds

Unveiling-Patchwork
Unveiling-PatchworkUnveiling-Patchwork
Unveiling-Patchwork
Brandon Levene
Ā 
COPYRIGHTThis thesis is copyright materials protected under the .docx
COPYRIGHTThis thesis is copyright materials protected under the .docxCOPYRIGHTThis thesis is copyright materials protected under the .docx
COPYRIGHTThis thesis is copyright materials protected under the .docx
voversbyobersby
Ā 

Similar to Replay of Malicious Traffic in Network Testbeds (20)

Secure intrusion detection and countermeasure selection in virtual system usi...
Secure intrusion detection and countermeasure selection in virtual system usi...Secure intrusion detection and countermeasure selection in virtual system usi...
Secure intrusion detection and countermeasure selection in virtual system usi...
Ā 
06558266
0655826606558266
06558266
Ā 
A SYSTEM FOR VALIDATING AND COMPARING HOST-BASED DDOS DETECTION MECHANISMS
A SYSTEM FOR VALIDATING AND COMPARING HOST-BASED DDOS DETECTION MECHANISMSA SYSTEM FOR VALIDATING AND COMPARING HOST-BASED DDOS DETECTION MECHANISMS
A SYSTEM FOR VALIDATING AND COMPARING HOST-BASED DDOS DETECTION MECHANISMS
Ā 
Machine learning techniques applied to detect cyber attacks on web applications
Machine learning techniques applied to detect cyber attacks on web applicationsMachine learning techniques applied to detect cyber attacks on web applications
Machine learning techniques applied to detect cyber attacks on web applications
Ā 
Machine learning techniques applied to detect cyber attacks on web applications
Machine learning techniques applied to detect cyber attacks on web applicationsMachine learning techniques applied to detect cyber attacks on web applications
Machine learning techniques applied to detect cyber attacks on web applications
Ā 
Intrusion Detection Systems By Anamoly-Based Using Neural Network
Intrusion Detection Systems By Anamoly-Based Using Neural NetworkIntrusion Detection Systems By Anamoly-Based Using Neural Network
Intrusion Detection Systems By Anamoly-Based Using Neural Network
Ā 
Unveiling-Patchwork
Unveiling-PatchworkUnveiling-Patchwork
Unveiling-Patchwork
Ā 
COPYRIGHTThis thesis is copyright materials protected under the .docx
COPYRIGHTThis thesis is copyright materials protected under the .docxCOPYRIGHTThis thesis is copyright materials protected under the .docx
COPYRIGHTThis thesis is copyright materials protected under the .docx
Ā 
1762 1765
1762 17651762 1765
1762 1765
Ā 
1762 1765
1762 17651762 1765
1762 1765
Ā 
7 ijcse-01229
7 ijcse-012297 ijcse-01229
7 ijcse-01229
Ā 
Machine Learning Techniques Used for the Detection and Analysis of Modern Typ...
Machine Learning Techniques Used for the Detection and Analysis of Modern Typ...Machine Learning Techniques Used for the Detection and Analysis of Modern Typ...
Machine Learning Techniques Used for the Detection and Analysis of Modern Typ...
Ā 
EFFICIENT IDENTIFICATION AND REDUCTION OF MULTIPLE ATTACKS ADD VICTIMISATION ...
EFFICIENT IDENTIFICATION AND REDUCTION OF MULTIPLE ATTACKS ADD VICTIMISATION ...EFFICIENT IDENTIFICATION AND REDUCTION OF MULTIPLE ATTACKS ADD VICTIMISATION ...
EFFICIENT IDENTIFICATION AND REDUCTION OF MULTIPLE ATTACKS ADD VICTIMISATION ...
Ā 
Ransomware Attack Detection based on Pertinent System Calls Using Machine Lea...
Ransomware Attack Detection based on Pertinent System Calls Using Machine Lea...Ransomware Attack Detection based on Pertinent System Calls Using Machine Lea...
Ransomware Attack Detection based on Pertinent System Calls Using Machine Lea...
Ā 
Ransomware Attack Detection Based on Pertinent System Calls Using Machine Lea...
Ransomware Attack Detection Based on Pertinent System Calls Using Machine Lea...Ransomware Attack Detection Based on Pertinent System Calls Using Machine Lea...
Ransomware Attack Detection Based on Pertinent System Calls Using Machine Lea...
Ā 
IRJET - Netreconner: An Innovative Method to Intrusion Detection using Regula...
IRJET - Netreconner: An Innovative Method to Intrusion Detection using Regula...IRJET - Netreconner: An Innovative Method to Intrusion Detection using Regula...
IRJET - Netreconner: An Innovative Method to Intrusion Detection using Regula...
Ā 
Online stream mining approach for clustering network traffic
Online stream mining approach for clustering network trafficOnline stream mining approach for clustering network traffic
Online stream mining approach for clustering network traffic
Ā 
Online stream mining approach for clustering network traffic
Online stream mining approach for clustering network trafficOnline stream mining approach for clustering network traffic
Online stream mining approach for clustering network traffic
Ā 
A Review Of Network Security Metrics
A Review Of Network Security MetricsA Review Of Network Security Metrics
A Review Of Network Security Metrics
Ā 
2014 IEEE DOTNET PARALLEL DISTRIBUTED PROJECT A system-for-denial-of-service-...
2014 IEEE DOTNET PARALLEL DISTRIBUTED PROJECT A system-for-denial-of-service-...2014 IEEE DOTNET PARALLEL DISTRIBUTED PROJECT A system-for-denial-of-service-...
2014 IEEE DOTNET PARALLEL DISTRIBUTED PROJECT A system-for-denial-of-service-...
Ā 

More from DETER-Project

In Quest of Benchmarking Security Risks to Cyber-Physical Systems
In Quest of Benchmarking Security Risks to Cyber-Physical SystemsIn Quest of Benchmarking Security Risks to Cyber-Physical Systems
In Quest of Benchmarking Security Risks to Cyber-Physical Systems
DETER-Project
Ā 

More from DETER-Project (8)

The DETER Project: Towards Structural Advances in Experimental Cybersecurity ...
The DETER Project: Towards Structural Advances in Experimental Cybersecurity ...The DETER Project: Towards Structural Advances in Experimental Cybersecurity ...
The DETER Project: Towards Structural Advances in Experimental Cybersecurity ...
Ā 
First Steps Toward Scientific Cyber-Security Experimentation in Wide-Area Cyb...
First Steps Toward Scientific Cyber-Security Experimentation in Wide-Area Cyb...First Steps Toward Scientific Cyber-Security Experimentation in Wide-Area Cyb...
First Steps Toward Scientific Cyber-Security Experimentation in Wide-Area Cyb...
Ā 
The DETER Project: Advancing the Science of Cyber Security Experimentation an...
The DETER Project: Advancing the Science of Cyber Security Experimentation an...The DETER Project: Advancing the Science of Cyber Security Experimentation an...
The DETER Project: Advancing the Science of Cyber Security Experimentation an...
Ā 
In Quest of Benchmarking Security Risks to Cyber-Physical Systems
In Quest of Benchmarking Security Risks to Cyber-Physical SystemsIn Quest of Benchmarking Security Risks to Cyber-Physical Systems
In Quest of Benchmarking Security Risks to Cyber-Physical Systems
Ā 
Testimony of Terry V. Benzel, University of Southern California Information S...
Testimony of Terry V. Benzel, University of Southern California Information S...Testimony of Terry V. Benzel, University of Southern California Information S...
Testimony of Terry V. Benzel, University of Southern California Information S...
Ā 
The Science of Cyber Security Experimentation: The DETER Project
The Science of Cyber Security Experimentation: The DETER ProjectThe Science of Cyber Security Experimentation: The DETER Project
The Science of Cyber Security Experimentation: The DETER Project
Ā 
Current Developments in DETER Cybersecurity Testbed Technology
Current Developments in DETER Cybersecurity Testbed TechnologyCurrent Developments in DETER Cybersecurity Testbed Technology
Current Developments in DETER Cybersecurity Testbed Technology
Ā 
The Science of Cyber Security Experimentation: The DETER Project
The Science of Cyber Security Experimentation: The DETER ProjectThe Science of Cyber Security Experimentation: The DETER Project
The Science of Cyber Security Experimentation: The DETER Project
Ā 

Recently uploaded

Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
vu2urc
Ā 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
Earley Information Science
Ā 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
Enterprise Knowledge
Ā 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
giselly40
Ā 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
Joaquim Jorge
Ā 

Recently uploaded (20)

Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
Ā 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
Ā 
Evaluating the top large language models.pdf
Evaluating the top large language models.pdfEvaluating the top large language models.pdf
Evaluating the top large language models.pdf
Ā 
Tech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfTech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdf
Ā 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
Ā 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
Ā 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Ā 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
Ā 
šŸ¬ The future of MySQL is Postgres šŸ˜
šŸ¬  The future of MySQL is Postgres   šŸ˜šŸ¬  The future of MySQL is Postgres   šŸ˜
šŸ¬ The future of MySQL is Postgres šŸ˜
Ā 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
Ā 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
Ā 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Ā 
Scaling API-first ā€“ The story of a global engineering organization
Scaling API-first ā€“ The story of a global engineering organizationScaling API-first ā€“ The story of a global engineering organization
Scaling API-first ā€“ The story of a global engineering organization
Ā 
Finology Group ā€“ Insurtech Innovation Award 2024
Finology Group ā€“ Insurtech Innovation Award 2024Finology Group ā€“ Insurtech Innovation Award 2024
Finology Group ā€“ Insurtech Innovation Award 2024
Ā 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
Ā 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
Ā 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
Ā 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
Ā 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
Ā 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
Ā 

Replay of Malicious Traffic in Network Testbeds

  • 1. 1 Replay of Malicious Traļ¬ƒc in Network Testbeds Aleļ¬ya Hussain, Yuri Pradkin and John Heidemann University of Southern California / Information Sciences Institute Abstractā€”In this paper we present tools and methods to integrate attack measurements from the Internet with controlled experimentation on a network testbed. We show that this approach provides greater ļ¬delity than synthetic models. We compare the statistical properties of real-world attacks with synthetically generated constant bit rate attacks on the testbed. Our results indicate that trace replay provides ļ¬ne time-scale details that may be absent in constant bit rate attacks. Additionally, we demonstrate the eļ¬€ectiveness of our approach to study new and emerging attacks. We replay an Internet attack captured by the LANDER system on the DETERLab testbed within two hours. I. Introduction Testbeds oļ¬€er a platform for testing and evaluation of networked systems in controlled and repeatable environments. They facilitate studying the impact of network conditions on the security of real network systems and applications [3]. Such study depends on approaches to providing representative network traļ¬ƒc. Many approaches have been proposed to create controlled malicious and non-malicious traļ¬ƒc. Creating traļ¬ƒc in testbeds always involves some level of modeling. Some models may be completely abstract, such as constantbitrate attacks. Others create synthetic traļ¬ƒc models, possibly parameterized from real network traļ¬ƒc (such as [10], [17]), or involve tools speciļ¬c to malicious traļ¬ƒc generation (such as Metasploit [12]). Synthetic traļ¬ƒc models are attractive because they allow control and generation of any amount of traļ¬ƒc with few or no external constraints. Additionally, such models adapt well to testbeds and they have no privacy concerns. The risk of synthetic models is that one gets out only what one puts in: they may miss the dynamics and subtleties of real-world traļ¬ƒc. Often malicious and regular network traļ¬ƒc reļ¬‚ects not only end hosts, but also aspects of the network path [6], [9]. In our previous work, we have shown attack dynamics are inherently dependent on characteristics such as number of attackers and the attacker host environment, such as the type of operating system, system load, and hardware characteristics [7]. These details are often omitted in synthetic models of malicious traļ¬ƒc. Another challenge of synthetic models is that they always lag current malicious traļ¬ƒc. There is a delay in deploying new models, since models are designed only when malware is understood, yet attacks change and mutate rapidly. In the paper, we import denial-of-service (DoS) attacks that we have captured in LANDER, a network traļ¬ƒc analysis system [5], to provide malicious traļ¬ƒc models in the DETERLab cyber security testbed [15]. Our approach recreates source and network path conditions by replaying the attacks on the DETERLab testbed [15]. The contribution of this paper is to describe an approach for attack capture and immediate replay on a networking testbed. The attack replay tools provide a collective representation of one or more attackers located downstream from the capture point mapped onto a speciļ¬ed node in the testbed (Section II). In Section III, we show that this approach provides greater ļ¬delity than synthetic models. 
We also show that our approach can quickly feed new attacks into a testbed for evaluation (Section IV), where each attack source may be isolated and modelled individually, as required by some cyber defense strategies. We expect our tools will provide timely and realistic trafļ¬c with which to test cyber-defenses such as intrusion detection systems and ļ¬rewalls. The data sets, our tools, and the DETERLab testbed are all available for use by network security researchers. II. Approach In this paper we discuss a methodology to detect and capture attacks using LANDER, and then replay this trafļ¬c in a scenario constructed on the DETER testbed. We discuss our approach in this section and present an analysis of the ļ¬delity this approach may oļ¬€er in the following section. A. Capturing Network Traļ¬ƒc with LANDER We capture network traļ¬ƒc with the LANDER system [5]. LANDER supports capture and anonymization of network traļ¬ƒc, using parallel processing to support high data rates and signiļ¬cant and varying amounts of analystdriven-processing. It is currently in operation, with small deployments running on single (but multi-core) systems with commodity network cards, and large deployments capturing traļ¬ƒc at 10 Gb/s with back-end anonymization and analysis on shared 1000-node compute clusters. For this paper, the important aspects of LANDER are that it provides the policies that enable us to get permission to collect packet, the ļ¬‚exibility to plug in diļ¬€erent detection modules to detect interesting events, and the framework to extract anonymized traces in near-real time. LANDER is built around multiple queues of data, each of which triggers a callback to a system- or user-provided function. This structure supports diļ¬€ering policies for data anonymization. By default, data is placed into a raw queue, then anonymized, by changing IP addresses using preļ¬x-preserving anonymization [18] and removing all application payloads. This range of policies is important to enable deployments from environments where high expectations of privacy must be enforced, to laboratories, where it can serve as a tool to audit consenting but not-fully-
  • 2. 2 trusted parties. It allows us to experiment with diļ¬€erent analysis tools, such as our DoS-detection methods [6], as well as oļ¬€-the-shelf tools like SNORT [2], Bro [13], and Suricata [1]. Once an attack is identiļ¬ed, we extract the packet headers for replay. B. DETERLab: A playground for Attacks The DETERLab provides infrastructure, tools, and methodologies for networking and cyber security experimentation. The DETERLab testbed includes 570 PCs and specialized hardware located at two sites; USC/ISI in Marina del Rey, CA and UC Berkeley, Berkeley, CA. The two sites are interconnected with a shared 10Gbps links. The DETERLab containerization tools enable researchers to allocate internet-scale topologies using virtualization of nodes. DETERLab also provides the Montage AGent Infrastructure (MAGI) tools for orchestrating a wide range of networking and cyber security experimentation scenarios [15]. The MAGI tools provide an event-based control overlay for deploying and controlling agent modules on the topology. Each agent enacts a particular behavior, for example, a attacker behavior, and the behavior is conļ¬gured and controlled through an Agent Activation Language (AAL) [15]. C. Replaying An Attack Scenario in DETERLab DETERLab provides a playground for attack experiments, and the MAGI tools allow researchers to systematically explore the experimentation design space. The contribution in this paper is to take traces from the real world with LANDER, and recreate the attack in DETERLab using the MAGI framework. To make this transformation, we developed an new agent for MAGI, and a scenario to replay the attack. To replay the trace with MAGI we developed a attackreplay agent. It uses the Tcpreplay toolset [14] to adapt the real-world trace to the testbed. It uses tcprewrite to remap the source and destination MAC and IP addresses in the LANDER attack trace. If the original attack makes use of IP address spooļ¬ng, the source IP addresses are not modiļ¬ed. It then uses tcpreplay to regenerate the attack with proper timing and replay it from one or more locations on a topology in the testbed. We generate two types of traļ¬ƒc inside DETER. Attack traļ¬ƒc and non-malicious (or background) traļ¬ƒc. We generate attack traļ¬ƒc using the attack-replay agent. We deploy additional agents to generate non-malicious traļ¬ƒc. We mix the two, breaking the scenario into three phases: (i) pre-attack phase during which we generate only nonmalicious network traļ¬ƒc. (ii) the attack phase, during which we deploy the attack into the topology and observe how it interacts with continuing non-malicious traļ¬ƒc. (iii) The post-attack phase when non-malicious traļ¬ƒc recovers from the attack. We generate web traļ¬ƒc to create non-malicious network traļ¬ƒc throughout the duration of the experiment. We use the MAGI webserver and webclient agents. The MAGI webserver agent deploys an Apache web server. The MAGI webclient agent uses Curl to requests objects from the web server. The webclient agent can be conļ¬gured to choose a webserver in sequence or at random from a list of one or more web servers. The size of the web page can be deļ¬ned as a probabilistic distribution. We compare our replay of a real attack to an alternative attack, where we generate synthetic attack traļ¬ƒc with a MAGI attack-CBR agent. The attack-CBR agent generates a constant bit rate packet rate stream on the network that can be used to represent an attack. 
The agent can be parameterized with the protocol, packet size distribution, port number distribution, ramp-up and ramp-down rates, and low and high attack rates. Lastly, we deploy a packetCapture agent to capture all packets seen at the observation point in the topology. The packetCapture agent uses tcpdump to capture packets. It can be configured with a packet filter, a capture packet size, and the capture file storage location.

III. Richness in Replayed Attacks

In this section we analyze the additional fidelity that real-world attacks offer. First, we discuss how the tools from the previous section come together to let us systematically explore and compare the different attacks in DETERLab. We then provide a qualitative and quantitative analysis of the attack replay as compared to synthetically generated CBR attacks.

A. Traces to Experiments

We now describe how we create three types of scenarios to explore the richness of the replayed LANDER attacks and compare them to synthetically generated CBR attacks on the DETER testbed. We conduct three controlled experiments on the DETER testbed: (i) a replay of a real-world attack [16], (ii) a single-attacker, constant-bit-rate, synthetic attack with a rate of 4200 packets/s, and (iii) a ten-attacker, constant-bit-rate, synthetic attack with an aggregate rate of 4200 packets/s (each attacker at 420 packets/s).

Here we use a dataset from LANDER with three separate denial-of-service attacks captured in 2002 and 2003 [16]. In this paper, we use the first attack, a reflector attack that sends ICMP echo reply packets to a victim in Los Nettos. We refer to this attack as LANDER-2002. The mix of attack and background traffic is shown in Figure 1, both as packets per second and bits per second. The attack consists of echo reply packets from approximately 140 attackers, with each packet carrying a 28-byte payload. With small packets, we see the attack most clearly in Figure 1a; it hardly shows up when one measures bitrate (Figure 1b). We chose this attack from the dataset because it had the lowest variance in attack rate. Even though this attack visually looks similar to a constant bit rate attack, in the next section we systematically explore the richness in this attack as compared to synthetically generated CBR attacks. We first process this trace to filter out the attack traffic targeted to the victim.
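The paper does not say which tool performs this filtering step; as one possible illustration, a minimal scapy-based sketch that pulls out the ICMP echo-reply packets destined to the victim might look as follows, with the victim address as a placeholder.

```python
# Sketch of the pre-processing step described above: keep only the attack
# traffic destined to the victim.  The tool choice (scapy) and the victim
# address are our own assumptions.
from scapy.all import rdpcap, wrpcap, IP, ICMP

VICTIM = "10.1.2.2"   # placeholder; the real victim address is anonymized

def extract_attack(in_pcap, out_pcap, victim=VICTIM):
    pkts = rdpcap(in_pcap)
    # LANDER-2002 is a reflector attack of ICMP echo replies (type 0) at the victim.
    attack = [p for p in pkts
              if IP in p and p[IP].dst == victim
              and ICMP in p and p[ICMP].type == 0]
    wrpcap(out_pcap, attack)
    return len(attack)

if __name__ == "__main__":
    print(extract_attack("lander-2002-full.pcap", "lander-2002-attack.pcap"))
```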
Fig. 1: Real-world captured DoS attack LANDER-2002. (a) Attack and background traffic in packets/s; (b) Attack and background traffic in Mbits/s.

This attack was then transferred to the DETER testbed and processed by the MAGI attack-replay agent to generate an attack on the experiment topology. Additionally, we used the MAGI attack-CBR agent to generate the two constant bit rate attacks discussed above. We generate non-malicious background traffic using four webclient agents and two webserver agents on a dumbbell topology. The web clients request a fixed 10 MB web page from a randomly chosen webserver. The attack is launched from a bank of attackers attached to one webserver. We deploy the packetCapture agent at an observation point on the bottleneck link; it captures all packets in the experiment with zero packet loss. The complete experiment description along with the traces will be made available on publication [4].

B. Attack Traffic

As discussed in the previous section, we compare two types of synthetic attacks with attack trace replay to study the dynamics. Figure 2 shows the attack rate in packets/second and the empirical probability distributions for all three types of attack. The top row shows the attack packet rates: the x-axis shows time in seconds and the y-axis shows the rate in packets/second. We parameterized the constant bit rate attacks based on the LANDER-2002 attack rate, and we observe an attack rate of approximately 4200 packets/s in all three cases. The bottom row shows both the empirical probability distribution function and the cumulative distribution function of the attack rates. The x-axis is the packet rate sampled at 1 millisecond to expose fine-timescale dynamics of the packet rate. Bars in the plot and the left y-axis show the probability distribution function as the normalized frequency of the observed packet rate. The line and the right y-axis show the packet rates plotted as a cumulative distribution function.

Qualitative differences in testbed attacks: We first look at overall features in the different methods of generating testbed attack traffic. We begin with the trace replay in Figure 2c. On the top right, we see the attack traffic is "noisy" with lots of fine variation. The bottom right graph shows that the exact attack rate varies quite a bit in each millisecond, with a wide range of possible values, since both the PDF and the CDF show smooth variation over a continuous range. We argue that these variations are not unusual, but are an inherent part of any real-world distributed denial-of-service attack. These variations arise because attacks originate on different types of hardware, sometimes with other software competing for compute cycles. They also traverse different networks en route to their target, experiencing different kinds of cross-traffic, including delay and sometimes loss. We have seen these effects before in wide-area traces and explored them through controlled experiments [6], [7]. In this experiment we show that through replay, we can recreate these subtleties on the testbed. By contrast, Figures 2a and 2b show much less richness and variation.
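The distributions in the bottom row of Figure 2 can be recomputed from any captured trace by binning arrival timestamps at 1 ms. The numpy sketch below is our own illustration of that analysis; the timestamp array is assumed to have been extracted from the pcap separately, which is not shown here.

```python
# Our own illustration of the Figure 2 style analysis: bin packet arrival
# timestamps at 1 ms and derive the empirical PMF and CDF of per-bin counts.
import numpy as np

def rate_distribution(timestamps, bin_size=0.001):
    """Return per-bin counts plus the empirical PMF and CDF of the bin rate."""
    t = np.asarray(timestamps, dtype=float)
    t = t - t.min()
    nbins = int(t.max() / bin_size) + 1
    counts = np.bincount((t / bin_size).astype(int), minlength=nbins)
    rates, freq = np.unique(counts, return_counts=True)
    pmf = freq / freq.sum()        # normalized frequency of each observed rate
    cdf = np.cumsum(pmf)
    return counts, rates, pmf, cdf

if __name__ == "__main__":
    # Synthetic stand-in for real trace timestamps: ~4200 packets/s on average.
    rng = np.random.default_rng(0)
    ts = np.cumsum(rng.exponential(1 / 4200.0, size=100_000))
    counts, rates, pmf, cdf = rate_distribution(ts)
    print("mean rate: %.2f pkts/ms" % counts.mean())
```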
The synthetic, single-source CBR attacker in Figure 2a shows very regular traffic occasionally interrupted by bursts of variation (top graph, with bursts at about 45 s, 85 s, 125 s, etc.). The distribution of arrivals, however, is strongly modal, with 4 or 5 packets per millisecond 80% of the time. There is far less variability than in the real-world attack. This lack of variability presents a much different attack to an intrusion detection system: an IDS that sees this periodic traffic would easily find the synthetic attack while missing the more complex, real-world one. Figure 2b shows that more synthetic attackers provide a richer attack. Each attacker sends at a lower rate, but the aggregate rate is the same. We see more variation in the CDF (Figure 2b, bottom graph), because each of the 10 attackers is slightly different. However, the aggregate traffic (top graph) is much smoother: the jitter of the 10 attackers largely cancels out. We conclude that real-world traffic shows variation and jitter at timescales of milliseconds (Figure 2c). This richness is lost with synthetic attackers, even when run as teams on multiple machines. If a testbed is to reproduce this kind of real-world complexity, it must employ more sophisticated tools and methodologies than simple synthetic attacks.

Quantifying differences in testbed attacks: To quantify the difference in spread and variance we compute the average absolute deviation (AAD). For a given sampling bin of p seconds, we define the arrival process x(t) as the number of packets that arrive in the bin [t, t + p).
Fig. 2: The packet rates and empirical distributions of attack packet arrival times for the three types of attacks used in the experiments in DETERLab. (a) CBR: single attacker; (b) CBR: ten attackers; (c) Replay of the LANDER-2002 trace.

Thus, a T-second-long packet trace will have N = T/p samples. The AAD is then defined as

    \mathrm{AAD} = \frac{1}{N}\sum_{i=1}^{N}\left|x(i) - \bar{X}\right|,

where \bar{X} is the mean of the attack rate. Since this measure does not square the distance from the mean, it is less affected by extreme observations than metrics such as variance and standard deviation.

Table I summarizes the mean, median, standard deviation, and average absolute deviation for the three types of attack replay. The constant bit rate attacks have similar mean and median values, but for the LANDER-2002 attack the mean is almost twice the median. The results indicate that there is a wide range of attack packet arrival rates for real-world attacks when considering fine time scales. The arrival process is averaged out as the size of the sampling bin p increases: as seen in the top row of Figure 2, all three attacks qualitatively look similar when sampled at 1 s. However, as reported in Table I, the statistical properties of the attacks are very different when sampled at 1 ms. We see that the standard deviation is much higher as more attackers participate, and the LANDER-2002 trace has the largest standard deviation of all. The AAD for the single-source attack is the smallest, indicating that this experiment closely resembles a constant bit rate of 4200 packets/s. The AAD for the other two experiments is significantly higher, indicating high variability in the arrival rate of the attack packets. These measures demonstrate the large difference between simpler synthetic attacks and trace replay. Taken with the qualitative comparison, we argue that it is essential to use trace replay if fine timescale details of attacks matter to whatever mechanism is being studied in a testbed.

TABLE I: Statistical properties of two kinds of synthetic attack and captured traces.

  metric (in pkts/ms)   | CBR single | CBR multiple | LANDER-2002 trace
  mean                  | 4.20       | 4.20         | 4.11
  median                | 4.00       | 4.00         | 2.00
  std. deviation        | 1.77       | 2.46         | 4.78
  avg. abs. dev.        | 0.84       | 2.18         | 3.62
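For concreteness, here is a minimal numpy sketch of the AAD defined above, applied to per-bin packet counts such as the 1 ms bins used in Table I; the example data is our own illustration, not the paper's traces.

```python
# Minimal sketch of the AAD metric defined above, computed over per-bin
# packet counts (e.g., the 1 ms bins from the earlier sketch).
import numpy as np

def average_absolute_deviation(counts):
    """AAD = (1/N) * sum |x(i) - mean(x)|; less outlier-sensitive than std."""
    x = np.asarray(counts, dtype=float)
    return np.mean(np.abs(x - x.mean()))

if __name__ == "__main__":
    # A perfectly constant 4.2 pkts/ms stream has AAD of 0, while a bursty
    # stream with the same mean has a much larger AAD.
    steady = np.full(1000, 4.2)
    bursty = np.concatenate([np.zeros(500), np.full(500, 8.4)])
    print(average_absolute_deviation(steady),   # 0.0
          average_absolute_deviation(bursty))   # 4.2
```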
IV. Rapid Attack Turn-around: Today's Attacks Today

Trace replay has the promise to rapidly turn attacks into reusable models. This speed is possible because replay of traces does not require understanding of their contents. Traces can therefore be used to quickly test new defenses, in parallel with analysis of the underlying attack that can lead to more parsimonious, parameterized models that will eventually explore a wider range of scenarios. To show that we can rapidly take an observation to a replayable tool in a testbed, we next carry out an experiment demonstrating the process.

Our target is a DNS amplification attack [8], such as those that took place in March 2013 [11]. In principle, we could watch our network for a new attack to appear. However, because of publishing deadlines, we instead decided to stage our own mini-attack. We deployed six DNS servers at ISI in Marina del Rey, California, U.S.A. A trigger computer, also at ISI, then sent a stream of 400 DNS queries per second at these servers with a spoofed source address of a machine loaned to us by Christos Papadopoulos at Colorado State University (CSU) in Ft. Collins, Colorado. We recorded traffic as it left ISI and as it entered CSU. We detect the attack with a custom SNORT rule. We then extract the traces and hand them to MAGI for exploration in DETERLab.
Figure 3 shows the attack and background traffic captured by LANDER during the attack. The attack starts at 403 seconds and lasts for about 900 seconds. Each attack packet is a 3700-byte DNS query response, which is fragmented by the network into three packets. The rate changes seen at the start of the attack are due to us exploring different attack parameters for the experiment.

Fig. 3: A DNS amplification attack generated at ISI and captured by the ISI LANDER system. (a) DNS reflection attack in packets/s; (b) DNS reflection attack in Mbits/s; (c) All traffic at LANDER in packets/s.
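As a sanity check (our own back-of-the-envelope arithmetic, not from the paper), the staged parameters are consistent with the rates visible in Figure 3 if we assume a standard 1500-byte path MTU:

```python
# Back-of-the-envelope check that the staged attack parameters match the
# rates visible in Figure 3.  The 1500-byte MTU is an assumption.
QUERY_RATE = 400          # spoofed DNS queries per second from the trigger
RESPONSE_BYTES = 3700     # size of each DNS response
MTU = 1500                # assumed path MTU, so each response fragments

fragments_per_response = -(-RESPONSE_BYTES // (MTU - 20))   # ceil; 20 B IP header
pkts_per_sec = QUERY_RATE * fragments_per_response
mbits_per_sec = QUERY_RATE * RESPONSE_BYTES * 8 / 1e6

print(fragments_per_response)   # 3 fragments, matching the text
print(pkts_per_sec)             # ~1200 packets/s, matching Figure 3a
print(round(mbits_per_sec, 1))  # ~11.8 Mbit/s, matching Figure 3b
```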
Table II shows the events from trace detection through replay. LANDER splits data into 512 MB segments as it arrives, and processes segments concurrently with collection. The attack lasts for only about 15 minutes and spans 10 segments. We see that although the attack lasts for more than 15 minutes, the first segment containing the start of the attack completes about 1 minute into the attack. The attack continues, but this segment then becomes available for analysis. The change in traffic rates triggers SNORT about 7 minutes into the experiment. SNORT runs quickly (about 5 s per segment), but needs to examine 1 minute of traffic to establish that the attack traffic has changed the baseline from regular operation. Due to this parallelism, we detect the attack while it is still in progress.

TABLE II: Timing (in minutes:seconds) of our experiment for rapid attack turn-around.

  start  | duration | event
  -0:45  | 1:13     | Start of 1st segment with attack start
   0:00  | 15:23    | Attack begins
   1:13  | -        | End of 1st segment with attack start
   6:52  | 19:57    | SNORT detection under LANDER
  14:28  | 1:46     | Start of last segment with attack end
  15:23  | -        | Attack ends
  16:14  | 1:46     | End of last segment with attack end
  22:09  | 6:58     | Processing trace files
  35:22  | 2:20     | Moving trace files to DETER
  80:40  | 9:13     | Swap in a topology in DETER
  85:40  | 22:15    | Process and deploy traces by MAGI

We do not begin processing the trace files until the attack completes. When it completes, we convert the traces to pcap format, compress them, and move them from the LANDER system onto shared network storage. The 17 minutes of traces are 1278 MB of data compressed. This processing takes about 7 minutes. If our system were automated, processing and transfer could run concurrently with the attack, reducing this time greatly. Once LANDER detects and provides the attack trace files, we copy the data to the DETER testbed's shared file system, from where the MAGI attack-replay agent can access it during the experiment. Transferring the data to DETER takes about 2.5 minutes. We now swap in an experiment and orchestrate it using the MAGI toolkit. Once the attack-replay agent is deployed on the attacker nodes in the topology, the agent makes a copy of the trace file on local disk for processing. The agent then installs the tcpreplay tools, uncompresses the trace file, and changes the victim IP address and the source and destination MAC addresses in the trace. Processing the trace file for replay and deploying it into the testbed takes about 22 minutes. The attack-replay agent signals the orchestrator that it is ready to start the experiment. For the scenarios explored in this paper, the attack is replayed 100 seconds after the start of the non-malicious traffic. This exercise shows that we are able to turn an attack into a testbed experiment in less than two hours.
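The two-hour figure can be read directly off Table II; a trivial check (our own arithmetic, assuming the final MAGI step is the last item on the timeline):

```python
# Rough check of the "less than two hours" claim using the Table II timings:
# the last step starts at 85:40 and takes a further 22:15.
def mmss(s):
    m, sec = s.split(":")
    return int(m) * 60 + int(sec)

total_seconds = mmss("85:40") + mmss("22:15")
print(total_seconds / 60)   # about 108 minutes, i.e. under two hours
```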
The main uncertainties in the process are attack detection, moving data from the point of collection to the point of replay, and human decision making. Our current approach is far from optimized. With better pipelining, we believe we could replay attacks while they are still underway. Also, with greater automation, some steps that are currently human-driven could be removed. We believe this simple exercise shows the potential of our approach for researchers requiring fresh data.

V. Conclusions

This paper has outlined the advantages and the potential of trace replay as a means of improving network security research. We show that replaying real-world attacks captures fine time scale details that are absent in synthetically generated constant bit rate attacks. Integrating LANDER data sets into the DETERLab provides higher quality data for testbed experiments. The methodology presented in this paper provides a way to rapidly incorporate new attacks into DETERLab experiments. Our tools and datasets are available for public use [4]. Please contact the authors if they could be of use to you.

Acknowledgments

We thank Christos Papadopoulos and Kaustubh Gadkari of Colorado State University for helping us carry out the experiment in Section IV as the cooperative victim of the attack. The research done by Alefiya Hussain is supported by the Department of Homeland Security and Space and Naval Warfare Systems Center, San Diego, under Contract No. N66001-10-C-2018. Research done by Yuri Pradkin and John Heidemann in this paper is partially supported by the U.S. Department of Homeland Security Science and Technology Directorate, Cyber Security Division, via SPAWAR Systems Center Pacific under Contract No. N66001-13-C-3001. John Heidemann is also partially supported by DHS BAA 11-01-RIKA and the Air Force Research Laboratory, Information Directorate, under agreement number FA8750-12-2-0344. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of SSC-Pacific.

References

[1] Eugene Albin and Neil C. Rowe. A realistic experimental comparison of the Suricata and Snort intrusion-detection systems. In Proceedings of the 26th International Conference on Advanced Information Networking and Applications Workshops, pages 122-127, Fukuoka, Japan, March 2012. IEEE.
[2] Jay Beale. Snort 2.1 Intrusion Detection. Syngress, second edition, May 2004.
[3] Terry Benzel. The science of cyber security experimentation: the DETER project. In Proceedings of the 27th Annual Computer Security Applications Conference (ACSAC '11), pages 137-148, New York, NY, USA, 2011. ACM.
[4] Alefiya Hussain. MAGI replay agent and LANDER-2002 attack scenario. Tools and data available at http://montage.isi.deterlab.net/magi/hst2013tools, July 2013.
[5] Alefiya Hussain, Genevieve Bartlett, Yuri Pryadkin, John Heidemann, Christos Papadopoulos, and Joseph Bannister. Experiences with a continuous network tracing infrastructure. In Proceedings of the ACM SIGCOMM MineNet Workshop, pages 185-190, Philadelphia, PA, USA, August 2005. ACM.
[6] Alefiya Hussain, John Heidemann, and Christos Papadopoulos. A framework for classifying denial of service attacks. In Proceedings of the ACM SIGCOMM Conference, pages 99-110, Karlsruhe, Germany, August 2003. ACM.
[7] Alefiya Hussain, John Heidemann, and Christos Papadopoulos. Identification of repeated denial of service attacks. In Proceedings of IEEE Infocom, Barcelona, Spain, April 2006. IEEE.
[8] Georgios Kambourakis, Tassos Moschos, Dimitris Geneiatakis, and Stefanos Gritzalis. A fair solution to DNS amplification attacks. In Proceedings of the Second IEEE International Workshop on Digital Forensics and Incident Analysis (WDFIA), pages 38-47. IEEE, August 2007.
[9] Dina Katabi and Charles Blake. Inferring congestion sharing and path characteristics from packet interarrival times. Technical Report MIT-LCS-TR-828, Massachusetts Institute of Technology, Laboratory for Computer Science, 2001.
[10] Kun-Chan Lan and John Heidemann. A tool for RApid Model Parameterization and its applications. In Proceedings of the Workshop on Models, Methods and Tools for Reproducible Network Research (MoMeTools), Karlsruhe, Germany, August 2003. ACM.
[11] John Markoff and Nicole Perlroth. Attacks used the Internet against itself to clog traffic. New York Times, page B1, March 28, 2013.
[12] Metasploit Framework website. http://www.metasploit.com/.
[13] Vern Paxson. Bro: A system for detecting network intruders in real-time. Computer Networks, 31(23-24):2435-2463, December 1999.
[14] Tcpreplay toolkit. http://tcpreplay.synfin.net.
[15] The DETER Project. http://www.deter-project.org.
[16] USC/LANDER project. Scrambled Internet trace measurement dataset, PREDICT ID usc-lander/dostraces-20020629. Provided by the USC/LANDER project, http://www.isi.edu/ant/lander, June 2002. Traces taken 2002-06-29 to 2003-11-30.
[17] Michele C. Weigle, Prashanth Adurthi, Félix Hernández-Campos, Kevin Jeffay, and F. Donelson Smith. Tmix: a tool for generating realistic TCP application workloads in ns-2. SIGCOMM Comput. Commun. Rev., 36(3):65-76, July 2006.
[18] Jun Xu, Jinliang Fan, Mostafa H. Ammar, and Sue B. Moon. Prefix-preserving IP address anonymization: Measurement-based security evaluation and a new cryptography-based scheme. In Proceedings of the 10th IEEE International Conference on Network Protocols, pages 280-289, Washington, DC, USA, November 2002. IEEE.