8. Verifying Network Properties
• "Does my SDN work?"
  – E.g., functional correctness
  – Same control program + OF functional fidelity
• Does not try to solve a harder problem:
  – "How does my SDN/network perform?"
  – That is, performance properties.
  – No guarantee or expectation of perf. fidelity.
9. Example: Connectivity in a Fat Tree
[Figure: fat-tree topology of servers and switches]
20x 4-port switches, 16x servers
sudo mn --custom ft.py --topo ft,4 --test pingall
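The switch and server counts on this slide follow from the standard k-ary fat-tree formulas; a small sketch (the function name is my own, not from the talk):

```python
def fat_tree_size(k):
    """Switch and host counts for a k-ary fat tree built from k-port switches."""
    core = (k // 2) ** 2           # (k/2)^2 core switches
    aggregation = k * (k // 2)     # k pods x k/2 aggregation switches each
    edge = k * (k // 2)            # k pods x k/2 edge switches each
    hosts = k * (k // 2) ** 2      # k pods x (k/2)^2 hosts each
    return core + aggregation + edge, hosts

print(fat_tree_size(4))  # (20, 16): the slide's 20x 4-port switches, 16x servers
```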
10. Verifying Network Properties
• "Does my SDN work?"
  – E.g., functional correctness
  – Same control program + OF functional fidelity
• "How does my SDN perform?"
  – E.g., performance properties
  – No guarantee or even expectation here
11. Example: Performance in a Fat Tree
[Figure: fat tree of hosts and switches; flows A→Y and B→Z take disjoint paths]
Two 1 Gb/s flows. Disjoint paths. Full throughput.
12. Example: Performance in a Fat Tree
[Figure: flows A→Y and B→Z collide at switch X on overlapping paths]
Two 1 Gb/s flows. Overlapping paths. Collision: half throughput.
Throughput might reduce. But by how much? How do you trust the results?
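The expected numbers can be sanity-checked with back-of-the-envelope fair sharing (the 1 Gb/s capacity is from the slide; the helper is illustrative):

```python
def fair_share_gbps(n_flows_on_link, capacity_gbps=1.0):
    """Ideal per-flow throughput when n flows share one bottleneck link."""
    return capacity_gbps / n_flows_on_link

print(fair_share_gbps(1))  # 1.0 -> disjoint paths: full throughput
print(fair_share_gbps(2))  # 0.5 -> overlapping paths: half throughput
```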
13. Sources of Emulator Infidelity: Event Overlap
[Figure: timeline of link events — A: send request, B: init, packet xmit 1, packet xmit 2, B: send response — comparing a real setup (B waits idle between events) with emulation, where events from A and B overlap in real time]

// A: Client
while (1) {
    send_request(socket);
    wait_for_reply(socket);
}

// B: Server
init();
while (1) {
    wait_for_request(socket);
    send_response(socket);
}
14. Sources of Emulator Infidelity: Software Forwarding
[Figure: same client/server timeline as the previous slide; software forwarding adds variable delays to packet transmissions]
15. How can we trust emulator results?
[Figure: same client/server timeline as the previous slides]
CPU ≤ 50%, so not overloaded, right? Wrong.
18. A Workflow for High Fidelity Emulation
Create experiment → Run the experiment on a PC, with logging → Analyze experiment fidelity using "network invariants"
• Invariants hold: High Fidelity Emulation!
• Instance(s) of behavior that differ from hardware: run again with increased resources or reduced experiment scale
Open questions — 1: what to log? 2: which invariants? 3: how close?
19. Packet Gap Invariants
[Figure: switch with a queue feeding a link]
Packet spacing (when the queue is occupied): is Rmeasured ≤ Rconfigured (the configured link capacity)?
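The packet-gap invariant above can be checked offline from dequeue-event logs; a minimal sketch, assuming timestamps in seconds and a fixed packet size (function and parameter names are my own):

```python
def check_packet_gap_invariant(dequeue_times_s, pkt_bytes, rate_bps):
    """Flag dequeue gaps that exceed one ideal packet time while the queue
    is backlogged — a sign the emulator fell behind real hardware."""
    ideal_gap = pkt_bytes * 8 / rate_bps        # seconds per packet on the wire
    violations = []
    for t0, t1 in zip(dequeue_times_s, dequeue_times_s[1:]):
        error = (t1 - t0) - ideal_gap
        if error > ideal_gap:                   # delayed by more than one packet time
            violations.append((t0, error))
    return ideal_gap, violations

# 1500-byte packets on a 100 Mb/s link: ideal gap is 120 microseconds.
gap, bad = check_packet_gap_invariant([0.0, 0.000120, 0.000240, 0.000600], 1500, 100e6)
print(gap, bad)  # one violation: the last gap is 360 us, 240 us over the ideal
```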
20. Example Workflow for One Invariant
Create experiment → Run the experiment on a PC, with logging → Analyze experiment fidelity using "network invariants"
• Invariants hold: High Fidelity Emulation!
• Instance(s) of behavior that differ from hardware: run again with increased resources or reduced experiment scale
1: Log dequeue events. 2: Measure packet spacing. 3: Is any packet delayed by more than one packet time?
If this workflow is valid, "pass" ⇒ same result as hardware.
Case study: DCTCP.
21. Data Center TCP (DCTCP)
[Figure: packets-in-queue vs. time for TCP and DCTCP; DCTCP holds the queue near the marking threshold]
Both keep the queue occupied and achieve 100% throughput.
Packet spacing we should see: steady, back-to-back dequeues while the queue is occupied.
23. Emulator Results
Does checking an invariant (packet spacing) identify wrong results?
• 80 Mb/s: 100% tput, 6 pkts var — same result
• 160 Mb/s: 100% tput, 6 pkts var — same result
• 320 Mb/s: wrong; limits exceeded
24. Packet Spacing Invariant w/DCTCP
[Figure: CCDF, percent (log) vs. spacing error (log), at low, medium, and high rates; reference marks at 1 pkt and 25 pkts]
10% of the time, error exceeds one packet.
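The CCDF on this slide is computed directly from the logged spacing errors; a minimal sketch with made-up sample data (the 10% figure is from the slide, the samples are not real measurements):

```python
def fraction_exceeding(errors_in_pkt_times, threshold_pkt_times=1.0):
    """Fraction of spacing-error samples larger than the threshold:
    one point on the complementary CDF (CCDF)."""
    n = len(errors_in_pkt_times)
    return sum(1 for e in errors_in_pkt_times if e > threshold_pkt_times) / n

# Hypothetical errors, in units of one packet time:
samples = [0.1] * 9 + [5.0]
print(fraction_exceeding(samples))  # 0.1 -> error exceeds one packet 10% of the time
```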
25. Packet Spacing Invariant w/DCTCP
[Figure: CCDF (percent, 0–100 on a log scale) of percentage deviation from expected spacing, with a 1-pkt-error reference line; curves for 10, 20, 40, and 80 Mb/s]
26. Packet Spacing Invariant w/DCTCP
[Figure: same CCDF as the previous slide]
27. Packet Spacing Invariant w/DCTCP
[Figure: same CCDF, now including 160 Mb/s]
160 Mb/s: failed emulation? The beauty of network invariants is that they catch and quantify the error in this run.
28. Demonstrating Fidelity
• Microbenchmarks
• Validation Tests
• Reproducing Published Research
  – Do complex results match published ones that used custom hardware topologies?
    • DCTCP [Alizadeh, SIGCOMM 2010]
    • Router Buffer Sizing [Appenzeller, SIGCOMM 2004]
    • Hedera ECMP [Al-Fares, NSDI 2010]
31. → Pick a paper.
→ Reproduce a key result, or challenge it (with data).
→ You have: $100 EC2 credit, 3 weeks, and must use Mininet-HiFi.
32. Project Topics: Transport, Data Center, Queuing
CoDel, HULL, MPTCP, Outcast, Jellyfish, DCTCP, Incast, Flow Completion Time, Hedera, DCell, TCP Initial Congestion Window, Misbehaving TCP Receivers, RED
33. [Same project-topic list as the previous slide]
37 students, 18 projects, 16 replicated
34. [Same project-topic list]
37 students, 18 projects, 16 replicated, 4 beyond
35. [Same project-topic list]
37 students, 18 projects, 16 replicated, 4 beyond, 2 not replicated
36. Reproduced Research Examples
reproducingnetworkresearch.wordpress.com (or Google "reproducing network research")
37. Why might results be different?
• Student error / out of time: Incast
• Original result fragile: RED
• Insufficient emulator capacity to match hardware of original experiment
  – Option 1: Scale up
  – Option 2: Slow down: Time Dilation
  – Option 3: Scale out: Cluster Edition
38. Questions?
• Check out Bob's Cluster Edition demo
Nikhil Handigol, Brandon Heller, Vimal Jeyakumar, Bob Lantz [Team Mininet]