Mininet: Moving Forward


  1. Mininet: Moving Forward. Nikhil Handigol, Brandon Heller, Vimal Jeyakumar, Bob Lantz [Team Mininet]. ONRC Annual Day 2013
  2. Intro to Mininet
  3. Testing an SDN Idea • Physical switches/hosts? • Testbed? • Simulator? • Emulator? → Mininet
  4. Mininet: your SDN command-line interface and scripting API. Runs in a VM, on a native laptop, or on a native server. To start up Mininet: > sudo mn
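To give a feel for the scripting API, here is a minimal sketch, assuming a standard Mininet install: build the same one-switch, two-host network that > sudo mn creates, check connectivity, and tear it down.

    #!/usr/bin/env python
    # Minimal Mininet script: one switch, two hosts, connectivity check.
    from mininet.net import Mininet
    from mininet.topo import SingleSwitchTopo
    from mininet.log import setLogLevel

    if __name__ == '__main__':
        setLogLevel('info')
        net = Mininet(topo=SingleSwitchTopo(k=2))  # h1 and h2 behind s1
        net.start()
        net.pingAll()  # functional check: can every host pair communicate?
        net.stop()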
  5. Open-Source System w/ Active User Community • 1000+ users • Active mailing list: 700+ users, 200+ domains • 20 Google Summer of Code applications • GitHub pull requests • mininet.github.com
  6. Talk Outline • Mininet 1.0: functional fidelity • Mininet 2.0 (HiFi) – Performance Fidelity – "Network Invariants" – Reproducible Research • Prototypes: greater scale
  7. Mininet 2.0 (HiFi)
  8. Verifying Network Properties • "Does my SDN work?" – E.g., functional correctness – Same control program + OpenFlow → functional fidelity • Does not try to solve a harder problem: – "How does my SDN/network perform?" – That is, performance properties. – No guarantee or expectation of performance fidelity.
  9. Example: Connectivity in a Fat Tree. 20x 4-port switches, 16x servers. sudo mn --custom ft.py --topo ft,4 --test pingall
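The ft.py used above is not shown in the talk. Purely to illustrate the --custom mechanism, here is a hypothetical topology file in the same spirit: mn loads the file and looks up the name passed to --topo in its topos dict. The file, class, and key names below are invented, and this simple two-level tree is not a real fat tree.

    # myft.py: hypothetical custom topology file (not the talk's ft.py)
    from mininet.topo import Topo

    class TreeishTopo(Topo):
        "Illustrative two-level tree: k edge switches with k hosts each."
        def build(self, k=4):
            core = self.addSwitch('s1')
            for i in range(k):
                edge = self.addSwitch('s%d' % (i + 2))
                self.addLink(core, edge)
                for j in range(k):
                    host = self.addHost('h%d' % (i * k + j + 1))
                    self.addLink(edge, host)

    # Looked up by: sudo mn --custom myft.py --topo treeish,4 --test pingall
    topos = {'treeish': TreeishTopo}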
  10. Verifying Network Properties • "Does my SDN work?" – E.g., functional correctness – Same control program + OpenFlow → functional fidelity • "How does my SDN perform?" – E.g., performance properties – No guarantee or even expectation here
  11. Example: Performance in a Fat Tree. Two 1 Gb/s flows (hosts A, B, Y, Z), disjoint paths: full throughput.
  12. Example: Performance in a Fat Tree. Two 1 Gb/s flows, overlapping paths: collision at switch X, half throughput. Throughput might reduce. But by how much? How do you trust the results?
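To put a number on "by how much", Mininet can run iperf between host pairs directly. A sketch, assuming a running network object net with hosts named h1 and h16 (the names are illustrative):

    # Measure TCP throughput between two hosts in a running network.
    a, z = net.get('h1', 'h16')
    server_bw, client_bw = net.iperf((a, z))  # reported bandwidth strings
    print('measured: %s (server), %s (client)' % (server_bw, client_bw))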
  13. Sources of Emulator Infidelity: Event Overlap. [Figure: timeline of link events for a client/server request/response loop, comparing a real setup with the emulator; overlapping events (send request, init, packet transmissions, send response) can be serialized in a different order under emulation.]
  14. Sources of Emulator Infidelity: Software Forwarding. [Same timeline figure: software forwarding adds variable delays to packet transmissions.]
  15. How can we trust emulator results? CPU ≤ 50%, so not overloaded, right? Wrong.
  16. The Mininet-HiFi Approach: Resource Isolation + a Fidelity Monitor. [Figure: fidelity meter from LOW through MEDIUM to HIGH.] Example limits: 500 MHz of CPU per host, 20 packet buffers per port, 10 Mb/s links with 1 ms delay.
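In Mininet 2.0's API, this isolation is expressed with CPU-limited hosts and traffic-shaped links. A sketch roughly matching the slide's numbers (10 Mb/s, 1 ms delay, 20-packet buffers); note the API takes a CPU share of the machine rather than a clock rate, so the 0.25 fraction below is illustrative, not 500 MHz:

    from mininet.net import Mininet
    from mininet.node import CPULimitedHost
    from mininet.link import TCLink
    from mininet.topo import Topo

    class HiFiTopo(Topo):
        def build(self):
            h1 = self.addHost('h1', cpu=0.25)  # fraction of overall CPU
            h2 = self.addHost('h2', cpu=0.25)
            s1 = self.addSwitch('s1')
            # 10 Mb/s links, 1 ms delay, 20-packet queue per port
            opts = dict(bw=10, delay='1ms', max_queue_size=20)
            self.addLink(h1, s1, **opts)
            self.addLink(h2, s1, **opts)

    net = Mininet(topo=HiFiTopo(), host=CPULimitedHost, link=TCLink)
    net.start()
    net.pingAll()
    net.stop()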
  17. Network Invariants
  18. A Workflow for High-Fidelity Emulation. Create an experiment; run it on a PC, with logging (1: what to log?); analyze experiment fidelity using "network invariants" (2: which invariants? 3: how close?). If the invariants hold: high-fidelity emulation! If instances of behavior differ from hardware: run again with more resources or a smaller experiment.
  19. Packet Gap Invariants. [Figure: queue, link, switch, queue.] When the queue is occupied, packet spacing is set by link capacity: check R_measured ≤ R_configured.
  20. Example Workflow for One Invariant (test case: DCTCP). 1: Log dequeue events. 2: Measure packet spacing. 3: Is any packet delayed by more than one packet time? If this workflow is valid, "pass" → same result as hardware. A sketch of the check follows below.
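A minimal sketch of step 3's check. The log format here is hypothetical (Mininet-HiFi's actual instrumentation is not shown): given dequeue timestamps and packet sizes recorded while the queue stayed occupied, flag any departure gap that exceeds the ideal serialization time by more than one packet time.

    def check_packet_gaps(events, link_bps):
        """events: list of (timestamp_sec, pkt_size_bytes) dequeue records
        covering an interval when the queue was continuously occupied."""
        violations = []
        for (t0, size0), (t1, _) in zip(events, events[1:]):
            ideal = size0 * 8.0 / link_bps  # time to serialize the packet
            gap = t1 - t0
            if gap > 2 * ideal:  # delayed by more than one packet time
                violations.append((t0, gap, ideal))
        return violations

    # 1500-byte packets on a 10 Mb/s link: the ideal gap is 1.2 ms
    log = [(0.0000, 1500), (0.0012, 1500), (0.0024, 1500), (0.0055, 1500)]
    print(check_packet_gaps(log, 10e6))  # flags the 3.1 ms gap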
  21. Data Center TCP (DCTCP). [Figure: packets in queue over time, TCP vs. DCTCP, with DCTCP's marking threshold.] Packet spacing we should see: queue occupied, 100% throughput, for both TCP and DCTCP.
  22. Hardware Results, 100 Mb/s. [Plot from q-dctcp-plot.txt: packets in queue vs. time, 0 to 120 seconds, 0 to 40 packets.] Queue occupied, 100% throughput, 6 packets of variation.
  23. Emulator Results. Does checking an invariant (packet spacing) identify wrong results? 80 Mb/s: same result (100% tput, 6 pkts variation). 160 Mb/s: same result (100% tput, 6 pkts variation). 320 Mb/s: wrong; limits exceeded.
  24. Packet Spacing Invariant w/ DCTCP. [CCDF plot, log-log: percent of time vs. error in packets, for high, medium, and low load; markers at 1 packet and 25 packets.] At high load, 10% of the time the error exceeds one packet.
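The CCDF on these slides is simple to compute from the per-packet spacing errors; a sketch (the error values below are invented for illustration):

    def ccdf(samples):
        "Return (value, percent of samples strictly greater) pairs."
        xs = sorted(samples)
        n = len(xs)
        return [(x, 100.0 * (n - i - 1) / n) for i, x in enumerate(xs)]

    errors_pkts = [0.1, 0.3, 0.5, 0.9, 1.2, 2.0, 5.0, 25.0]  # illustrative
    for err, pct in ccdf(errors_pkts):
        print('error > %.1f pkts: %.1f%% of samples' % (err, pct))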
  25. Packet Spacing Invariant w/ DCTCP. [CCDF plot: percentage deviation from expected packet spacing at 10, 20, 40, and 80 Mb/s, with a one-packet error line.]
  26. [Build of the same CCDF plot.]
  27. Packet Spacing Invariant w/ DCTCP. Same plot; 160 Mb/s: a failed emulation? The beauty of network invariants is that they catch and quantify the error in this run.
  28. Demonstrating Fidelity • Microbenchmarks • Validation Tests • Reproducing Published Research – Do complex results match published ones that used custom hardware topologies? • DCTCP [Alizadeh, SIGCOMM 2010] • Router Buffer Sizing [Appenzeller, SIGCOMM 2004] • Hedera ECMP [Al-Fares, NSDI 2010]
  29. Reproducing Research
  30. Stanford CS244, Spring '12:
  31. → Pick a paper. → Reproduce a key result, or challenge it (with data). → You have: $100 of EC2 credit, 3 weeks, and must use Mininet-HiFi.
  32. Project Topics: Transport, Data Center, Queuing. CoDel, HULL, MPTCP, Outcast, Jellyfish, DCTCP, Incast, Flow Completion Time, Hedera, DCell, TCP Initial Congestion Window, Misbehaving TCP Receivers, RED
  33. 37 students, 18 projects, 16 replicated
  34. + 4 beyond
  35. + 2 not replicated
  36. Reproduced Research Examples: reproducingnetworkresearch.wordpress.com (or Google "reproducing network research")
  37. Why might results be different? • Student error / out of time: Incast • Original result fragile: RED • Insufficient emulator capacity to match the hardware of the original experiment – Option 1: Scale up – Option 2: Slow down: Time Dilation – Option 3: Scale out: Cluster Edition
  38. Questions? Check out Bob's Cluster Edition demo. Nikhil Handigol, Brandon Heller, Vimal Jeyakumar, Bob Lantz [Team Mininet]
