4th SDN Interest Group Seminar-Session 2-2(130313)


This is the presentation material from the 4th SDN Interest Group Seminar held on March 13, 2013.



  1. SDN for Cloud Datacenter (March 2013). Kim Chang-min (NetMan), Technical Manager @ OpenFlow Korea, Worldwide 9th Quintuple CCIE #12303, charles.kim@aristanetworks.com. (c) 2013 OpenFlow Korea, All Rights Reserved.
  2. Agenda
     1. Overview of SDN and Cloud Datacenter
     2. Consideration for Provisioning and Automation Functions for the OpenFlow-Enabled Switch
     3. Moore's Law and Networking
     4. Low-Latency and Non-Blocking 2-Tier Leaf-Spine Design for the OpenFlow-Enabled Cloud Datacenter
  3. What is SDN
     • In the SDN architecture, the control and data planes are decoupled, network intelligence and state are logically centralized, and the underlying network infrastructure is abstracted from the applications. (Open Networking Foundation white paper)
     • "Let's call whatever we can ship today SDN." (Vendor X)
     • "SDN is the magic buzzword that will bring us VC funding." (Startup Y)
  4. SDN Use Cases (let's focus on the intra-cloud-datacenter case only)
     [Figure: seven use-case panels, each driven by an application plus an SDN controller]
     1. DC Network Virtualization (DC network fabric of VMs and physical hosts)
     2. Application Delivery (ADC)
     3. WAN Network Virtualization (SDN cloud gateway over the L2/L3 VPN WAN between data centers)
     4. Packet-Optical Integration (MPLS/IP over OTN between DC 1 and DC 2, with cloud orchestration)
     5. Network Analytics (tapping production 10/100G WAN traffic into an analytics network and tools)
     6. Services Creation & Insertion (ADC, FW, cache, AAA)
     7. ?
  5. Where I'm focusing …
  6. Real Datacenters
     • Physical plant
     • Power
     • Cooling
     • Isolation
     • Lots of servers
     • Lots of storage
     • Lots of cables and networks
     • Lots of complexity
  7. Definition of Cloud Computing by NIST (National Institute of Standards and Technology, U.S. Department of Commerce)
     Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.
     Characteristics: on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service
     Service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS)
     Deployment models: private cloud, public cloud, hybrid cloud, community cloud
     csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf
  8. Why Cloud Computing?
     • Cloud computing is the future
       - Regardless of personal opinions and foggy definitions
     • Cloud computing requires large-scale elastic data centers
       - Hard to build them using the old tricks
     • Modern applications generate lots of east-west (inter-server) traffic
       - Existing DC designs are focused on north-south (server-to-user) traffic
  9. All about SDN for Cloud Datacenter
     • Network programmability
       - API interaction with network elements
       - Local and remote programmability via structured APIs
       - Open operating systems
     • Separation of control plane and forwarding plane
       - Infrastructure agnostic and the broadest array of controller support; freedom of choice in architecture and protocols
       - The forwarding plane can be software or hardware
     • Strong integration with leading cloud management (orchestration) platforms
       - OpenStack, CloudStack, vCloud Director, etc.
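To make "API interaction with network elements" concrete, here is a minimal sketch of a provisioning call to an SDN controller's REST interface. The controller URL, API path, JSON fields, and token are hypothetical placeholders, not any particular vendor's or controller's API.

```python
# Minimal sketch of driving a network element through a structured API.
# The controller address, API path, JSON fields, and token are hypothetical;
# real controllers each define their own (REST or RPC) interface.
import requests

CONTROLLER = "https://sdn-controller.example.com:8443"   # hypothetical address
TOKEN = "replace-with-a-real-token"                      # hypothetical credential

def provision_vlan(switch_id: str, port: int, vlan: int) -> None:
    """Ask the controller to place a switch port into a VLAN."""
    resp = requests.post(
        f"{CONTROLLER}/api/v1/switches/{switch_id}/ports/{port}",  # hypothetical path
        json={"vlan": vlan, "admin_state": "up"},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=5,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    provision_vlan("leaf-01", port=12, vlan=100)
```

The point is the shape of the interaction (structured request, machine-readable payload, no CLI screen-scraping), not the specific endpoint.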
  10. Software-Defined Network Architecture (figure from the Open Networking Foundation white paper)
  11. SDN Framework for Cloud Datacenter: "SDN is a software-to-infrastructure interface that allows applications to drive infrastructure actions."
  12. OpenFlow Specifications
     • OpenFlow 1.0
       - Released at the end of 2009, targeted at campus research
       - The first stable and most-deployed version at the moment
       - If a packet matches in the flow table => perform the action
     • OpenFlow 1.1
       - Released in March 2011, targeted at WAN research
       - If a packet matches in the flow table => look at the instructions
       - Instructions = apply actions, OR set actions in the action set, OR change pipeline processing
       - Allows multiple flow tables
     • OpenFlow 1.2
       - Approved in December 2011, described as an "extensible protocol"
       - Support for IPv6 and for multiple controllers
     • OpenFlow 1.3
       - Adds a "meter table" in support of QoS
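A rough way to see the 1.0-versus-1.1 difference described above is a toy Python model of the lookup logic. This models the abstraction only, not the OpenFlow wire protocol: in 1.0 a matching entry yields its actions directly, while from 1.1 onward an entry carries instructions that can apply actions and hand the packet to another flow table.

```python
# Toy model of OpenFlow flow-table lookup (the abstraction only, not the wire format).

# OpenFlow 1.0 style: one table; a matching entry yields its actions directly.
def lookup_v10(flow_table, packet):
    for match, actions in flow_table:          # entries are (match_fn, [action, ...])
        if match(packet):
            return actions                      # e.g. ["output:2"]
    return ["send_to_controller"]               # table-miss behaviour

# OpenFlow 1.1+ style: multiple tables; entries carry instructions that can
# apply actions and/or send the packet on to another table (goto_table).
def lookup_v11(tables, packet):
    actions, table_id = [], 0
    while table_id is not None:
        next_table = None
        for match, instructions in tables[table_id]:
            if match(packet):
                actions += instructions.get("apply_actions", [])
                next_table = instructions.get("goto_table")  # None ends the pipeline
                break
        table_id = next_table
    return actions

# Example: forward IPv4 packets destined for 10.0.0.2 out of port 2.
table0 = [(lambda p: p.get("nw_dst") == "10.0.0.2", {"apply_actions": ["output:2"]})]
print(lookup_v11([table0], {"nw_dst": "10.0.0.2"}))   # -> ['output:2']
```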
  13. (Almost) Shipping OpenFlow Products
     Switches (commercial): Arista 7000 family; Cisco (roadmapped); Brocade MLX/NetIron products; Extreme BlackDiamond X8; HP ProCurve; IBM BNT G8264; NEC ProgrammableFlow switches; Juniper MX-Series (SDK); smaller vendors
     Controllers (commercial): Big Switch Networks (EFT?); NEC ProgrammableFlow Controller; Nicira NVP
     Switches (open source): Open vSwitch (Xen, KVM); NetFPGA reference implementation; OpenWRT; Mininet (emulation)
     Controllers (open source): NOX (C++/Python); Beacon (Java); Floodlight (Java); Maestro (Java); RouteFlow (NOX, Quagga, ...)
     More at http://www.sdncentral.com/shipping-sdn-products/ and http://www.sdncentral.com/comprehensive-list-of-open-source-sdn-projects
  14. Current SDN Offerings in Silos
  15. SDN Strategy for Cloud Datacenter
  16. OpenFlow Switch Architecture for Cloud Datacenter
     • In a pure "OpenFlow" device, the OS is minimal: only chip firmware and simple device-management functions are included.
     • Complexity moves to the controller/SDN layer.
     • But a device could also maintain protocols AND have OpenFlow support.
     • An x86 64-bit Linux/Unix platform can be used as the OpenFlow switch.
     • Support for adding our own agents to the network OS for the cloud datacenter.
     [Figures: basic OpenFlow-enabled switch; OpenFlow-enabled switch for the cloud datacenter]
  17. Why the Network Operating System Needs Intelligence for the Cloud Datacenter
     • The device operating system handles all device operations such as boot, flash, memory management, TCAM, the OpenFlow protocol handler, the SNMP agent, and so on.
     • Consider a device with no OSPF, multicast, BGP, STP, MAC address tables, VLAN tagging, LDP... or a device without code bloat, only what you need.
     • Smaller code = fewer bugs, fewer resources, less cost.
     • A cloud datacenter needs some more intelligent functions in the device operating system for provisioning and automation purposes.
     • A pure Linux/Unix platform for this purpose, not a modified one; any Linux/Unix distribution can be used.
     • Run our own code on the OpenFlow-enabled switch (see the sketch below).
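As a sketch of "running our own code on the OpenFlow-enabled switch", assuming the switch OS is a standard Linux that exposes interface statistics under /sys/class/net, a tiny on-box agent could export port counters for a provisioning or automation system to scrape. The interface names below are placeholders.

```python
# Hedged sketch of a tiny on-box agent for a Linux-based OpenFlow switch.
# Assumes standard Linux sysfs paths; interface names are placeholders.
import json, time
from pathlib import Path

def read_counter(ifname: str, counter: str) -> int:
    """Read one statistic (e.g. rx_bytes) from sysfs."""
    return int(Path(f"/sys/class/net/{ifname}/statistics/{counter}").read_text())

def poll(interfaces, interval=10):
    """Print interface counters as JSON every `interval` seconds,
    e.g. for a provisioning/automation system to collect."""
    while True:
        sample = {ifname: {"rx_bytes": read_counter(ifname, "rx_bytes"),
                           "tx_bytes": read_counter(ifname, "tx_bytes")}
                  for ifname in interfaces}
        print(json.dumps({"ts": time.time(), "ports": sample}))
        time.sleep(interval)

if __name__ == "__main__":
    poll(["eth0"])   # replace with the switch's front-panel interface names
```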
  18. OpenFlow Is Not the Only SDN Tool: Vendor APIs
     • Cisco: Open Networking Environment (ONE), EEM (Tcl), Python scripting
     • Juniper: JUNOS XML API and SLAX (human-readable XSLT)
     • Arista: XMPP, Linux scripting (including Python and Perl)
     • Dell Force10: Open Automation Framework (Perl, Python, NetBSD shell)
     • F5: iRules (Tcl-based scripts)
  19. OpenFlow Config
     • OpenFlow Configuration Protocol
     • OpenFlow operation configuration (currently v1.1)
     • Main purpose is remote management (cf. OpenFlow itself is for control)
     • RFC 6241 NETCONF is the mandatory protocol
     • The data model is based on XML & YANG
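For a feel of what the NETCONF side of OpenFlow Config (OF-Config) looks like from a management station, here is a hedged sketch using the Python ncclient library. The host, credentials, and the (empty) configuration payload are placeholders; a real OF-Config payload would follow the OF-Config XML/YANG data model.

```python
# Hedged sketch: retrieving and editing configuration over NETCONF (RFC 6241)
# with the ncclient library. Host, credentials, and payload are placeholders.
from ncclient import manager

CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <!-- device-specific (e.g. OF-Config) XML would go here -->
</config>
"""

with manager.connect(host="switch1.example.com", port=830,
                     username="admin", password="admin",
                     hostkey_verify=False) as m:
    # XML-encoded RPCs over an SSH-based, reliable transport
    running = m.get_config(source="running")
    print(running)
    # Assumes the device advertises a writable running datastore;
    # otherwise edit the candidate datastore and commit.
    m.edit_config(target="running", config=CONFIG)
```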
  20. Comparing SNMP and NETCONF
     Data models: SNMP - defined in MIBs | NETCONF - defined in YANG modules (or XML schema documents)
     Data modeling language: SNMP - Structure of Management Information (SMI) | NETCONF - YANG (and XML schema)
     Management operations: SNMP | NETCONF
     RPC encapsulation: SNMP - Basic Encoding Rules (BER) | NETCONF - XML
     Transport protocol: SNMP - UDP | NETCONF - TCP (reliable transport)
     • NETCONF may seem very similar to SNMP, but...
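For contrast with the NETCONF sketch above, the SNMP column of the table looks roughly like this in Python, assuming the pysnmp library; the hostname and community string are placeholders.

```python
# Sketch of the SNMP column: a single GET over UDP, BER-encoded, driven by MIBs.
# Assumes the pysnmp library; hostname and community string are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),                  # SNMPv2c
    UdpTransportTarget(("switch1.example.com", 161)),    # SNMP runs over UDP/161
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
))

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```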
  21. Current Limitations of NETCONF
     • Schemas are not part of the NETCONF standard, so it is not possible to reuse a schema from one vendor/platform/product on another (or even between different platforms from the same vendor), and schemas end up convoluted and non-intuitive.
     • Only covers 'config' commands and a subset of 'show' commands.
     • Do you really believe NETCONF can do everything?
     • We definitely need some fancier tools for provisioning and automation in our cloud datacenter.
  22. Current Management Protocols, but...
     • We need a fancier agent or interfaces within the management-protocol area.
  23. Moore's Law, 1971-2011 (figure): roughly 2X every 2 years, about 1,000,000X over 40 years.
  24. Semiconductor Technology Roadmap (figure): roughly 100X over 12 years.
  25. 64-bit CPU Cores over Time (figure): 100X performance by 2022.
  26. Moore's Law and Networking (figure): CPU performance grows 2X every 2 years, i.e. about 64X over 12 years, while Ethernet went from 1GigE to 10GigE, only 10X, over the same 12 years. What happened?
  27. 64-Port 10G Switch: Custom vs. ASIC (figure). ASIC design: 10 chips (eight 8-port chips plus two crossbars, XBAR); custom design: 1 chip.
  28. Single-Chip Switch Silicon Roadmap
     Technology:   130nm     | 65nm      | 40nm
     10G ports:    24        | 64        | 128
     40G ports:    ---       | 16        | 32
     Throughput:   360 MPPS  | 960 MPPS  | 2 BPPS
     Buffer size:  2 MB      | 8 MB      | 12 MB
     Table size:   16K       | 128K      | 256K
     Availability: 2008      | 2011      | 2013
     Improvement:  N/A       | 3X/3Y     | 2X/2Y
  29. Moore's Law and Networking
     • Next two generations follow Moore's Law
       - Table sizes double every process generation
       - Industry catching up on the process roadmap
     • I/O speed scales slower than Moore's Law
       - I/O doubles about every four years
       - Next step is 25 Gbps SERDES
     • Moore's Law requires custom designs
       - An ASIC flow wastes silicon potential
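The arithmetic behind "I/O speed scales slower than Moore's Law", using the doubling periods quoted above:

```python
# Quick arithmetic behind the I/O-vs-Moore's-Law gap (illustrative only).
years = 12
moore = 2 ** (years / 2)   # capacity doubling every 2 years  -> 64x over 12 years
io    = 2 ** (years / 4)   # I/O doubling roughly every 4 years -> 8x over 12 years
print(f"over {years} years: logic ~{moore:.0f}x, I/O ~{io:.0f}x")
```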
  30. Lower Latency, Lower Oversubscription
     Benefits of the 2-tier architecture
     • Lower oversubscription, lower latency
     • Reduced hierarchy, fewer management points
     • Enabled by high-density core switches
     Crucial questions remain, but OpenFlow can address them
     • Positioning of the services infrastructure (FW, LB)
     • Routing or bridging (N/S and E/W)
  31. Cost of Interconnecting Nodes
     • Network cost per node = (switches + power + optics + fiber) / (total nodes * oversubscription)
     • 2-tier designs provide a better cost basis than 3-tier
     • Each tier adds significant cost due to the optics/fiber of the interconnects
     • Costs go up with scale
     [Figure: single tier = N ports (1 switch of N ports); two tier = 2N ports at 3X cost per usable port (6 switches for a 2x increase in usable ports compared to a single switch); three tier = 4N ports at 3.5X cost per usable port (14 switches for a 4x increase in usable ports compared to a single switch)]
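A back-of-the-envelope sketch of the cost-per-node formula above; every price and quantity below is a made-up placeholder, and only the structure of the formula comes from the slide.

```python
# Back-of-the-envelope use of the slide's formula:
#   cost per node = (switches + power + optics + fiber) / (total nodes * oversubscription)
# Every number below is a made-up placeholder, not real pricing.

def cost_per_node(switches, power, optics, fiber, total_nodes, oversubscription=1.0):
    return (switches + power + optics + fiber) / (total_nodes * oversubscription)

# Hypothetical 2-tier fabric with 4,608 usable 10G ports at 1:1 (see the next slides).
print(cost_per_node(switches=6_300_000,   # 4 spines + 144 leaves (placeholder prices)
                    power=250_000,
                    optics=2_750_000,     # optics on both ends of each fabric link
                    fiber=230_000,
                    total_nodes=4_608))   # roughly $2,070 per attached 10G node
```

The useful takeaway is the shape of the formula: every extra tier adds switch, optics, and fiber cost to the numerator without adding usable ports to the denominator.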
  32. Cloud Spine-Leaf Network Design (1)
     • 2 spine, 72 leaf (32 x 10G host ports per leaf): scales to 2,304 x 10G nodes, non-oversubscribed
     • 4 spine, 144 leaf: scales to 4,608 x 10G nodes, non-oversubscribed
  33. Cloud Spine-Leaf Network Design (2)
     • 8 spine, 288 leaf: scales to 9,216 x 10G nodes, non-oversubscribed
     • 16 spine, 576 leaf: scales to 18,432 x 10G nodes, non-oversubscribed
  34. Cloud Spine-Leaf Network Design (3)
     • 2-tier leaf-spine, 16-way spine, 3:1 oversubscription
     • 16 spine, 1,152 leaf (48 x 10G host ports per leaf): scales to 55,296 x 10G nodes at 3:1 oversubscription
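The node counts on the last three slides follow directly from the per-leaf port split; below is a small sketch of that arithmetic. The leaf counts and port counts are taken from the slides, and no particular switch model is assumed.

```python
# Scale of a 2-tier leaf-spine fabric, following the arithmetic on the slides:
# each leaf dedicates `host_ports` to servers, so capacity = leaves * host_ports,
# and oversubscription = host_ports / uplink_ports per leaf.

def fabric_scale(leaves: int, host_ports: int, uplink_ports: int):
    return leaves * host_ports, host_ports / uplink_ports

for spines, leaves, host, uplink in [
        (2,    72,  32, 32),   # ->  2,304 nodes @ 1:1
        (4,   144,  32, 32),   # ->  4,608 nodes @ 1:1
        (8,   288,  32, 32),   # ->  9,216 nodes @ 1:1
        (16,  576,  32, 32),   # -> 18,432 nodes @ 1:1
        (16, 1152,  48, 16),   # -> 55,296 nodes @ 3:1 (one uplink per spine)
]:
    nodes, oversub = fabric_scale(leaves, host, uplink)
    print(f"{spines:>2} spine x {leaves:>4} leaf: {nodes:>6} x 10G nodes @ {oversub:.0f}:1")
```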
  35. OpenFlow Korea (www.OPENFLOW.or.kr)
