
WAN - trends and use cases

Published on

Juniper Day 2016
Praha, 25.5.2016
Uwe Richter, Juniper Networks



  1. VMX Update
  2. Virtualization concepts
  3. Hardware Virtualization • Guest virtual machines run on top of a host machine • A virtual machine acts like a real computer with an operating system and devices • Virtual hardware: CPUs, memory, I/O • The software or firmware that creates a virtual machine on the host hardware is called a hypervisor
  4. Virtualization types • Fully virtualized: the guest OS is not modified and the same OS is spun up as a VM; the guest OS is not aware of virtualization and devices are emulated entirely; the hypervisor needs to trap and translate privileged instructions • Paravirtualized: the guest OS is aware that it is running in a virtualized environment; the guest OS and hypervisor communicate through "hypercalls" for improved performance and efficiency; the guest OS uses a front-end driver for I/O operations; example: Juniper vRR, vMX • Hardware-assisted: virtualization-aware hardware (processors, NICs, etc.) such as Intel VT-x/VT-d/VMDq and AMD-V; example: Juniper VMX
  5. VMX Overview
  6. VMX overview (diagram: a physical MX with its CP and FP alongside multiple VMX instances, each with a vCP and vFP, running on an x86 server)
  7. Virtual and Physical MX (diagram: control plane and data plane compared across platforms; the physical MX runs PFE microcode on Trio ASICs, the virtual MX runs a vPFE on x86)
  8. VMX Product • Virtual JUNOS hosted in a VM • Follows standard JUNOS release cycles • Hosted on a VM, bare metal, or Linux containers • Multi-core • SR-IOV, virtIO, vmxnet3, … • Two components: the VCP (virtualized control plane) and the VFP (virtualized forwarding plane)
  9. vMX Product Overview (diagram: the VFP in a Linux guest VM and the VCP in a FreeBSD guest VM on a KVM or ESXi hypervisor, sharing host cores and memory and reaching the physical NICs via a bridge/vSwitch, PCI pass-through/SR-IOV, or virtIO) • Virtual Control Plane (VCP): JUNOS hosted in a VM, offering all the capabilities available in JUNOS; management remains the same as on a physical MX; SMP-capable • Virtual Forwarding Plane (VFP): virtualized Trio software forwarding plane with feature parity with the physical MX; utilizes Intel DPDK libraries; a multi-threaded SMP implementation allows for elasticity; SR-IOV-capable for high throughput; can be hosted in a VM or on bare metal • Orchestration: a vMX instance can be orchestrated through OpenStack Kilo HEAT templates; the package comes with scripts to launch a vMX instance
  10. VMX DETAILS
  11. VMX Forwarding Model (diagram: forwarding with Trio ASICs on the MX uses the center chip (MQ, XM, …), lookup chip (LU, XL, …), and queuing chip (QX, XQ, …); forwarding with x86 on the VMX uses RIOT over DPDK)
  12. VMX Detailed View (diagram: the vCP runs rpd, dcd, and chasd, with fxp0 (x.x.x.a/m) on an external bridge and em1 (172.16.0.1/16) on an internal bridge; the vFP runs RIOT over DPDK, with eth0 (x.x.x.b/m) on the external bridge and eth1 (172.16.0.2/16) on the internal bridge; the external bridge is at x.x.x.y/m and the internal bridge at 172.16.0.3/16 on the host, linked through the vcp-ext/vfp-ext and vcp-int/vfp-int interfaces; the vFP also attaches to the physical and virtual NICs)
  13. Using VMX: SR-IOV Mode (diagram: four physical NICs each expose a virtual function to the VFP; eth0:vf0 maps to ge-0/0/0, eth1:vf0 to ge-0/0/1, eth2:vf0 to ge-0/0/2, and eth3:vf0 to ge-0/0/3)
  14. Using VMX: VirtIO Mode (diagram: input can be physical or virtual; virtio-0 through virtio-3 on the VFP map to ge-0/0/0 through ge-0/0/3; see the mapping sketch after the slide list)
  15. Using VMX: VirtIO Mode (diagram: two vMX instances, VCP1/VFP1 and VCP2/VFP2, each exposing ge-0/0/0 through ge-0/0/3 over virtIO ports 0-3)
  16. VMX QoS (diagram: three scheduling levels: port, VLANs 1..n, and six queues Q0-Q5 at high/medium/low priority) • Port: shaping rate • VLAN: shaping rate; 4k VLANs per IFD • Queues: 6 queues, 3 priorities (1 high, 1 medium, 4 low) • Priority-group scheduling follows strict priority for a given VLAN • Queues of the same priority for a given VLAN use WRR • High and medium queues are capped at their transmit rate (see the scheduling sketch after the slide list)
  17. VMX PERFORMANCE
  18. Revisit: x86 Server Architecture (diagram: a dual-socket server; each CPU socket has its own cores and connects to its own memory controller, memory, PCI controller, and NICs; see the NUMA sketch after the slide list)
  19. vMX Environment • Sample system configuration: Intel Xeon E5-2667 v2 @ 3.30 GHz, 25 MB cache; NIC: Intel 82599 (for SR-IOV only) • Memory: minimum 8 GB (2 GB for vRE, 4 GB for vPFE, 2 GB for host OS) • Storage: local or NAS • Sample configuration for number of CPUs: VMX for up to 100 Mbps performance: min 4 vCPUs (1 for VCP, 3 for VFP), min 2 cores (1 for VFP, 1 for VCP), min 8 GB memory, virtIO NIC only; VMX for up to 3 Gbps of performance: min 4 vCPUs (1 for VCP, 3 for VFP), min 4 cores (3 for VFP, 1 for VCP), min 8 GB memory, virtIO or SR-IOV NIC; VMX for 3 Gbps and beyond (assuming min 2 ports of 10G): min 5 vCPUs (1 for VCP, 4 for VFP), min 5 cores (4 for VFP, 1 for VCP), min 8 GB memory, SR-IOV NIC only
  20. vMX Environment • Use case 1: vMX instance up to 100 Mbps: min 4 vCPUs (1 for VCP, 3 for VFP); min 2 cores (1 for VCP, 1 for VFP); min 8 GB memory; a virtIO NIC is sufficient • Use case 2: vMX instance up to 3 Gbps: min 4 vCPUs (1 for VCP, 3 for VFP); min 4 cores (1 for VCP; for the VFP, assume 2 x 1G/10G ports with a dedicated I/O core, 1 core per worker, and 1 core for the host interface); min 8 GB memory; virtIO is sufficient, SR-IOV can also be used (diagram: vCPU-to-core pinning for both cases on an 8-core socket)
  21. vMX Environment • Use case 3: more than 3 Gbps of throughput per instance (assume 2 x 10G ports for I/O): min 5 vCPUs (1 for VCP, 4 for VFP); min 5 cores (1 for VCP; for the VFP, assume 2 x 10G ports, each with a dedicated I/O core, 1 core per worker, and 1 core for the host interface); min 8 GB memory; SR-IOV must be used (diagram: vCPU-to-core pinning; see the sizing sketch after the slide list)
  22. VMX Performance in 14.1 (chart: throughput in Gbps versus number of cores for 256B packets, reaching about 16 Gbps with 17 cores; the vFP and vCP are placed across a dual-socket server)
  23. VMX Performance in 15.1 (chart: throughput in Gbps versus number of cores for 256B packets; vMX with vHyper reaches about 20 Gbps with 6 cores, compared with the plain vMX curve; see the per-core comparison after the slide list)
  24. vMX Use Cases
  25. vLNS for business or wholesale/retail • A separate vLNS instance is available for each: business VPN and retail ISP • Each vLNS is sized precisely to serve the required PPP and L2TP sessions (diagram: PPPoE sessions from the CPE cross the access node and aggregation network to a LAC/vLAC; an L2TP tunnel carries them to vLNS instances in the data centre serving the customer VPN and the retail ISP, with wholesale ISP and retail ISP AAA servers and Internet access)
  26. SERVICE PROVIDER VMX USE CASE – VIRTUAL PE (VPE) (diagram: branch-office and SMB CPEs reach a vPE behind the DC/CO gateway across the provider MPLS cloud via pseudowire, L3VPN, or IPsec/overlay technology; the vPE sits on the DC fabric and provides Internet peering)
  27. vBNG for BNG near the CO • Deployment model (diagram: DSL or fiber CPEs connect over Ethernet through OLTs/DSLAMs and L2 switches to a vBNG running on cloud infrastructure in the central office, which connects to the SP core and the Internet) • The business case is strongest when the vBNG aggregates 12K or fewer subscribers
  28. Parts of a cloud • Cloud gateway router (CGWR): could be a router, server, or switch • Switches: switch features and overlay technology as needed • Servers: includes cabling between servers and ToRs, mapping of virtual instances to ports, core capacity, and virtual machines (diagram: a spine-leaf fabric with cloud gateways; two KVM servers, each hosting a vLNS and other VNFs on ge1-ge4, attach to leaf/ToR switches via NIC1 and NIC2)
  29. VMX with service chaining – potential vCPE use case • CPE-like functionality in the cloud: the vMX acts as the vCPE (diagram: branch-office switches reach the DC/CO gateway across the provider MPLS cloud; inside the DC/CO fabric with Contrail overlay, the vMX-as-vCPE is chained with a vFirewall and vNAT ahead of the vPE and the Internet)
  30. Thank you
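
The two interface modes on slides 13 and 14 both give the VFP a one-to-one mapping between host-side ports and JUNOS interfaces. A minimal illustration of that mapping, assuming four ports; the dictionaries and their keys are hypothetical examples, not output from any tool:

```python
# SR-IOV mode (slide 13): each physical NIC exposes a virtual function (vf 0)
# that backs one JUNOS ge- interface on the vFP.
sriov_mode = {
    "eth0:vf0": "ge-0/0/0",
    "eth1:vf0": "ge-0/0/1",
    "eth2:vf0": "ge-0/0/2",
    "eth3:vf0": "ge-0/0/3",
}

# virtIO mode (slide 14): each virtio interface (backed by a physical or
# virtual input) maps to a ge- interface in the same one-to-one fashion.
virtio_mode = {
    "virtio-0": "ge-0/0/0",
    "virtio-1": "ge-0/0/1",
    "virtio-2": "ge-0/0/2",
    "virtio-3": "ge-0/0/3",
}
```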
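The scheduling rules on slide 16 can be sketched directly: strict priority between priority groups within a VLAN, weighted round-robin among queues of the same priority, and a transmit-rate cap on the high and medium queues. The sketch below is illustrative only; the queue names, WRR weights, byte-based caps, and the `Queue`/`next_queue` names are assumptions, not Juniper code.

```python
from collections import deque

class Queue:
    def __init__(self, name, priority, weight=1, rate_cap=None):
        self.name = name          # e.g. "Q0"
        self.priority = priority  # "high", "medium", or "low"
        self.weight = weight      # WRR weight within its priority group
        self.rate_cap = rate_cap  # cap on bytes sent (None = uncapped)
        self.pkts = deque()       # queued packet sizes, in bytes
        self.sent = 0             # bytes already transmitted

def next_queue(queues):
    """Pick the next queue to service for one VLAN."""
    for prio in ("high", "medium", "low"):          # strict priority order
        group = [q for q in queues if q.priority == prio and q.pkts
                 and (q.rate_cap is None or q.sent < q.rate_cap)]
        if group:
            # WRR within the group: serve the queue furthest behind its share
            return min(group, key=lambda q: q.sent / q.weight)
    return None

# Six queues, three priorities: 1 high, 1 medium, 4 low (as on the slide).
vlan = [Queue("Q0", "high", rate_cap=10_000), Queue("Q1", "medium", rate_cap=20_000),
        Queue("Q2", "low", weight=4), Queue("Q3", "low", weight=3),
        Queue("Q4", "low", weight=2), Queue("Q5", "low", weight=1)]

# Enqueue a few 1500-byte packets everywhere and drain the scheduler.
for q in vlan:
    q.pkts.extend([1500] * 3)
while (q := next_queue(vlan)) is not None:
    q.sent += q.pkts.popleft()
```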
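Slide 18 matters for placement: a dual-socket server is two NUMA nodes, and forwarding generally benefits when the vFP's cores, memory, and NIC sit on the same node. A small sketch for checking which CPUs belong to each node on a Linux host, assuming the standard sysfs layout:

```python
from pathlib import Path

# List NUMA nodes and their CPU ranges from sysfs (Linux; paths assumed present).
for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpus = (node / "cpulist").read_text().strip()
    print(f"{node.name}: CPUs {cpus}")
# e.g. on a dual-socket host: node0: CPUs 0-11, node1: CPUs 12-23
```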
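The sizing guidance on slides 19 through 21 reduces to three tiers. Below is a rough helper that encodes that table; the function name, the single throughput argument, and the return format are assumptions of this sketch, not a Juniper sizing tool:

```python
def vmx_min_requirements(target_gbps: float) -> dict:
    """Minimum vMX footprint for a target per-instance throughput (slides 19-21)."""
    if target_gbps <= 0.1:      # use case 1: up to 100 Mbps
        return {"vcpus": 4, "cores": 2, "memory_gb": 8, "nic": "virtIO"}
    if target_gbps <= 3.0:      # use case 2: up to 3 Gbps
        return {"vcpus": 4, "cores": 4, "memory_gb": 8, "nic": "virtIO or SR-IOV"}
    # use case 3: beyond 3 Gbps (assumes at least 2 x 10G ports)
    return {"vcpus": 5, "cores": 5, "memory_gb": 8, "nic": "SR-IOV only"}

print(vmx_min_requirements(1.0))
# {'vcpus': 4, 'cores': 4, 'memory_gb': 8, 'nic': 'virtIO or SR-IOV'}
```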
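The two performance slides are easiest to compare per core: roughly 16 Gbps from 17 cores in 14.1 versus roughly 20 Gbps from 6 cores in 15.1 with vHyper, both for 256B packets. The snippet below is only that division:

```python
# Per-core throughput from the figures on slides 22 and 23 (256B packets).
measurements = {"14.1": (16, 17), "15.1 with vHyper": (20, 6)}
for release, (gbps, cores) in measurements.items():
    print(f"{release}: {gbps / cores:.1f} Gbps per core")
# 14.1: 0.9 Gbps per core
# 15.1 with vHyper: 3.3 Gbps per core
```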
