
NFV in Service Provider Networks



A recording of the presentation from the Juniper Networks partner summit held on 18 November 2014.


NFV in Service Provider Networks

  1. 1. NFV in Service Provider Networks. Evgeny Bugakov, Senior Systems Engineer, JNCIE-SP, Juniper Networks Russia and CIS. Moscow, 18 November 2014
  2. 2. Virtualization strategy & Goals
  3. 3. Virtualization strategy (diagram: branch office/HQ, carrier Ethernet switch, cell site router, mobile & packet GWs, aggregation router/metro core, DC/CO edge router, service edge router, core). Enterprise edge/mobile edge: vCPE, virtual branch. Aggregation/metro/metro core: virtual PE (vPE), virtual BNG (vBNG). Service provider edge/core and EPC: virtual routing engine (vRE), virtual route reflector (vRR), MX SDN gateway, MX. Hardware virtualization — SW control plane and OS: virtual JunOS; forwarding plane: virtualized Trio. Leverage development effort and JunOS feature velocity across all virtualization initiatives: vBNG, vPE, vCPE, data center applications.
  4. 4. Juniper Networks Carrier-Grade Virtual Router vMX
  5. 5. VMX goals Agile and Scalable Orchestrated Leverage JUNOS and Trio • Scale-out elasticity by spinning up new instances • Faster time-to-market offering • Ability to add new services via service chaining • vMX treated like a cloud-based application • Leverages the forwarding feature set of Trio • Leverages the control-plane features of JUNOS
  6. 6. VMX product overview
  7. 7. VMX: a scale-out router. Scale-up (physical MX): • Optimized for density in a single instance of the platform • Innovation in ASIC, power and cooling technologies to drive density and the most efficient power footprint. Scale-out (virtual MX): • Virtualized platforms are not optimized to compete with physical routers on capacity per instance • Each instance is a router with its own dedicated control plane and data plane, which allows a smaller-footprint deployment with administrative separation per instance.
  8. 8. Virtual and physical MX (diagram): both share the same control plane and data-plane microcode; in the physical MX the Trio microcode runs on the PFE ASIC platform, while in the virtual MX it runs on the vPFE on an x86 platform.
  9. 9. Virtualization techniques. Para-virtualization (VirtIO drivers in the guest, device emulation in the hypervisor — KVM, VMware ESXi): • Guest and hypervisor work together to make emulation efficient • Offers flexibility for multi-tenancy, but with lower I/O performance • The NIC resource is not tied to any one application and can be shared across multiple applications • vMotion-like functionality is possible. PCI pass-through with SR-IOV: • Device drivers exist in user space • Best I/O performance, but with a dependency on the NIC type • Direct I/O path between the NIC and the user-space application, bypassing the hypervisor • vMotion-like functionality is not possible.
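The two I/O models above are usually wired up through the hypervisor's management layer. As a rough illustration (not a documented vMX procedure), the Python/libvirt sketch below attaches either a VirtIO NIC or an SR-IOV virtual function to a guest; the guest name, bridge name and VF PCI address are placeholder assumptions.

```python
import libvirt   # libvirt-python bindings

# libvirt XML for a para-virtualized NIC attached to a Linux bridge;
# "br-ext" is a placeholder bridge name.
VIRTIO_NIC = """
<interface type='bridge'>
  <source bridge='br-ext'/>
  <model type='virtio'/>
</interface>
"""

# libvirt XML for an SR-IOV virtual function passed straight through to the
# guest; the PCI address of the VF is a placeholder.
SRIOV_VF_NIC = """
<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x06' slot='0x10' function='0x0'/>
  </source>
</interface>
"""

def add_nic(domain_name: str, nic_xml: str) -> None:
    """Attach a NIC definition to the persistent configuration of a guest VM."""
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(domain_name)
        dom.attachDeviceFlags(nic_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    finally:
        conn.close()

# add_nic("guest-vm1", VIRTIO_NIC)     # flexible, shareable, lower I/O rate
# add_nic("guest-vm1", SRIOV_VF_NIC)   # highest I/O rate, NIC-dependent, no vMotion
```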
  10. 10. VMX product • Virtual JUNOS hosted in a VM • Follows standard JUNOS release cycles • Additional software licenses for different applications (vPE, vRR, vBNG) • Hosted in a VM, on bare metal, or in Linux containers • Multi-core • DPDK, SR-IOV, virtIO • VCP (Virtualized Control Plane) and VFP (Virtualized Forwarding Plane)
  11. 11. VMX overview. Efficient separation of control and data plane: – Data packets are switched within vTRIO – A multi-threaded SMP implementation allows core elasticity – Only control packets are forwarded to JUNOS – Feature parity with JUNOS (CLI, interface model, service configuration) – NIC interfaces (eth0) are mapped to JUNOS interfaces (ge-0/0/0). Diagram: VCP guest OS (JUNOS) running CHASSISD, RPD, LC-kernel, DCD and SNMP; VFP guest OS (Linux) running virtual Trio with Intel DPDK; both on a hypervisor over x86 hardware.
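As a tiny illustration of the interface mapping called out above (eth0 appearing as ge-0/0/0), here is a hypothetical helper that assumes a straight one-to-one index mapping; the real vMX numbering scheme may differ.

```python
def junos_ifname(guest_nic: str) -> str:
    """Follow the eth0 -> ge-0/0/0 convention from the slide above.
    Hypothetical helper; real vMX interface numbering may differ."""
    index = int(guest_nic.removeprefix("eth"))
    return f"ge-0/0/{index}"

assert junos_ifname("eth0") == "ge-0/0/0"
assert junos_ifname("eth3") == "ge-0/0/3"
```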
  12. 12. vMX Performance
  13. 13. VMX environment • CPU assignments: packet-processing engine in the VFP (variable, based on desired performance), packet I/O (one core per 10G port), VCP (RE/control plane), VCP-VFP communication, emulators • Memory: 20 GB — 16 GB for RIOT (VFP) and 4 GB for the RE (VCP) • 6x10G Intel NICs with DPDK
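To make the CPU and memory split above concrete, here is a back-of-the-envelope sizing sketch. The one-core-per-10G-port rule and the 16 GB + 4 GB memory split come from the slide; the remaining per-function core counts are illustrative assumptions, not product guidance.

```python
def vmx_footprint(ten_gig_ports: int, processing_cores: int) -> dict:
    """Rough per-instance resource budget for one vMX (VCP + VFP pair)."""
    cores = {
        "packet_io": ten_gig_ports,            # 1 core per 10G port (per the slide)
        "packet_processing": processing_cores, # variable, by desired performance
        "vcp_re": 1,                           # assumption
        "vcp_vfp_comm": 1,                     # assumption
        "emulators": 2,                        # assumption
    }
    memory_gb = {"riot_vfp": 16, "re_vcp": 4}  # 20 GB total, per the slide
    return {"cores_total": sum(cores.values()),
            "cores": cores,
            "memory_gb_total": sum(memory_gb.values()),
            "memory_gb": memory_gb}

# With 6x10G ports and 6 processing cores this lands on the 16 cores used in
# the baseline test on the next slide.
print(vmx_footprint(ten_gig_ports=6, processing_cores=6))
```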
  14. 14. VMX baseline performance. Test setup: a single VMX instance with 6 ports of 10G (ports 0-5) sending bidirectional traffic, roughly 8.9G measured per port; 16 cores total • Up to 60G of bidirectional (120G unidirectional) performance per VMX instance (1 VCP instance and 1 VFP instance) at 1500-byte packets • No packet loss • IPv4 throughput testing only
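A quick sanity check of those numbers (payload rates only, ignoring Ethernet preamble/IFG overhead):

```python
PORTS = 6
PER_PORT_GBPS = 8.9          # per-port rate shown in the test setup
PKT_BYTES = 1500

pps_per_port = PER_PORT_GBPS * 1e9 / (PKT_BYTES * 8)
aggregate_gbps = PORTS * PER_PORT_GBPS
print(f"{pps_per_port:,.0f} packets/s per port per direction")  # ~741,667 pps
print(f"{aggregate_gbps:.1f} Gb/s aggregate")                   # ~53.4 Gb/s, toward the "up to 60G" claim
```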
  15. 15. VMX performance improvements • VMX performance improvements will leverage advancements in the Intel architecture (Sandy Bridge → Ivy Bridge → Haswell (current generation) → Broadwell (next generation)); generational changes happen every two years and provide roughly a 1.5x-2x improvement in forwarding performance through architecture changes and an increasing number of cores per socket • Incremental improvements in the virtualized forwarding plane: an iterative process to optimize the efficiency of Trio ucode compiled as x86 instructions, and a streamlined forwarding plane with a reduced feature set to increase packets-per-second performance (i.e. Hypermode for vMX) • Scale-out VMX: a scale-out VMX deployment with multiple VMX instances controlled by a single control plane (virtual routing engine)
  16. 16. vMX use cases and deployment models
  17. 17. Service provider VMX use case: virtual PE (vPE). Diagram: branch-office and SMB CPE reach L2/L3 PEs via pseudowire, L3VPN and IPsec/overlay technology across the provider MPLS cloud; the vPE sits in the DC/CO fabric behind the DC/CO gateway, with Internet peering. Market requirement: • Scale-out deployment scenarios • Low-bandwidth, high control-plane-scale customers • Dedicated PE for new services and faster time-to-market. VMX value proposition: • VMX is a virtual extension of a physical MX PE • Orchestration and management capabilities inherent to any virtualized application apply
  18. 18. Example VMX connectivity model — option 1. Diagram: the CPE connects over a pseudowire to the L2PE; LDP+IGP runs in the access and in the provider MPLS cloud; BGP-LU stitches the domains via the ASBR, the DC/CO GW&ASBR and the RRs (next-hop-self applied or not at each boundary, per the NHS / no-NHS labels), with an L2/L3 overlay across the DC/CO fabric to the vPE. • Extend the SP MPLS cloud to the vPE • L2 backhaul from the CPE • Scale the number of vPEs within the DC/CO by using concepts from seamless MPLS
  19. 19. VMX as a DC gateway. Diagram: virtualized servers (VMs behind VTEPs), a ToR (IP), a ToR (L2) and a non-virtualized environment (L2) connect through a VXLAN gateway (VTEP) and a VPN gateway (L3VPN) — the VMX — which maps virtual networks A and B into VRF A and VRF B toward VPN customers A and B across the MPLS cloud. Market requirement: • Service providers need a gateway router to connect the virtual networks to the physical network • The gateway should be capable of supporting different DC overlay, DC interconnect and L2 technologies in the DC, such as GRE, VXLAN, VPLS and EVPN. VMX value proposition: • VMX supports all the overlay, DCI and L2 technologies available on MX • Scale-out control plane to scale up VRF instances and the number of VPN routes
  20. 20. VMX to offer managed CPE / centralized CPE. Market requirement: • Service providers want to offer a managed CPE service and centralize CPE functionality to avoid truck rolls • Large enterprises want a centralized CPE offering to manage all their branch sites • Both SPs and enterprises want the ability to offer new services without changing the CPE device. VMX value proposition: • VMX with service chaining can offer best-of-breed routing and L4-L7 functionality • Service chaining offers the flexibility to add new services in a scale-out manner. Diagrams: (1) service provider managed virtual CPE — branch offices with simple switches connect over L2 PEs and the provider MPLS cloud to a DC/CO fabric with a Contrail overlay, where vMX acts as vPE and as vCPE (IPsec, NAT) chained with vSRX (firewall) under a Contrail controller; (2) large-enterprise centralized virtual CPE — branch offices and the enterprise HQ connect over a private MPLS cloud and the Internet to the enterprise data center (storage & compute), where vMX acts as the WAN router and as vCPE (IPsec, NAT) with vSRX (firewall) under a Contrail controller.
  21. 21. Example VMX connectivity model — option 2. Diagram: the CPE connects over a pseudowire (VLAN) to the L2PE; LDP+IGP runs in the provider MPLS cloud; MPLS continues from the DC/CO GW across the DC/CO fabric to the vPE. • Terminate the L2 connection from the CPE on the DC/CO GW • Create an NNI connection from the DC/CO GW to the vPE instances
  22. 22. vMX FRS
  23. 23. VMX FRS product • FRS for VMX phase 1 is targeted for Q1 2015 with JUNOS release 14.1R5 • High-level overview of the FRS product: DPDK integration; minimum 60G throughput per VMX instance; OpenStack integration; 1:1 mapping between VFP and VCP; hypervisor support: KVM, VMware ESXi, Xen • High-level feature support at FRS: • Full IP capabilities • MPLS: LDP, RSVP • MPLS applications: L3VPN, L2VPN, L2Circuit • IP and MPLS multicast • Tunneling: GRE, LT • OAM: BFD • QoS: Intel DPDK QoS feature set. Diagram: the VFP and VCP are Juniper deliverables; the hypervisor/Linux, NIC drivers, DPDK, and the server, CPU and NIC are customer-defined.
  24. 24. vMX Roadmap
  25. 25. VMX QoS model VFP Physical NICs Virtual NICs WAN traffic • Utilize the Intel DPDK QoS toolkit to implement the scheduler • Existing JUNOS QoS configuration applies • Destination Queue + Forwarding Class used to determine scheduler queue • Scheduler instance per Virtual NIC QoS scheduler implemented per VNIC instance
  26. 26. VMX QoS model • Port: shaping rate • VLAN: shaping rate, 4k VLANs per IFD • Queues: 6 queues, 3 priorities (1 high, 1 medium, 4 low) • Priority-group scheduling follows strict priority for a given VLAN • Queues of the same priority for a given VLAN use WRR • High and medium queues are capped (rate-limited) at the transmit rate. Diagram: per-VLAN scheduling of Queue0-Queue5 into high-, medium- and low-priority groups, with rate limiters on the high- and medium-priority queues, feeding the port.
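To illustrate the scheduling behaviour described on this slide, here is a minimal Python sketch: strict priority across the high/medium/low groups of one VLAN, smooth weighted round-robin among queues of the same priority. Queue names and WRR weights are assumptions, and the transmit-rate capping of the high and medium queues is left out for brevity.

```python
from collections import deque

class Queue:
    """One of the 6 per-VLAN queues; 'weight' only matters within its priority group."""
    def __init__(self, name: str, weight: int = 1):
        self.name = name
        self.weight = weight
        self.pkts = deque()
        self.credit = 0

PRIORITY_GROUPS = [                           # strict-priority order: high, medium, low
    [Queue("q0-high")],                       # 1 high-priority queue
    [Queue("q1-medium")],                     # 1 medium-priority queue
    [Queue("q2-low", 4), Queue("q3-low", 2),  # 4 low-priority queues sharing
     Queue("q4-low", 1), Queue("q5-low", 1)], # bandwidth by WRR weight
]

def dequeue():
    """Return the next packet for this VLAN, or None if every queue is empty."""
    for group in PRIORITY_GROUPS:             # strict priority between groups
        backlogged = [q for q in group if q.pkts]
        if not backlogged:
            continue
        for q in backlogged:                  # smooth weighted round-robin within
            q.credit += q.weight              # the group: grow credit by weight,
        chosen = max(backlogged, key=lambda q: q.credit)
        chosen.credit -= sum(q.weight for q in backlogged)  # then charge the pick
        return chosen.pkts.popleft()
    return None

# Example: enqueue a few packets and drain them in scheduling order.
PRIORITY_GROUPS[2][0].pkts.extend(["low-a1", "low-a2"])
PRIORITY_GROUPS[2][1].pkts.append("low-b1")
PRIORITY_GROUPS[0][0].pkts.append("high-1")
while (pkt := dequeue()) is not None:
    print(pkt)   # high-1 first, then the low queues in weighted order
```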
  27. 27. VMX with vRouter and orchestration • vMX with vRouter integration: VirtIO is used for the para-virtualized drivers • Contrail OpenStack for VM management and for setting up the overlay network • An NFV orchestrator (potentially OpenStack Heat templates) is used to easily create and replicate VMX instances • OpenStack Ceilometer is used to determine VMX instance utilization for billing. Diagram: the VFP (Linux + DPDK guest VM) and VCP (FreeBSD guest VM) connect through VirtIO to the Contrail vRouter and vRouter agents on the physical layer (cores, memory, physical NICs for WAN traffic, OOB management), under a Contrail controller and an NFV orchestrator driving a template-based config (BW per instance, memory, number of WAN ports).
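As a sketch of the template-based configuration idea mentioned above, the snippet below expands per-customer parameters (bandwidth, memory, number of WAN ports) into a minimal Heat-style template with one VCP and one VFP server; the flavor, image and network names are invented placeholders, not real product artifacts.

```python
import json

def render_vmx_instance(name: str, bw_gbps: int, memory_gb: int, wan_ports: int) -> dict:
    """Expand a per-customer template into a minimal Heat-style template with
    one VCP and one VFP server. Flavor, image and network names are placeholders."""
    return {
        "heat_template_version": "2013-05-23",
        "resources": {
            f"{name}-vcp": {
                "type": "OS::Nova::Server",
                "properties": {"flavor": "vcp-small", "image": "vmx-vcp-image",
                               "networks": [{"network": "mgmt"}]},
            },
            f"{name}-vfp": {
                "type": "OS::Nova::Server",
                "properties": {"flavor": f"vfp-{bw_gbps}g-{memory_gb}gb",
                               "image": "vmx-vfp-image",
                               "networks": [{"network": "mgmt"}] +
                                           [{"network": f"wan-{i}"} for i in range(wan_ports)]},
            },
        },
    }

print(json.dumps(render_vmx_instance("cust-a", bw_gbps=10, memory_gb=16, wan_ports=2), indent=2))
```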
  28. 28. Physical & virtual MX • Offer a scale-out model across both physical and virtual resources • Depending on the type of customer and the service offering, the NFV orchestrator decides whether to provision the customer on a physical or a virtual resource. Diagram: physical forwarding resources and virtual forwarding resources (VMX1, VMX2 under a virtual routing engine) joined by an L2 interconnect, with a Contrail controller and an NFV orchestrator driving a template-based config (BW per instance, memory, number of WAN ports).
  29. 29. vBNG (Virtual Unified Edge Solution)
  30. 30. vBNG: what is it? • Runs on x86 inside a virtual machine • Two virtual machines are needed: one for forwarding and one for the control plane • The first iteration supports KVM as the hypervisor and OpenStack for orchestration; VMware support is planned • Based on the same code base and architecture as Juniper's VMX • Runs Junos • Full-featured and constantly improving • Some features, scale and performance of vBNG will differ from pBNG • Easy migration from pBNG • Supports multiple BB models: vLNS, and BNG based on PPP, DHCP, C-VLAN and PWHT connection types
  31. 31. vBNG value proposition • Assumptions: • Highly utilized physical BNGs (pBNG) cost less (capex) than x86-based BNGs (vBNG) • Installation (rack and stack) of a pBNG costs more (opex) than installation of vBNGs • The capex cost of the cloud infrastructure (switches, servers and software) is spread over multiple applications (vRouters and other applications) • vBNG is a candidate when: • a single pBNG serves 12,000 or fewer subscribers, or • pBNG peak utilization is about 20 Gb/s or less, or • BNG utilization and subscriber count fluctuate significantly over time, or • the application has many subscribers with small bandwidth • pBNG is the best answer when BNGs are centralized and serve more than 12,000 subscribers or more than 20 Gb/s
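The placement rule of thumb on this slide can be written down directly; the thresholds come straight from the bullets above, and everything else that would matter in a real placement decision (opex, existing cloud infrastructure, load fluctuation over time) is deliberately ignored in this sketch.

```python
def bng_placement(subscribers: int, peak_gbps: float) -> str:
    """Rule of thumb from this slide: large, centralized BNGs stay physical;
    smaller sites are vBNG candidates."""
    if subscribers > 12_000 or peak_gbps > 20:
        return "pBNG"
    return "vBNG"

print(bng_placement(subscribers=8_000, peak_gbps=12))    # vBNG
print(bng_placement(subscribers=40_000, peak_gbps=80))   # pBNG
```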
  32. 32. Target use cases for vBNG • vBNG for BNG near CO • vLNS for business • vBNG for lab testing new features or new releases • vLNS for applications where the subscribers count fluctuates
  33. 33. vBNG for BNG near the CO. Deployment model: CPE in broadband homes connects over the last mile (DSL or fiber) to OLTs/DSLAMs, which aggregate through L2 switches into a central office with cloud infrastructure hosting the vBNG, which in turn connects to the SP core and the Internet. • The business case is strongest when a vBNG aggregates 12K or fewer subscribers • 1-10 OLTs/DSLAMs
  34. 34. vRR (Virtual Route Reflector)
  35. 35. Route reflector pain points addressed by vRR. Route reflectors are characterized by RIB scale (available memory) and BGP performance (policy computation, route resolution, network I/O — determined by CPU speed). Memory drives route-reflector scaling: • More memory means an RR can hold more RIB routes • With more memory an RR can control larger network segments, so fewer RRs are required in a network. CPU speed drives faster BGP performance: • A faster CPU clock means faster convergence • Faster RR CPUs allow larger network segments to be controlled by one RR, so fewer RRs are required in a network. The vRR product addresses these pain points by running a Junos image as an RR application on faster CPUs with more memory on standard servers/appliances.
  36. 36. Juniper vRR development strategy • vRR development follows a three-pronged approach: 1. Evolve platform capabilities using virtualization technologies • Allow instantiation of a Junos image on non-RE hardware • Any Intel-architecture blade server / server 2. Evolve Junos OS and RPD capabilities • 64-bit Junos kernel • 64-bit RPD improvements for increased scale • RPD modularity / multi-threading for better convergence performance 3. Evolve Junos BGP capabilities for the RR application • BGP resilience and reliability improvements • BGP Monitoring Protocol • BGP-driven application control — DDoS prevention via FlowSpec
  37. 37. What the virtual route reflector delivers • Supports network-based as well as data-center-based RR designs • Easy deployment, with scaling and flexibility built into the virtualization technology while maintaining all essential product functionality. Junos software on a generic x86 platform: • A Junos software image (JunosXXXX.img) delivered as the vRR; no hardware is included • Includes ALL currently supported address families — IPv4/IPv6, VPN, L2, multicast AFs (as today's product does) • Exactly the same RR functionality as MX • No forwarding plane • Software SKUs for primary and standby RR • Customers can choose any x86 platform • Customers can choose CPU and memory size per their scaling needs
  38. 38. vRR: first implementation • Junos virtual RR • Official release: 13.3R3 • 64-bit kernel; 64-bit RPD • Scaling: driven by the memory allocated to the vRR instance • Virtualization technology: QEMU-KVM • Linux distributions: CentOS 6.4, Ubuntu 14.04 LTS • Orchestration platforms: libvirt 0.9.8, OpenStack (Icehouse), ESXi 5.5
  39. 39. vRR scaling
  40. 40. vRR: reference hardware • Juniper is testing vRR on the following reference hardware: • CPU: 16-core Intel(R) Xeon(R) CPU E5620 @ 2.40GHz • Available RAM: 128G (only 32G per VM instance is being tested) • On-chip cache memory: L1 cache — I-cache 32KB, D-cache 32KB; L2 cache — 256KB; L3 cache — 12MB • Linux distribution: CentOS release 6.4 — KVM/QEMU • Juniper will provide scaling guidance based on these hardware specs; performance behavior might differ if different hardware is chosen • 64-bit FreeBSD does not work on Ivy Bridge due to known software bugs; please refrain from using Ivy Bridge
  41. 41. vRR scaling results (tested with a 32G vRR instance; the convergence numbers also improve with a higher-clock CPU):
Address family | Advertising peers | Active routes | Total routes | Memory util. (all routes received) | Time to receive all routes | Receiving peers | Time to advertise (memory util.)
IPv4 | 600 | 4.2M | 42M (10 paths) | 60% | 11 min | 600 | 20 min (62%)
IPv4 | 600 | 2M | 20M (10 paths) | 33% | 6 min | 600 | 6 min (33%)
IPv6 | 600 | 4M | 40M (10 paths) | 68% | 26 min | 600 | 26 min (68%)
VPNv4 | 600 | 2M | 4M (2 paths) | 13% | 3 min | 600 | 3 min (13%)
VPNv4 | 600 | 4.2M | 8.4M (2 paths) | 19% | 5 min | 600 | 23 min (24%)
VPNv4 | 600 | 6M | 12M (2 paths) | 24% | 8 min | 600 | 36 min (32%)
VPNv6 | 600 | 6M | 12M (2 paths) | 30% | 11 min | 600 | 11 min (30%)
VPNv6 | 600 | 4.2M | 8.4M (2 paths) | 22% | 8 min | 600 | 8 min (22%)
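A rough derivation from the table above: dividing the reported memory utilization of the 32 GB instance by the number of RIB routes gives an order-of-magnitude figure of a few hundred bytes per route. This is a derived estimate (assuming the percentages are of the full 32 GB instance), not a published sizing number.

```python
INSTANCE_GB = 32
rows = {
    "IPv4, 42M routes":  (0.60, 42e6),
    "IPv6, 40M routes":  (0.68, 40e6),
    "VPNv4, 12M routes": (0.24, 12e6),
}
for name, (util, routes) in rows.items():
    bytes_per_route = util * INSTANCE_GB * 2**30 / routes
    print(f"{name}: ~{bytes_per_route:,.0f} bytes/route")
# IPv4: ~491 bytes/route, IPv6: ~584 bytes/route, VPNv4: ~687 bytes/route
```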
  42. 42. vRR FRS
  43. 43. vRR: feature support
vRR feature | Support status
Support for all BGP address families | Supported today; same as the chassis-based implementation
L3 unicast address families: IPv4, IPv6, VPNv4, VPNv6, BGP-LU | Supported today; same as the chassis-based implementation
L3 multicast address families: IPv4, IPv6, VPNv4, VPNv6 | Supported today; same as the chassis-based implementation
L2VPN address families (RFC 4761, RFC 6074) | Supported today; same as the chassis-based implementation
Route Target address family (RFC 4684) | Supported today; same as the chassis-based implementation
BGP ADD_PATH feature | Supported starting in 12.3 (IPv4, IPv6, labeled unicast v4 and labeled unicast v6)
Support for 4-byte AS numbers | Supported today
BGP neighbors | Supported today; same as the chassis-based implementation
OSPF adjacencies | Supported today; same as the chassis-based implementation
ISIS adjacencies | Supported today; same as the chassis-based implementation
LDP adjacencies | Not supported at FRS
  44. 44. vRR: feature support — part 2
vRR feature | Support status
Ability to control BGP learning and advertising of routes based on any combination of prefix, prefix length, AS path and community | Supported today; same as the chassis-based implementation
Interfaces must support 802.1Q VLAN encapsulation | Supported
Interfaces must support 802.1ad (QinQ) VLAN encapsulation | Supported
Ability to run at least two route reflectors as virtual routers in one physical router | Yes, by spawning different route-reflector instances on different cores
Non-stop routing for all routing protocols and address families | Not at FRS; needs to be scheduled
Graceful restart for all routing protocols and address families | Supported today; same as the chassis-based implementation
BFD for BGPv4 | Supported today; control-plane BFD implementation
BFD for BGPv6 | Supported today; control-plane BFD implementation
Multihop BFD for both BGPv4 and BGPv6 | Supported today
  45. 45. vRR Use cases and deployment models
  46. 46. Network-based virtual route reflector design • vRRs can be deployed in the same locations in the network as today's RRs • Same connectivity paradigm between vRRs and clients as between today's RRs and clients • vRR instantiation and connectivity (the "underlay") provided by OpenStack. Diagram: clients 1 through n peer with Junos vRRs running on VMs on standard servers.
  47. 47. Cloud-based virtual route reflector design — solving the best-path selection problem for a cloud virtual route reflector • vRR as an "application" hosted in the DC • A GRE tunnel is originated from gre.X (the control-plane interface) • The vRR behaves as if it were locally attached to R1 (requires resolution RIB configuration). Diagram: vRR 1 and vRR 2 run in a data-center cloud (overlay with Contrail or VMware) attached to the backbone via GRE and the IGP; clients sit in regional networks 1 and 2 behind R1 and R2 and peer with the vRRs over iBGP, and the diagram notes the vRR selecting paths based on R1's view and on R2's view respectively.
  48. 48. Virtual CPE
  49. 49. "There is an App for That" — evolving service delivery to bring cloud properties to managed business services: "30 Mbps firewall", "application acceleration", "remote access for 40 employees", "application reporting".
  50. 50. The concept of cloud-based CPE • A simplified CPE • Remove CPE barriers to service innovation • Lower complexity and cost. Today's CPE bundles DHCP, firewall, routing/IP forwarding, NAT, modem/ONT, switch, access point, voice and MoCA/HPAV/HPNA3; with in-network CPE functions, DHCP, firewall, routing/IP forwarding and NAT move to the BNG/PE in the SP network, while the modem/ONT, switch, access point, voice and MoCA/HPAV/HPNA3 stay on the premises. In-network CPE functions: • Leverage and integrate with other network services • Centralize and consolidate • Integrate seamlessly with mobile and cloud-based services. Direct connect: • Extend reach and visibility into the home • Per-device awareness and state • Simplified user experience • Simplify the device required on the customer premises • Centralize key CPE functions and integrate them into the network edge
  51. 51. CLOUD CPE ARCHITECTURE HIGH LEVEL COMPONENTS Customer Site vCPE Context Onsite CPE Services Management Edge Router Simplified CPE device with switching, access point, upstream QoS and WAN interfaces Optional L3 and Tunneling CPE specific context in router Layer 3 services (addressing & routing) Basic value services (NAT, Firewall,…) Advanced security services (UTM, IPS, Firewall,…) Extensible to other value added services (M2M, WLAN, Hosting, Business apps,…) Cloud based Provisioning Monitoring Customer Self-care
  52. 52. Virtual CPE use cases
  53. 53. vCPE models — scenario A: integrated v-branch router. On site: an Ethernet NID or a switch with smart SFP; L2 CPE (optionally with L3 awareness for QoS and assurance), LAG, VRRP, OAM, L2 filters. In the edge router: a cloud CPE context per customer (vCPE instance = VPN routing instance) providing DHCP, addressing, routing, Internet & VPN access, QoS, NAT, firewall, IDP and VPN, with statistics and monitoring per vCPE. Pros: • Simplest onsite CPE • Limited investments • LAN extension • Device visibility. Cons: • Access-network impact • Limited services • Management impact. Juniper: • MX • JS Self-Care App • NID partners
  54. 54. vCPE models — scenario B: overlay v-branch router. A lightweight L3 CPE connects over a (un)secure tunnel across an L2 or L3 transport to a vCPE VM (vCPE instance = VR on a VM); a VM can be shared across sites. Pros: • No domain constraint • Operational isolation • VM flexibility • Transparent to the existing network. Cons: • Prerequisites on the CPE • Blindsided edge • Virtualization tax. Juniper: • Firefly • Virtual Director
  55. 55. Broadband device visibility example: parental control based on device policies. Home network (laptop, tablet and Little Jimmy's desktop behind an L2 bridge) with self-care and reporting via a portal / mobile app: activity reporting (volumes and content — Facebook.com, Twitter.com, Hulu.com, Wikipedia.com, Iwishiwere21.com), a content filter ("You have tried to access www.iwishiwere21.com. This site is filtered in order to protect you") and time-of-day control ("Internet access from this device is not permitted between 7pm and 7am. Try again tomorrow").
  56. 56. Cloud CPE applications: current status. Business cloud CPE — market: • Benefits well understood • Existing demand at the edge and in the data center • NSP and government projects • Driven by product, architecture and planning departments; technology: • Extension of MPLS VPN requirements • Initial focus on routing and security services • L2 CPE initially, L3 CPE coming • Concern on redundancy; status: • All individual components are available and can run separately • MX vCPE context based on a routing instance, with core CPE features and basic services on MS-DPC, L2 based • "System integration" work in progress, plus a roadmap for next steps. Residential cloud CPE — market: • Emerging concept, use cases under definition (which cloud services?) • No short-term commercial demand • Standardization at BBF (NERG) • Driven by NSP R&D departments; technology: • Extension of the BNG • Focus on transforming complex CPE features into services • Key areas: DHCP, NAT, UPnP, management, self-care • Requires very high scale; status: • Evangelization • Working on a marketing demo (focused on use cases/benefits) • Involvement in NSP proofs of concept • Standardization • Design in progress.
  57. 57. How to get this presentation? • Scan it • Download it! • Join at Facebook: Juniper.CIS.SE (Juniper techpubs ru)
  58. 58. Questions?
  59. 59. Thank you for your attention!
