Network Functions Virtualisation
  • The classical network appliance approach uses a proprietary chassis for every new service; some typical examples are shown above left. Although the appliances use similar chips inside, they are packaged very differently outside. This creates complexity through the entire life cycle of network services: design, procurement, test, deployment, configuration, repair, maintenance, replacement, end-of-life, removal. There are no economies of scale – some boxes may sell only in the thousands, compared with the roughly 9 million IT servers shipped every year. The need for start-ups to develop new hardware, including getting it NEBS- and ETSI-qualified, is a significant cost and a deterrent to entering the telecoms market.
    The Network Functions Virtualisation (NFV) approach replaces physical network appliances with software virtual appliances running on commodity IT servers. Using open IT techniques allows a competitive and innovative ecosystem to exploit the many x86 and Linux programmers in the world. The virtual appliances are installed on IT servers by orchestration software, which installs them automatically and remotely, driven either by traffic demands or by customer orders. To complement the standard high-volume IT servers, we will also use standard high-volume storage and Ethernet switches. The standard high-volume Ethernet switches will use merchant silicon, which spreads the cost of switching ASICs over the widest possible enterprise and carrier market.
  • NFV does not have to perform faster than traditional hardware built on ASICs, NPUs, FPGAs and the like; it simply has to perform fast enough to provide a cost-equivalent solution. All the additional benefits of NFV – reduced concept-to-market (C2M) and lead-to-cash (L2C) times, and so on – will then come “for free”.
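The demand-driven, automatic remote install described in these notes can be sketched as a toy placement loop. This is illustrative only: the names (`Server`, `place_vnf`) and the most-free-cores-first policy are invented for the sketch, not taken from the slides or any real orchestrator.

```python
# Toy orchestrator sketch: install virtual appliances on commodity servers
# on demand. All class/function names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    cores_total: int
    cores_used: int = 0
    vnfs: list = field(default_factory=list)

    @property
    def cores_free(self) -> int:
        return self.cores_total - self.cores_used

def place_vnf(servers, vnf_name, cores_needed):
    """Install a virtual network function on the server with most free cores."""
    for s in sorted(servers, key=lambda s: -s.cores_free):
        if s.cores_free >= cores_needed:
            s.cores_used += cores_needed
            s.vnfs.append(vnf_name)
            return s.name
    raise RuntimeError("no capacity: add more generic high-volume servers")

servers = [Server("blade-1", 16), Server("blade-2", 16)]
print(place_vnf(servers, "vBRAS", 6))  # demand-driven install → blade-1
print(place_vnf(servers, "vCDN", 4))   # lands on the now-freer blade-2
```

A real orchestrator would of course also push images, configure networking and react to traffic measurements; the point is only that placement becomes a software decision rather than a hardware procurement cycle.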
  • Transcript

    • 1. Network Functions Virtualisation
      Bob Briscoe, Chief Researcher, BT
      with Don Clarke, Pete Willis, Andy Reid, Paul Veitch (BT)
      – further acknowledgements within slides
    • 2. Network Functions Virtualisation Approach
      – Classical network appliance approach (a proprietary box per function): Session Border Controller, Carrier Grade NAT, BRAS, Firewall, DPI, CDN, WAN Acceleration, Tester/QoE Monitor, Message Router, Radio Network Controller, PE Router, SGSN/GGSN
      – NFV approach: independent software vendors supply virtual appliances; orchestrated, automatic remote install onto hypervisors running on generic high-volume servers, generic high-volume storage and generic high-volume Ethernet switches
    • 3. If price-performance is good enough, rapid deployment gains come free
      Mar’12: Proof of Concept testing
      – Combined BRAS & CDN functions on Intel® Xeon® Processor 5600 Series HP c7000 BladeSystem, using Intel® 82599 10 Gigabit Ethernet Controller sidecars
      – BRAS chosen as an “acid test”; CDN chosen as it architecturally complements BRAS
      – BRAS created from scratch, so minimal functionality: PPPoE, PTA only, priority queuing; no RADIUS, no VRFs
      – CDN was COTS – a fully functioning commercial product
      Test set-up: test equipment & traffic generators (video viewing & Internet browsing) – 10GigE Ethernet switches – BRAS & CDN (PPPoE/IPoE) – intelligent MSE – Internet, IP VPN router, intranet & content servers; plus systems management
      Significant management stack:
      1. Instantiation of BRAS & CDN modules on bare server
      2. Configuration of BRAS & Ethernet switches via Tail-f
      3. Configuration of CDN via VVue management system
      4. Trouble2Resolve via HP management system
    • 4. Mar’12: Proof of Concept Performance Test Results
      – Average 3 million packets per second per logical core for PPPoE processing
        – Equivalent to 94 Mpps / 97 Gbps per blade = 1.5 Gpps / 1.5 Tbps per 10U chassis¹
        – Test used 1024 PPP sessions & strict-priority QoS
        – Test used an Intel® Xeon® E5655 @ 3.0 GHz: 8 physical cores, 16 logical cores (not all used)
      – Scaled to 9K PPPoE sessions per vBRAS; can support 3 vBRAS per server
      – Subsequent research: implemented & testing software hierarchical QoS – results so far show processing is still not the bottleneck; also tested vCDN performance & video quality
      Very useful performance – potential to match the performance per footprint of existing BRAS equipment

      Test Id  Description                                                                Result
      1.1.1    Management access                                                          Pass
      1.2.1    Command line configuration: add_sp_small                                   Pass
      1.2.2    Command line configuration: add_sub_small                                  Pass
      1.2.3    Command line configuration: del_sub_small                                  Pass
      1.2.4    Command line configuration: del_sp_small                                   Pass
      1.3.1    Establish PPPoE session                                                    Pass
      1.4.1    Block unauthorized access attempt: invalid password                        Pass
      1.4.2    Block unauthorized access attempt: invalid user                            Pass
      1.4.3    Block unauthorized access attempt: invalid VLAN                            Pass
      1.5.1    Time to restore 1 PPPoE session after BRAS reboot                          Pass
      1.6.1    Basic forwarding                                                           Pass
      1.7.1    Basic QoS – premium subscriber                                             Pass
      1.7.2    Basic QoS – economy subscriber                                             Pass
      2.1.1    Command line configuration: add_sp_medium                                  Pass
      2.1.2    Command line configuration: add_sub_medium                                 Pass
      2.2.1    Establish 288 PPPoE sessions                                               Pass
      2.3.1    Performance forwarding: downstream to 288 PPPoE clients                    Pass
      2.3.2    Performance forwarding: upstream from 288 PPPoE clients                    Pass
      2.3.3    Performance forwarding: upstream and downstream from/to 288 PPPoE clients  Pass
      2.4.1    Time to restore 288 PPPoE sessions after BRAS reboot                       Pass
      2.5.1    Dynamic configuration: add a subscriber                                    Pass
      2.5.2    Dynamic configuration: connect new subscribers to BRAS                     Pass
      2.5.3    Dynamic configuration: delete a subscriber                                 Pass
      2.5.4    Dynamic configuration: delete service provider                             Pass
      2.6.1    QoS performance – medium configuration                                     Pass
      3.1.1    Command line configuration: add_sp_large                                   Pass
      3.1.2    Command line configuration: add_sub_large                                  Pass
      3.2.1    Establish 1024 PPPoE sessions                                              Pass
      3.3.1    Performance forwarding: downstream to 1024 PPPoE clients                   Pass
      3.3.2    Performance forwarding: upstream from 1024                                 Pass

      ¹ Using 128-byte packets. A single logical core handles traffic only in one direction, so figures quoted are half-duplex.
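The slide's headline scaling can be roughly cross-checked with simple arithmetic. Two inputs below are assumptions not stated on the slide: 32 logical cores per blade (i.e. two of the quoted 16-logical-core CPUs) and 16 half-height blades per 10U c7000 chassis.

```python
# Sanity check of the slide's per-blade and per-chassis scaling figures.
# CORES_PER_BLADE and BLADES_PER_CHASSIS are assumptions, not from the slide.
PPS_PER_CORE = 3e6        # measured: 3 Mpps per logical core (half-duplex)
CORES_PER_BLADE = 32      # assumption: 2 sockets x 16 logical cores
BLADES_PER_CHASSIS = 16   # assumption: half-height blades in a 10U c7000
PKT_BITS = 128 * 8        # footnote: 128-byte packets

pps_blade = PPS_PER_CORE * CORES_PER_BLADE
gbps_blade = pps_blade * PKT_BITS / 1e9
pps_chassis = pps_blade * BLADES_PER_CHASSIS
tbps_chassis = gbps_blade * BLADES_PER_CHASSIS / 1e3

print(f"{pps_blade/1e6:.0f} Mpps, {gbps_blade:.0f} Gbps per blade")
print(f"{pps_chassis/1e9:.1f} Gpps, {tbps_chassis:.2f} Tbps per chassis")
```

Under these assumptions this gives ~96 Mpps and ~98 Gbps per blade and ~1.5 Gpps / ~1.6 Tbps per chassis, consistent with the measured 94 Mpps / 97 Gbps per blade and the quoted 1.5 Gpps / 1.5 Tbps per chassis.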
    • 5. 3 Complementary but Independent Networking Developments
      – Network Functions Virtualisation: reduces CapEx, OpEx, space & power consumption; creates operational flexibility; reduces delivery time
      – Software Defined Networks: creates control abstractions to foster innovation
      – Open Innovation: creates competitive supply of innovative applications by third parties
    • 6. New NfV Industry Specification Group (ISG)
      – Network-operator-driven ISG: initiated by the 13 carriers shown; consensus in white paper; Network Operator Council offers requirements; grown to 23 members so far
      – First meeting mid-Jan 2013: > 150 participants; > 100 attendees from > 50 firms
      – Engagement terms: under ETSI, but open to non-members; non-members sign a participation agreement (essentially, must declare relevant IPR and offer it under fair & reasonable terms); only per-meeting fees to cover costs
      – Deliverables: white papers identifying gaps and challenges, as input to relevant standardisation bodies
      – ETSI NfV collaboration portal: white paper, published deliverables; how to sign up, join mailing lists, etc.
    • 7. Gaps & challenges: examples
      Infrastructure stack: applications / network functions; operating systems; hypervisors; compute infrastructure; network & switching infrastructure; rack, cable, power, cooling
      – Management & orchestration: infrastructure management standards; multi-level identity standard; resource description language
      – Security: topology validation & enforcement; availability of management support infrastructure; secure boot; secure crash; performance isolation; tenant service accounting
    • 8. Q&A and spare slides
    • 9. Domain Architecture
      – Domains: NfV applications domain; orchestration and management domain; hypervisor domain; compute domain; infrastructure network domain; carrier management
      – Container interfaces between them: NfV container interface; virtual network container interface; virtual machine container interface; compute container interface
    • 10. NfV ISG Organisation Structure… (organisation chart; roles include an Assistant Technical Manager)
    • 11. ISG Working Group Structure
      – Technical Steering Committee: Chair / Technical Manager: Don Clarke (BT); Vice Chair / Assistant Technical Manager: Diego Lopez (TF); Programme Manager: TBA; plus NOC Chair (ISG Vice Chair), WG Chairs, Expert Group leaders and others
      – Working Group, Architecture of the Virtualisation Infrastructure: Steve Wright (AT&T) + Yun Chao Hu (HW); Managing Editor: Andy Reid (BT)
      – Working Group, Management & Orchestration: Diego Lopez (TF) + Raquel Morera (VZ)
      – Working Group, Software Architecture: Fred Feisullin (Sprint) + Marie-Paule Odini (HP)
      – Working Group, Reliability & Availability: Chair: Naseem Khan (VZ); Vice Chair: Markus Schoeller (NEC)
      – Expert Group, Performance & Portability: Francisco Javier Ramón Salguero (TF)
      – Expert Group, Security: Bob Briscoe (BT)
      – Additional Expert Groups can be convened at the discretion of the Technical Steering Committee
      (HW = Huawei, TF = Telefónica, VZ = Verizon)
    • 12. Hypervisor Domain
      – A general cloud hypervisor is designed for maximum application portability: it creates virtual CPUs and virtual NICs, provides a virtual Ethernet switch (vSwitch), and fully hides the real CPUs and NICs behind instruction policing, mapping and emulation of each sequential thread, with VM management and an API, on any performance hardware
      – An NfV hypervisor is aimed at removing packet bottlenecks: direct binding of VMs to cores, and direct communication between VMs and between VMs and the NIC, via user-mode polled drivers, DMA remapping and SR-IOV, with VMs pinned to cores on NfV performance hardware
      – Many of these features are already emerging in hypervisors
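The "user mode polled drivers" idea on slide 12 can be illustrated with a toy model (pure Python, not a real driver): instead of sleeping until an interrupt signals each packet, a worker pinned to a core busy-polls the NIC's receive ring in bursts, which is how DPDK-style data planes avoid per-packet interrupt and context-switch overhead. The `RxRing` and packet names here are invented stand-ins.

```python
# Toy model of a user-mode polled receive path; RxRing is a stand-in for
# a NIC ring mapped into user space (hypothetical, not a real driver API).
from collections import deque

class RxRing:
    def __init__(self):
        self._slots = deque()

    def dma_write(self, pkt):
        """What the NIC's DMA engine would do: deposit a packet descriptor."""
        self._slots.append(pkt)

    def poll_burst(self, max_burst=32):
        """Non-blocking: return up to max_burst packets, possibly none."""
        burst = []
        while self._slots and len(burst) < max_burst:
            burst.append(self._slots.popleft())
        return burst

def poll_loop(ring, budget):
    """Busy-poll the ring instead of waiting on interrupts."""
    handled = 0
    while handled < budget:
        for pkt in ring.poll_burst():  # often empty: CPU spins by design
            handled += 1               # ...packet processing would go here...
    return handled

ring = RxRing()
for i in range(100):
    ring.dma_write(f"pkt{i}")
print(poll_loop(ring, budget=100))  # → 100
```

The trade-off the slide is pointing at: the polling core runs at 100% even when idle, but per-packet cost drops to a few ring accesses, which is what makes millions of packets per second per core feasible.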
    • 13. Orchestration and Infrastructure Ops Domain
      – Automated deployment of NfV applications: orchestration console; higher-level carrier OSS
      – Tools exist for automated cloud deployment: vSphere, OpenStack, CloudStack
      – An NfV infrastructure profile lets an NfV application select a host, configure the host and start VM(s)
      – An application profile specifies the service address assignment (mechanism) and location-specific configuration
      – Domains involved: orchestration and infrastructure ops; hypervisor; infrastructure network; compute; NfV applications; carrier OSS
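Slide 13's "infrastructure profile" step can be sketched as matching an application's requirements against per-host capability descriptions. The schema below is invented purely for illustration; it is not an ETSI-defined format or any real orchestrator's API.

```python
# Hypothetical NfV application profile matched against host capability
# descriptions to pick a deployment target; all field names are invented.
hosts = [
    {"name": "host-a", "sr_iov": False, "free_cores": 24},
    {"name": "host-b", "sr_iov": True,  "free_cores": 8},
    {"name": "host-c", "sr_iov": True,  "free_cores": 16},
]

app_profile = {
    "vnf": "vBRAS",
    # a packet-path VNF needs SR-IOV and enough dedicated cores
    "requires": {"sr_iov": True, "free_cores": 12},
}

def select_host(hosts, profile):
    """Return the first host satisfying the profile, or None."""
    req = profile["requires"]
    for h in hosts:
        if (h["sr_iov"] or not req["sr_iov"]) and h["free_cores"] >= req["free_cores"]:
            return h["name"]
    return None

print(select_host(hosts, app_profile))  # → host-c
```

In a real deployment the orchestrator would then configure the chosen host (core pinning, SR-IOV virtual functions) and start the VMs, per the "select host / configure host / start VM(s)" sequence on the slide.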