Network functions virtualisation
  • The classical network appliance approach uses a proprietary chassis for every new service; some typical examples are shown on the slide. Although the appliances use similar chips inside, they are packaged very differently outside. This creates complexity through the entire life cycle of a network service: design, procurement, test, deployment, configuration, repair, maintenance, replacement, end-of-life and removal. There are no economies of scale (some boxes may sell only in the thousands, compared with the roughly 9 million IT servers sold every year). The need for start-ups to develop new hardware, including getting it NEBS- and ETSI-qualified, is a significant cost and a deterrent to entering the telecoms market.
    The Network Virtualisation (NV) approach replaces physical network appliances with software virtual appliances running on commodity IT servers. Using open IT techniques allows a competitive and innovative ecosystem that can exploit the many x86 and Linux programmers in the world. The virtual appliances are installed on IT servers by orchestration software, which installs the software automatically and remotely, driven either by traffic demands or by customer orders (a minimal sketch of such an orchestration step follows these notes). To complement the standard high-volume IT servers, we also use standard high-volume storage and Ethernet switches. The standard high-volume Ethernet switches use merchant silicon, which spreads the cost of switching ASICs over the widest possible enterprise and carrier market.
  • NV does not have to perform faster than traditional hardware built from ASICs, NPUs, FPGAs and the like; it simply has to perform fast enough to provide a cost-equivalent solution. All the additional benefits of NV, such as reduced C2M and L2C, then come "for free".
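As a purely illustrative aside on the orchestration point in the first note above, the following minimal Python sketch shows the kind of closed loop an orchestrator might run to install virtual appliances remotely in response to traffic demand. All names here (Server, VirtualAppliance, the 0.8 load threshold, scale_out) are hypothetical illustrations, not part of the proof of concept or of any ETSI NfV specification.

    from dataclasses import dataclass, field

    @dataclass
    class Server:                      # a commodity x86 host
        name: str
        load: float                    # fraction of capacity in use, 0.0-1.0
        appliances: list = field(default_factory=list)

    @dataclass
    class VirtualAppliance:            # e.g. a vBRAS or vCDN image
        kind: str
        image: str

    def least_loaded(servers):
        return min(servers, key=lambda s: s.load)

    def scale_out(servers, appliance, demand_threshold=0.8):
        """Install another instance when average load crosses the threshold."""
        avg = sum(s.load for s in servers) / len(servers)
        if avg > demand_threshold:
            target = least_loaded(servers)
            # In a real deployment this step would be a remote, automated
            # install (push the image, boot a VM); here we just record it.
            target.appliances.append(appliance)
            return target
        return None

    servers = [Server("blade-1", 0.9), Server("blade-2", 0.7)]
    scale_out(servers, VirtualAppliance("vBRAS", "vbras-image-v1"))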

Network Functions Virtualisation: Presentation Transcript

  • Network Functions Virtualisation. Bob Briscoe (Chief Researcher, BT), with Don Clarke, Pete Willis, Andy Reid and Paul Veitch (BT); further acknowledgements within the slides.
  • Classical Network Appliance Approach: a dedicated box per function, e.g. Session Border Controller, Carrier Grade NAT, BRAS, Firewall, DPI, CDN, WAN Acceleration, Tester/QoE Monitor, Message Router, Radio Network Controller, PE Router, SGSN/GGSN.
    Network Functions Virtualisation Approach: independent software vendors supply virtual appliances; orchestrated, automatic remote install onto hypervisors running on generic high-volume servers, generic high-volume storage and generic high-volume Ethernet switches.
  • If price-performance is good enough, rapid deployment gains come free.
    Mar’12: Proof of Concept testing (acknowledgements on the slide):
    - Combined BRAS and CDN functions on an Intel Xeon Processor 5600 Series HP c7000 BladeSystem, using Intel 82599 10 Gigabit Ethernet Controller sidecars.
    - BRAS chosen as an "acid test"; CDN chosen because it architecturally complements the BRAS.
    - BRAS created from scratch, so minimal functionality: PPPoE, PTA only, priority queuing; no RADIUS, no VRFs (a sketch of strict-priority queuing follows this slide).
    - CDN was COTS: a fully functioning commercial product.
    [Slide diagram: test set-up linking Test Equipment & Traffic Generators, Video Viewing & Internet Browsing clients, PPPoE/IPoE access, 10GigE Ethernet switches, the BRAS & CDN blades, an intelligent MSE, an IP VPN router, content and intranet servers, the Internet and a systems-management block.]
    Significant management stack:
    1. Instantiation of BRAS and CDN modules on a bare server.
    2. Configuration of BRAS and Ethernet switches via Tail-f.
    3. Configuration of CDN via the VVue management system.
    4. Trouble2Resolve via the HP management system.
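Since the PoC BRAS relied on simple strict-priority queuing rather than a full QoS stack, the following Python sketch illustrates the idea of a strict-priority scheduler: always serve the highest-priority non-empty queue first. It is a conceptual illustration only; the two-level priority scheme and the class and packet names are assumptions, not details of the PoC implementation.

    from collections import deque

    class StrictPriorityScheduler:
        """Serve queues strictly in priority order (0 = highest)."""

        def __init__(self, levels=2):
            self.queues = [deque() for _ in range(levels)]

        def enqueue(self, packet, priority):
            self.queues[priority].append(packet)

        def dequeue(self):
            # Lower-priority traffic is served only when every
            # higher-priority queue is empty.
            for q in self.queues:
                if q:
                    return q.popleft()
            return None

    sched = StrictPriorityScheduler(levels=2)
    sched.enqueue("premium-subscriber packet", priority=0)
    sched.enqueue("economy-subscriber packet", priority=1)
    assert sched.dequeue() == "premium-subscriber packet"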
  • Mar’12: Proof of Concept performance test results.
    - Average 3 million packets per second (PPS) per logical core for PPPoE processing.
      - Equivalent to 94 M PPS / 97 Gbps per blade, i.e. 1.5 G PPS / 1.5 Tbps per 10U chassis (1).
      - Test used 1024 PPP sessions and strict-priority QoS.
      - Test used an Intel Xeon E5655 at 3.0 GHz: 8 physical cores, 16 logical cores (not all used).
    - Scaled to 9K PPPoE sessions per vBRAS; can support 3 vBRAS per server.
    - Subsequent research: implemented and tested software hierarchical QoS; results so far show processing is still not the bottleneck (also tested vCDN performance and video quality).
    - Conclusion: very useful performance, with the potential to match the performance per footprint of existing BRAS equipment.
    (1) Using 128-byte packets. A single logical core handles traffic in one direction only, so the figures quoted are half-duplex.
    Test results (all passed):
    Test Id  Description  Result
    1.1.1  Management access  Pass
    1.2.1  Command line configuration: add_sp_small  Pass
    1.2.2  Command line configuration: add_sub_small  Pass
    1.2.3  Command line configuration: del_sub_small  Pass
    1.2.4  Command line configuration: del_sp_small  Pass
    1.3.1  Establish PPPoE session  Pass
    1.4.1  Block unauthorized access attempt: invalid password  Pass
    1.4.2  Block unauthorized access attempt: invalid user  Pass
    1.4.3  Block unauthorized access attempt: invalid VLAN  Pass
    1.5.1  Time to restore 1 PPPoE session after BRAS reboot  Pass
    1.6.1  Basic forwarding  Pass
    1.7.1  Basic QoS - premium subscriber  Pass
    1.7.2  Basic QoS - economy subscriber  Pass
    2.1.1  Command line configuration: add_sp_medium  Pass
    2.1.2  Command line configuration: add_sub_medium  Pass
    2.2.1  Establish 288 PPPoE sessions  Pass
    2.3.1  Performance forwarding: downstream to 288 PPPoE clients  Pass
    2.3.2  Performance forwarding: upstream from 288 PPPoE clients  Pass
    2.3.3  Performance forwarding: upstream and downstream from/to 288 PPPoE clients  Pass
    2.4.1  Time to restore 288 PPPoE sessions after BRAS reboot  Pass
    2.5.1  Dynamic configuration: add a subscriber  Pass
    2.5.2  Dynamic configuration: connect new subscribers to BRAS  Pass
    2.5.3  Dynamic configuration: delete a subscriber  Pass
    2.5.4  Dynamic configuration: delete service provider  Pass
    2.6.1  QoS performance - medium configuration  Pass
    3.1.1  Command line configuration: add_sp_large  Pass
    3.1.2  Command line configuration: add_sub_large  Pass
    3.2.1  Establish 1024 PPPoE sessions  Pass
    3.3.1  Performance forwarding: downstream to 1024 PPPoE clients  Pass
    3.3.2  Performance forwarding: upstream from 1024 PPPoE clients  Pass
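To make the scaling arithmetic on this slide easy to check, here is a small Python calculation reproducing the headline figures from the quoted per-blade rate. The 16-blades-per-c7000-chassis figure and the use of exactly 128-byte packets are my assumptions (taken from the slide's footnote and a typical fully populated c7000), and the small gap against the slide's 97 Gbps presumably reflects rounding or framing-overhead assumptions.

    # Scaling the quoted PoC figures (half-duplex, 128-byte packets).
    PACKET_BITS = 128 * 8          # bits per packet (footnote 1)
    pps_per_core = 3e6             # average PPPoE packets/s per logical core
    pps_per_blade = 94e6           # per-blade figure quoted on the slide
    blades_per_chassis = 16        # assumption: fully populated HP c7000 (10U)

    gbps_per_blade = pps_per_blade * PACKET_BITS / 1e9
    chassis_pps = pps_per_blade * blades_per_chassis
    chassis_tbps = gbps_per_blade * blades_per_chassis / 1e3

    print(f"~{gbps_per_blade:.0f} Gbps per blade")      # ~96 Gbps (slide quotes 97)
    print(f"~{chassis_pps/1e9:.1f} G PPS per chassis")  # ~1.5 G PPS
    print(f"~{chassis_tbps:.1f} Tbps per chassis")      # ~1.5 Tbps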
  • 3 Complementary but Independent Networking Developments:
    - Network Functions Virtualisation: reduces CapEx, OpEx, space and power consumption; reduces delivery time; creates operational flexibility.
    - Software Defined Networks: creates control abstractions to foster innovation.
    - Open Innovation: creates a competitive supply of innovative applications by third parties.
  • New NfV Industry Specification Group (ISG):
    - Network-operator-driven ISG: initiated by the 13 carriers shown; consensus in a white paper; the Network Operator Council offers requirements; grown to 23 members so far.
    - First meeting mid-Jan 2013: more than 150 participants, more than 100 attendees from more than 50 firms.
    - Engagement terms: under ETSI, but open to non-members; non-members sign a participation agreement (essentially, they must declare relevant IPR and offer it under fair and reasonable terms); only per-meeting fees to cover costs.
    - Deliverables: white papers identifying gaps and challenges, as input to the relevant standardisation bodies.
    - ETSI NfV collaboration portal (white paper, published deliverables, how to sign up, join mailing lists, etc.): http://portal.etsi.org/portal/server.pt/community/NFV/367
  • Gaps and challenges: examples.
    [Slide diagram: the infrastructure stack from applications and network functions, through operating systems and hypervisors, down to compute, network and switching infrastructure, and rack, cable, power and cooling.]
    - Management and orchestration: infrastructure management standards; a multi-level identity standard; a resource description language.
    - Security: topology validation and enforcement; availability of the management support infrastructure; secure boot; secure crash; performance isolation; tenant service accounting.
  • Q&A and spare slides
  • Domain Architecture.
    [Slide diagram listing the domains (NfV Applications, Hypervisor, Compute, Infrastructure Network, Orchestration and Management, and Carrier Management) and the container interfaces between them (the NfV, Virtual Network, Virtual Machine and Compute Container Interfaces).]
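As a purely illustrative aid to the container-interface idea on this slide, the sketch below expresses each named container interface as a Python Protocol that one domain implements and another consumes. The method names (allocate_host, create_vm, attach_vnet, instantiate) and the pairing of interfaces to domains in the comments are my invention for illustration; the slide only names the interfaces and domains.

    from typing import Protocol

    class ComputeContainerInterface(Protocol):
        """Offered by the Compute Domain (e.g. to the Hypervisor Domain)."""
        def allocate_host(self, cpu_cores: int, memory_gb: int) -> str: ...

    class VirtualMachineContainerInterface(Protocol):
        """Offered by the Hypervisor Domain to NfV applications."""
        def create_vm(self, image: str, vcpus: int) -> str: ...

    class VirtualNetworkContainerInterface(Protocol):
        """Offered by the Infrastructure Network Domain."""
        def attach_vnet(self, vm_id: str, network: str) -> None: ...

    class NfVContainerInterface(Protocol):
        """Offered upwards to orchestration / carrier management."""
        def instantiate(self, application: str) -> str: ...

    # A toy hypervisor domain satisfying its container interface.
    class ToyHypervisorDomain:
        def create_vm(self, image: str, vcpus: int) -> str:
            return f"vm-{image}-{vcpus}"

    hv: VirtualMachineContainerInterface = ToyHypervisorDomain()
    print(hv.create_vm("vbras-image", vcpus=4))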
  • NFV ISG Organisation Structure (organisation chart); roles include an Assistant Technical Manager.
  • ISG Working Group Structure (HW = Huawei, TF = Telefonica, VZ = Verizon):
    - Working Group Architecture of the Virtualisation Infrastructure: Steve Wright (AT&T) + Yun Chao Hu (HW); Managing Editor: Andy Reid (BT).
    - Working Group Management & Orchestration: Diego Lopez (TF) + Raquel Morera (VZ).
    - Working Group Software Architecture: Fred Feisullin (Sprint) + Marie-Paule Odini (HP).
    - Working Group Reliability & Availability: Chair: Naseem Khan (VZ); Vice Chair: Markus Schoeller (NEC).
    - Expert Group Performance & Portability: Francisco Javier Ramón Salguero (TF).
    - Expert Group Security: Bob Briscoe (BT).
    - Technical Steering Committee: Chair / Technical Manager: Don Clarke (BT); Vice Chair / Assistant Technical Manager: Diego Lopez (TF); Programme Manager: TBA; plus the NOC Chair (ISG Vice Chair), WG Chairs, Expert Group Leaders and others.
    - Additional Expert Groups can be convened at the discretion of the Technical Steering Committee.
  • Hypervisor Domain.
    - A general cloud hypervisor is designed for maximum application portability:
      - the hypervisor creates virtual CPUs and virtual NICs;
      - the hypervisor provides a virtual Ethernet switch (vSwitch);
      - the hypervisor fully hides the real CPUs and NICs (each virtual CPU is a sequential thread emulation with instruction policing, mapping and emulation).
    - An NfV hypervisor is aimed at removing packet bottlenecks:
      - direct binding of a VM to a core;
      - direct communication between VMs, and between VMs and the NIC;
      - user-mode polled drivers, DMA remapping, SR-IOV (see the sketch after this slide).
    - Many of these features are already emerging in hypervisors.
    [Slide diagram contrasting a general hypervisor on any hardware (sequential thread emulation, instruction policing/mapping/emulation, vSwitch, virtual machine management and API, NIC) with an NfV hypervisor on performance hardware (VMs bound directly to cores, with a direct path to the NIC).]
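To illustrate the "user-mode polled driver pinned to a dedicated core" idea mentioned above, here is a minimal Python sketch. Real implementations poll a NIC receive ring from user space (in the style of DPDK drivers); the rx_ring queue and handle_packet function here are stand-ins invented for the illustration, and only os.sched_setaffinity is a real API (Linux-only).

    import os
    from collections import deque

    rx_ring = deque()                      # stand-in for a NIC receive ring

    def handle_packet(pkt):
        pass                               # stand-in for PPPoE/forwarding work

    def polled_rx_loop(core_id, budget=32, max_iterations=None):
        """Busy-poll the receive ring from a process pinned to one core."""
        os.sched_setaffinity(0, {core_id})   # bind to the core (Linux only)
        iterations = 0
        while max_iterations is None or iterations < max_iterations:
            # Drain up to `budget` packets per pass; no interrupts involved,
            # so the core keeps spinning even when the ring is empty.
            for _ in range(budget):
                if not rx_ring:
                    break
                handle_packet(rx_ring.popleft())
            iterations += 1

    rx_ring.extend(["pkt1", "pkt2"])
    polled_rx_loop(core_id=0, max_iterations=1000)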
  • Orchestration and Infrastructure Ops Domain.
    - Automated deployment of NfV applications: an orchestration console and higher-level carrier OSS.
    - Tools already exist for automated cloud deployment: vSphere, OpenStack, CloudStack.
    - An NfV infrastructure profile allows an NfV application to select a host, configure the host and start its VM(s) (a sketch follows).
    - An application profile specifies the service address assignment (mechanism) and location-specific configuration.
    [Slide diagram: the orchestration and infrastructure ops domain alongside the hypervisor, infrastructure network, compute and NfV applications domains, and the carrier OSS domain.]
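As a closing illustration of the deployment flow just described (an infrastructure profile selects and configures a host, then starts the VMs, while an application profile supplies addressing and location-specific settings), here is a minimal Python sketch. The profile fields and the deploy() steps are hypothetical; a real orchestrator would drive vSphere, OpenStack or CloudStack APIs rather than these toy functions.

    from dataclasses import dataclass

    @dataclass
    class InfrastructureProfile:        # what the application needs from a host
        vcpus: int
        memory_gb: int
        needs_sriov: bool

    @dataclass
    class ApplicationProfile:           # application-specific settings
        address_assignment: str         # e.g. "dhcp" or "static" (mechanism)
        location_config: dict           # per-site parameters

    def select_host(hosts, profile):
        """Pick the first host that satisfies the infrastructure profile."""
        return next(h for h in hosts
                    if h["free_vcpus"] >= profile.vcpus
                    and (not profile.needs_sriov or h["sriov"]))

    def deploy(hosts, infra, app, image):
        host = select_host(hosts, infra)              # 1. select host
        host["free_vcpus"] -= infra.vcpus             # 2. configure host (toy step)
        vm = {"image": image, "host": host["name"],   # 3. start VM(s)
              "addressing": app.address_assignment,
              "site_config": app.location_config}
        return vm

    hosts = [{"name": "blade-1", "free_vcpus": 4, "sriov": True}]
    infra = InfrastructureProfile(vcpus=4, memory_gb=16, needs_sriov=True)
    app = ApplicationProfile(address_assignment="dhcp",
                             location_config={"site": "exchange-01"})
    print(deploy(hosts, infra, app, image="vbras-image-v1"))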