OpenStack and NFV: Convergence of IT and CT Infrastructure



1. OpenStack and NFV: Convergence of IT and CT Infrastructure
   Shuo Yang, Principal Architect of Cloud Computing, US R&D Center
2. Agenda
   1. Relevant Industry Trends
   2. Telco Operator Perspective
   3. Challenges and Opportunities
3. Software-Defined Everything
   • SDN, SDS, SDDC…
   • Cloud – a software-defined resource pool
     – Virtualizing the underlying components and making them accessible through an API
     – Abstraction, aggregation (pooling), and automation via API
     – What has been done for compute is being repeated for network and storage
   • Key properties: virtualization and consolidation; programmatic provisioning and orchestration; scale-out; commoditization (standard components)
   • Compute API → virtual machines; Storage API → virtual storage; Network API → virtual networks
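The "abstraction, aggregation, and automation via API" idea on this slide can be sketched in a few lines of Python. This is a toy illustration, not a real OpenStack API; all class and method names are invented:

```python
# Toy sketch of a software-defined resource pool: pooled capacity,
# provisioned programmatically through one API.

class ResourcePool:
    def __init__(self, vcpus, gb_storage):
        self.free_vcpus = vcpus      # aggregated (pooled) capacity
        self.free_gb = gb_storage
        self.instances = {}

    def create_instance(self, name, vcpus, gb):
        """Programmatic provisioning: carve a VM plus storage out of the pool."""
        if vcpus > self.free_vcpus or gb > self.free_gb:
            raise RuntimeError("pool exhausted; scale out with more commodity nodes")
        self.free_vcpus -= vcpus
        self.free_gb -= gb
        self.instances[name] = {"vcpus": vcpus, "gb": gb}
        return self.instances[name]

pool = ResourcePool(vcpus=16, gb_storage=500)
vnf = pool.create_instance("vnf-1", vcpus=4, gb=100)
```

The same pattern, repeated behind a storage API and a network API, is what the slide's "Compute API / Storage API / Network API" boxes depict.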
4. OpenStack – a 10,000-Foot View
   • OpenStack mission: to produce the ubiquitous open-source cloud computing platform that will meet the needs of public and private cloud providers regardless of size
   • Common interest from IT vendors: no single dominant vendor, avoiding vendor lock-in – "Linux for cloud"
   • Oct. 2010: first design summit held in Austin
   • API-driven architecture: a proven architecture model used by the largest public clouds, made accessible to the masses on commodity hardware
5. NFV – a 10,000-Foot View
   • NFV mission: replace purpose-built hardware with standard hardware for a better price/performance ratio
   • Common interest from telco service providers: no vendor lock-in – an "Apache family" for network applications
   • Born in Oct. 2012, launched by a list of top telco service providers
   • Adding APIs to control network functionality (L4–L7): use IT methodology to solve CT problems
6. Agenda
   1. Relevant Industry Trends
   2. Telco Operator Perspective
   3. Challenges and Opportunities
7. NFV – a Carrier-Led Initiative
   • Benefits of the NFV initiative:
     – Reduction of CapEx
     – Reduction of OpEx
     – Faster time-to-market
     – Agility and flexibility in delivering network functionality
   Source: NFV_White_Paper
8. NFV Reference Architecture
   [Diagram: NFV end-to-end reference architecture – NFVI (computing, storage, and network hardware under a virtualisation layer), VNFs with their EMSs, OSS/BSS, and MANO (Orchestrator, VNF Manager(s), Virtualised Infrastructure Manager(s)), connected by reference points such as Os-Ma, Se-Ma, Or-Vnfm, Ve-Vnfm, Or-Vi, Vi-Vnfm, Vn-Nf, Nf-Vi, and Vl-Ha]
   Legend:
   1. NFVI: NFV Infrastructure
   2. VNF: Virtual Network Function
   3. EMS: traditional EMS (Element Management System)
   4. OSS/BSS: traditional OSS/BSS (Operations/Business Support Systems)
   5. MANO: Management and Orchestration
   Source: NFV GS NFV-0010 V0.1.6, 2013/8
9. Cloud – a Perfect Platform for NFV
   [Diagram: the BOX software model (ATCA chassis, switch fabric, controller, LBS and apps with a 1:1 physical-to-app mapping) beside the NFV software model (DC/rack, IP/Ethernet fabric, controller, LBS, and apps in VMs on physical servers), both managed by an unchanged EMS/NMS with a consistent app and physical resource view]
   • Decoupling between software (telecom apps) and hardware
   • Standardized interfaces between NFV elements and the infrastructure
   • Automated telco network/element deployment, roll-out efficiency
   • Resource pools covering all network segments: access network, GW, core networks, OSS/BSS, and applications
10. NFV on OpenStack
   [Diagram: VNFs (BBU, GGSN, EPC, FW, DPI, SBC, BRAS, SRC, RNC, PCRF, SGSN, MME, IMS), SDN controller and apps, OSS/BSS/EMS, office and third-party apps on CT/IT middleware, all running over an "OpenStack+" API layer (Nova, Cinder, Neutron as NFVI managers), a virtualization layer (KVM/Xen, ESX/Hyper-V, Ceph, OVS/vGW), and COTS hardware (servers, storage, switches/GWs), with VNF Managers & MANO and ICT DevOps tool sets alongside]
   • NFV is expected to realize the many benefits of cloud
11. Agenda
   1. Relevant Industry Trends
   2. Telco Operator Perspective
   3. Challenges and Opportunities
12. Differences Between What NFV Wants and What Cloud Provides
   • Computation vs. connectivity: compute-centric cloud platforms vs. connectivity-centric NFV workloads
   • Small apps vs. big apps: cloud enables many tenants with relatively small VMs to share compute, storage, and network resources in a cost- and energy-efficient manner, while NFV needs to scale network functions to serve millions, even tens of millions, of subscribers for one or a few large operators
   • Virtual/physical separation: cloud decouples the virtual and physical domains, while NFV wants orderly handoffs between well-defined segments and service boundaries
13. Problems Observed with NFV Migration
   [Diagram: the BOX model's zero-loss, low-latency control links and dedicated bandwidth map onto overlay tunnels with no QoS or bandwidth assurance in the cloud model, and VM management is separated from physical infrastructure management]
   • BOX-based applications relied on network fabric guarantees that best-effort cloud networking does not provide
   • The lack of a connected view of physical and virtual alarms/events makes it impossible for the EMS to function
14. NFV Challenges for OpenStack
   • Cross-DC hierarchical resource scheduling
     – A single telecom app instance spreads across multiple DCs for resilience/performance
     – Operators want to treat the DCs as one when deploying such apps
   • Affinity scheduling
     – Telecom apps have multiple, tightly coupled VMs and heavy inter-VM traffic
     – Policies govern redundancy and performance relationships
   • High availability requirements (99.999%)
     – Redundancy mechanisms are adopted to protect against hardware failures
     – Telco apps carry large data-plane traffic and often use SR-IOV and hardware accelerators
   • Cross-version upgrade
     – Traditionally, upgrades are coordinated with the app
15. Cross-DC Hierarchical Resource Scheduling
   • Stack DCs into a tree hierarchy
     – Each DC can operate independently but can also be managed by a higher-level DC
     – A tenant requests resources from the "root" DC, which coordinates everything else
     – Only standard OpenStack REST interfaces are used between DCs
     – A child is not aware that it is being managed
   • Cells provide a tree only for Nova and don't work well with multiple DCs
     – Firewalls must be configured for inter-cell, non-HTTP traffic
     – Other services in each DC still need to be coordinated
     – Each DC still needs to function independently
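The tree-of-DCs idea above can be sketched as a recursive scheduler in which every DC exposes the same interface, so a child never knows whether its caller is a tenant or a parent DC. This is a hedged illustration: class and method names are invented, and a real implementation would delegate through the standard OpenStack REST APIs rather than in-process calls.

```python
# Sketch of hierarchical cross-DC scheduling: each DC places what it
# can locally, then delegates the remainder to child DCs through the
# same interface. Best-effort; the caller checks the total.

class DC:
    def __init__(self, name, capacity, children=()):
        self.name = name
        self.capacity = capacity          # VMs this DC can still host
        self.children = list(children)

    def schedule(self, vms):
        """Place up to `vms` VMs; return a list of (dc_name, count)."""
        placed = []
        local = min(vms, self.capacity)
        if local:
            self.capacity -= local
            placed.append((self.name, local))
            vms -= local
        for child in self.children:
            if vms == 0:
                break
            sub = child.schedule(vms)     # child is unaware of the hierarchy
            vms -= sum(n for _, n in sub)
            placed.extend(sub)
        return placed

root = DC("root", 2, [DC("dc-east", 3), DC("dc-west", 3)])
placement = root.schedule(6)
# placement: [('root', 2), ('dc-east', 3), ('dc-west', 1)]
```

Because each `DC` can also be driven directly, every DC still functions independently, matching the requirement on the slide.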
16. Affinity Scheduling Enhancement
   • (Short-term) solution
     – Manually (pre-)partition the site into availability zones and host aggregates
     – Use same/different host-aggregate/AZ/... policies
     – Deploy VMs in batches of "compatible" VMs, deploying the batches in the right sequence
     – Can lead to unschedulable VMs
   • (Long-term) solution
     – Create batches of VMs with different properties
     – Specify constraints between VMs, within a batch and between batches
     – Capture available bandwidth and bandwidth requirements for use as scheduling constraints
     – Use linear programming to provide optimal solutions
   [Diagram: an NE005 placed under "same host aggregate" and "different AZ" policies across Host Aggregate 1 (HOST1, HOST2) and Host Aggregate 2 (HOST3, HOST4)]
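The short-term approach can be illustrated with a tiny placement check over a pre-partitioned site. Host, aggregate, and AZ names are invented; a real deployment would express these policies through Nova host aggregates, availability zones, and scheduler hints rather than this toy function.

```python
# Sketch of policy-based placement over a manually partitioned site:
# pick a pair of hosts for two tightly coupled VMs under a
# (same host aggregate, different AZ) policy.

HOSTS = {
    "host1": {"aggregate": "agg1", "az": "az-a"},
    "host2": {"aggregate": "agg1", "az": "az-b"},
    "host3": {"aggregate": "agg2", "az": "az-a"},
    "host4": {"aggregate": "agg2", "az": "az-b"},
}

def place_pair(same_aggregate, different_az):
    """Return the first host pair satisfying the policy, or None."""
    for h1, a in HOSTS.items():
        for h2, b in HOSTS.items():
            if h1 == h2:
                continue
            if same_aggregate and a["aggregate"] != b["aggregate"]:
                continue
            if different_az and a["az"] == b["az"]:
                continue
            return h1, h2
    return None   # no feasible pair: the "unschedulable VMs" failure mode

pair = place_pair(same_aggregate=True, different_az=True)
```

Greedy, first-fit placement like this can exhaust the feasible pairs when batches are deployed in the wrong sequence, which is exactly why the slide proposes constraint solving (linear programming) as the long-term answer.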
17. Service-Container-Based Resource Allocation
   [Diagram: each DC/rack runs service containers on a hypervisor with a subscriber-aware LBS and vSwitch; the EMS/NMS sees multiple "BOXes", consistent with the pre-NFV view; inter-VM traffic inside a container is essentially a memory copy]
   • Each service container encapsulates all the apps of a BOX – the function-wise equivalent of a BOX
   • A service container is allocated and scaled as an inseparable resource unit
   • A lightweight Linux container (LXC) based app will further reduce the computing overhead
   • Subscriber-flow-based load balancing directs flows to service containers; containers are added to scale the capability of the entire system
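Subscriber-flow-based load balancing can be sketched as hashing each subscriber's flows to one service container and scaling out by adding containers. This is a simplified illustration: the naive modulo hash is an assumption, and a production balancer would use consistent hashing to limit flow remapping when containers are added.

```python
import zlib

# Sketch: every subscriber flow is pinned to one service container
# (the functional equivalent of a BOX); capacity grows by adding
# whole containers.

class FlowBalancer:
    def __init__(self, containers):
        self.containers = list(containers)

    def add_container(self, name):
        """Scale out: add another BOX-equivalent service container."""
        self.containers.append(name)

    def route(self, subscriber_id):
        h = zlib.crc32(subscriber_id.encode())
        return self.containers[h % len(self.containers)]

lb = FlowBalancer(["svc-ctr-1", "svc-ctr-2"])
target = lb.route("subscriber-42")   # same container for every routing of this flow
lb.add_container("svc-ctr-3")        # scale-out step
```

Pinning a subscriber to one container keeps that subscriber's inter-VM traffic inside the container, which is what makes it "essentially a memory copy" on the slide.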
18. NFV Needs an App-Centric Architecture in Cloud
   • Application-centric networking platform
   • App container for provisioning and management
     – Linux containers, server templates
     – App configuration management (AWS OpsWorks/CloudFormation, OpenStack Heat, …)
   • App-level HA and data protection
     – Application-aware infrastructure
     – DR (disaster recovery) strategy
   • App-level monitoring and SLAs
     – App-driven dynamic resource scheduling
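One concrete piece of the app-level support listed above, dependency and sequencing across a big app's modules, can be sketched as a topological start order. The module names are invented for illustration; an app-aware orchestrator (e.g. a Heat-style engine) would start and monitor modules in such an order.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# module -> the modules it depends on (which must start first)
deps = {
    "db": set(),
    "session-mgr": {"db"},
    "signaling": {"session-mgr"},
    "media-gw": {"session-mgr"},
}

# A valid start-up sequence that respects every dependency edge.
start_order = list(TopologicalSorter(deps).static_order())
```

The same ordering can drive shutdown (reversed) and rolling upgrades, which is where the cross-version-upgrade challenge from slide 14 would hook in.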
19. Together, We Can Make History
   • Fear not: facing challenges is the only constant in our industry
   • Telco service providers define and understand the real needs of NFV, with decades of operational experience
   • Telco solution vendors, such as Huawei, understand telco networking stacks, with decades of engineering practice
   • OpenStack provides a common platform on which to start our common journey
     – No vendor lock-in
     – Create and provide new value on top of the cloud architecture
20. Copyright © 2013 Huawei Technologies Co., Ltd. All Rights Reserved.
   The information in this document may contain predictive statements including, without limitation, statements regarding future financial and operating results, future product portfolio, new technology, etc. There are a number of factors that could cause actual results and developments to differ materially from those expressed or implied in the predictive statements. Therefore, such information is provided for reference purposes only and constitutes neither an offer nor an acceptance. Huawei may change the information at any time without notice.
21. Challenges on Current Cloud Platforms
   • Parity issues with existing NFs: manageability, performance, reliability, integration, and maintainability
   • Existing operation tools no longer meet the needs
     – Automated deployment and configuration on the cloud platform
     – Operational monitoring, especially of networking
     – Troubleshooting across the physical and virtual worlds
   • Lack of app-level infrastructure support – only VM- or platform-level support, if any, but not for a big app with dependent modules
     – Resource provisioning
     – Resource pooling and scale-out
     – App-level policy support: dependency, sequencing, HA, and SLAs