
ONS 2018 LA - Intel Tutorial: Cloud Native to NFV - Alon Bernstein, Cisco & Kuralamudhan Ramakrishnan, Intel

The first wave of NFV was about taking a network function and running it as-is in a virtual environment. The web giants follow a different approach called Cloud Native, which views the cloud as a huge distributed compute platform: applications are broken into micro-services and deployed in a container-based environment using DevOps.
Communication service providers are looking to adopt Cloud Native, yet the existing Cloud Native principles are not sufficient to meet their business and NFV use-case needs. In this session, Intel and Cisco explore and share experiences addressing the challenges, technology gaps and migration path to Cloud Native for NFV.
Join us to alleviate your concerns around data plane performance, control, and DevOps deployment when using micro-services, containers, and Kubernetes.


  1. Cloud Native for NFV. Alon Bernstein, alonb@cisco.com, Distinguished Engineer, Cisco Systems Inc. Kuralamudhan Ramakrishnan, Kuralamudhan.Ramakrishnan@intel.com, Senior Software Engineer, Intel, representing on behalf of Ivan Coughlan, Ivan.Coughlan@intel.com, Senior Software Architect, Software Defined Datacenter Solutions Group, Intel.
  2. Cloud Native Network Function Virtualization. Alon Bernstein, Distinguished Engineer, Cisco Systems Inc.
  3. Cloud Native Network Functions • The term “Cloud Native Networking” covers many topics • This presentation focuses on micro-services dedicated to packet forwarding • The first part of the presentation is a high-level outline of the motivations and challenges in cloud native network functions (CNF) • The second part focuses on the tools used for CNF • Cisco is building a CNF-based CMTS (cable modem termination system).
  4. What Is Cloud Native Software? • Allows a software developer to focus on the core business logic - Break functions into “micro-services” - Scaling/availability are provided by the infrastructure - Open-source software tools that reduce development time (databases, buses, productivity tools, etc.) - “12-factor app” and domain-driven design • More at the Cloud Native Computing Foundation • How does that translate to network function virtualization?
  5. Cloud Native Software By Example. Assume a very simple stateless micro-service: add two numbers, Add(x, y). Run the micro-service in a pod, load-shared across identical POD (x+y) replicas. - If a pod crashes, just start another one; almost no service impact - If we run out of processing capacity, add another pod - If we don’t need capacity, remove pods - To upgrade the code, start adding pods with the new SW version and remove the old ones - The Kubernetes declarative model helps deploy design patterns such as this example very easily (see the sketch below).
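A minimal sketch of how that declarative model looks in practice; this manifest is not from the deck, and the service name, image, and port are hypothetical. A Deployment keeps three replicas of the add service running and rolls new versions out incrementally:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: add-service            # hypothetical micro-service name
spec:
  replicas: 3                  # scale capacity by changing this count
  selector:
    matchLabels:
      app: add-service
  strategy:
    type: RollingUpdate        # upgrade by adding new pods, removing old ones
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: add-service
    spec:
      containers:
      - name: add
        image: example.com/add-service:v1   # hypothetical image
        ports:
        - containerPort: 8080

If a pod crashes, the Deployment controller starts a replacement; changing replicas (or running kubectl scale) adds or removes capacity, exactly the patterns the slide describes.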
  6. Cloud Native NFV: What Are The Benefits Of Cloud Native? • Service velocity • Availability • Scale
  7. What Are The Risks Of Going Cloud Native? • Complex: just the infra consists of 20+ software packages • Automation is a must • Managing a highly distributed system is not easy • Complete rearrangement of the organization (CI/CD, DevOps, merge of IT/DC/access orgs)
  8. What About Containers? • Containers are a packaging option. • Cloud native uses containers, but it’s possible to build a container system that is not cloud native. • Containers are useful in the cloud native context because of the breakup into micro-services (lots of lightweight functions).
  9. What About Performance? • Cloud Native NFV can be high-performance.
  10. Cloud Native For NFV • The current reference for NFV is ETSI-NFV, which is a “lift-and-shift” architecture: create an equivalent network appliance in a VM and place it in the data center. • Cloud native is focused on web and e-commerce; some adjustments need to be made for NFV.
  11. NFV vs. Cloud Native - Goal: NFV implements networking functions in a data center environment (networking focus); cloud native treats a data center as a single OS (application focus). - Control: NFV uses orchestration; cloud native uses scheduling. - Scaling/availability/upgrade: NFV relies on active/standby and orchestration; cloud native relies on load balancing. - Framework: NFV is a network management framework, with ETSI-NFV as a common framework; cloud native is an application deployment framework that depends on the tools used. - Lifecycle: NFV applications are long-lived with dynamic configuration; cloud native applications are short-lived and immutable, with configuration stored centrally.
  12. Availability • Breakdown into micro-services is the first line of defense • Cattle vs. Pets
  13. Two key applications • Control plane: for the most part “classic cloud native” • Data plane: not supported out-of-the-box
  14. Issues With Cloud Native Data Plane • IP packet streams are not HTTP transactions: how does load balancing work? • CPU and memory are counted as resources in the Kubernetes scheduler, but what about bandwidth? • Solutions have to be built; they don’t come as part of the basic package.
  15. Conclusion. Back to basics. Behind the term “cloud native” are: • the ABCs of software engineering (modularity) • parallel computing • distributed systems. All of the above can be, and should be, applied to NFV.
  16. Cloud Native For NFV, ONS LA, March 2018, Intel. Kuralamudhan Ramakrishnan, Kuralamudhan.Ramakrishnan@intel.com, Senior Software Engineer, Software Defined Datacenter Solutions Group, Intel, representing on behalf of Ivan Coughlan, Ivan.Coughlan@intel.com, Senior Software Architect, Software Defined Datacenter Solutions Group, Intel. Contributors: Abdul Halim; Louise Daly; Swati Sehgal.
  17. Notices and Disclaimers. © 2017 Intel Corporation. Intel, the Intel logo, Xeon and Xeon logos are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer. All products, computer systems, dates, and figures specified are preliminary based on current expectations, and are subject to change without notice. No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document. Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade. Intel processors of the same SKU may vary in frequency or power as a result of natural variability in the production process. For more complete information about performance and benchmark results, visit www.intel.com/benchmarks. Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate. Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice Revision #20110804. Performance varies depending on system configuration. No computer system can be absolutely secure. Intel® Advanced Vector Extensions (Intel® AVX)* provides higher throughput to certain processor operations. Due to varying processor power characteristics, utilizing AVX instructions may cause a) some parts to operate at less than the rated frequency and b) some parts with Intel® Turbo Boost Technology 2.0 to not achieve any or maximum turbo frequencies. Performance varies depending on hardware, software, and system configuration and you can learn more at http://www.intel.com/go/turbo. Intel® Hyper-Threading Technology available on select Intel® processors. Requires an Intel® HT Technology-enabled system. Your performance varies depending on the specific hardware and software you use. Learn more by visiting http://www.intel.com/info/hyperthreading. All SKUs, frequencies, features and performance estimates are PRELIMINARY and can change without notice. Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/performance. Configurations: Based on Intel estimates.
  18. Cloud Native Software Evolution Journey: Monolith → Monolith VM → Containers → Micro-services (hindsight → today → emerging). What happened: virtualize physical appliances. Goals: automated, secure, flexible, performant, resilient.
  19. Cloud Native Software Evolution Journey (continued): the same Monolith → Monolith VM → Containers → Micro-services timeline, highlighting the containers bare-metal deployment model. Goals: automated, secure, flexible, performant, resilient.
  20. Container Bare Metal Deployment Model. Collaborate with early movers, drive open-source developments and enable the industry. Stack: hardware; host OS; Docker engine; orchestration (Cloud Native Computing Foundation); containerized virtual network functions (vCMTS, vIMS, vEPC, vCPE, vSBC, vFirewall, vRouter, vRNC); NFVi networking with SR-IOV; NFV orchestration.
  21. Addressing Key Challenges in Containers on Bare Metal. Open source: available on Intel GitHub, https://github.com/Intel-Corp | NFD at https://github.com/kubernetes-incubator/node-feature-discovery. Challenges being addressed, solutions, and software availability*: - Kubernetes networking: multiple network interfaces for VNFs → Multus CNI plug-in (open source V2.0 June ’17; upstream K8s TBD). - Data plane acceleration: high-performance data plane (E-W) → vhost-user CNI plug-in (open source V1.0 Sep ’17); high-performance data plane (N-S) → SR-IOV CNI plug-in (open source V2.0 April ’17). - Enhanced Platform Awareness (EPA): ability to request/allocate platform capabilities → Node Feature Discovery (open source Nov ’16; upstream K8s incubation, graduation TBD); CPU core pinning and isolation for K8s pods → CPU Manager for Kubernetes (open source V1.2 April ’17); dynamic huge page allocation → native huge page support for Kubernetes (upstream K8s phase 1, V1.8 Sept ’17); discovery, advertisement, scheduling and management of devices with K8s → SR-IOV device plugin (upstream K8s V1.8 Sept ’17, alpha; new implementation WIP); guarantee NUMA node resource alignment → NUMA Manager (new implementation, WIP PoC proposal; upstream K8s TBD). - Telemetry: platform telemetry information → upstream collectd V5.7.2 June ’17; 5.8.0 (Q4 2017, date TBD).
  22. Multiple Network Interfaces for VNFs. PROBLEM: Kubernetes supports only one network interface, “eth0”; in NFV use cases it is required to provide multiple network interfaces to the virtualized operating environment of the VNF. USE CASES: functional separation of control and data network planes; link aggregation/bonding for network redundancy; support for implementation of different network SLAs; network segregation and security. REFERENCE: Multus CNI, https://github.com/Intel-Corp/multus-cni; Network Plumbing WG Kubernetes proposal (Multus PoC), https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ. (Diagrams: network control flow with Multus via the kubelet; an example pod, e.g. a vFirewall or logging pod, with eth0 on a Flannel/Linux bridge plus SR-IOV interfaces net0/net1 backed by VF0/VF1.)
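As an illustration of how a pod would request an extra interface (not from the deck; the attachment name, subnet, and image are hypothetical, and the exact CRD and annotation names depend on the Multus version, per the Network Plumbing WG proposal referenced above):

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net0     # hypothetical secondary network; config delegates to another CNI plugin
spec:
  config: '{ "cniVersion": "0.3.1", "type": "sriov", "ipam": { "type": "host-local", "subnet": "10.56.217.0/24" } }'
---
apiVersion: v1
kind: Pod
metadata:
  name: vfirewall
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-net0   # Multus attaches net1 alongside the default eth0
spec:
  containers:
  - name: vfirewall
    image: example.com/vfirewall:latest       # hypothetical image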
  23. Vhost-user CNI Plugin. PROBLEM: no container networking with software acceleration for NFV, particularly for east-west traffic. SOLUTION: virtio_user/vhost_user performance is better than veth pairs; supports VPP as well as OVS-DPDK; the vhost-user CNI plugin enables K8s to leverage data plane acceleration. REFERENCE: https://github.com/intel/vhost-user-net-plugin (V1.0 Sep ’17). (Diagram: a Kubernetes pod running a DPDK VNF application over virtio_user, connected through a vhost-user port on an OVS-DPDK/VPP software switch to the NIC.)
  24. DPDK SR-IOV CNI Plugin. PROBLEM: lack of support for physical platform resource isolation; no guaranteed network I/O performance; no support for data plane networking. SOLUTION: allows SR-IOV support in Kubernetes via a CNI plugin. Supports two modes of operation: SR-IOV mode, where SR-IOV VFs are allocated to the pod network namespace, and DPDK mode, where SR-IOV VFs are bound to DPDK userspace drivers (uio_pci_generic/igb_uio/vfio-pci). REFERENCE: github.com/Intel-Corp/sriov-cni. (Diagram: a pod's VNF application using DPDK over VFs exposed by an SR-IOV-enabled network interface.)
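A sketch of what an SR-IOV CNI network configuration in DPDK mode might look like (illustrative only; the field names follow the sriov-cni README of that era as best recalled, and the PF name and tool path are assumptions to show the shape; check the repository for the exact schema):

{
  "cniVersion": "0.3.1",
  "name": "sriov-dpdk-net",
  "type": "sriov",
  "if0": "enp2s0f0",
  "if0name": "net0",
  "dpdk": {
    "kernel_driver": "i40evf",
    "dpdk_driver": "igb_uio",
    "dpdk_tool": "/opt/dpdk/usertools/dpdk-devbind.py"
  }
}

In SR-IOV mode the dpdk block would be omitted and an ipam section would assign the VF an address inside the pod's network namespace.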
  25. Node Feature Discovery. PROBLEM: no way to identify hardware capabilities or configuration; inability for a workload to request certain hardware features. SOLUTION: Node Feature Discovery (NFD) brings Enhanced Platform Awareness (EPA) to K8s. NFD detects resources on each node in a Kubernetes cluster and advertises those features as node labels, allowing workloads to be matched to platform capabilities. REFERENCE: github.com/kubernetes-incubator/node-feature-discovery. Example label details: SR-IOV (Single Root I/O Virtualization network feature); BootGuard (a hardware-based boot integrity protection mechanism, new on Purley); UEFI Secure Boot (boot firmware verification and authorization of OS loader/kernel components); AVX (CPUID features, e.g. Intel® Advanced Vector Extensions 512); Turbo Boost (Intel® Turbo Boost processor accelerator). (Diagram: NFD discovery pods on each node record labels via the master's etcd, e.g. node 1 with AVX, Turbo Boost, BootGuard and Secure Boot, node 2 with SR-IOV; application pods carrying matching labels land on the right nodes.)
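A sketch of the matching step the slide describes: a pod that should only land on SR-IOV-capable nodes selects on an NFD label. The label key shown here is hypothetical; the exact key format changed across NFD versions:

apiVersion: v1
kind: Pod
metadata:
  name: sriov-workload
spec:
  nodeSelector:
    node.alpha.kubernetes-incubator.io/nfd-network-sriov: "true"  # hypothetical NFD label key
  containers:
  - name: vnf
    image: example.com/vnf-app:latest   # hypothetical image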
  26. CPU Manager for Kubernetes (CMK): CPU Pinning and Isolation. PROBLEM: Kubernetes has no mechanism to support core pinning and isolation, which results in high-priority workloads not achieving SLAs. (*Noisy neighbour workload: an application whose behavior causes other virtual applications sharing the infrastructure to suffer from uneven performance.) SOLUTION: CPU-Manager-for-Kubernetes introduces core pinning and isolation to K8s without requiring changes to the code base. CMK guarantees that high-priority workloads are pinned to exclusive cores, giving a performance boost to high-priority applications and negating the noisy neighbour* scenario. Without CMK, the target workload and the noisy neighbour workload share cores; with CMK, the target workload gets exclusive CPUs while the noisy neighbour is confined to the remaining ones. REFERENCE: https://github.com/Intel-Corp/CPU-Manager-for-Kubernetes
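A minimal sketch of how a workload runs under CMK: the application command is wrapped with cmk isolate, which pins the process to a core from the requested pool. The binary path, pool name, image, and application path are assumptions (CMK also needs its configuration and binary mounted from the host, omitted here for brevity; see the repository for the full deployment):

apiVersion: v1
kind: Pod
metadata:
  name: pinned-workload
spec:
  containers:
  - name: target
    image: example.com/dpdk-app:latest              # hypothetical image
    command: ["/opt/bin/cmk"]                       # cmk binary mounted from the host (assumed path)
    args: ["isolate", "--pool=exclusive", "--", "/usr/bin/dpdk-app"]   # pin to an exclusive core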
  27. Experience Kit Example: CPU Manager for Kubernetes Benchmark Test. Core isolation decreases latency of the target workload by more than 13x (up to 55x) in the presence of a noisy workload, and increases target-workload throughput by more than 200% (up to 4x) for small packets. Core isolation leads to performance consistency, solving the noisy-workload problem. Tests were done with 16 target workloads (16 containers), with and without a noisy workload present; 1 core with 2 threads was assigned to each container, and the noisy workload uses any available (non-isolated) cores in the system. Platform: Intel® Xeon® Gold processor, 20C @ 2.00 GHz (6138T); DPDK L2 forwarding using XXV710 NICs.
  28. Device Plugins: Overview. WHY? Device vendors have to write custom Kubernetes code in order to integrate their devices with the ecosystem, resulting in multiple vendors maintaining custom code that is difficult for a customer to consume. HOW? Provide a device plugin framework which enables vendors to advertise, schedule and set up devices with native Kubernetes integration. Device plugins are easily deployed via K8s DaemonSets, and workloads request devices via extended resource requests in the pod specification. Devices are allocated in a controlled manner with device health being monitored, providing predictability to workloads. LIMITATIONS? The device plugin API has no de-allocation, so there is no hook to free devices on pod completion. REFERENCE: https://kubernetes.io/docs/concepts/cluster-administration/device-plugins/ (Diagram: on a Kubernetes node, the kubelet device manager talks to a vendor device plugin over gRPC and allocates devices to workloads.)
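A sketch of the extended resource request the framework expects; the vendor resource name intel.com/qat and the image are assumptions for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: qat-workload
spec:
  containers:
  - name: crypto-app
    image: example.com/crypto-app:latest   # hypothetical image
    resources:
      limits:
        intel.com/qat: 1                   # extended resource advertised by a vendor device plugin

Extended resources are integer-only and are requested under limits; compare this with the older opaque-integer-resource form shown on the QAT slide below.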
  29. NUMA Manager for Kubernetes: NUMA Alignment of Resources. PROBLEM: Kubernetes has multiple independent components that handle resource allocation, resulting in no alignment on multi-NUMA-node systems. This results in high-priority workloads not achieving SLAs, or in increased resource utilization. SOLUTION: NUMA Manager provides a mechanism to guarantee NUMA node affinity of the resources requested by a workload. It interfaces with components that have NUMA awareness (e.g. CPU Manager and Device Manager) to enable NUMA-aligned resource allocations, giving a performance boost to high-priority applications. Without NUMA Manager, a priority workload requesting a CPU and a device may get them on different sockets, crossing the inter-socket interconnect; with NUMA Manager, both are allocated from the same NUMA node. REFERENCE: https://github.com/kubernetes/community/pull/1680
  30. Cloud Native Software Evolution Journey (continued): the same Monolith → Monolith VM → Containers → Micro-services timeline, arriving at cloud-native network functions. Goals: automated, secure, flexible, performant, resilient.
  31. Key Challenges to Address for Cloud-Native Network Functions. Challenges to address in a cloud-native implementation: placement considering network resources; QoS; service function chains; high-speed wiring of NFVs; data plane management; support for cloud native and high performance; Kubernetes networking; NFV-specific policy APIs; management/control. Solutions to work together: Ligato.
  32. Cloud CMTS • The Cloud CMTS is a cloud native implementation of the Cable Modem Termination System • The physical layer is handled by the Remote PHY Device (RPD), similar to the PHY split in 5G • The Cloud CMTS processes byte streams below the MAC layer, because the RPD performs only modulation/demodulation at the PHY layer.
  33. Call to Action • We need feedback on the current ingredients, e.g. Multus, SR-IOV, vhost-user • Be active in the K8s SIGs: Network, Resource Management • Try out the Ligato dev framework
  34. Engage with Intel. Intel is addressing key challenges to using containers for NFV use cases, spanning container capabilities, container networking, and VNFs; many of these have been open sourced already, along with PoCs, experience kits, and best-practice guidelines for the software community. Explore more information on Intel’s Network Builders site; additional material will be made available throughout January 2018: https://networkbuilders.intel.com/network-technologies/container-experience-kits
  35. Cloud Native for NFV. Alon Bernstein, alonb@cisco.com, Distinguished Engineer, Cisco Systems Inc. Kuralamudhan Ramakrishnan, Kuralamudhan.Ramakrishnan@intel.com, Senior Software Engineer, Intel, representing on behalf of Ivan Coughlan, Ivan.Coughlan@intel.com, Senior Software Architect, Software Defined Datacenter Solutions Group, Intel.
  37. Backup Slides
  38. Ligato Framework for Cloud-Native VNFs. (Diagram: a Contiv-VPP switch connects pods over mem-if links; cloud-native VNF pods each carry a VPP agent and VPP TCP stack; legacy apps use the kernel host stack while high-performance apps use VPP; a service mesh (Istio/Envoy) with an Istio sidecar handles application sockets; Contiv-VPP, etcd, Ligato and the kubelet define services and topology over IPv4/IPv6.) Reference: Jan Medved’s “Ligato: a platform for development of Cloud-Native VNFs”.
  39. Experience Kit Example: Test Setup for the CPU Manager for Kubernetes Benchmark Test. Test configuration, master and minion nodes: {motherboard: Intel Corporation S2600WFQ; CPU: Intel® Xeon® Gold processor 6138T, 2.0 GHz, 2 sockets, 20 cores, 27.5 MB, 125 W; memory: Micron MTA36ASF2G72PZ, 1 DIMM/channel, 6 channels/socket; BIOS: Intel Corporation SE5C620.86B.0X.01.0007.060920171037; NIC: Intel Corporation Ethernet Controller XXV710 for 2x25GbE, firmware version 5.50; SW: Ubuntu 16.04.2 64-bit, kernel 4.4.0-62-generic x86_64, DPDK 17.05}. IXIA*: IxNetwork 8.10.1046.6 EA; protocols: 8.10.1105.9; IxOS 8.10.1250.8 EA-Patch1. Testing results summary (managing noisy neighbours using the CMK benchmark test): with core isolation, performance is consistent with or without a “noisy” application present. Without the core isolation EPA feature, in the presence of a noisy application: >70% throughput drop for small packet sizes; >10% throughput drop for large packet sizes; >10x packet latency increase.
  40. Bonding CNI Plugin. PROBLEM: there is no redundancy against network link failure in a container environment, e.g. due to failure of a NIC, a network switch, or a cable breakdown. This results in high-priority workloads not achieving the expected high availability. SOLUTION: the bonding CNI provides a mechanism to aggregate multiple network interfaces into a single logical “bonded” interface in a container environment, thus providing a fail-over, high-availability network for containerized applications such as VNFs. REFERENCE: https://github.com/Intel-Corp/bond-cni (Diagram: a pod's VNF application with a bond interface over Net 0 and Net 1, each backed by VFs from SR-IOV PF 0 and PF 1.)
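A sketch of a bonded-interface CNI configuration in the spirit of the bond-cni README (field names and values are assumptions to illustrate the shape; check the repository for the exact schema):

{
  "cniVersion": "0.3.1",
  "name": "bonded-net",
  "type": "bond",
  "mode": "active-backup",
  "miimon": "100",
  "links": [
    { "name": "net0" },
    { "name": "net1" }
  ],
  "ipam": { "type": "host-local", "subnet": "10.56.217.0/24" }
}

Combined with Multus, net0 and net1 would be SR-IOV VF interfaces from the two PFs shown in the diagram, and the bond fails over if one link goes down.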
  41. Huge Page Native Support in Kubernetes. PROBLEM: no resource management of huge pages in Kubernetes; it is the cluster operator's responsibility to handle them manually. SOLUTION: huge pages introduced as a first-class resource in Kubernetes; support for huge pages via hugetlbfs, enabled through a memory-backed volume plugin; inherent accounting of huge pages; automatic relinquishing of huge pages in case of unexpected process termination. REFERENCE: alpha support for pre-allocated huge pages; hugetlbfs support based on the emptyDir volume plugin.
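A minimal sketch of a pod consuming pre-allocated 2 Mi huge pages through the first-class resource and the memory-backed emptyDir volume the slide mentions (the image name is hypothetical, and in the K8s versions of this era the feature was alpha behind a feature gate):

apiVersion: v1
kind: Pod
metadata:
  name: hugepage-workload
spec:
  containers:
  - name: dpdk-app
    image: example.com/dpdk-app:latest   # hypothetical image
    resources:
      limits:
        hugepages-2Mi: 1Gi               # first-class huge page resource
        memory: 1Gi                      # a memory limit is also required
    volumeMounts:
    - name: hugepages
      mountPath: /hugepages              # hugetlbfs mount visible to the app
  volumes:
  - name: hugepages
    emptyDir:
      medium: HugePages                  # memory-backed volume plugin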
  42. QAT Support in Kubernetes. PROBLEM: no way to identify QAT devices available in a Kubernetes cluster; inability for a workload to request a QAT device along with other compute resources. SOLUTION: QAT support enabled through the device plugin framework introduced in Kubernetes 1.8, a vendor-independent framework for users to consume hardware devices; it enables device discovery, advertisement to the kubelet, allocation to pods, and device health checks. REFERENCE: https://kubernetes.io/docs/concepts/cluster-administration/device-plugins. Example pod specification (as on the slide, using the opaque-integer-resource form):
apiVersion: v1
kind: Pod
metadata:
  name: dpdkpod
spec:
  containers:
  - image: dpdkapp
    name: dpdkcontainer
    resources:
      requests:
        cpu: "10"
        memory: "4Gi"
        pod.alpha.kubernetes.io/opaque-int-resource-qat: '1'
      limits:
        cpu: "10"
        memory: "4Gi"
        pod.alpha.kubernetes.io/opaque-int-resource-qat: '1'
  43. NFD Secure Boot Use Case. PROBLEM: the kernel does not allow IGB_UIO-based DPDK applications on UEFI Secure Boot-enabled systems. SOLUTION: use node anti-affinity in Kubernetes to prevent DPDK applications requiring IGB_UIO driver support from landing on nodes carrying the Secure Boot label created by Node Feature Discovery. In the pod specification, anti-affinity is expressed as node affinity with a NotIn operator:
apiVersion: v1
kind: Pod
metadata:
  name: dpdkpod-requiring-uio-support
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: nfd-SecureBoot
            operator: NotIn
            values:
            - "true"
  containers:
  - image: dpdkapp
    name: dpdkcontainer
  44. Platform Telemetry System Support in Kubernetes. Container and platform telemetry: container telemetry via cAdvisor and a service mesh (Istio/Envoy); platform telemetry via open collection and resource telemetry interfaces across compute, network and storage, with Intel® Run Sure technologies and Intel® Infrastructure Management technologies in the base platform; Prometheus on top. See the Platform Service Assurance site (not including container-specific data): https://networkbuilders.intel.com/network-technologies/serviceassurance
  45. Container Networking Deployment Considerations: Multiple Deployment Models. VNFs (vCMTS, vIMS, vEPC, vCPE, vSBC) over NFVi networking (SR-IOV) and NFV orchestration (Cloud Native Computing Foundation) can be deployed as containers on bare metal, containers in VMs, hybrid (containers alongside VMs), or unified (containers and VMs on shared infrastructure).
  46. Container Unified Infrastructure Deployment Model. Stack: hardware; hypervisor; orchestration; VMs each with a guest OS and Docker engine running containerized apps; VNFs (vCMTS, vIMS, vEPC, vCPE, vSBC); NFVi networking with SR-IOV; NFV orchestration (Cloud Native Computing Foundation).
  47. Industry Challenges in Containers on Unified Infrastructure. Challenges being addressed and solutions: support for CPU core pinning for Kuryr-K8s pods → CPU Manager for Kubernetes; support for high-performance data plane (E-W) and multiple network interfaces for VNFs → Kuryr-Kubernetes, removing network performance penalties for containers in VMs; otherwise the same as in the container bare-metal model (with a master VM).
