
Using VPP and SR-IOV with Clear Containers


Clear Containers is an Open Containers Initiative (OCI) “runtime” that launches an Intel VT-x secured hypervisor rather than a standard Linux container. An introduction to Clear Containers will be provided, followed by an overview of the CNM networking plugins that have been created to enhance network connectivity for Clear Containers. More specifically, we will demonstrate the use of VPP with DPDK and SR-IOV based networks to connect Clear Containers. Time permitting, we will provide and walk through a hands-on example of using VPP with Clear Containers.

About the speaker: Manohar Castelino is a Principal Engineer at Intel’s Open Source Technology Center. Manohar has worked on networking, network management, network processors, and virtualization for over 15 years. He is currently an architect and developer on the ciao (clearlinux.org/ciao) and Clear Containers (https://github.com/01org/cc-oci-runtime) projects, focused on networking. Manohar has spoken at many container meetups and internal conferences.


Transcript:

  1. Data plane and Intel® Clear Containers: Overview of networking POCs for Intel Clear Containers. eric.ernst@intel.com
  2. Agenda: Quick introduction to Intel® Clear Containers. A *brief* look at some data plane network technologies:
     •  DPDK
     •  VPP*
     •  SR-IOV
     Using VPP and SR-IOV with Intel Clear Containers:
     •  Demonstration of SR-IOV
     •  Hands-on lab: playing with VPP and Clear Containers
     *Other names and brands may be claimed as the property of others.
  3. Brief overview of Intel Clear Containers
  4. docker pull (a few minutes); docker run (less than 1 sec)
  5. Diagram: Linux* namespace containers. Processes are isolated by storage, network, memory, and CPU namespaces while sharing a single Linux kernel. *Other names and brands may be claimed as the property of others.
  6. HARDWARE SECURITY (image). Source: https://medias.monuments-nationaux.fr/var/cmn_inter/storage/images/2/6/0/8/21308062-1-fre-FR/C-0g7UPXkAIV7Gb_block-socialnetwork.jpg
  7. SOFTWARE SECURITY (image). Source: http://www.adventurebouncecork.ie/wp-content/uploads/2013/04/Crayon-Combi-Front-1280x.jpg
  8. Speed + Security?
  9. Diagram: Linux* namespace containers (processes sharing the host kernel via namespaces) side by side with Intel® Clear Containers, where each container’s processes run with namespaces inside their own minimal Linux kernel on top of Intel® VT-x. *Other names and brands may be claimed as the property of others.
  10. Interesting, so what? Telecommunications and its applicability to Intel Clear Containers
  11. Historically a strong place for virtual machines (VMs). Telco usage and requirements tend to favor VMs over containers, at least for the time being:
     •  Reliability: five 9s, i.e., computing infrastructure working 99.999% of the time.
     •  Highly customized stacks, with optimizations throughout the workload’s stack, including in the kernel.
     •  Use cases aren’t necessarily throw-away, short-lived cloud workloads; they demand lower downtime.
     Intel Clear Containers provides a unique feature set relative to standard containers.
  12. Diagram: a service chain of VNFs (VNF-1, VNF-2A, VNF-2B, VNF-2C, VNF-3) between an ingress and an egress endpoint.
  13. Diagram: the same VNF chain, with each VNF hosted in its own VM (VM-1 through VM-5), between the ingress and the egress endpoint.
  14. Why we cannot just use Linux* networking: motivation for data plane acceleration. *Other names and brands may be claimed as the property of others.
  15. Time budget for processing a 64 B packet (wire time per packet; reproduced in the sketch below):
     •  1 Gb/s: 672 ns
     •  10 Gb/s: 67.2 ns
     •  40 Gb/s: 16.8 ns
     •  100 Gb/s: 6.72 ns
     Processing one packet at 10 Gb/s: 67.2 ns, i.e., ~201 cycles @ 3 GHz. For comparison:
     •  Cache: L2 access 12 cycles; L3 access 44 cycles; cache hit in another core: 60 cycles.
     •  Memory: ~70 ns, i.e., ~210 cycles!
     Source: https://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf
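To make the budget concrete, here is a small C program reproducing the arithmetic above. The 84 wire bytes (the 64 B minimum frame plus preamble, start-of-frame delimiter, and inter-frame gap) and the 3 GHz clock are the slide's assumptions:

    #include <stdio.h>

    int main(void)
    {
        /* A 64 B minimum Ethernet frame occupies 84 B on the wire:
         * 7 B preamble + 1 B SFD + 64 B frame + 12 B inter-frame gap. */
        const double wire_bits = 84 * 8;   /* 672 bits per packet   */
        const double cpu_ghz = 3.0;        /* cycles per nanosecond */
        const double rates_gbps[] = { 1, 10, 40, 100 };

        for (int i = 0; i < 4; i++) {
            /* bits divided by (gigabits/second) yields nanoseconds */
            double budget_ns = wire_bits / rates_gbps[i];
            printf("%5.0f Gb/s: %7.2f ns per packet (~%.0f cycles @ %.0f GHz)\n",
                   rates_gbps[i], budget_ns, budget_ns * cpu_ghz, cpu_ghz);
        }
        return 0;
    }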
  16. DPDK: Data Plane Development Kit
  17. DPDK:
     •  A framework for fast packet processing.
     •  Puts/gets packets directly to/from the DMA area, using huge pages, to avoid copies.
     •  Works in poll mode (no interrupt context switch or latency); polls data directly from the NIC. A minimal sketch of such a loop follows below.
     •  Uses huge pages to avoid dealing with paging.
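To illustrate the poll-mode idea, here is a minimal C sketch of a DPDK forwarding loop, modeled on DPDK's basic forwarding sample. EAL, mempool, and port/queue setup (rte_eth_dev_configure, rte_eth_rx_queue_setup, and so on) are omitted for brevity, and the port numbers are assumptions:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* Busy-poll one port and forward to another: no interrupts, no copies.
     * Assumes both ports were configured and started during setup. */
    static void forward_loop(uint16_t rx_port, uint16_t tx_port)
    {
        struct rte_mbuf *bufs[BURST_SIZE];

        for (;;) {
            /* Poll the NIC's RX ring directly from user space. */
            uint16_t nb_rx = rte_eth_rx_burst(rx_port, 0, bufs, BURST_SIZE);
            if (nb_rx == 0)
                continue;

            /* Hand the same mbufs to the TX ring: zero-copy forwarding. */
            uint16_t nb_tx = rte_eth_tx_burst(tx_port, 0, bufs, nb_rx);

            /* Free any packets the TX ring could not accept. */
            for (uint16_t i = nb_tx; i < nb_rx; i++)
                rte_pktmbuf_free(bufs[i]);
        }
    }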
  18. VPP*: Vector Packet Processing platform. *Other names and brands may be claimed as the property of others. Source: http://events.linuxfoundation.org/sites/events/files/slides/fdio%20-%20Universal%20Dataplane.pdf
  19. VPP* quick overview:
     •  Built on a packet processing graph.
     •  Modular: it is a framework to which you can add your own graph nodes.
     •  Takes in a vector of packets at a time to amortize the cost of a D-cache miss and mitigate read latency; nodes are optimized to keep the I-cache hot (see the conceptual sketch below).
     •  All user space; makes use of DPDK.
     *Other names and brands may be claimed as the property of others. Source: http://events.linuxfoundation.org/sites/events/files/slides/fdio%20-%20Universal%20Dataplane.pdf
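The following is a conceptual C sketch, not the actual VPP API, of why processing a vector of packets per node pass helps: prefetching the next packet overlaps its D-cache miss with work on the current packet, and one pass of the node's code over the whole vector keeps the I-cache hot. The TTL-decrement body is a hypothetical stand-in for a real node's work:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint8_t *data;   /* start of packet data   */
        uint16_t len;    /* packet length in bytes */
    } pkt_t;

    /* One "graph node" pass over a vector of packets. */
    static void node_process_vector(pkt_t *pkts[], size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            /* Prefetch the next packet so its D-cache miss overlaps
             * with the work done on the current packet. */
            if (i + 1 < n)
                __builtin_prefetch(pkts[i + 1]->data);

            /* Stand-in per-packet work: decrement the IPv4 TTL, which for
             * an untagged Ethernet frame sits at byte 14 + 8 = 22. */
            if (pkts[i]->len > 22)
                pkts[i]->data[22] -= 1;
        }
    }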
  20. VPP*: Learn more
     •  https://wiki.fd.io/view/VPP
     •  Overview, including architecture and performance: http://events.linuxfoundation.org/sites/events/files/slides/fdio%20-%20Universal%20Dataplane.pdf
     •  CNM plugin for Intel Clear Containers: https://github.com/clearcontainers/vpp
     *Other names and brands may be claimed as the property of others.
  21. SR-IOV: Single Root I/O Virtualization
  22. SR-IOV:
     •  An extension of the PCI specification.
     •  Allows a device to partition its resources among multiple PCIe hardware functions.
     •  Physical function (PF): the primary function of the device, which also exposes the SR-IOV capability to the platform.
     •  Virtual function (VF): once enabled, a VF shows up on the host with its own BDF and, depending on the device’s hardware design, can operate in its own IOMMU group. (A sketch of enabling VFs follows below.)
     •  With SR-IOV, you can pass a physical connection directly to a VM, allowing native performance (no virtualization overhead).
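VFs are created through the kernel's standard sriov_numvfs sysfs attribute. Below is a minimal C sketch; the PF's PCI address (0000:03:00.0) and the VF count of 4 are assumptions for illustration:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Hypothetical PF address; find yours with lspci, and check the
         * sibling sriov_totalvfs attribute for the device's VF limit. */
        const char *path = "/sys/bus/pci/devices/0000:03:00.0/sriov_numvfs";

        FILE *f = fopen(path, "w");
        if (!f) {
            perror("fopen (requires root and an SR-IOV capable PF)");
            return EXIT_FAILURE;
        }
        fprintf(f, "%d", 4);   /* enable 4 virtual functions */
        fclose(f);
        return EXIT_SUCCESS;
    }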
  23. SR-IOV: Learn more
     •  Overview of SR-IOV: https://www.intel.sg/content/dam/doc/application-note/pci-sig-sr-iov-primer-sr-iov-technology-paper.pdf
     •  CNM plugin for Intel Clear Containers: https://github.com/clearcontainers/sriov
     •  Example of using SR-IOV with Intel Clear Containers: https://gist.github.com/egernst/b0fd8ec4d9bd5cc323d2d00a0ad1f81c
  24. Putting it ALL together
  25. Diagram: recap of the VNF-in-VM chain (VM-1 through VM-5) between the ingress and the egress endpoint.
  26. Diagram: the VNF chain wired with virtualized I/O: each VM runs its VNF over virtio-net interfaces in the guest, which qemu backs with vhost-user ports on the host, from the ingress endpoint through VM-1 to VM-5 to the egress endpoint.
  27. Diagram: mixed attachment types across three VMs: VM-1 ingresses from an SR-IOV NIC via VFIO-PCI passthrough and connects onward through a VPP L2 cross-connect (XC) over vhost-user/virtio-net; VM-2 attaches on both sides via vhost-user; VM-3 egresses through a second SR-IOV NIC via VFIO-PCI.
  28. Diagram: the same chain in detail. In each guest, DPDK poll mode drivers (PMDs) in guest userspace drive virtio-net devices for VNF processing; in host userspace, qemu exposes vhost-user ports that VPP L2 cross-connects switch; the SR-IOV NICs attach at ingress and egress via VFIO-PCI, bypassing the host kernel.
  29. Support in Intel Clear Containers
  30. CNM-based plugins:
     •  OVS-DPDK: the plugin creates an OVS bridge on the host per Docker* network created. Each container added to the network is given a vhost-user interface; Intel Clear Containers looks for this vhost-user socket to determine whether special network setup is required.
     •  VPP*: the plugin adds endpoints of the same VPP-based network to the same L2 bridge domain; as above, Intel Clear Containers looks for the vhost-user socket to determine whether special network setup is required.
     •  SR-IOV: the plugin assumes that VFs have already been created on the host, and places a VF into the container’s network namespace (this works with runc out of the box).
     $ go get github.com/clearcontainers/ovsdpdk
     $ go get github.com/clearcontainers/vpp
     $ go get github.com/clearcontainers/sriov
     *Other names and brands may be claimed as the property of others.
  31. Demonstration
  32. Disclaimers: Intel and the Intel logo are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. © Intel Corporation
