Vector packet technologies such as DPDK and FD.io/VPP revolutionized software packet processing, initially for discrete appliances and then for NFV use cases. Container-based VNF deployments and their supporting NFV infrastructure are now the new frontier in packet processing, with a number of strong advocates among both traditional Communications Service Providers and in the Cloud. This presentation gives an overview of how the DPDK and FD.io/VPP projects are rising to meet the challenges of the Container dataplane. The discussion covers the challenges, recent new features, and what is coming soon in this exciting new area for the software dataplane, in both DPDK and FD.io/VPP!
About the speaker: Ray Kinsella has been working on Linux and various other open source technologies for about twenty years. He is currently active in open source communities such as VPP and DPDK, and a constant lurker in many others. He is interested in the software dataplane and optimization, virtualization, operating system design and implementation, and communications and networking.
5. Challenges of Containers
Containers ≠ Micro-services, but …
Containers ⇒ Micro-service-like behaviours
… operators will have to start treating their network functions less like pets and more like cattle.
Peter Willis, Chief Researcher for Converged Networks, BT.
03/22/2017 PARIS -- MPLS, SDN and NFV World Congress
6. Challenges of Containers
Micro-services are typically…
• Decomposed (modular)
• Stateless (or minimal state)
• Rapid Lifecycle (in the µseconds)
• Lightweight (in terms of CPU, Memory and I/O)
• Scalable (to the many, many 1000s)
11. 16-09 New Features
• Enhanced LISP support for:
  • L2 overlays
  • Multitenancy
  • Multihoming
  • Re-encapsulating Tunnel Routers (RTR) support
  • Map-Resolver failover algorithm
• New plugins for:
  • SNAT
  • MagLev-like Load Balancer
  • Identifier Locator Addressing
  • NSH SFC SFF's & NSH Proxy
  • Port range ingress filtering
  • Dynamically ordered subgraphs
17-01 New Features
• Hierarchical FIB
• Performance improvements:
  • DPDK input and output nodes
  • L2 path
  • IPv4 lookup node
• IPSec:
  • IPSec performance
  • SW and HW Crypto Support
• HQoS support
• Simple Port Analyzer (SPAN)
• BFD, ACL, IPFIX, SNAT
• L2 GRE over IPSec tunnels
• LLDP
• LISP Enhancements:
  • Source/Dest control plane
  • L2 over LISP and GRE
  • Map-Register/Map-Notify
  • RLOC-probing
• Flow Per Packet
17-04 New Features
• VPP Userspace Host Stack:
  • TCP stack
  • DHCPv4 & DHCPv6 relay/proxy
  • ND Proxy
• SNAT:
  • CGN: port allocation & address pool
  • CPE: external interface
  • NAT64, LW46
• Segment Routing:
  • SRv6 Network Programming
  • SR Traffic Engineering
  • SR LocalSIDs
  • Framework to expand LocalSIDs w/ plugins
• iOAM:
  • UDP Pinger
  • iOAM as type 2 metadata in NSH
  • Anycast active server selection
  • IPFIX improvements (IPv6)
VPP is rapidly evolving!
17-07 New Features
• Infrastructure:
  • DPDK 17.05
  • make test
• Host stack:
  • TCP RFC compatibility
  • TCP loss recovery
• Interfaces:
  • MemIF
  • Virtio-user
• Network features:
  • MPLS Multicast
  • MPLS Segment Routing
  • Bidirectional Forwarding Detection
  • GRE over IPv6
  • iOAM for SRv6
  • GTP-U support
  • LISP NSH support
  • VXLAN bypass
Introducing VPP
13. Scaling the Container Data-plane
1. Virtual Network Functions (VNFs): DPDK and FD.io VPP based apps in a Container.
2. Virtual Switching (vSwitches): DPDK and FD.io VPP based vSwitches, for Enterprise and Network Function Virtualization Infrastructure.
[Diagram: a network appliance container and a socket-app container (BSD Sockets API over an L2–L4 stack) attach to the host data plane over Virtio, with RX/TX queues and a packet copy at each Virtio boundary; a control plane manages the interfaces.]
14. Virtual Network Functions

Property | DPDK | FD.io VPP | Container requirement
Decomposed | Application specific | Application specific | —
Stateless | Application specific | Application specific | —
Rapid Lifecycle | Startup: no PCI scan, no contiguous memory | Startup: as DPDK, disable DPDK plugin | Must achieve 7–25 µs startup
Lightweight | Memory: 4K pages, late binding | Memory: 4K pages, late binding | 4K pages or lazy memory allocation
Scalable | Core sharing: interrupt I/O; SR-IOV; Virtio-User | Core sharing: interrupt I/O; SR-IOV; MemIF | Core sharing (interrupt driven); scalable I/O method
CC BY-ND 2.0 Image by Yoel Ben-Avraham
http://bit.ly/1tXyV0O
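The startup and memory rows above map onto DPDK EAL options that make a container-friendly launch possible. A sketch only (flag availability varies by DPDK release; `testpmd` stands in for any DPDK application, and the vhost path is a placeholder):

```shell
# Skip the PCI bus scan and hugepage setup, run from 4K anonymous pages,
# and attach I/O through a virtio-user vdev instead of a physical NIC.
testpmd -l 0-1 --no-pci --no-huge -m 512 \
        --vdev=virtio_user0,path=/dev/vhost-net \
        -- --total-num-mbufs=2048
```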
15. Virtual Switching: Bare Metal

Use Case | Description/Status
NFVi | Kubernetes & Contiv/Calico integration is in progress.
VNF | MemIF: packet interface in shared memory. Library for use by 3rd-party (non-FD.io VPP) applications. VPP 17.07 benchmark @ 4 Mpps†.
Cloud | LD Preload layer. Will give good headline performance for priority apps (NGINX, NodeJS, Redis, etc.); will take time to scale to support all socket-based apps.

[Diagram: a VNF container attaches to FD.io/VPP over MemIF (RX/TX rings, one packet copy per direction); a socket-app container uses the BSD Sockets API through an LD Preload shim into VPP's userspace L2–L4 stack via shared-memory FIFOs, bypassing the kernel; control via the Python API.]
† Platform Configuration:
64 byte packets, Cross-connect through MemIF interface.
Intel® Xeon® Processor E5-4655 v4 @ 3.2Ghz
Intel® Ethernet Converged Network Adapter XL710
FD.io VPP v17.07 & DPDK v17.05
Reference: https://docs.fd.io/csit/rls1707/report/vpp_performance_tests/
For more complete information about performance and benchmark
results, visit www.intel.com/benchmarks.
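MemIF above is a packet interface built on shared-memory rings. As a toy illustration of the idea only (the names and layout here are invented; the real libmemif API differs), a single-producer/single-consumer packet ring over a shared-memory segment might look like:

```python
# Toy memif-style packet ring: fixed-size slots in one shared-memory
# segment, single producer / single consumer. Illustrative only.
import struct
from multiprocessing import shared_memory

SLOT_SIZE = 2048   # payload bytes per descriptor slot
RING_SLOTS = 8     # ring depth

class PacketRing:
    def __init__(self):
        # Each slot: 4-byte little-endian length header + payload area.
        self.shm = shared_memory.SharedMemory(
            create=True, size=RING_SLOTS * (4 + SLOT_SIZE))
        self.head = 0  # consumer index (monotonic)
        self.tail = 0  # producer index (monotonic)
        # NB: real memif keeps the ring indices in the shared region too,
        # so both peers can see them; this demo stays single-process.

    def _off(self, idx):
        return (idx % RING_SLOTS) * (4 + SLOT_SIZE)

    def enqueue(self, pkt: bytes) -> bool:
        if self.tail - self.head == RING_SLOTS or len(pkt) > SLOT_SIZE:
            return False  # ring full or packet too large
        off = self._off(self.tail)
        struct.pack_into("<I", self.shm.buf, off, len(pkt))
        self.shm.buf[off + 4:off + 4 + len(pkt)] = pkt
        self.tail += 1
        return True

    def dequeue(self):
        if self.head == self.tail:
            return None  # ring empty
        off = self._off(self.head)
        (length,) = struct.unpack_from("<I", self.shm.buf, off)
        pkt = bytes(self.shm.buf[off + 4:off + 4 + length])
        self.head += 1
        return pkt

    def close(self):
        self.shm.close()
        self.shm.unlink()
```

In the real interface a second process attaches to the same segment by name and drains the ring with no kernel involvement on the data path; that attach step, and the polled/interrupt modes, are what libmemif provides.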
16. Virtual Switching: Master VM

[Diagram: a QEMU/KVM virtual machine running a socket app (BSD Sockets API over an L2–L4 stack) and containers attach to the host data plane over VHOST (VHOST-API) and Virtio; packets cross RX/TX queues with a copy at each VHOST boundary; a control plane manages the interfaces.]

Use Case | Description/Status
NFVi | Kubernetes & Contiv/Calico integration is in progress.
VNF | Virtio-Net: DPDK support for Virtio-Net is mature. VPP 17.07 benchmark @ 3.5 Mpps†. Supporting the Virtio 1.1 spec is a WiP.
Cloud | Virtio-Net: Linux kernel support for virtio-net is mature and widely available.
† Platform Configuration:
64 byte packets, Cross-connect through Virtio-Net interface.
Intel® Xeon® Processor E5-2699 v3 @ 2.3 Ghz
Intel® Ethernet Converged Network Adapter XL710
FD.io VPP v17.07 & DPDK v17.05
Reference: https://docs.fd.io/csit/rls1707/report/vpp_performance_tests/
For more complete information about performance and benchmark
results, visit www.intel.com/benchmarks.
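The VHOST attachments in the figure are created on the QEMU side roughly as follows (a sketch; the socket path, ids and sizes are placeholders, and the vSwitch must open the matching vhost-user socket):

```shell
# Guest memory must be a shared, file-backed region so the vSwitch can map it.
qemu-system-x86_64 -m 1024 \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem \
    -chardev socket,id=chr0,path=/tmp/vhost-user0.sock \
    -netdev type=vhost-user,id=net0,chardev=chr0 \
    -device virtio-net-pci,netdev=net0
```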
17. Virtual Switching: Master VM
Master VM for NFVi with a common method to talk to Containers and VMs,
⇒ simplifying Comms Service Provider and Cloud Service Provider deployments.
[Diagram: a Master VM vSwitch (QEMU/KVM, controlled via OVSDB/OpenFlow) switches traffic between guest VMs running socket apps (BSD Sockets API over L2–L4 stacks), containers, and network appliances, over VHOST (VHOST-API) and Virtio interfaces; each VHOST crossing incurs a packet copy.]
18. Demonstration: vpp-bootstrap†
Two Linux Containers
• cone: Network Test Tools (Scapy)
• ctwo: VPP Lite
Two Bridges
• lxcbr0: Linux bridge, for access to the containers via SSH/SCP etc.
• VPP 17.04 bridge: sandboxes the network traffic, i.e. Scapy to vpp_lite.
Authentication
• Keys are automatically provisioned for passwordless access.
† vpp-bootstrap: a Vagrant*-based VPP and Container development environment.
Reference: https://git.fd.io/vppsb/tree/vpp-bootstrap
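Bringing up the demonstration environment looks roughly like this (a sketch assuming Vagrant and a provider such as VirtualBox are installed; see the repository for the authoritative steps):

```shell
git clone https://git.fd.io/vppsb
cd vppsb/vpp-bootstrap
vagrant up        # builds the appliance, containers and bridges
vagrant ssh       # log in to the virtual appliance
```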
[Diagram: a Virtual Appliance (Ubuntu 16.04) hosts the two LXC containers, cone and ctwo (both Ubuntu 16.04); each container has an eth0 attached to the Linux bridge and a veth_link1 attached to the VPP 17.04 bridge, with standard Linux tooling (ssh, ip tools) inside.]
19. Summary
DPDK & FD.io VPP
• DPDK & FD.io VPP are developing a faster lifecycle, a lightweight footprint and a scalable design.
• FD.io VPP is developing a TCP host-stack and socket layer to accelerate socket-based applications.
• DPDK & FD.io VPP are developing a vSwitch-agnostic method to accelerate the Container Data-plane.
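The socket layer mentioned above typically works by preloading a shim that intercepts BSD socket calls and redirects them into VPP's host stack. A sketch of how such a shim is enabled (the library name and path are assumptions; check your VPP build):

```shell
# Hypothetical library path; the real name depends on the VPP release.
LD_PRELOAD=/usr/lib/libvcl_ldpreload.so nginx -g 'daemon off;'
```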
http://dpdk.org
http://fd.io
Collaborate with us to accelerate the Container Data-plane!
E-mail: ray.kinsella [at] intel.com