The Universal Dataplane
FD.io: The Universal Dataplane
•  Project at Linux Foundation
•  Multi-party
•  Multi-project
•  Software Dataplane
•  High throughput
•  Low Latency
•  Feature Rich
•  Resource Efficient
•  Bare Metal/VM/Container
•  Multiplatform
Bare Metal/VM/Container

•  Fd.io Scope:
•  Network IO - NIC/vNIC <-> cores/threads
•  Packet Processing – Classify/Transform/Prioritize/Forward/Terminate
•  Dataplane	Management	Agents	-		ControlPlane	
Dataplane	Management	Agent	
Packet	Processing	
Network	IO
Fd.io in the overall stack
Hardware
Network Controller
Orchestration
Operating System
Data Plane Services
Application Layer/App Server
Packet Processing | Network IO | Dataplane Management Agent
Multiparty: Broad Membership
Service	Providers	 Network	Vendors	 Chip	Vendors	
Integrators
Multiparty: Broad Contribution
Universitat	Politècnica	de	Catalunya	(UPC)	
Yandex	
Qiniu
Code Activity
2016-02-11 to 2017-05-14

               fd.io   OVS    DPDK
Commits        6662    2625   4430
Contributors   173     157    282
Organizations  45      56     84

[Bar charts: Commits, Contributors, and Organizations for fd.io, OVS, and DPDK over the same period]
FD.io is the Universal Dataplane
Multiproject: Fd.io Projects
Dataplane Management Agent: Honeycomb, hc2vpp
Testing/Support: CSIT, puppet-fdio, trex
Packet Processing: VPP, NSH_SFC, ONE, TLDK, odp4vpp, CICN, VPP Sandbox
Network IO: deb_dpdk, rpm_dpdk
govpp
Fd.io Integrations
[Integration diagram]
Data Plane: VPP, managed by a Honeycomb agent (Netconf/Yang).
Control Plane: ODL applications – VBD app, GBP app, SFC (Netconf/Yang), and the Lispflowmapping app (LISP Mapping Protocol); OpenStack Neutron with an ODL Plugin, an FD.io Plugin, and the FD.io ML2 Agent (REST).
Integration work done at …
Vector Packet Processor - VPP
•  Packet Processing Platform:
•  High performance
•  Linux User space
•  Runs on commodity CPUs (x86 / ARM / Power)
•  Shipping at volume in server & embedded products since 2004
Bare	Metal/VM/Container	
Dataplane	Management	Agent	
Packet	Processing	
Network	IO
VPP Architecture: Packet Processing
Packet	
Vector	of	n	packets	
ethernet-input	
dpdk-input	 vhost-user-input	 af-packet-input	
ip4-input	ip6-input	 arp-input	
ip6-lookup	 ip4-lookup	
ip6-local	ip6-rewrite	 ip4-rewrite	ip4-local	
mpls-input	
…	
…	
Packet	Processing		
Graph	
Graph	Node	
Input	Graph	Node	
…
VPP Architecture: Splitting the Vector
Packet	
Vector	of	n	packets	
ethernet-input	
dpdk-input	 vhost-user-input	 af-packet-input	
ip4-input	ip6-input	 arp-input	
ip6-lookup	 ip4-lookup	
ip6-local	ip6-rewrite	 ip4-rewrite	ip4-local	
mpls-input	
…	
…	
Packet	Processing		
Graph	
Graph	Node	
Input	Graph	Node	
…
VPP Architecture: Plugins
…	
Packet	
Vector	of	n	packets	
ethernet-input	
dpdk-input	 vhost-user-input	 af-packet-input	
ip4-input	ip6-input	 arp-input	
ip6-lookup	 ip4-lookup	
ip6-local	ip6-rewrite	 ip4-rewrite	ip4-local	
mpls-input	
…	
…	
custom-1	
custom-2	 custom-3	
Packet	Processing		
Graph	
Graph	Node	
Input	Graph	Node	
Plugin: /usr/lib/vpp_plugins/foo.so

Plugins are:
    First class citizens
    That can:
        Add graph nodes
        Add API
        Rearrange the graph
    Can be built independently of VPP source tree

Hardware Plugin: hw-accel-input
    Skip software nodes where work is done by hardware already
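As a rough illustration of what "plugins are first class citizens" means, the sketch below registers one custom graph node from a plugin. The macros (VLIB_REGISTER_NODE, VLIB_PLUGIN_REGISTER) are VPP's real plugin mechanism, but the node name, function body, and chosen fields are simplified placeholders, and exact structure fields vary between VPP releases; treat this as a sketch rather than a complete, buildable plugin.

```c
/* Minimal sketch of a VPP plugin adding a graph node ("custom-1").
 * Illustrative only: the node does no real work. Built as a shared
 * object and installed into /usr/lib/vpp_plugins/, VPP loads it at
 * startup and wires the node into the packet processing graph. */
#include <vlib/vlib.h>
#include <vnet/vnet.h>
#include <vnet/plugin/plugin.h>

static uword
custom1_node_fn (vlib_main_t *vm, vlib_node_runtime_t *node,
                 vlib_frame_t *frame)
{
  /* A real node would walk the vector of buffer indices in 'frame',
   * process each packet, and enqueue it to one of its next nodes. */
  return frame->n_vectors;
}

/* Adds the node to the graph as a first-class citizen. */
VLIB_REGISTER_NODE (custom1_node) = {
  .function = custom1_node_fn,
  .name = "custom-1",
  .vector_size = sizeof (u32),
  .n_next_nodes = 1,
  .next_nodes = { [0] = "ip4-lookup" },   /* rearranges/extends the graph */
};

/* Registers the shared object as a VPP plugin. */
VLIB_PLUGIN_REGISTER () = {
  .version = "1.0",
  .description = "Illustrative custom-1 plugin",
};
```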
VPP: How does it work?
* approx. 173 nodes in default deployment

Graph nodes (examples): dpdk-input, af-packet-input, vhost-user-input, ethernet-input, lldp-input, mpls-input, ...-no-checksum, ip4-input, ip6-input, arp-input, cdp-input, l2-input, ip4-lookup, ip4-lookup-multicast, ip4-rewrite-transit, ip4-load-balance, ip4-midchain, mpls-policy-encap, interface-output

Vector: Packet 0 … Packet 10

Packet processing is decomposed into a directed graph of nodes …
… packets are moved through the graph nodes in vectors …
… graph nodes are optimized to fit inside the microprocessor's instruction cache …
… packets are pre-fetched into the microprocessor's data cache …
VPP: How does it work?

dispatch fn()
  while packets in vector
    while 4 or more packets
      Get pointer to vector
      PREFETCH #3 and #4
      PROCESS #1 and #2
      ASSUME next_node same as last packet
      Update counters, advance buffers
      Enqueue the packet to next_node
    while any packets
      <as above, but single packet>

Microprocessor running ethernet-input on Packet 1 and Packet 2

… packets are processed in groups of four; any remaining packets are processed one by one …
… the instruction cache is warm with the instructions from a single graph node …
… the data cache is warm with a small number of packets …
VPP: How does it work?

dispatch fn()
  while packets in vector
    while 4 or more packets
      Get pointer to vector
      PREFETCH #1 and #2
      PROCESS #1 and #2
      ASSUME next_node same as last packet
      Update counters, advance buffers
      Enqueue the packet to next_node
    while any packets
      <as above, but single packet>

Microprocessor running ethernet-input on Packet 1 and Packet 2

… prefetch packets #1 and #2 …
VPP: How does it work?

dispatch fn()
  while packets in vector
    while 4 or more packets
      Get pointer to vector
      PREFETCH #3 and #4
      PROCESS #1 and #2
      ASSUME next_node same as last packet
      Update counters, advance buffers
      Enqueue the packet to next_node
    while any packets
      <as above, but single packet>

Microprocessor running ethernet-input on Packet 1, Packet 2, Packet 3, Packet 4

… process packets #3 and #4 …
… update counters, enqueue packets to the next node …
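The dispatch loop sketched on these slides can be written out in plain C. The sketch below is illustrative only and is not VPP's actual dispatch code: packet_t, process_one(), and enqueue_to_next() are hypothetical stand-ins for VPP's buffer and graph-node APIs, and __builtin_prefetch is a GCC/Clang builtin standing in for VPP's prefetch macros. Only the loop structure mirrors the slides: process two packets while prefetching the next two until fewer than four remain, then finish one by one.

```c
#include <stddef.h>
#include <stdio.h>

typedef struct { int id; } packet_t;                  /* hypothetical packet handle */

static void process_one (packet_t *p)     { (void) p; /* per-packet node work */ }
static void enqueue_to_next (packet_t *p) { printf ("enqueue %d to next_node\n", p->id); }

/* Dispatch a vector of packets: prefetch two ahead while working on the
 * current two, then process any leftover packets one by one. */
static void
dispatch_vector (packet_t **pkts, size_t n)
{
  size_t i = 0;

  while (n - i >= 4)                      /* quad loop: 4 or more packets left */
    {
      __builtin_prefetch (pkts[i + 2]);   /* PREFETCH #3 and #4 */
      __builtin_prefetch (pkts[i + 3]);
      process_one (pkts[i + 0]);          /* PROCESS #1 and #2 */
      process_one (pkts[i + 1]);
      enqueue_to_next (pkts[i + 0]);      /* assume next_node same as last packet */
      enqueue_to_next (pkts[i + 1]);
      i += 2;                             /* update counters, advance buffers */
    }

  while (i < n)                           /* single loop: any remaining packets */
    {
      process_one (pkts[i]);
      enqueue_to_next (pkts[i]);
      i++;
    }
}

int
main (void)
{
  packet_t pkts[10];
  packet_t *vec[10];
  for (int k = 0; k < 10; k++) { pkts[k].id = k; vec[k] = &pkts[k]; }
  dispatch_vector (vec, 10);              /* a vector of n = 10 packets */
  return 0;
}
```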
VPP Architecture: Programmability
Architecture: a dataplane management agent runs on the same Linux host as VPP and talks to it over shared memory, using a request queue and a response queue; each request message gets an asynchronous response message (~900k requests/s).

Example: Honeycomb. The Honeycomb agent uses the same shared-memory request/response queues toward VPP and exposes a Netconf/Restconf/Yang control plane protocol northbound.

Can use C, Java, Python, or Lua language bindings.
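A toy sketch of the shared-memory request/response pattern described above. This is not the real VPP binary API: the message layout, queue depth, and names (api_msg_t, post_request, poll_response) are invented for illustration, and a real implementation needs proper memory barriers and notification. It only shows the asynchronous shape of the exchange: the agent posts a request on one queue and later polls a second queue for the matching response, correlated by a context value.

```c
#include <stdint.h>

#define QUEUE_DEPTH 256

typedef struct { uint32_t msg_id; uint32_t context; uint8_t payload[56]; } api_msg_t;

typedef struct {
  volatile uint32_t head, tail;          /* producer advances head, consumer advances tail */
  api_msg_t slots[QUEUE_DEPTH];
} msg_queue_t;

/* Shared-memory region holding one queue in each direction. */
typedef struct { msg_queue_t request_q, response_q; } shared_region_t;

/* Agent side: post a request; the reply arrives later on response_q,
 * matched by the 'context' field (asynchronous, non-blocking). */
static int
post_request (shared_region_t *shm, const api_msg_t *m)
{
  msg_queue_t *q = &shm->request_q;
  uint32_t next = (q->head + 1) % QUEUE_DEPTH;
  if (next == q->tail)
    return -1;                           /* queue full */
  q->slots[q->head] = *m;
  q->head = next;                        /* real code needs memory barriers here */
  return 0;
}

/* Agent side: poll for an asynchronous response message. */
static int
poll_response (shared_region_t *shm, api_msg_t *out)
{
  msg_queue_t *q = &shm->response_q;
  if (q->tail == q->head)
    return 0;                            /* nothing yet */
  *out = q->slots[q->tail];
  q->tail = (q->tail + 1) % QUEUE_DEPTH;
  return 1;
}
```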
VPP Universal Fast Dataplane: Performance at Scale [1/2]
Per CPU core throughput with linear multi-thread(-core) scaling
Hardware:	
Cisco	UCS	C240	M4
Intel®	C610	series	chipset
2	x	Intel®	Xeon®	Processor	E5-2698	v3	
(16	cores,	2.3GHz,	40MB	Cache)	
2133	MHz,	256	GB	Total
6	x	2p40GE	Intel	XL710=12x40GE
[Charts: Packet Throughput, NDR (Zero Frame Loss), per frame size (64B, 128B, IMIX, 1518B) vs. number of interfaces and CPU cores (2x 40GE/2 core … 12x 40GE/12 core), with I/O NIC max-pps and NIC max-bw shown for reference]
  - IPv4 routing, service scale = 1 million IPv4 route entries: throughput in Mpps and in Gbps
  - IPv6 routing, service scale = 0.5 million IPv6 route entries: throughput in Mpps and in Gbps
Actual multi-core scaling (mid-points interpolated): 24, 45.36, 66.72, 88.08, 109.44, 130.8

IPv4 Thput [Mpps]    2x 40GE   4x 40GE   6x 40GE   8x 40GE   10x 40GE   12x 40GE
                     2 core    4 core    6 core    8 core    10 core    12 core
64B                  24.0      45.4      66.7      88.1      109.4      130.8
128B                 24.0      45.4      66.7      88.1      109.4      130.8
IMIX                 15.0      30.0      45.0      60.0      75.0       90.0
1518B                3.8       7.6       11.4      15.2      19.0       22.8
I/O NIC max-pps      35.8      71.6      107.4     143.2     179        214.8
NIC max-bw [Gbps]    46.8      93.5      140.3     187.0     233.8      280.5

Actual multi-core scaling (mid-points interpolated): 19.2, 35.36, 51.52, 67.68, 83.84, 100

IPv6 Thput [Mpps]    2x 40GE   4x 40GE   6x 40GE   8x 40GE   10x 40GE   12x 40GE
                     2 core    4 core    6 core    8 core    10 core    12 core
64B                  19.2      35.4      51.5      67.7      83.8       100.0
128B                 19.2      35.4      51.5      67.7      83.8       100.0
IMIX                 15.0      30.0      45.0      60.0      75.0       90.0
1518B                3.8       7.6       11.4      15.2      19.0       22.8
I/O NIC max-pps      35.8      71.6      107.4     143.2     179        214.8
NIC max-bw [Gbps]    46.8      93.5      140.3     187.0     233.8      280.5
Packet Traffic Generator, 12x 40GE interfaces
Topology: Phy-VS-Phy

Software
Linux: Ubuntu 16.04.1 LTS, kernel 4.4.0-45-generic
FD.io VPP: VPP v17.01-5~ge234726 (DPDK 16.11)

Resources
1 physical CPU core per 40GE port
Other CPU cores available for other services and other work
20 physical CPU cores available in the 12x40GE setup
Lots of headroom for much more throughput and features

IPv4 Routing | IPv6 Routing
VPP Universal Fast Dataplane: Performance at Scale [2/2]
Per CPU core throughput with linear multi-thread(-core) scaling
L2 Switching | L2 Switching with VXLAN Tunneling

[Charts: Packet Throughput, NDR (Zero Frame Loss), per frame size (64B, 128B, IMIX, 1518B) vs. number of interfaces and CPU cores (2x 40GE/2 core … 12x 40GE/12 core), at service scales of 16 thousand and 100 thousand MAC L2 entries; throughput shown in Mpps and in Gbps, with I/O NIC max-pps and NIC max-bw for reference]
Actual multi-core scaling (mid-points interpolated): 11.6, 25.12, 38.64, 52.16, 65.68, 79.2

MAC Thput [Mpps]     2x 40GE   4x 40GE   6x 40GE   8x 40GE   10x 40GE   12x 40GE
                     2 core    4 core    6 core    8 core    10 core    12 core
64B                  11.6      25.1      38.6      52.2      65.7       79.2
128B                 11.6      25.1      38.6      52.2      65.7       79.2
IMIX                 10.5      21.0      31.5      42.0      52.5       63.0
1518B                3.8       7.6       11.4      15.2      19.0       22.8
I/O NIC max-pps      35.8      71.6      107.4     143.2     179        214.8
NIC max-bw [Gbps]    46.8      93.5      140.3     187.0     233.8      280.5

Actual multi-core scaling (mid-points interpolated): 20.8, 38.36, 55.92, 73.48, 91.04, 108.6

MAC Thput [Mpps]     2x 40GE   4x 40GE   6x 40GE   8x 40GE   10x 40GE   12x 40GE
                     2 core    4 core    6 core    8 core    10 core    12 core
64B                  20.8      38.4      55.9      73.5      91.0       108.6
128B                 20.8      38.4      55.9      73.5      91.0       108.6
IMIX                 15.0      30.0      45.0      60.0      75.0       90.0
1518B                3.8       7.6       11.4      15.2      19.0       22.8
I/O NIC max-pps      35.8      71.6      107.4     143.2     179        214.8
NIC max-bw [Gbps]    46.8      93.5      140.3     187.0     233.8      280.5
Hardware:
Cisco UCS C240 M4
Intel® C610 series chipset
2 x Intel® Xeon® Processor E5-2698 v3 (16 cores, 2.3GHz, 40MB Cache)
2133 MHz, 256 GB Total
6 x 2p40GE Intel XL710 = 12x40GE

Packet Traffic Generator, 12x 40GE interfaces
Topology: Phy-VS-Phy

Software
Linux: Ubuntu 16.04.1 LTS, kernel 4.4.0-45-generic
FD.io VPP: VPP v17.01-5~ge234726 (DPDK 16.11)

Resources
1 physical CPU core per 40GE port
Other CPU cores available for other services and other work
20 physical CPU cores available in the 12x40GE setup
Lots of headroom for much more throughput and features
VPP Performance at Scale
Topology: Phy-VS-Phy
Zero-packet-loss throughput for 12 port 40GE

[Charts: zero-frame-loss throughput vs. FIB size, 64B and 1518B frames]
  IPv6, 24 of 72 cores (12 … 2M routes): 480 Gbps zero frame loss; 200 Mpps zero frame loss
  IPv4 + 2k whitelist, 36 of 72 cores (1k … 8M routes): IMIX => 342 Gbps, 1518B => 462 Gbps; 64B => 238 Mpps

Hardware:
Cisco UCS C460 M4
Intel® C610 series chipset
4 x Intel® Xeon® Processor E7-8890 v3 (18 cores, 2.5GHz, 45MB Cache)
2133 MHz, 512 GB Total
9 x 2p40GE Intel XL710
18 x 40GE = 720GE !!

Latency
18 x 7.7 trillion packets soak test
Average latency: <23 usec
Min latency: 7…10 usec
Max latency: 3.5 ms

Headroom
Average vector size ~24-27
Max vector size 255
Headroom for much more throughput/features
NIC/PCI bus is the limit, not VPP
FD.io is evolving rapidly!

16-06 Release: VPP
16-09 Release: VPP, Honeycomb, NSH_SFC, ONE
17-01 Release: VPP, Honeycomb, NSH_SFC, ONE
17-04 Release: VPP, Honeycomb, NSH_SFC, ONE…

16-06 New Features
Enhanced Switching & Routing
    SRv6 spray use-case (IP-TV)
    LISP xTR support
    VXLAN over IPv6 underlay
    Per interface whitelists
    Shared adjacencies in FIB
Improved interface support
    vhost-user – jumbo frames
    Netmap interface support
    AF_Packet interface support
Improved programmability
    Python API bindings
    Enhanced JVPP Java API bindings
    Enhanced debugging cli
Hardware and Software Support
    Support for ARM 32 targets
    Support for Raspberry Pi
    Support for DPDK 16.04

16-09 New Features
Enhanced LISP support for
    L2 overlays
    Multitenancy
    Multihoming
    Re-encapsulating Tunnel Routers (RTR) support
    Map-Resolver failover algorithm
New plugins for
    SNAT
    MagLev-like Load Balancer
    Identifier Locator Addressing
    NSH SFC SFF's & NSH Proxy
Port range ingress filtering
Dynamically ordered subgraphs

17-01 New Features
Hierarchical FIB
Performance Improvements
    DPDK input and output nodes
    L2 Path
    IPv4 lookup node
IPSEC Performance
    SW and HW Crypto Support
HQoS support
Simple Port Analyzer (SPAN)
BFD, ACL, IPFIX, SNAT
L2 GRE over IPSec tunnels
LLDP
LISP Enhancements
    Source/Dest control plane
    L2 over LISP and GRE
    Map-Register/Map-Notify
    RLOC-probing
Flow Per Packet

17-04 New Features
VPP Userspace Host Stack
    TCP stack
    DHCPv4 & DHCPv6 relay/proxy
    ND Proxy
SNAT
    CGN: port allocation & address pool
    CPE: External interface
    NAT64, LW46
Segment Routing
    SRv6 Network Programming
    SR Traffic Engineering
    SR LocalSIDs
    Framework to expand LocalSIDs w/ plugins
iOAM
    UDP Pinger
    iOAM as type 2 metadata in NSH
    Anycast active server selection
IPFIX Improvements (IPv6)
Universal Dataplane: Features
Tunnels/Encaps
    GRE/VXLAN/VXLAN-GPE/LISP-GPE/NSH
    IPSEC, including HW offload when available

Interfaces
    DPDK/Netmap/AF_Packet/TunTap
    Vhost-user – multi-queue, reconnect, Jumbo Frame Support

MPLS
    MPLS over Ethernet/GRE
    Deep label stacks supported

Segment Routing
    SR MPLS/IPv6
    Including Multicast

Inband iOAM
    Telemetry export infra (raw IPFIX)
    iOAM for VXLAN-GPE (NGENA)
    SRv6 and iOAM co-existence
    iOAM proxy mode / caching
    iOAM probe and responder

LISP
    LISP xTR/RTR
    L2 Overlays over LISP and GRE encaps
    Multitenancy
    Multihome
    Map/Resolver Failover
    Source/Dest control plane support
    Map-Register/Map-Notify/RLOC-probing

Language Bindings
    C/Java/Python/Lua

Hardware Platforms
    Pure Userspace – X86, ARM 32/64, Power
    Raspberry Pi

Routing
    IPv4/IPv6
    14+ Mpps, single core
    Hierarchical FIBs
    Multimillion FIB entries
    Source RPF
    Thousands of VRFs
    Controlled cross-VRF lookups
    Multipath – ECMP and Unequal Cost

Network Services
    DHCPv4 client/proxy
    DHCPv6 Proxy
    MAP/LW46 – IPv4aas
    MagLev-like Load Balancer
    Identifier Locator Addressing
    NSH SFC SFF's & NSH Proxy
    LLDP
    BFD
    Policer
    Multiple million Classifiers – Arbitrary N-tuple

Switching
    VLAN Support
        Single/Double tag
        L2 forwarding w/ EFP/BridgeDomain concepts
    VTR – push/pop/Translate (1:1, 1:2, 2:1, 2:2)
    MAC Learning – default limit of 50k addresses
    Bridging
        Split-horizon group support/EFP Filtering
    Proxy ARP
    ARP termination
    IRB – BVI Support with RouterMac assignment
    Flooding
    Input ACLs
    Interface cross-connect
    L2 GRE over IPSec tunnels

Monitoring
    Simple Port Analyzer (SPAN)
    IP Flow Export (IPFIX)
    Counters for everything
    Lawful Intercept

Security
    Mandatory Input Checks:
        TTL expiration
        header checksum
        L2 length < IP length
        ARP resolution/snooping
        ARP proxy
    SNAT
    Ingress Port Range Filtering
    Per interface whitelists
    Policy/Security Groups/GBP (Classifier)
New in 17.04

VPP Userspace Host Stack
    TCP stack
    DHCPv4 relay multi-destination
    DHCPv4 option 82
    DHCPv6 relay multi-destination
    DHCPv6 relay remote-id
    ND Proxy

SNAT
    CGN: Configurable port allocation
    CGN: Configurable address pooling
    CPE: External interface
    DHCP support
    NAT64, LW46

Security Groups
    Routed interface support
    L4 filters with IPv6 Extension Headers

API
    Move to CFFI for Python binding
    Python packaging improvements
    CLI over API
    Improved C/C++ language binding

Segment Routing v6
    SR policies with weighted SID lists
    Binding SID
    SR steering policies
    SR LocalSIDs
    Framework to expand local SIDs w/ plugins

iOAM
    UDP Pinger w/ path fault isolation
    iOAM as type 2 metadata in NSH
    iOAM raw IPFIX collector and analyzer
    Anycast active server selection

IPFIX
    Collect IPv6 information
    Per flow state
Continuous Quality, Performance, Usability
Built into the development process – patch by patch

Submit -> Automated Verify -> Code Review -> Merge -> Publish Artifacts

Build/Unit Testing (120 Tests/Patch)
    Build binary packaging for Ubuntu 14.04, Ubuntu 16.04, Centos 7
    Automated style checking
    Unit tests: IPFIX, BFD, Classifier, DHCP, FIB, GRE, IPv4, IPv4 IRB, IPv4 multi-VRF, IPv6, IP Multicast, L2 FIB, L2 Bridge Domain, MPLS, SNAT, SPAN, VXLAN

System Functional Testing (252 Tests/Patch)
    DHCP – Client and Proxy
    GRE Overlay Tunnels
    L2BD Ethernet Switching
    L2 Cross Connect Ethernet Switching
    LISP Overlay Tunnels
    IPv4-in-IPv6 Softwire Tunnels
    COP Address Security
    IPSec
    IPv6 Routing – NS/ND, RA, ICMPv6
    uRPF Security
    Tap Interface
    Telemetry – IPFIX and SPAN
    VRF Routed Forwarding
    iACL Security – Ingress – IPv4/IPv6/MAC
    IPv4 Routing
    QoS Policer Metering
    VLAN Tag Translation
    VXLAN Overlay Tunnels

Performance Testing (144 Tests/Patch, 841 Tests)
    L2 Cross Connect
    L2 Bridging
    IPv4 Routing
    IPv6 Routing
    IPv4 Scale – 20k, 200k, 2M FIB Entries
    IPv6 Scale – 20k, 200k, 2M FIB Entries
    VM with vhost-user
        PHYS-VPP-VM-VPP-PHYS
        L2 Cross Connect/Bridge
        VXLAN w/ L2 Bridge Domain
        IPv4 Routing
    COP – IPv4/IPv6 whitelist
    iACL – ingress IPv4/IPv6 ACLs
    LISP – IPv4-o-IPv6/IPv6-o-IPv4
    VXLAN
    QoS Policer
    L2 Cross Connect
    L2 Bridging

Usability
    Merge-by-merge:
        apt installable deb packaging
        yum installable rpm packaging
        autogenerated code documentation
        autogenerated cli documentation
    Per release:
        autogenerated testing reports
        report perf improvements
    Puppet modules
    Training/Tutorial videos
    Hands-on-usecase documentation

Run on real hardware in fd.io Performance Lab
Merge-by-merge packaging feeds
Downstream consumer CI pipelines
Universal Dataplane: Infrastructure
Bare Metal:       Server -> Kernel/Hypervisor -> FD.io
Cloud/NFVi:       Server -> Kernel/Hypervisor -> FD.io -> VM, VM, VM
Container Infra:  Server -> Kernel -> FD.io -> Con, Con, Con
Universal Dataplane: VNFs
FD.io based VNFs in VMs:        Server -> Kernel/Hypervisor -> FD.io (host) -> VM (FD.io), VM (FD.io)
FD.io based VNFs in Containers: Server -> Kernel/Hypervisor -> FD.io (host) -> Con (FD.io), Con (FD.io)
Universal Dataplane: Embedded
Embedded Device: Device -> Kernel/Hypervisor -> FD.io -> Hw Accel
SmartNic:        Server -> Kernel/Hypervisor -> SmartNic (FD.io -> Hw Accel)
Universal Dataplane: CPE Example
Physical CPE:        Device -> Kernel/Hypervisor -> FD.io -> Hw Accel
vCPE in a VM:        Server -> Kernel/Hypervisor -> FD.io (host) -> VM (FD.io), VM (FD.io)
vCPE in a Container: Server -> Kernel/Hypervisor -> FD.io (host) -> Con (FD.io), Con (FD.io)
Opportunities to Contribute
We invite you to Participate in fd.io
•  Get the Code, Build the Code, Run the Code
•  Try the vpp user demo
•  Install vpp from binary packages (yum/apt)
•  Install Honeycomb from binary packages
•  Read/Watch the Tutorials
•  Join the Mailing Lists
•  Join the IRC Channels
•  Explore the wiki
•  Join fd.io as a member

Areas where contributions are welcome:
•  Firewall
•  IDS
•  Hardware Accelerators
•  Integration with OpenCache
•  Control plane – support your favorite SDN Protocol Agent
•  Spanning Tree
•  DPI
•  Test tools
•  Cloud Foundry Integration
•  Container Integration
•  Packaging
•  Testing