Sponsored By:
Platforms for Accelerating the
Software Defined and Virtual
Infrastructure
Members of:
Today’s Presenters
Presenter
Yann Rapaport
Product Manager
6WIND
Moderator
Simon Stanley
Analyst at Large
Heavy Reading
Presenter
Mark Guinther
Senior Director of
Business Development
Netronome
Presenter
Peter Marek
Senior Director x86 Solutions
Advantech Network &
Communications Group
Agenda
• Introduction
• Challenges and roadblocks
• Tuning platform designs to balance compute and I/O performance
• How software acceleration can help
• Solutions, use cases and performance
• Summary
• Q&A
Growing Data Demand
• Network intensive applications
‒ Internet access
‒ Cloud services
‒ Video/TV on demand
‒ Amazon, Netflix, YouTube
‒ Voice/video over IP (VoIP), VoLTE
‒ "Always on" apps such as IM and location services
• Rapidly growing network traffic
‒ Global IP traffic
‒ Fixed internet, managed IP and mobile
‒ Almost 1.6 zettabytes by 2018 (21% CAGR)
[Chart: global IP traffic in exabytes per year, 2013-2018, split into Fixed Internet, Managed IP and Mobile Data]
• Network infrastructure becoming virtual and software defined
‒ Distributed data centers
‒ Software Defined Networking (SDN)
‒ Network Functions Virtualization (NFV)
Source: Cisco VNI, June 2014
SDN and NFV
• Software Defined Networking (SDN)
– Abstracts and automates provisioning
– Separates data plane and control
– Increased flexibility
• Open interfaces
– OpenFlow, OpenStack
• Network Functions
Virtualization (NFV)
– Replaces fixed function systems
with virtualized functions on
common hardware
• Cost effective and highly
scalable
• Complementary to existing, non-virtualized solutions
[Diagram: NFV moves fixed-function systems (SGSN/GGSN, MME, S-GW/P-GW, SBC, RNC, PCEF, HLR/HSS) onto virtual systems under an orchestrator, while an SDN controller uses OpenFlow/OpenStack to manage servers, switches, NICs and firewalls]
System Development Trends
[Diagram: system development trends, with complexity increasing at each step - vertical integration (applications on proprietary middleware over a proprietary platform), horizontal integration (applications on standardized middleware over a COTS/ATCA platform), and virtual integration (multiple VNFs on a virtualization layer providing virtual compute, storage and network over x86/ARM servers)]
Platform Requirements and Challenges
• H/W Challenges
– Physical size
– Power
– Performance
• S/W Challenges
– Software integration
– Open interfaces
– Performance
[Diagram: key platform elements - multicore CPU cluster, compact 2U chassis, physical or virtual switch, high-speed networking, standard software frameworks]
Whither the Universal Platform -
Does one-size-fits-all make sense?
• Standard Server vs. Workload-Optimized Platforms
– Telecom, Datacom, Server
• Hardware Bottlenecks & Tradeoffs
– Ethernet <> PCIe <> Memory <> QPI
• How to deliver packets from I/O to VMs and from VM to VM
most efficiently
• Offload of CPU intensive workloads
– Security functions: IPsec, SSL, etc.
– Optimization functions: compression
Standard Appliances – The Problem
• Whitebox system architecture limitations
• FWA-6512
– Balanced I/O & offload capability
– Inefficiencies of traffic and flows traversing socket boundaries remain
[Diagram: two four-socket Xeon configurations with NICs and offload cards on PCIe x8/x16 links; traffic handled by a socket's local PCIe devices is efficient ("good"), while flows that must traverse the QPI ring to reach another socket's NIC or offload engine range from "bad" to "worst"]
Network is Becoming a Bottleneck
[Chart: Ethernet bandwidth growth from 10 Mbit/s (1980s) through 100 Mbit/s, 1 Gbit/s, 10 Gbit/s and 40 Gbit/s, reaching 100 Gbit/s Ethernet by 2015]
• Network Bandwidth is
increasing exponentially
• A new paradigm is needed for networked systems to keep up, in particular for:
• Virtualized systems
• Security appliances
• Software Defined
Networking/NFV
• Deep Packet
Inspection
• Load Balancing
Typical NFV Performance Challenges
[Diagram: server platform running two virtual machines and their application software over a hypervisor and virtual switch, annotated with the typical bottlenecks - driver level, virtual switch, host/guest OS communication, and the virtual machine itself]
Audience Poll #1
• What actions are you most likely to take to improve
throughput in your next gen networking equipment design
(tick all that apply)
– A: Increase port bandwidths to 40G
– B: Increase port bandwidths to 100G
– C: Upgrade Intel CPUs to highest throughput SKUs
– D: Implement S/W Acceleration technologies on Intel Architecture
– E: Move to H/W based I/O Virtualization
Hardware – Getting the Right Balance
• Choosing the Base Platform
– Modular, flexible & upgradeable
– Performance/density/environment
• Accelerate to the level that makes the most economic sense
[Diagram: base platform options - up to 4x 100GE ports on PMMs with 2x PCIe offload, up to 16x 40GE ports on NMCs with 4x internal PCIe offload, up to 12x 40GE ports on PCIe slots, PCIe flow-processing adapters (2x 10GE, 1 and 2x 100GE, 2 and 4x 40GE), Netronome Flow Processor load balancing between Xeon sockets and network ports, and dedicated NICs with look-aside offload on each socket]
Communications Appliances are
Transforming
Intelligent Edge System Requirements:
• Network translation, overlays
• Tunnel termination (NVGRE, VXLAN)
• Distributed security, crypto
• Gateways
• Load balancing
• QoS and Metering
• Policy, ACL
• Virtual switching (e.g. OVS)
Flow Processors for High Bandwidth Networking
• Rapidly increasing interface speeds to
multi-port 10, 40 and 100 gigabit
Ethernet
• More than 10,000 operations per packet
• Moving beyond legacy 3/5/7-tuple forwarding decisions to more than 40 match fields
• Flexible use of arbitrary classification fields
• Support for large flow tables with over 100M flows (see the illustrative flow-key sketch below)
• Tunnel termination and
encapsulation/decapsulation
• Dynamic load-balancing and forwarding
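To make these classification requirements concrete, here is a minimal C sketch, assuming hypothetical structure and function names (this is not Netronome's flow-table layout or API), of a flow key that extends the classic 5-tuple with L2 and tunnel fields, plus an exact-match lookup keyed by a hash of the whole key:

```c
/* Illustrative sketch only: a classification key beyond the classic
 * 5-tuple, with a simple exact-match lookup keyed by a hash of the
 * whole key. Names and layout are hypothetical, not a vendor API. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct flow_key {
    /* classic 5-tuple */
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  ip_proto;
    /* examples of additional match fields */
    uint8_t  src_mac[6], dst_mac[6];
    uint16_t vlan_id;
    uint32_t tunnel_vni;       /* VXLAN/NVGRE virtual network ID */
    uint32_t ingress_port;
} __attribute__((packed));

#define FLOW_TABLE_SIZE (1u << 16)   /* scaled down from the 100M-flow case */

struct flow_entry {
    struct flow_key key;
    uint32_t action;             /* e.g. output port, 0 = drop */
    uint8_t  in_use;
};

static struct flow_entry flow_table[FLOW_TABLE_SIZE];

/* FNV-1a over the packed key; real flow processors hash in hardware. */
static uint64_t flow_hash(const struct flow_key *k)
{
    const uint8_t *p = (const uint8_t *)k;
    uint64_t h = 14695981039346656037ULL;
    for (size_t i = 0; i < sizeof(*k); i++) {
        h ^= p[i];
        h *= 1099511628211ULL;
    }
    return h;
}

/* Exact-match lookup with linear probing; a miss goes to the slow path. */
static struct flow_entry *flow_lookup(const struct flow_key *k)
{
    uint64_t idx = flow_hash(k) & (FLOW_TABLE_SIZE - 1);
    for (unsigned probes = 0; probes < FLOW_TABLE_SIZE; probes++) {
        struct flow_entry *e = &flow_table[idx];
        if (!e->in_use)
            return NULL;                              /* miss */
        if (memcmp(&e->key, k, sizeof(*k)) == 0)
            return e;                                 /* hit */
        idx = (idx + 1) & (FLOW_TABLE_SIZE - 1);
    }
    return NULL;
}

int main(void)
{
    struct flow_key k = { .src_ip = 0x0a000001, .dst_ip = 0x0a000002,
                          .src_port = 1234, .dst_port = 80,
                          .ip_proto = 6, .vlan_id = 100, .tunnel_vni = 5001 };
    printf("lookup: %s\n", flow_lookup(&k) ? "hit" : "miss (slow path)");
    return 0;
}
```

A real flow processor holds far larger tables in dedicated memory and computes the hash in hardware; the probing loop above only illustrates the exact-match idea.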
FlowNIC technology solves the bottleneck
[Diagram: the FlowNIC's flow processor sits between the network ports and the four Xeon sockets, presenting a dedicated NIC interface to each socket over its local PCIe x8 link (plus a control interface), so traffic no longer has to cross the QPI ring]
• FlowNIC presents one NIC per Xeon socket
– No change in IO model (incl. DPDK)
• FlowNIC performs load balancing between network ports and PCIe NIC
interfaces / cores / vcores
– Wildcard/hash-based distribution up to full flow qualification
– Up to millions of flows
– Supports flow pinning to cores/threads (a minimal DPDK sketch follows below)
• Black-box S/W approach
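Because the FlowNIC appears to each socket as an ordinary NIC, the unmodified DPDK receive model applies. The sketch below is a minimal, generic DPDK poll loop with the packet pool allocated on the port's own NUMA socket; the port number, queue counts and burst size are illustrative assumptions, not FlowNIC-specific values:

```c
/* Minimal, generic DPDK receive loop: one port, one RX/TX queue, packet
 * pool on the port's own NUMA socket. Nothing here is FlowNIC-specific. */
#include <stdio.h>
#include <stdint.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define RX_RING_SIZE 1024
#define NUM_MBUFS    8191
#define MBUF_CACHE   250
#define BURST_SIZE   32

int main(int argc, char **argv)
{
    uint16_t port_id = 0;                    /* first probed port (assumption) */
    struct rte_eth_conf port_conf = {0};     /* default port configuration */

    if (rte_eal_init(argc, argv) < 0) {
        fprintf(stderr, "EAL init failed\n");
        return 1;
    }

    /* Keep packet buffers on the same NUMA node as the NIC port. */
    int socket = rte_eth_dev_socket_id(port_id);
    struct rte_mempool *pool = rte_pktmbuf_pool_create(
        "rx_pool", NUM_MBUFS, MBUF_CACHE, 0,
        RTE_MBUF_DEFAULT_BUF_SIZE, socket);

    if (pool == NULL ||
        rte_eth_dev_configure(port_id, 1, 1, &port_conf) < 0 ||
        rte_eth_rx_queue_setup(port_id, 0, RX_RING_SIZE, socket, NULL, pool) < 0 ||
        rte_eth_tx_queue_setup(port_id, 0, RX_RING_SIZE, socket, NULL) < 0 ||
        rte_eth_dev_start(port_id) < 0) {
        fprintf(stderr, "port setup failed\n");
        return 1;
    }

    for (;;) {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
        for (uint16_t i = 0; i < nb; i++)
            rte_pktmbuf_free(bufs[i]);       /* real processing would go here */
    }
    return 0;
}
```

Pinning the polling lcore to a core on the same socket (via the standard EAL -l/--lcores options) completes the locality picture described above.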
FlowNIC Applications
• FlowNIC = "FlowSwitch"
– Adds on-chip forwarding between Ethernet ports
– Control path to x86 CPUs via PCIe
• Intelligent traffic
handling
– Flow-aware switching
– Service/application-aware load balancing
– Flow/application-aware filtering
[Diagram: FlowSwitch example - 100GE or 2x 40GE uplinks toward Network A and Network B, 16x 10GE (up to 24x 10GE) links to a stack of appliances, and 4x PCIe gen3 x8 to the host with DPDK/PMD APIs; non-qualified traffic is L2-forwarded between the networks while qualified traffic is load balanced to the appliances for special processing; PHY modules (100G, 2x 40G, 8x 10G, 8x 1G) connect over 48 10 Gbps SerDes lanes]
• FlowNIC = make use of Intel® Xeon horsepower in multi-socket environments to handle 100GE traffic with millions of flows while freeing up CPU cores for application processing
• Processing and I/O density make the FlowNIC ideal for "bump in the wire" deployments
– Network security: firewall, IPS/IDS, UTM
– Deep Packet Inspection
Example: FWA-6512C
FlowNIC offload with APIs
[Diagram: the Netronome FlowNIC implements an exact-match flow classifier, FSTs and a load balancer forming an OpenFlow 1.x/OVS data plane; on the x86 standard server, VMs run VNFs over DPDK in user space while an OVS/OpenFlow agent, OVS, an FST and the kernel stack sit in kernel space; an orchestration application, OpenFlow controller, VNF controller and OpenStack (Quantum, Nova, Swift) drive the configuration]
• Load Balance to VMs, cores,
sockets
• DPDK interfaces to user space
• Delivery to kernel stack
• SDN/OpenStack configuration/stats
• OVS intelligence for cut-through
switching
Lowest Latency with Flexibility Between
Workloads
[Diagram: three chained virtual network functions connected either over PCI Express through the local NIC and an external switch, or directly through software virtual networking]
Physical Switching Limitations
• Hardware dependent switching
(SR-IOV, RDMA, NIC embedded switching)
• Throughput is limited by PCI Express (~50 Gbps) and incurs additional PCI Express and DMA latencies
• Available PCI slots limit the number of chained VNFs
• At 30 Gbps, only a single VNF is supported per node
Virtual Networking Benefits
• Hardware independent
• Aggregate 500 Gbps bandwidth with low latency
• No external limit to number of chained VNFs
Software Acceleration
Packet Processing with 6WINDGate
Paradigm Shift In Packet Processing Software
• Fastest performance on the market, in both physical and virtual environments
• Transparent: no changes necessary to the OS, hypervisor or management
• Available across all major platforms
• Native support for all major network protocols
[Diagram: on a multicore processor platform, 6WINDGate runs multiple fast path instances on dedicated cores alongside the control plane, the network stack and the Linux kernel networking stack]
Linux Compatibility is Critical
• Huge ecosystem of Linux
applications
• Industry-standard
management frameworks
• Acceleration solutions must
retain full compatibility with
packages and management
[Diagram: Linux applications such as iproute2, iptables and Quagga managing the Linux networking stack in the kernel]
Linux Acceleration via 6WINDGate
• Standard Linux functions are accelerated by 6WINDGate (the pattern is sketched conceptually below)
[Diagram: the 6WINDGate fast path and its modules run outside the Linux kernel and synchronize with the Linux networking stack through shared memory holding protocol tables and statistics; fast path configuration and statistics paths keep iproute2, iptables and Quagga working unchanged]
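To illustrate the pattern (and only the pattern), the sketch below shows a fast path that forwards packets for flows found in a table shared with the slow path and punts everything else to the Linux stack. All names and the table layout are hypothetical; this is not 6WINDGate code or its shared-memory format:

```c
/* Conceptual fast-path/exception-path split. Hypothetical names; NOT the
 * 6WINDGate API or its shared-memory layout. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct pkt { uint32_t flow_id; uint32_t len; };

/* Table conceptually shared between fast path and Linux: the slow path
 * installs entries, the fast path only reads and updates counters. */
#define MAX_FLOWS 1024
static bool     flow_known[MAX_FLOWS];
static uint64_t flow_pkts[MAX_FLOWS];   /* per-flow statistics */

static void forward_in_fast_path(const struct pkt *p)
{
    flow_pkts[p->flow_id % MAX_FLOWS]++;
    /* ... rewrite headers and transmit without entering the kernel ... */
}

static void punt_to_linux_stack(const struct pkt *p)
{
    /* Exception path: unknown flows, control traffic, fragments, etc. go
     * to the normal Linux networking stack, which stays in charge of
     * configuration (iproute2, iptables, Quagga, ...). */
    printf("punt flow %u to kernel\n", p->flow_id);
}

static void fast_path_rx(const struct pkt *p)
{
    if (flow_known[p->flow_id % MAX_FLOWS])
        forward_in_fast_path(p);
    else
        punt_to_linux_stack(p);
}

int main(void)
{
    flow_known[7] = true;                       /* slow path installed flow 7 */
    struct pkt a = { .flow_id = 7, .len = 64 }; /* hits the fast path */
    struct pkt b = { .flow_id = 9, .len = 64 }; /* exception: goes to Linux */
    fast_path_rx(&a);
    fast_path_rx(&b);
    return 0;
}
```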
6WINDGate Removes Performance Bottlenecks
[Chart: performance in millions of packets per second versus number of fast path cores (1, 2, 3 ... 8, 9, 10 ...); standard Linux becomes unstable under load, while 6WINDGate's performance benefits scale with the number of processing cores and OS stability increases by offloading resource-intensive, mundane tasks]
6WINDGate NFVI on Advantech Appliance
Audience Poll #2
• When will your company adopt SDN/NFV for mainstream
deployment?
– Option 1 - In 2015
– Option 2 - In 2-3 years
– Option 3 - In 3-5 years
– Option 4 - Up to 10 years
– Option 5 - Never
Use Case: Networking Middlebox
• Inline traffic processing
for
– Firewall, IPS/IDS, Analytics
• 10G/40G/100G Network
Interfaces
– Tunnel termination
– IPsec VPN, Crypto
– Port-to-Port fast path
• DPDK Packet API
• Load balance to individual
x86 sockets/cores
[Diagram: Advantech FWA-6512C with Netronome acceleration/offload handling 10G, 40G and 100G interfaces and delivering traffic to user space and kernel space on the x86 sockets]
Use Case: SDN Gateway
• SDN-controlled Appliance
• Connect SDN and conventional networks
– MPLS, VXLAN, VLAN, (NV)GRE (a minimal VXLAN termination sketch follows below)
• Manipulate traffic flow
based on events
– Security
– Application ID
– Traffic Characteristics
[Diagram: appliance operating under the control of an OpenFlow controller]
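As a hedged illustration of the tunnel-termination step such a gateway performs, the fragment below extracts the 24-bit VNI from a VXLAN header (RFC 7348: UDP destination port 4789, 8-byte header); the function name and surrounding packet handling are assumptions, not the appliance's actual code:

```c
/* Minimal VXLAN header parse for tunnel termination (RFC 7348).
 * Illustrative only: buffer handling around it is assumed. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define VXLAN_UDP_PORT 4789
#define VXLAN_FLAG_VNI 0x08   /* "I" flag: VNI field is valid */

struct vxlan_hdr {
    uint8_t flags;
    uint8_t reserved1[3];
    uint8_t vni[3];           /* 24-bit VXLAN Network Identifier */
    uint8_t reserved2;
};

/* Returns the VNI, or -1 if this is not a valid VXLAN header.
 * 'udp_payload' points just past the UDP header of a packet whose
 * UDP destination port was VXLAN_UDP_PORT. */
static int32_t vxlan_terminate(const uint8_t *udp_payload, size_t len)
{
    if (len < sizeof(struct vxlan_hdr))
        return -1;
    const struct vxlan_hdr *vx = (const struct vxlan_hdr *)udp_payload;
    if (!(vx->flags & VXLAN_FLAG_VNI))
        return -1;
    /* the inner Ethernet frame starts at udp_payload + 8 */
    return (int32_t)(vx->vni[0] << 16 | vx->vni[1] << 8 | vx->vni[2]);
}

int main(void)
{
    /* flags = 0x08, VNI = 0x000539 (1337) */
    uint8_t hdr[8] = { 0x08, 0, 0, 0, 0x00, 0x05, 0x39, 0x00 };
    printf("VNI = %d\n", (int)vxlan_terminate(hdr, sizeof(hdr)));
    return 0;
}
```

After termination, the inner Ethernet frame can be mapped onto a conventional VLAN or MPLS segment, which is the SDN-to-legacy bridging role described above.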
Scale out workload & traffic distribution
[Diagram: upstream routers connect over 2x 100GE to ESP-9213R ToR switches and to an FWA-6512C FlowBalancer (dual-socket Xeon); traffic is distributed over 2x 40GE links (10 sets) and 16x 10GE links to horizontally scaling racks of servers]
Performance Comparison
• x86 reference platform
• NFVI performs Open vSwitch
(in and out)
• VM performs L3 forwarding
(in and out)
• 64 byte packets
Bump in the Wire - Throughput
[Charts: throughput (Gbps) and latency (µs) versus packet size (64 to 1518 bytes) for FlowNIC, DPDK and standard Linux forwarding]
• Simple packet forwarding between 2 ports
• FlowNIC solution:
– 100% offloaded, 0% CPU utilization
– Highest throughput, lowest latency
Summary
[Diagram: virtual machines running on the combined solution - 6WINDGate DPDK and data plane acceleration, an OpenFlow/Open vSwitch control plane, OpenFlow/Open vSwitch software load balancing, special-purpose offload, and an Advantech management plane]
Questions and Answers?
Presenter
Yann Rapaport
Product Manager
6WIND
Moderator
Simon Stanley
Analyst at Large
Heavy Reading
Presenter
Mark Guinther
Senior Director of
Business Development
Netronome
Presenter
Peter Marek
Senior Director x86 Solutions
Advantech Network &
Communications Group
Thank you for attending!
Upcoming Light Reading Webinars
www.lightreading.com/webinars.asp
www.advantech.com/nc