The Network's in the (Virtualized) Server: Virtualized I/O in Heterogeneous Multicore Architectures

How to truly harness the most powerful server processors without bottlenecking or thrashing their caches with network flows...


Transcript

  • 1. Virtualized I/O in Heterogeneous Multicore Architectures: Scaling x86 embedded designs to 40 Gbps and beyond. Daniel Proch, Director, Product Management and Field Applications Engineering, daniel.proch@netronome.com. Linley Tech Spring Conference, May 18-19, 2010, San Jose, CA. © Netronome Systems Inc MMX
  • 2. Next-Generation Computing Trends
    • Network and security application vendors need to scale performance with embedded multicore IA/x86
    • Virtualization is seen as the key to the convergence of networking and computing in the data center
    • Networking functionality is collapsing into servers from discrete devices in data centers
    • A new processing paradigm is required to support these trends and scale x86 systems to 40 Gbps and beyond
  • 3. Network Virtualization
    • Virtualized networks have been around for years
    • They allow a single set of physical resources to be shared amongst a diverse group of users
    • With isolation, performance guarantees and security
    • Examples: Ethernet VLANs, IPsec VPNs, SSL VPNs, MPLS (RFC 2547 VPNs), Frame Relay, ATM, PWE3
    [Diagram: voice, WWW and video services carried across access (xDSL, DSLAM), aggregation (GigE, MSAN), edge and backbone (IP, FR, ATM) network segments]
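As a concrete taste of the first example in that list, here is a minimal C sketch (our own illustration, not from the deck; the function name and constants are invented) of how an 802.1Q VLAN ID is pulled out of a raw Ethernet frame:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>   /* ntohs */

#define TPID_8021Q 0x8100

/* Return the 12-bit VLAN ID of an 802.1Q-tagged Ethernet frame,
 * or -1 if the frame is untagged or too short. */
int vlan_id(const uint8_t *frame, size_t len)
{
    uint16_t tpid, tci;

    if (len < 18)                  /* dst(6) + src(6) + tag(4) + type(2) */
        return -1;
    memcpy(&tpid, frame + 12, 2);  /* bytes 12-13: Tag Protocol Identifier */
    if (ntohs(tpid) != TPID_8021Q)
        return -1;                 /* untagged frame */
    memcpy(&tci, frame + 14, 2);   /* bytes 14-15: Tag Control Information */
    return ntohs(tci) & 0x0FFF;    /* low 12 bits carry the VLAN ID */
}
```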
  • 4. Server Virtualization
    • For data center consolidation, a single physical machine supports multiple guest OSs
    • Improves the efficiency and availability of resources and applications
    • The "one server, one application" model is gone
  • 5. Data Center Collapse
    • With applications uniquely tied to physical server resources, networking happened outside the server:
    • L2/L3 switching
    • Network security
    • Load balancers to spread traffic across hosting platforms
    • Changing the ratio of applications to servers changes the way we need to architect products for the data center
  • 6. Virtualized Networking
    • Multicore servers support many applications per physical device (whether virtualized or not)
    • Networking functionality must now collapse inside the server (a flow-hashing sketch follows this slide):
    • Packet classification
    • Flow-based load balancing
    • Active flow state and flow pinning
    • L2 switching
    • L3 forwarding
    • QoS
    [Diagram: x86 server with the callout "Need a virtualized network in here!"]
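A minimal sketch of the flow-based load balancing and flow pinning just listed: hash the 5-tuple so that every packet of a flow lands on the same queue. All names here are our own illustration, not Netronome's API:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical 5-tuple flow key; zero the struct before filling it
 * so padding bytes hash deterministically. */
struct flow_key {
    uint32_t saddr, daddr;    /* IPv4 source/destination address */
    uint16_t sport, dport;    /* L4 source/destination port */
    uint8_t  proto;           /* IPPROTO_TCP, IPPROTO_UDP, ... */
};

/* FNV-1a over the key: every packet of a flow hashes identically,
 * which is what lets a flow be "pinned" to one core or VM queue. */
uint32_t flow_hash(const struct flow_key *k)
{
    const uint8_t *p = (const uint8_t *)k;
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < sizeof(*k); i++) {
        h ^= p[i];
        h *= 16777619u;
    }
    return h;
}

/* Flow-based load balancing: same flow, same queue, always. */
unsigned pick_queue(const struct flow_key *k, unsigned nqueues)
{
    return flow_hash(k) % nqueues;
}
```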
  • 7. x86 Networking Performance
    • Multicore x86 creates bottlenecks:
    • Not optimized for network and security processing
    • Processing done in "software"
    • Packet interrupt handling wastes CPU
    • Poor small packet performance
    • High power consumption
    [Diagram: x86 server implementing classifier, L2 switch, flow state and load balancer in software, with an NFE shown alongside]
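To put "poor small packet performance" in numbers (standard Ethernet arithmetic, added here for context): a minimum-size frame occupies 64 B plus 20 B of preamble and inter-frame gap, i.e. 84 B or 672 bits on the wire. At 10 Gbps that is 10e9 / 672 ≈ 14.88 Mpps, roughly 67 ns per packet; at 20 Gbps it is ~29.8 Mpps, about 34 ns per packet, less than the cost of a single interrupt or chain of cache misses on a general-purpose core.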
  • 8. Enter I/O Virtualization
    • Hardware-based network and security processing in network flow processors (NFPs)
    • Workload-optimized NFPs and x86 processors are linked
    • Efficient delivery of data to VMs at high rates (20+ Gbps)
    • High-performance, virtualization-aware communications path
    • Zero-copy data delivery to virtual endpoints
    • IOV: the final link between virtualized networks, flow processors and general-purpose multicore x86
    [Diagram: NFE performing classification, L2 switching, flow state and load balancing in front of multiple VMs on an x86 server]
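Zero-copy delivery generally means the adapter DMAs packets into memory that is mapped directly into the consumer's address space, so no kernel-to-user copy occurs. A minimal sketch of the idea, assuming a hypothetical character device /dev/nfe0 that exposes its receive ring via mmap(2); this is our illustration, not Netronome's actual driver interface:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define RING_BYTES (1 << 20)   /* assumed 1 MiB receive ring */

int main(void)
{
    /* /dev/nfe0 is a placeholder device node; real drivers differ. */
    int fd = open("/dev/nfe0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* Map the device's packet ring into this process: packets DMAed
     * by the hardware become visible here with no intermediate copy. */
    void *ring = mmap(NULL, RING_BYTES, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (ring == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* ... poll descriptors and consume packets in place ... */

    munmap(ring, RING_BYTES);
    close(fd);
    return 0;
}
```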
  • 9. Comparing IOV Implementation Options
    • Software I/O virtualization:
      • All traffic passes through the management VM
      • Multiplexing (and demux) in the management VM
      • Poor performance and latency
    • IOV with multi-queue devices:
      • Multiplexing occurs in hardware
      • Packets still traverse software (adds latency)
  • 10. Netronome Enhanced IOV
    • PCI device direct assignment
    • Guest VMs can directly access hardware devices
    • Eliminates IOV overheads
    • Netronome IOV solution is SR-IOV-compliant while providing flexible device support:
    • Dumb NIC
    • Intelligent NIC
    • Crypto NIC, or
    • Packet capture (pcap) NIC
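Because the solution is SR-IOV-compliant, virtual functions can be instantiated through the standard Linux sysfs attribute for SR-IOV devices. A minimal sketch; the PCI address 0000:03:00.0 and the VF count are placeholders, and this shows the generic kernel mechanism rather than Netronome-specific tooling:

```c
#include <stdio.h>

/* Enable 4 virtual functions on an SR-IOV capable device.
 * Equivalent shell:
 *   echo 4 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs */
int main(void)
{
    const char *path = "/sys/bus/pci/devices/0000:03:00.0/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return 1; }
    fprintf(f, "4\n");
    fclose(f);
    return 0;   /* each VF can now be assigned directly to a guest VM */
}
```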
  • 11.
    • Application/control plane processing
    • Deep packet inspection
    • Content inspection, behavioral heuristics, forensics, PCRE
    • L2-L7 classification
    • Stateful flow processing
    • Cryptography
    • PKI operations
    • Flow-based load balancing
    • L2 switching to VMs
    • L2-L4 packet classification
    • Packet-based load balancing
    • Physical interfaces
    • Integrated bypass relays
  • 12. Deep Packet Inspection
    • In a heterogeneous multicore architecture:
    • Packets are classified on ingress
    • Sent to x86 for DPI processing
    • Results in application or protocol awareness
    • New classification rule programmed to the NFP for each flow (see the sketch after this slide)
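A toy model of that classify-then-offload loop, with invented names and a deliberately naive table; real NFP rule tables are far richer:

```c
#include <stdint.h>

/* Hypothetical per-flow verdicts in a heterogeneous DPI pipeline. */
enum verdict {
    INSPECT,   /* unknown flow: forward packets to x86 for DPI      */
    OFFLOAD,   /* application identified: NFP forwards in hardware  */
    DROP       /* policy says block: drop on the NFP, never hit x86 */
};

#define FLOW_SLOTS 4096            /* toy table size */

struct flow_entry {
    uint32_t hash;                 /* flow key hash (see earlier sketch) */
    enum verdict v;
};

static struct flow_entry table[FLOW_SLOTS];

/* Look up a flow on ingress; unknown flows default to INSPECT so
 * their first packets reach the x86 DPI engine. */
enum verdict classify(uint32_t hash)
{
    struct flow_entry *e = &table[hash % FLOW_SLOTS];
    return (e->hash == hash) ? e->v : INSPECT;
}

/* Called by the x86 side once DPI identifies the flow: programs the
 * new classification rule back into the fast path. */
void program_rule(uint32_t hash, enum verdict v)
{
    struct flow_entry *e = &table[hash % FLOW_SLOTS];
    e->hash = hash;
    e->v = v;
}
```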
  • 13. Reduction in CPU Utilization
    • Up to 80% of the total CPU resources are dedicated to packet I/O in systems using standard adapters
    • That leaves only 20% of CPU resources for application processing
    • Network flow-based coprocessors give a 3-5x increase in available CPU resources: reclaiming most of that 80% I/O overhead raises the application's share from 20% toward 100%, an upper bound of 100/20 = 5x
    • Kernel CPU cycle use and interrupts are significantly reduced
  • 14. 20 Gbps IPS Application Performance
    • Computationally intensive processing
    • ~4000 PCRE rules
    • Variable packet sizes
    • Variable protocol mix
    • Inline measurements
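"~4000 PCRE rules" means the IPS evaluates thousands of compiled regular expressions against packet payloads. A self-contained toy using the legacy libpcre C API (link with -lpcre); the pattern is an invented example for illustration, not an actual IPS rule:

```c
#include <pcre.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *err;
    int erroff;

    /* Compile one rule-like pattern: flag HTTP GETs to .php/.cgi
     * scripts with a query string (hypothetical example). */
    pcre *re = pcre_compile("GET /[^ ]*\\.(php|cgi)\\?", 0,
                            &err, &erroff, NULL);
    if (!re) { fprintf(stderr, "pcre: %s at %d\n", err, erroff); return 1; }

    const char *payload = "GET /index.php?id=1 HTTP/1.1";
    int ov[30];
    int rc = pcre_exec(re, NULL, payload, (int)strlen(payload),
                       0, 0, ov, 30);
    printf(rc >= 0 ? "match\n" : "no match\n");

    pcre_free(re);
    return 0;
}
```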
  • 15. Heterogeneous Multicore Processing Architecture
  • 16. www.netronome.com
  • 17. Backup
  • 18. NFP-3200 Summary
    • High performance:
    • 40 cores @ 1.4 GHz
    • 1,800 instructions/packet at 30 Mpps
    • 20 Gbps of packet, flow, and content processing
    • I/O virtualization: PCIe Gen2 with SR-IOV support
    • Highly integrated design:
    • 20 Gbps of line-rate security/crypto
    • Integrated MAC, PKI, PCIe, Interlaken, ARM
    • Unmatched ease of use: proven tools, software development kit, product-ready software, reference platforms
    • 40-100 Gbps network flow processor
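The headline numbers are mutually consistent (our arithmetic, assuming roughly one instruction per cycle per core): 40 cores × 1.4 GHz ≈ 56 billion instructions/s, and 56e9 / 30e6 packets/s ≈ 1,867, matching the quoted ~1,800 instructions per packet. Note also that ~30 Mpps is exactly the minimum-size packet rate at 20 Gbps (see the line-rate arithmetic after slide 7).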
  • 19. Netronome Network Flow Engine NFE-3240
    • 20 Gbps of line-rate packet processing per NFE
    • 6x1GigE, 2x10GigE (SFP+), netmod interfaces
    • PCIe Gen2 (8 lanes)
    • Nanosecond packet timestamping
    • Hardware cryptography support
    • Flexible/configurable memory options
    • TCAM-based traffic filtering
    • Virtualized Linux drivers via SR-IOV
    • Hardware-based stateful flow management
    • Dynamic flow-based load balancing to x86
    • Highly programmable, intelligent acceleration cards for network security appliances and servers
  • 20. World's Highest Performance Appliance Platform
    • Intelligent network-optimized virtualization adapters
    • 20 Gbps PCIe cards
    • Flow processing solutions up to 200 Gbps
    • Pluggable front-facing I/O
    • Three layers of packet, flow and application processing
    • Open APIs for application acceleration
    • Snort, Bro, ntop, switching/routing
    • Custom applications
    • Up to 200 Gbps minimum-sized packet performance for network and security applications!
    • Highest performance solution per dollar in the world!