Accelerated Xen networking
Transcript

  1. Accelerated Xen networking (September 2006)
  2. Background
     • Solarflare does user-level networking: a TCP/IP stack linked with the user application, known as Etherfabric
     • A smart NIC allows safe access from user mode
       • Onboard IOMMU for safe DMA
       • The NIC's filter table demuxes incoming packets to queues
       • Queues get mapped into user-mode processes
     • Eliminates interrupts, syscalls, and context switches
     • Can also do zero-copy transmit from user mode
  3. Virtual Interfaces and Protection
     • The NIC has many Virtual Interfaces (VIs)
     • VI = filter + DMA queue + event queue
     • VIs are largely about protection: they allow untrusted entities to access the NIC without compromising system integrity
     • VIs are mapped into the untrusted entity's address space
     • Traditionally the "untrusted entity" is a user process, but it might equally be a guest VM kernel
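The VI abstraction on this slide can be sketched as a C structure. This is illustrative only, under the assumption that a filter is a simple address/port/protocol tuple and the queues are fixed-size descriptor rings; none of these names come from the real Etherfabric driver.

```c
#include <stdint.h>

/* Hypothetical sketch of a Virtual Interface (VI): a filter plus a
 * DMA queue pair plus an event queue.  All names and sizes here are
 * assumptions for illustration, not the real hardware interface. */

#define VI_RING_SIZE 512

/* A filter entry: the NIC's filter table uses entries like this to
 * demux incoming packets to the owning VI's receive queue. */
struct vi_filter {
    uint32_t local_ip;
    uint16_t local_port;
    uint8_t  protocol;               /* e.g. 6 for TCP */
};

/* A descriptor ring.  In the real design these live in pages mapped
 * directly into the untrusted entity's address space. */
struct vi_dma_queue {
    uint64_t desc[VI_RING_SIZE];     /* buffer addresses (IOMMU-checked) */
    uint32_t head, tail;             /* free-running producer/consumer counters */
};

struct vi_event_queue {
    uint64_t ev[VI_RING_SIZE];
    uint32_t read_ptr;
};

/* VI = filter + DMA queues + event queue */
struct virtual_interface {
    struct vi_filter      filter;
    struct vi_dma_queue   rxq;
    struct vi_dma_queue   txq;
    struct vi_event_queue evq;
    int                   owner;     /* a user process, or a guest VM kernel */
};

/* Free slots in a ring: how many descriptors the untrusted owner may
 * still post without help from a trusted entity. */
static inline uint32_t vi_ring_space(const struct vi_dma_queue *q)
{
    return VI_RING_SIZE - (q->tail - q->head);
}
```

Because the owner only ever touches its own mapped rings and the IOMMU validates DMA addresses, the entity using the VI never needs to be trusted.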
  4. Xen: I/O via Dom0
     • All "real" drivers live in Dom0
     • DomU kernels have pseudo-drivers that communicate with Dom0 via the hypervisor
     • Necessary because only Dom0 is trusted
     • [Diagram: Dom0 and two DomUs running on the hypervisor, which sits on the hardware; all I/O flows through Dom0]
  5. But with Etherfabric...
     • Accelerated routes are set up in Dom0
     • Then the DomU can access the hardware directly, and safely
     • At least most of the time; there is still a slow path via Dom0
     • [Diagram: as before, but the DomUs also access the hardware directly]
  6. Front-end and back-end
     • Front-end driver lives in DomU
       • Where possible, interacts with the hardware directly: the "fast path"
       • Otherwise packets are routed via Dom0: the "slow path"
       • Needs to be careful about transmit traffic destined for the local machine
     • Back-end driver lives in Dom0
       • Sets up a VI per guest VM
       • Installs filters on demand as traffic flows in and out
       • Enforces quotas on front-end drivers
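The front-end's per-packet transmit decision described above can be sketched as follows. This is a hedged illustration: the function and table names are invented, and it assumes (as suggested by the local MAC tables on the architecture slide) that the front-end consults a table of other local guests' MACs so that traffic for the same machine never goes out on the wire.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch of the front-end's transmit-path choice.  An
 * accelerated (fast path) send DMAs directly from the guest's VI;
 * everything else is handed to Dom0 (slow path).  All names here are
 * assumptions, not the real driver API. */

enum tx_path { FAST_PATH, SLOW_PATH };

#define LOCAL_MAC_TABLE_SIZE 8

struct fend_state {
    bool     hw_available;                      /* false when no accel h/w */
    uint64_t local_macs[LOCAL_MAC_TABLE_SIZE];  /* other guests on this host;
                                                   48-bit MACs, zero = empty */
};

/* Destinations on the same physical machine must not hit the wire. */
static bool is_local_guest(const struct fend_state *s, uint64_t dst_mac)
{
    for (int i = 0; i < LOCAL_MAC_TABLE_SIZE; i++)
        if (s->local_macs[i] != 0 && s->local_macs[i] == dst_mac)
            return true;
    return false;
}

enum tx_path select_tx_path(const struct fend_state *s, uint64_t dst_mac)
{
    if (!s->hw_available)
        return SLOW_PATH;   /* no accelerated hardware: everything via Dom0 */
    if (is_local_guest(s, dst_mac))
        return SLOW_PATH;   /* local destination: route via Dom0 instead */
    return FAST_PATH;       /* DMA directly from the guest's VI */
}
```

The same predicate also covers the migration case: clearing `hw_available` forces all traffic onto the slow path.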
  7. Architecture
     • [Diagram: the Dom0 back-end driver and the DomU front-end (hardware-independent and hardware-dependent parts) communicate over shared pages containing FIFOs; the hardware holds one VI per guest, each with an RxQ, TxQ, and EvQ]
     • In shared pages: VIs, FIFOs, Tx/Rx buffers, local MAC tables
  8. Front-end driver
     • The front-end communicates with the back-end over FIFOs, implemented using shared pages
     • The front-end driver is split into two parts: hardware-dependent and hardware-independent
       • The hardware-independent part loads the hardware-dependent part, selected based on a message from the back-end
       • On Linux, this message is just the name of a module
       • It is a non-fatal error if the hardware-dependent module is not found
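The module-selection step above can be sketched like this. The registry and the module name `"ef_frontend"` are hypothetical; the slide only says the back-end's message names a module, and on Linux the real mechanism would be closer to the kernel's module loader than to a static table.

```c
#include <stdio.h>
#include <string.h>

/* Sketch of the hardware-independent front-end loading its
 * hardware-dependent counterpart, named by a message from the
 * back-end.  A missing module is a non-fatal error: the driver
 * simply stays on the slow path via Dom0. */

struct hw_module {
    const char *name;
    int (*probe)(void);        /* binds the module to the hardware */
};

static int etherfabric_probe(void) { return 0; }   /* stub for the sketch */

static const struct hw_module registry[] = {
    { "ef_frontend", etherfabric_probe },          /* hypothetical name */
};

/* Returns 0 if the accelerated module was found and probed;
 * -1 (non-fatal) otherwise. */
int load_hw_module(const char *name_from_backend)
{
    for (size_t i = 0; i < sizeof(registry) / sizeof(registry[0]); i++) {
        if (strcmp(registry[i].name, name_from_backend) == 0)
            return registry[i].probe();
    }
    fprintf(stderr, "accel module '%s' not found; staying on slow path\n",
            name_from_backend);
    return -1;                 /* caller carries on unaccelerated */
}
```

Keeping the selection in a message from the back-end means new NIC hardware only needs a new hardware-dependent module, with no change to the shared front-end.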
  9. Migration
     • On migration, the back-end sends a "hardware unavailable" message to the front-end; everything then goes over the slow path
     • Possibly followed by a "new hardware available" message at the other end
     • If no accelerated hardware is available, or no driver for that hardware is available: slow path!
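The migration protocol above amounts to a two-state machine in the front-end. The message and field names below are assumptions for illustration; the point is that the fast path is only ever an optimization that can be switched off and on around a migration.

```c
#include <stdbool.h>

/* Sketch of the front-end's migration handling: "hardware unavailable"
 * drops it to the slow path before migration; "new hardware available"
 * (sent on the destination host, if it has accelerated hardware and a
 * matching driver) restores the fast path.  Names are illustrative. */

enum bend_msg { MSG_HW_UNAVAILABLE, MSG_HW_AVAILABLE };

struct fend {
    bool fast_path_ok;
};

void fend_handle_msg(struct fend *f, enum bend_msg m)
{
    switch (m) {
    case MSG_HW_UNAVAILABLE:       /* sent just before migration starts */
        f->fast_path_ok = false;   /* all traffic now goes via Dom0 */
        break;
    case MSG_HW_AVAILABLE:         /* sent after arrival, if accelerable */
        f->fast_path_ok = true;
        break;
    }
}
```

Because the slow path is always correct, a guest landing on a host with no accelerated hardware (or no driver for it) simply never receives the second message and keeps working, unaccelerated.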
  10. Early performance results
     • [Chart: STREAM tx (Mbps), STREAM rx (Mbps), and TCP/RR (transactions per second) for vanilla Xen, accelerated Xen, and native]
  11. Early performance results
     • [Chart: throughput for STREAM rx and STREAM tx (Mbps) and TCP/RR (transactions per 100 ms), comparing vanilla Xen, Etherfabric Xen, and Xen Dom0; y-axis 0–1400]
  12. CPU usage
     • [Chart: CPU usage (scale 0–140) for STREAM rx, STREAM tx, and TCP/RR, comparing vanilla Xen (Dom0 + DomU), Etherfabric (Dom0 + DomU), and Xen Dom0 (Dom0 only)]
  13. What's next?
     • Submission of a patch to xen-devel; some small changes have been contributed already
     • It would be convenient to fold the back-end and front-end into mainline
     • Nice features to have in next-generation hardware:
       • MAC filtering (on receive and transmit)
       • MSI-X (an interrupt line per guest)
       • Transmit to a local destination DMA-ed directly to the local guest
