Slide 2 | PROPRIETARY AND CONFIDENTIAL

Introducing Solarflare
• High-performance software and hardware for 10GbE server networking
• Mission-critical applications
  – Securities and trading
  – HPC
  – Storage
  – Cloud/Web 2.0
• Leader in financial services
• Partnerships with Arista, Cisco, Couchbase, IBM, Juniper, Red Hat, Oracle, VMware

"Solarflare's product, EnterpriseOnload, is a robust, rigorously tested and fully supported solution that addresses our demanding support and service-level requirements. In addition to providing the highest-performance, lowest-latency hardware, Solarflare's unique and innovative application acceleration software can be used to deploy quickly without any need to re-write our applications."
— Andrew Bach, Senior Vice President of Network Services for NYSE Euronext
Slide 3

Data is Growing Faster than Moore's Law
• Data volume is growing 44x: from 1.2 zettabytes in 2010 to a projected 35.2 zettabytes in 2020
• Business analytics requires a new approach
• The network is the bottleneck
Sources: IDC Digital Universe Study, sponsored by EMC, May 2010; IDC Digital Universe Study 2011
Slide 4

But Computing is More than Just Moore's Law
• Sandy Bridge-based servers ("Romley" motherboards)
• Intel integrated I/O: reduced bottlenecks, performance increases of up to 80%
• Up to 8 cores and up to 20 MB cache per socket
• Up to 4 channels of DDR3-1600 memory
• Integrated PCI Express 3.0, up to 40 lanes per socket
• Direct Data I/O (DDIO) can more than double I/O performance (chart: transactions per second, Xeon 5600 series vs. Xeon E5-2600 family)
Slide 5

SFN6122F & Xeon E5-2600 Deliver a Winning Combination
• SFN6122F single-stream latency is superb over all message rates on Romley platforms, right up to the point of CPU core saturation
• Ultra-low jitter (sub-microsecond at the 99th percentile)
• Benefits from Intel® Data Direct I/O (DDIO) and chipset I/O-to-memory bandwidth
• Message-rate headroom: 20 Mpps with 4x sfnt-stream
Benchmark: sfnt-stream / openonload-201109-u2
"Westmere" = 2x Xeon 5687 (3.6 GHz); "Romley" = 2x E5-2687W (3.1 GHz), DDR3-1333
Slide 6

Standard Server I/O Networking: RSS
• RSS (receive-side scaling) spreads flows randomly over cores
• Packets within a flow go to the same core
• Works well when:
  – Connections are long-lived
  – There is one thread per connection
  – And that thread happens to run on the right core
• Great for network benchmarks
(Diagram: NIC with four VNICs feeding cores 0–3, each core running an ISR and an application thread)
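The RSS behaviour described above can be inspected with standard ethtool commands on Linux. A minimal sketch, assuming a multiqueue NIC named eth0 (the interface name is a placeholder):

```shell
# Show how many RX/TX queues (channels) the NIC exposes.
ethtool -l eth0

# Show the RSS indirection table: which hash buckets map to which RX queue.
ethtool -x eth0

# Show which header fields are hashed to choose a queue for TCP/IPv4 flows.
ethtool -n eth0 rx-flow-hash tcp4
```

Because the queue is chosen by hashing the flow's headers, all packets of one connection land on one queue (and hence one core), but which core that is bears no relation to where the consuming thread runs, which is the limitation the next slide addresses.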
Slide 7

Smarter Server I/O Networking: Flow Affinity
• Hardware deterministically directs flows to the ideal core for handling the load
• Google developed Receive Packet Steering (RPS)
• Solarflare developed Accelerated RFS for multiqueue hardware
• Supported today on Solarflare adapters
(Diagram: same NIC/VNIC/core layout as the RSS slide)
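For context, the software variants (RPS and RFS) can be enabled through sysfs on a stock Linux kernel; the accelerated, hardware-steered variant additionally needs a driver that supports flow steering. A hedged sketch, assuming a placeholder interface eth0 and a capable driver such as Solarflare's sfc:

```shell
# Software RPS: let cores 0-3 (CPU mask 0xf) process packets from RX queue 0.
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus

# RFS: size the global socket-flow table, then give each RX queue a share,
# so packets are steered to the core where the consuming thread last ran.
echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
echo 2048 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt

# Accelerated RFS pushes the same steering decision into NIC hardware;
# it requires driver support and ntuple filtering enabled.
ethtool -K eth0 ntuple on
```

These are privileged, host-specific settings; the mask and table sizes here are illustrative values, not recommendations from this deck.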
Slide 8

Cisco and Solarflare Achieve Dramatic Latency Reductions for Interactive Web 2.0 Applications
Slide 11

Other Tuning Tips
• RSS spreading
  – Change the default RSS spreading so that the card uses one RX queue per CPU core. This is done by setting the driver's module parameter rss_cpus=cores. See the section "Receive Side Scaling (RSS)" on pages 193–194 for more details. If you can, we also suggest disabling the irqbalance service first, as irqbalance tends to undo the good work of spreading the networking interrupts over the available CPU cores (see page 196).
• Enable LRO
  – Double-check that LRO is enabled using ethtool. As mentioned, the RHEL 6 libvirtd daemon can cause this to be disabled. See "TCP Large Receive Offload (LRO)" on pages 191–192 for details of LRO and how to change RHEL 6's behaviour.
• Interrupt moderation
  – If (and only if) you think the benchmark is latency-sensitive, we suggest disabling interrupt moderation. By default the driver uses adaptive interrupt moderation and tunes itself based on traffic patterns. However, if you know you have a ping/pong, transactional application, it helps to disable this completely (see page 189: ethtool -C <ethX> rx-usecs-irq 0 adaptive-rx off). But if you are streaming large blocks of storage data between servers, don't do this; the default is best.
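The three tips above can be applied from the command line. A sketch, assuming the Solarflare sfc driver and a placeholder interface eth0 on a RHEL 6-era system; note that reloading the driver drops connectivity on its interfaces:

```shell
# RSS spreading: one RX queue per CPU core (sfc module parameter);
# the driver must be reloaded for the parameter to take effect.
modprobe -r sfc
modprobe sfc rss_cpus=cores

# Stop irqbalance so it does not redistribute the NIC interrupts.
service irqbalance stop

# Verify LRO is on (RHEL 6 libvirtd may have disabled it).
ethtool -k eth0 | grep large-receive-offload

# Latency-sensitive, ping/pong workloads only: disable adaptive
# interrupt moderation. Keep the defaults for streaming workloads.
ethtool -C eth0 rx-usecs-irq 0 adaptive-rx off
```

To make the rss_cpus setting persist across reboots, it would normally go in a modprobe configuration file rather than on the command line.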