Sunx4450 Intel7460 GigaSpaces XAP Platform Benchmark


Published in: Technology
  1. Ultra-Scalable and Blazing-Fast: The Sun Fire x4450-Intel 7460-XAP GigaSpaces Platform
     "Scaling up with Commodity HW®" Scale-up Benchmark Report
     Shay Hassidim, Deputy CTO, GigaSpaces
     January 2009
  2. Agenda
     • Benchmark targets
     • Results highlights
     • Servers used
     • Stack under test
     • The Sun Fire x4450
     • Intel® Xeon® Processor 7400 Series
     • SBA vs. TBA – Scaling vs. Latency and Costs
     • Scale-up benchmark scenarios
     • Quick introduction to GigaSpaces
     • Scale-up throughput benchmark
     • Scale-up latency benchmark
     • Web application benchmark
     • Risk calculation benchmark
     • Special tuning
     • Comparison of six-core with quad-core CPUs
     • Conclusions
  3. Scaling up with Commodity HW® – Benchmark Target
     • Test how GigaSpaces-based applications scale up on the Sun Fire x4450-Intel 7460-XAP GigaSpaces Platform, and how the new Intel 45-nm technology, with 4 CPUs of 6 cores each, is utilized by mission-critical applications.
  4. Scale-up Benchmark Results Highlights – Throughput
     • Collocated space capacity
       • 1.8 million read operations/sec
       • 1.1 million write/take operations/sec
       • Less than 20% performance drop up to 16 threads
     • Remote space capacity
       • 90,000 read operations/sec
       • 45,000 write/take operations/sec (including HA)
       • Less than 20% performance drop up to 24 clients
  5. Scale-up Benchmark Results Highlights – Latency
     • 1 ms latency for remote write operations
       • Load of 20,000 remote write operations per second
       • Including HA (with replication to backup)
       • 20 concurrent clients
       • 4 KB payload
     • 0.4 ms latency for remote write operations
       • Load of 20,000 remote write operations per second
       • Excluding HA (without replication to backup)
       • 20 concurrent clients
       • 4 KB payload
  6. Scale-up Benchmark Results Highlights
     • Web application
       • 16,223 page generations/sec with 6 ms latency over the LAN
       • 3 web servers
       • 2 x4450 machines
     • HPC risk calculation
       • Monte Carlo simulation
       • Near-linear scalability (up to 32 concurrent workers)
       • 2,640% speedup in calculation time compared to one worker
       • 4,096 portfolios calculated in 100 seconds (41 calculations/sec)
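As a quick back-of-the-envelope check, the HPC headline figures above are mutually consistent (the 32-worker count comes from the scalability bullet):

```python
# 4,096 portfolios in 100 seconds -> the quoted per-second rate
portfolios = 4096
total_seconds = 100
rate = portfolios / total_seconds          # 40.96, quoted as 41/sec

# A 2,640% boost means the job runs ~26.4x faster than one worker,
# close to the ideal 32x for 32 workers
speedup = 2640 / 100                       # 26.4
efficiency = speedup / 32                  # ~0.83 scaling efficiency
```

This matches the "near-linear scalability" claim: an efficiency above 0.8 at 32 workers.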
  7. Servers Used
     Since this is a scale-up benchmark, a fixed set of machines was used:

     ID | Vendor | Model                     | CPU type               | Memory    | # CPUs           | # Cores | Clock    | Running
     1  | Sun    | Sun Fire X4450 (4-socket) | Intel Dunnington X7460 | 32 GB RAM | 4 (6 cores each) | 24      | 2.66 GHz | GigaSpaces
     2  | Sun    | Sun Fire X4450 (4-socket) | Intel Dunnington X7460 | 32 GB RAM | 4 (6 cores each) | 24      | 2.66 GHz | GigaSpaces
     3  | Intel  | White box (4-socket)      | Intel Dunnington X7460 | 32 GB RAM | 4 (6 cores each) | 24      | 2.66 GHz | GigaSpaces clients
     4  | Sun    | Sun Fire X4150 (2-socket) | Intel Xeon             | 16 GB RAM | 2 (2 cores each) | 4       | 3.16 GHz | MySQL database, Apache load balancer
  8. Technology Stack under Test
     • Ethernet (1 GbE)
     • Sun Fire x4450, Intel Dunnington X7460, 4 CPUs (6 cores each)
     • Sun Solaris 10 update 6
     • GigaSpaces XAP 6.2.2
     • Sun MySQL 5
     • Apache LB 2.2.9
     • Sun JDK 1.6
  9. Sun Fire x4450
     • Comes with the Intel® Xeon® processor 7400 series with 6 processing cores
     • Half the size of the other servers in its class
     • Industry's smallest enterprise-class 4-socket x64 server
     • Supports up to 4 processors (24 cores), 8 SAS disk drives, and 32 memory DIMMs in a 2RU form factor
     • Energy-efficient design saves on power and cooling
     • Comprehensive and easy-to-use system management and monitoring built into every system
     • Choice of virtually any OS: Solaris, Linux, Windows, VMware
     • Key applications
       • Server consolidation and virtualization
       • Database and web serving
       • Data warehousing and data analysis
       • Business processing
  10. Intel® Xeon® Processor 7400 Series – Features and Benefits
     • Intel® 64 architecture
       • Flexibility for 64-bit and 32-bit applications and operating systems
     • 1066 MHz Dedicated High-Speed Interconnects (DHSI)
       • Enables increased throughput and bandwidth between each of the processors and the chipset
     • Intel® Virtualization Technology
       • A suite of processor hardware enhancements that assists virtualization software in delivering more efficient virtualization solutions and greater capabilities, including 64-bit guest OS support
       • Intel® VT FlexPriority optimizes virtualization software efficiency by improving interrupt handling
       • Intel® VT FlexMigration enables Intel Xeon processor 7400 series-based systems to be added to an existing virtualization pool alongside single-, two-, or 4+ socket Intel Core microarchitecture-based servers
     • Enhanced Intel® Core™ Microarchitecture, 16 MB of L3 cache
       • Boosts performance on multiple application/user environments and data-demanding workloads, while enabling denser data center deployments through improved performance-per-watt
       • 45nm Hi-k process technology enables a larger on-die cache for better performance and reduced transistor leakage
       • Increases the efficiency of cache-to-core data transfers, maximizing main-memory-to-processor bandwidth
       • Reduces latency by storing larger data sets closer to the processor, reducing the number of trips to main memory
     • Multi-core processing
       • Platform-compatible with other Intel Xeon processor 7000-sequence processors for ease of migration and IT stability
       • Increased performance with 45nm technology, and increased headroom for multi-threaded and data-demanding applications
       • Enables improved virtualization performance, increasing system utilization
  11. Scale-up Benchmark Goals – Measuring Performance, Scalability, Latency
  12. Scale-up Benchmark Goals – Measure the platform's performance, scalability, and latency for:
     • On-line systems – relevant for:
       • On-line banking
       • On-line trading
       • On-line gaming
       • e-gov portals
       • e-commerce retail
       • Logistics
       • Order-management systems
       • Supply chain management
       • RFID-based systems
     • HPC – relevant for:
       • Risk calculation
       • Real-time analytics
       • Market data
       • Large-volume data mining
       • Real-time tracking
       • Command-and-control
  13. SBA vs. TBA – Scaling vs. Latency and Costs
     • SBA – Space-Based Architecture
     • TBA – Tier-Based Architecture (the traditional approach)
  14. Scale-up Benchmark Scenarios
     • Generic GigaSpaces XAP benchmark:
       • Performance vs. scalability – throughput and capacity, measured both in-process and out-of-process. Additional client threads are added to measure the scalability of the system.
       • Latency vs. scalability – measuring the time it takes to push data into GigaSpaces' in-memory data grid for processing, under massive load.
     • Mission-critical applications benchmark – on-line web-based and HPC application benchmarks:
       • Pet Clinic application – based on GigaSpaces as the backbone instead of a relational database.
       • Credit risk calculation – Monte Carlo simulation.
  15. Quick Introduction to GigaSpaces
  16. Introduction – Space Basic Runtime Modes
     • Embedded space
       • A space instance that runs within the application's memory address space
       • Accessed by reference, without going through network or serialization calls
       • The most efficient configuration mode
       • Used as the primary space configuration setup
     • Remote space
       • Accessing a remote space involves network calls and serialization/deserialization of objects between the client and the space process
       • Used in cases where:
         • The client application cannot run an embedded space (due to memory capacity limitations, etc.)
         • There are a large number of concurrent updates on the same object from different remote processes
  17. Introduction – Space Basic Operations
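The operations diagram from this slide is not reproduced in the transcript. As a rough illustration of the space semantics it covers (write stores an entry, read returns a matching copy without removing it, take removes the match), here is a toy in-memory stand-in; this is not the GigaSpaces XAP API, and all names are illustrative:

```python
class ToySpace:
    """Minimal stand-in for tuple-space semantics; not the XAP API."""

    def __init__(self):
        self._entries = []

    def write(self, entry: dict):
        # store a copy of the entry in the space
        self._entries.append(dict(entry))

    def _match(self, template: dict):
        # template matching: every non-None template field must be equal
        for e in self._entries:
            if all(e.get(k) == v for k, v in template.items() if v is not None):
                return e
        return None

    def read(self, template: dict):
        # non-destructive: returns a copy, the entry stays in the space
        e = self._match(template)
        return dict(e) if e else None

    def take(self, template: dict):
        # destructive: removes and returns the matching entry
        e = self._match(template)
        if e:
            self._entries.remove(e)
        return e


space = ToySpace()
space.write({"id": 1, "symbol": "SUNW", "price": 4.5})
assert space.read({"symbol": "SUNW"})["id"] == 1   # still in the space
assert space.take({"symbol": "SUNW"})["id"] == 1   # removed by take
assert space.read({"symbol": "SUNW"}) is None      # gone
```

In the real product the same three verbs operate on POJOs over an embedded or remote proxy, with blocking and transactional variants.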
  18. Introduction – Basic Deployment Topologies
     • Primary-Backup
     • Partitioned
     • Partitioned + Backup
     (Diagram: a feeder client writes into each topology.)
  19. Introduction – Space-Based Architecture: Business Logic and Data Collocated
     (Diagram: a service pushes data into the backend system; an in-memory data grid with collocated processing units runs as Primary 1/2/3, each replicating to Backup 1/2/3; results are collected for reporting.)
  20. The Benchmarks
  21. Scale-up Throughput Benchmark
  22. Scale-up Throughput Benchmark – Physical Deployment Topology
     • Remote (multiple machines, multiple processes): a client on the white box accesses two x4450 machines over a switched Ethernet LAN, each running GigaSpaces with 4 spaces, one per GSC.
     • Embedded (one machine, one process): client and GigaSpaces with 8 spaces on a single x4450.
  23. Scale-up Throughput Benchmark – Embedded Mode
     • 1.8 million reads/sec; 1.1 million write/takes/sec
     • 20% drop up to 16 threads hitting the system with 1.5 M reads/sec
  24. Scale-up Throughput Benchmark – Remote Mode
     • 90,000 reads/sec; 45,000 write/takes/sec
     • 20% drop up to 24 users hitting the system with 65,000 reads/sec
  25. Scale-up Throughput Benchmark – Results
     • Embedded (excluding HA)
       • 1.8 million read operations/sec with 20 threads
       • 1.1 million write/take operations/sec with 20 threads
       • 0.8 scalability ratio up to 16 threads with read operations
       • 64-bit JVM, Java 1.6 performance release, 10 GB heap size
       • Up to 10 threads: ~100K reads/sec, ~80K write/takes/sec per thread
     • Remote (including HA)
       • 90,000 read operations/sec with 50 threads
       • 46,000 write/take operations/sec with 50 threads
       • 0.8 scalability ratio up to 24 threads
       • 0.6 scalability ratio up to 42 threads
       • Clients running on the white box, spaces running on the x4450s
       • Space: 64-bit JVM, 3 GB heap; client: 64-bit JVM, 2 GB heap
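The "scalability ratio" quoted throughout these results can be read as measured speedup divided by ideal linear speedup. A sketch of the calculation; the per-thread throughput figures below are illustrative, not taken from the benchmark:

```python
def scalability_ratio(throughput_n, throughput_1, n):
    """Measured speedup over ideal linear speedup for n threads.

    1.0 means perfectly linear scaling; 0.8 means the n-thread run
    delivers 80% of n times the single-thread throughput.
    """
    return (throughput_n / throughput_1) / n


# hypothetical illustration: 100K reads/sec with 1 thread and
# 1.28M reads/sec with 16 threads gives the quoted 0.8 ratio
ratio = scalability_ratio(1_280_000, 100_000, 16)
```

On this definition, "0.8 scalability ratio up to 16 threads" and "less than 20% performance drop" are the same statement.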
  26. Scale-up Latency Benchmark
  27. Scale-up Latency Benchmark
     • Architecture
       • Data grid running on x4450s (2 servers – 48 cores)
         • Data grid size – 4 partitions, with and without backup
       • Client running on the white box (1 server – 24 cores)
         • A multi-threaded application performs write operations at a steady rate (1,000 writes/sec)
     • Latency measured at the client
  28. Scale-up Latency Benchmark – Physical Deployment Topology
     • Client on the white box; two x4450 machines over a switched Ethernet LAN, each running GigaSpaces with 4 spaces, one per GSC.
  29. Scale-up Latency Benchmark
     • Less than 20% drop up to 20 users hitting the system with 20,000 writes/sec
  30. Scale-up Latency Benchmark – Results
     • <1 ms latency for 20 users, each running 1,000 write operations/sec, including HA
     • <0.4 ms latency for 20 users, each running 1,000 write operations/sec, excluding HA
     • True linear scalability ratio up to 8 users (8K writes/sec)
     • >0.8 scalability ratio for 22 users (22K writes/sec)
     • All the results above were taken with:
       • A total of 20,000 operations/sec
       • 4 KB object payload
       • 1 GbE network, ping latency 0.1 ms
       • A test object with 8 fields, 3 of them indexed (4 String fields, 2 Long fields, 2 Integer fields). The test class did not implement Externalizable and had no special tuning or optimization to reduce its footprint or speed up its serialization.
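A latency benchmark like the one above drives writes at a fixed, steady rate rather than as fast as possible, and records per-operation latency. A minimal, self-contained pacing sketch with the remote write stubbed out; the stub and all names are assumptions, not the actual benchmark harness:

```python
import time


def run_steady_load(op, rate_per_sec, duration_sec):
    """Issue op at a fixed rate, returning per-operation latencies in ms."""
    interval = 1.0 / rate_per_sec
    n_ops = int(rate_per_sec * duration_sec)
    latencies = []
    start_time = time.perf_counter()
    for i in range(n_ops):
        t0 = time.perf_counter()
        op()                                   # e.g. a 4 KB remote space write
        latencies.append((time.perf_counter() - t0) * 1000.0)
        # sleep until the next scheduled fire time to hold the rate steady
        next_fire = start_time + (i + 1) * interval
        pause = next_fire - time.perf_counter()
        if pause > 0:
            time.sleep(pause)
    return latencies


# stub standing in for the remote write; 1,000 ops/sec for a short run
lat = run_steady_load(lambda: None, rate_per_sec=1000, duration_sec=0.05)
```

Pacing against a schedule (rather than sleeping a fixed interval after each call) keeps the offered load constant even when individual operations are slow, which is what makes the per-user "1,000 writes/sec" figure meaningful.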
  31. Web Application Benchmark
  32. Web Application Benchmark – The Pet Clinic
     • The Pet Clinic application is a popular example of an on-line application implemented over the web.
     • The original application has been modified to use GigaSpaces as the system of record instead of a database.
  33. Architecture – Step 1: Request Submission
     (Diagram: the request is received and a service is invoked against the data grid. Step 1 latency is measured.)
  34. Architecture – Step 2: Retrieve Results
     (Diagram: results are retrieved from the data grid and the page is generated. Latency measured: step 1 + step 2.)
  35. Web Application Benchmark – Physical Deployment Topology
     • JMeter client on the white box
     • Apache load balancer and MySQL on the X4150
     • Two x4450 machines, each running GigaSpaces with 4 spaces plus web servers and services
     • Connected over switched Ethernet LANs
  36. Web Application Benchmark Results – Latency, Scalability
     • Only 20% drop up to 20 users hitting the system with 7,000 requests/sec at 2.8 ms latency
  37. Web Application Benchmark Results – Capacity
     • The user factor is 50: every LAN-based user equals 50 WAN-based users, due to the inherent latency of the internet (minimum latency over the WAN 100 ms, over the LAN 2 ms).
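The user-factor arithmetic above can be made explicit. Assuming each user issues its next request as soon as the previous response arrives, the request rate per user is bounded by round-trip latency, so the LAN-to-WAN conversion is just the ratio of latencies:

```python
wan_latency_ms = 100    # minimum round-trip over the WAN
lan_latency_ms = 2      # round-trip over the LAN

# one LAN user generates as many requests as this many WAN users
user_factor = wan_latency_ms / lan_latency_ms

# hypothetical illustration: 20 saturating LAN users therefore
# approximate this many real-world WAN users
lan_users = 20
equivalent_wan_users = lan_users * user_factor
```

This is why a LAN-based load test with a few dozen virtual users can stand in for a much larger internet-facing population.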
  38. Web Application Benchmark – Results
     • Architecture
       • Data grid and web servers running on x4450s (2 servers – 48 cores)
         • Data grid size – 2 partitions with backup
       • Client (using JMeter) running on the white box (1 server – 24 cores)
       • Database and Apache load balancer running on the X4150 (1 server – 4 cores)
     • 6 ms latency for 100 users over the LAN with 3 web servers
     • >0.8 scalability ratio for 20 users with 3 web servers
     • >0.6 scalability ratio for 50 users with 3 web servers
     • Capacity – 16,223 page generations/sec with 3 web servers
  39. Risk Calculation Benchmark
  40. Risk Calculation Benchmark
     (Diagram: a client submits a calculation request; workers process it against the data grid.)
  41. Risk Calculation Benchmark – Physical Deployment Topology
     • Client on the X4150
     • Three machines (two x4450s and the white box), each running GigaSpaces spaces, workers, and services
     • Connected over a switched Ethernet LAN
  42. Risk Calculation Benchmark Results – Calculation Time, Scalability
     • Less than 20% drop up to 32 workers
  43. Risk Calculation Benchmark Results
     • Architecture
       • Client: x4150 – 4 cores, 3 GHz
       • Data grid and workers – 3 machines with 24 cores each at 2.67 GHz
       • Data grid size – 8 partitions
     • Business logic
       • Calculating 4,096 credit risks using Monte Carlo simulation
     • >0.8 scalability ratio for 32 workers
     • >0.6 scalability ratio for 64 workers
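The benchmark's Monte Carlo kernel is not included in this deck. As a rough, self-contained illustration of the kind of per-portfolio work a risk worker might perform, here is a value-at-risk estimate by path simulation; the model, parameters, and names below are hypothetical, not the benchmark's code:

```python
import random


def monte_carlo_var(initial_value, mu, sigma, horizon_days,
                    n_paths, confidence=0.95, seed=42):
    """Estimate value-at-risk of one portfolio by simulating P&L paths.

    Each path compounds daily returns drawn from a normal distribution;
    VaR is the loss not exceeded with the given confidence.
    """
    rng = random.Random(seed)
    pnl = []
    for _ in range(n_paths):
        value = initial_value
        for _ in range(horizon_days):
            value *= 1.0 + rng.gauss(mu, sigma)   # one day's return
        pnl.append(value - initial_value)
    pnl.sort()
    # the (1 - confidence) quantile of P&L, reported as a positive loss
    return -pnl[int((1.0 - confidence) * n_paths)]


var = monte_carlo_var(1_000_000, mu=0.0003, sigma=0.01,
                      horizon_days=10, n_paths=2000)
```

Each of the 4,096 portfolios is an independent task of this shape, which is why the workload distributes so cleanly across workers pulling tasks from the data grid.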
  44. Special Tuning
     • OS tuning
       • Add the following line to the /etc/system file:
         set idle_cpu_prefer_mwait=0
     • JVM tuning
       • 6 GB -Xms and -Xmx for the embedded space benchmark
       • -XX:LargePageSizeInBytes=256m
       • -XX:ParallelGCThreads=8
     • GigaSpaces tuning
       • … = 32
     • MySQL tuning
       • None
     • Apache load balancer tuning
       • None
  45. Comparing 6-Core CPUs with 4-Core CPUs
  46. Comparing 6-Core CPUs with 4-Core CPUs
     • We ran the throughput benchmarks on 4-core CPUs:
       • Embedded space throughput
       • Remote space throughput
     • The 4-core CPU clock was 10% faster than the 6-core CPU clock:
       • 4-core CPU clock speed: 2.93 GHz
       • 6-core CPU clock speed: 2.66 GHz
     • The theoretical performance boost of the 6-core CPU should be 45%
       • The results showed a ~30% performance boost
  47. Comparing 6-Core with 4-Core CPUs (16 vs. 24 cores) – Embedded Mode
     • 20% difference with 16 threads
     • 300,000 extra read operations/sec with 24 cores
     • 180,000 extra write/take operations/sec with 24 cores
  48. Comparing 6-Core with 4-Core CPUs (16 vs. 24 cores) – Embedded Mode
     • Better throughput with 24 cores
     • Better scalability with 24 cores – 24-29% better
  49. Comparing 6-Core with 4-Core CPUs (16 vs. 24 cores) – Remote Mode
     • ~30% difference with 24 threads
     • 10,000 extra read operations/sec with 24 cores
     • 20,000 extra write/take operations/sec with 24 cores
  50. Comparing 6-Core with 4-Core CPUs (16 vs. 24 cores) – Remote Mode
     • Better throughput with 24 cores
     • Better scalability with 24 cores – 12-33% better
  51. Conclusions
     • The Sun x4450-Intel 7460-GigaSpaces XAP platform is a very scalable platform, providing both low latency and high throughput.
       • The 6-core CPU provided ~30% better scalability and performance compared to the 4-core CPU.
     • Fits both on-line and HPC-based systems.
     • Having 4 fast CPUs with 6 cores each allows applications to scale up with minimal effect on performance and latency. Using GigaSpaces:
       • Scaling out can be done with minimal impact on latency and deployment topology.
       • Scaling out can be done while the system is running.
     • Solaris 10 and JVM 6 are very stable and provide a deterministic, predictable environment, even for the most demanding scenarios.
     • The GigaSpaces in-memory data grid provides locality of reference, avoiding the need to access the database for each business transaction.
     • GigaSpaces as a scale-out application server allows very complex applications to be deployed very easily, and virtualizes the deployment model away from the physical network and machine topology.
       • Supports failover, auto-recovery, and auto self-healing
       • Dynamic SLA-based scalability
  52. Special Tuning (continued)
     • Network tuning
       1. Disable the automatic interrupt-rate adjustment in the Solaris IP network stack by adding the following entry to the /etc/system file:
            set dld:dld_opt=2
          The dld:dld_opt parameter controls interrupt scheduling within the IP network stack. By default, the IP code tries to automatically adjust the rate of interrupts, with the intention of smoothing the rate and reducing delays for low and moderate network traffic. At high interrupt rates, automatic adjustment is often counterproductive, causing unneeded work in the IP stack. Setting the option to the value 2 disables automatic adjustment.
       2. For the 1 GbE network infrastructure, disable interrupt throttling on the e1000g interfaces by adding the following two entries to the /kernel/drv/e1000g.conf file:
            intr_adaptive=0,0,0,0,0,0,0,0;
            intr_throttling_rate=0,0,0,0,0,0,0,0;
          The intr_adaptive and intr_throttling_rate variables define how interrupt blanking (coalescing) is implemented by the driver for each NIC it controls. Each element in the list is the value for a specific NIC, from e1000g0 to e1000g15 (higher-numbered NICs will likely not be present). The examples show 16 possible NICs being changed, which is not necessary in almost all cases. If interrupt blanking is disabled, packets are processed by the driver as soon as they arrive; if it is enabled, packets are processed by the driver when the interrupt is issued. Enabling blanking allows the network stack to delay processing single packets, on the assumption that additional packets will follow shortly and can then all be processed with a single interrupt. This adds latency, but reduces CPU utilization by reducing the number of interrupts that need to be handled.
          The variable intr_adaptive disables (0) or enables (1) the interrupt blanking mechanism provided by the Solaris Generic LAN Driver (GLD) framework. When this tunable is disabled, the intr_throttling_rate parameter is available to configure interrupt blanking manually. The variable intr_throttling_rate (0-65535) specifies the inter-interrupt delay used to realize interrupt blanking (coalescing), and takes effect only when intr_adaptive is disabled. Smaller values of intr_throttling_rate mean higher interrupt rates and less coalescing; larger values mean lower interrupt rates and more coalescing. The value 0 means no interrupt coalescing: an interrupt fires immediately when a packet is received.
          For systems handling lower message rates, the defaults (intr_adaptive set to 1, which disables intr_throttling_rate) are recommended. Systems that handle very high, relatively constant loads, or that are very latency sensitive, should set intr_adaptive and intr_throttling_rate to 0 only for the NIC(s) carrying that load.
  53. Need Help?
     • Email us at [email_address]
     Thank You!