© VCDX 200 - https://www.vcdx200.com/
VMware NIC Performance Test Plan
Prepared by
David Pasek, VMware TAM
dpasek@vmware.com
Version History
Date Rev. Author Description Reviewers
2020 Oct 29 0.1 David Pasek Initial draft. Simple tests. More complex tests and multiple test combinations can be tested if no performance issues are observed.
2020 Nov 06 0.2 David Pasek NUTTCP test method fixed. Added in-guest (vm2vm) iperf TCP test, in-guest (vm2vm) iperf UDP test, and two load balancer simulation tests (without RSS, with RSS).
2020 Nov 24 0.3 David Pasek Published as open source.
Contents
1. Overview
2. Requirements, Constraints and Assumptions
2.1 Requirements
2.2 Constraints
2.3 Assumptions
3. Test Lab Environments
3.1 ESXi hosts - Hardware Specifications
3.2 Virtual Machines - Hardware and App Specifications
3.3 Lab Architecture
4. Test Plan
4.1 VMKernel - TCP (iperf) communication between two ESXi host consoles
4.2 VM - TCP (nuttcp) communication of 2 VMs across two ESXi hosts
4.3 VM - TCP (iperf) communication of 2 VMs across two ESXi hosts
4.4 VM - UDP (nuttcp 64 KB) communication of 2 VMs across two ESXi hosts
4.5 VM - UDP (iperf 64 KB) communication of 2 VMs across two ESXi hosts
4.6 VM - HTTP (Nginx) communication of 2 VMs across two ESXi hosts
4.7 VM - HTTPS (Nginx) communication of 2 VMs across two ESXi hosts
4.8 VM - HTTP communication across two ESXi hosts via LoadBalancer (no RSS)
4.9 VM - HTTP communication across two ESXi hosts via LoadBalancer (RSS)
5. Appendixes
5.1 Useful commands and tools for test cases
5.2 Diagnostic commands
5.3 ESX commands to manage NIC Offloading Capabilities
1. Overview
This document contains testing procedures to verify that the implemented design successfully addresses
customer requirements and expectations.
This document assumes that the person performing these tests has a basic understanding of VMware
vSphere and is familiar with vSphere lab design and environment. This document is not intended for
administrators or testers who have no prior knowledge of VMware vSphere concepts and terminology.
2. Requirements, Constraints and Assumptions
2.1 Requirements
1. VM to VM Network TCP communication across two ESXi hosts has to achieve at least ~5
Gbps (~500 MB/s) transmit/receive throughput bi-directionally (5 Gbps transmit and 5 Gbps
receive)
2. VM to VM Network HTTP communication across two ESXi hosts has to achieve at least 5
Gbps (~500 MB/s) transmit/receive throughput bi-directionally (5 Gbps transmit and 5 Gbps
receive)
3. VM to VM Network HTTPS communication across two ESXi hosts has to achieve at least 5
Gbps (~500 MB/s) transmit/receive throughput bi-directionally (5 Gbps transmit and 5 Gbps
receive)
2.2 Constraints
1. Hardware
a. 4x HPE DL560 Gen10 (BIOS: U34)
b. 4x Intel NIC X710 (Firmware Version: 10.51.5, Driver Version: 1.9.5)
c. 4x Qlogic FastLinQ QL41xxx
2. People
a. Testers having access to lab environment
3. Processes
a. VPN access to lab environment
2.3 Assumptions
1. Hardware
a. We will have 2 ESXi hosts with Intel X710 NIC
b. We will have 2 ESXi hosts with QLogic FastLinQ QL41xxx NIC
2. We will get VPN access to lab environment
3. We will be able to use Linux operating systems as Guest OS within VM and install testing
software (nuttcp, iperf, nginx, wrk, iftop)
3. Test Lab Environments
3.1 ESXi hosts - Hardware Specifications
All ESXi hosts should be based on the following system platform:
 Server Platform: HPE ProLiant DL560 Gen10
 BIOS: U34 | Date (ISO-8601): 2020-04-08
 OS/Hypervisor: VMware ESXi 6.7.0 build-16075168 (6.7 U3)
The following four ESXi host specifications are used for testing.
1. ESX01-INTEL
a. CPU: TBD
b. RAM: TBD
c. NIC: Intel X710, driver i40en version: 1.9.5, firmware 10.51.5
d. STORAGE: any storage for test VMs
2. ESX02-INTEL
a. CPU: TBD
b. RAM: TBD
c. NIC: Intel X710, driver i40en version: 1.9.5, firmware 10.51.5
d. STORAGE: any storage for test VMs
3. ESX01-QLOGIC
a. CPU: TBD
b. RAM: TBD
c. NIC: QLogic QL41xxx, driver qedentv version: 3.11.16.0, firmware mfw 8.52.9.0 storm
8.38.2.0
d. STORAGE: any storage for test VMs
4. ESX02-QLOGIC
a. CPU: TBD
b. RAM: TBD
c. NIC: QLogic QL41xxx, driver qedentv version: 3.11.16.0, firmware mfw 8.52.9.0 storm
8.38.2.0
d. STORAGE: any storage for test VMs
Note: The four ESXi hosts above can be consolidated into two ESXi hosts, where each ESXi host has both an Intel and a QLogic NIC.
3.2 Virtual Machines - Hardware and App Specifications
1. APP-SERVER-01 - Application Server
o Hardware: 8 vCPU, 4 GB RAM, NIC (VMXNET3), MTU 1500
o OS: Linux RHEL7 or Centos7
o App: IFTOP, NUTTCP, NGINX
2. APP-SERVER-02 - Application Server
o Hardware: 8 vCPU, 4 GB RAM, NIC (VMXNET3), MTU 1500
o OS: Linux RHEL7 or Centos7
o App: IFTOP, NUTTCP, NGINX
3. APP-SERVER-03 - Application Server
o Hardware: 8 vCPU, 4 GB RAM, NIC (VMXNET3), MTU 1500
o OS: Linux RHEL7 or Centos7
o App: IFTOP, NUTTCP, NGINX
4. APP-SERVER-04 - Application Server
o Hardware: 8 vCPU, 4 GB RAM, NIC (VMXNET3), MTU 1500
o OS: Linux RHEL7 or Centos7
o App: IFTOP, NUTTCP, NGINX
5. APP-CLIENT-01 - Application Client
o Hardware: 8 vCPU, 4 GB RAM, NIC (VMXNET3), MTU 1500
o OS: Linux RHEL7 or Centos7
o App: IFTOP, NUTTCP, WRK, TASKSET
6. APP-LB-01 - Application Load Balancer
o Hardware: 8 vCPU, 4 GB RAM, NIC (VMXNET3), MTU 1500
o OS: Linux RHEL7 or Centos7
o App: IFTOP, NUTTCP, NGINX
3.3 Lab Architecture
vSphere Cluster DRS Rules are used to pin Server, Client and LoadBalancer VMs to particular ESXi
hosts.
4. Test Plan
All tests in this test plan should be executed with different NIC hardware offload methods enabled and disabled (example toggle commands are sketched below this list). These methods are:
 Enable/Disable LRO
 Enable/Disable RSS
 Enable/Disable NetQueue
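The toggle commands are collected in Appendix 5.3. As a minimal sketch (the driver-module RSS example assumes the qedentv driver; driver-module and NetQueue changes require a host reboot):
# Disable / enable LRO for the default TCP/IP stack (section 5.3.1)
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 1
# Disable NetQueue RSS via driver module parameters (section 5.3.2.3, qedentv example)
esxcfg-module -s "num_queues=0,0,0,0 RSS=0,0,0,0" qedentv
# Disable NetQueue completely in the VMkernel (section 5.3.2.4)
esxcli system settings kernel set --setting="netNetqueueEnabled" --value="FALSE"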
All performance results should be analyzed and discussed within the expert group.
4.1 VMKernel - TCP (iperf) communication between two ESXi host consoles
Test Name VMKernel - TCP (iperf) communication between two ESXi host consoles
Success Criteria At least 5 Gbps (~500 MB/s) throughput
Test scenario /
runbook
Run iperf listener on the ESXi host running APP-SERVER-01:
esxcli network firewall set --enabled false
/usr/lib/vmware/vsan/bin/iperf3.copy -s -B [APP-SERVER-01 vMotion IP]
Run iperf traffic generator on the ESXi host running APP-CLIENT-01:
esxcli network firewall set --enabled false
/usr/lib/vmware/vsan/bin/iperf3.copy -t 300 -c [APP-SERVER-01 vMotion IP]
Use VM network monitoring in vSphere Client to see network throughput.
Write down achieved results from iperf utility into Result row below.
After test, re-enable firewall
esxcli network firewall set --enabled true
Tester
Results 5.91 Gbits/sec
Comments Test duration: 5 minutes
Succeed Yes | No | Partially
4.2 VM - TCP (nuttcp) communication of 2 VMs across two ESXi hosts
Test Name VM - TCP (nuttcp) communication of 2 VMs across two ESXi hosts
Success Criteria At least 8 Gbps (~800 MB/s) throughput
Note: 20 Gbps should be achievable in pure software stack.
Test scenario /
runbook
Run TCP listener on APP-SERVER-01:
nuttcp -S -P 5000 -N 20
Run TCP traffic generator on APP-CLIENT-01:
nuttcp -t -N 4 -P 5000 -T 300 APP-SERVER-01
Use iftop to see achieved traffic
Use VM network monitoring in vSphere Client to see network throughput.
Results
 Write down nuttcp reported results into test results below.
Tester
Results 9323.1294 Mbps = 9.1 Gbps
Comments
Succeed Yes | No | Partially
4.3 VM - TCP (iperf) communication of 2 VMs across two ESXi hosts
Test Name VM - TCP (iperf) communication of 2 VMs across two ESXi hosts
Success Criteria At least 8 Gbps (~800 MB/s) throughput
Note: 20 Gbps should be achievable in pure software stack.
Test scenario /
runbook
Run TCP listener on APP-SERVER-01:
iperf3 -s
Run TCP traffic generator on APP-CLIENT-01:
iperf3 -t 300 -b 25g -P 4 -c [APP-SERVER-01]
Use iftop to see achieved traffic
Use VM network monitoring in vSphere Client to see network throughput.
Results
 Write down iperf reported results into test results below.
Tester
Results
9.39 Gbps
Comments
Succeed Yes | No | Partially
4.4 VM - UDP (nuttcp 64 KB) communication of 2 VMs across two ESXi hosts
Test Name VM - UDP (nuttcp 64 KB) communication of 2 VMs across two ESXi hosts
Success Criteria At least 8 Gbps (~800 MB/s) throughput
Note: 8 Gbps should be achievable in pure software stack.
Test scenario /
runbook
Run nuttcp server on APP-SERVER-01:
nuttcp -S -P 5000 -N 20
Run UDP traffic generator on APP-CLIENT-01:
nuttcp -u -Ru -l65507 -N 4 -P 5000 -T 300 -i APP-SERVER-01
Use iftop to see achieved traffic
Use VM network monitoring in vSphere Client to see network throughput.
Results
 Write down nuttcp reported results into test results below.
Tester
Results 9315.9779 Mbps = 9.1 Gbps
1116.6931 MB / 1.00 sec = 9367.3418 Mbps 0 / 17875 ~drop/pkt 0.00 ~%loss
1119.2545 MB / 1.00 sec = 9388.5554 Mbps 3 / 17919 ~drop/pkt 0.01674 ~%loss
333165.1325 MB / 300.00 sec = 9315.9779 Mbps 49 %TX 69 %RX 34816 / 5367818 drop/pkt 0.65 %loss
Comments
Succeed Yes | No | Partially
4.5 VM - UDP (iperf 64 KB) communication of 2 VMs across two ESXi hosts
Test Name VM - UDP (iperf 64 KB) communication of 2 VMs across two ESXi hosts
Success Criteria At least 8 Gbps (~800 MB/s) throughput
Note: 8 Gbps should be achievable in pure software stack.
Test scenario /
runbook
Run iperf listener on APP-SERVER-01:
iperf3 -s
Run UDP traffic generator on APP-CLIENT-01:
iperf3 -u -t 300 -b 25g -P 4 -l 65507 -c [APP-SERVER-01]
Use iftop to see achieved traffic
Use VM network monitoring in vSphere Client to see network throughput.
Results
 Write down iperf reported results into test results below.
Tester
Results 7.91 Gbps
Comments [root@server-U19 /]# iperf3 -c 10.226.97.155 -u -t 300 -b 25g -P 4 -l 65507
[SUM] 0.00-300.00 sec 276 GBytes 7.91 Gbits/sec 0.062 ms 24936/4527389 (0.55%)
Succeed Yes | No | Partially
4.6 VM - HTTP (Nginx) communication of 2 VMs across two ESXi hosts
Test Name VM - HTTP (nginx) communication of 2 VMs across two ESXi hosts
Success Criteria At least 8 Gbps (~800 MB/s) throughput
Note: 22 Gbps should be achievable in pure software stack.
Test scenario /
runbook
Install and run Nginx (see section 5.1.11) on APP-SERVER-01
Create test files on APP-SERVER-01
cd /usr/share/nginx/html
dd if=/dev/urandom of=1M.txt bs=1M count=1
Install wrk and taskset on APP-CLIENT-01 (See sections 5.1.13 and 5.1.14)
Run wrk on APP-CLIENT-01 to generate traffic from APP-SERVER-01
taskset -c 0-8 /root/wrk -t 8 -c 8 -d 300s http://[APP-SERVER-01]/1M.txt
Use iftop to see achieved HTTP traffic
Use VM network monitoring in vSphere Client to see network throughput.
Results
 Write down WRK reported results (Gbps) into test results below
 Write down the context - wrk latency, requests/sec, transfer/sec
Tester
Results 660.16MB = 5252.28 Mbps = 5.15 Gbps
Comments [root@server-U19 /]# taskset -c 0-8 /bin/wrk -t 8 -c 8 -d 300s http://10.226.97.155/1M.txt
198016 requests in 5.00m, 193.43GB read
Requests/sec: 659.98
Transfer/sec: 660.16MB
Advanced testing
 Test is uni-directional. A bi-directional test would require a Lua script for wrk.
 We do not use the taskset utility, which could be used to pin threads to logical CPUs.
 We will do advanced testing in phase 2 based on observed results.
Succeed Yes | No | Partially
4.7 VM - HTTPS (Nginx) communication of 2 VMs across two ESXi hosts
Test Name VM – HTTPS (nginx) communication of 2 VMs across two ESXi hosts
Success Criteria At least 8 Gbps (~800 MB/s) throughput
Note: 22 Gbps should be achievable in pure software stack. Intel CPU Instructions
AES-NI accelerates SSL, thus encryption penalty should be mitigated.
Test scenario /
runbook
Install and run Nginx (see section 5.1.11) on APP-SERVER-01
Create test files on APP-SERVER-01
cd /usr/share/nginx/html
dd if=/dev/urandom of=1M.txt bs=1M count=1
Install wrk and taskset on APP-CLIENT-01 (See sections 5.1.13 and 5.1.14)
Run wrk on APP-CLIENT-01 to generate traffic from APP-SERVER-01
taskset -c 0-8 /root/wrk -t 8 -c 8 -d 300s https://[APP-SERVER-01]/1M.txt
Use iftop to see achieved HTTPS traffic
Use VM network monitoring in vSphere Client to see network throughput.
Results
 Write down WRK reported results (Gbps) into test results below
Write down the context - wrk latency, requests/sec, transfer/sec
Tester
Results 1.09 GB = 8.72 Gbps
Comments 335140 requests in 5.00m, 327.36GB read
Requests/sec: 1116.99
Transfer/sec: 1.09GB
Succeed Yes | No | Partially
4.8 VM - HTTP communication across two ESXi hosts via LoadBalancer (no RSS)
Test Name VM - HTTP communication across two ESXi hosts via LoadBalancer no RSS
Success Criteria At least 4 Gbps (~400 MB/s) throughput
Note: 4 Gbps should be achievable in pure software stack.
Test scenario /
runbook
Prepare test environment as depicted in section 3.3
Install and run Nginx (see section 5.1.11) on four servers: APP-SERVER-01, APP-SERVER-02, APP-SERVER-03, APP-SERVER-04
Configure DRS rules to keep servers and client on one ESXi host and load balancer
on another one.
Create test files on APP-SERVERs
cd /usr/share/nginx/html
dd if=/dev/urandom of=1M.txt bs=1M count=1
Install wrk and taskset on APP-CLIENT-01 (See sections 5.1.13 and 5.1.14)
Install HTTP L7 Load Balancer APP-LB-01 (See 5.1.12) with four load balancer members: APP-SERVER-01, APP-SERVER-02, APP-SERVER-03, APP-SERVER-04
Do NOT enable RSS (this is the default config) in virtual machine APP-LB-01. See
section 5.1.15
Run wrk on APP-CLIENT-01 to generate traffic from APP-LB-01
taskset -c 0-8 /root/wrk -t 8 -c 8 -d 300s http://[APP-LB-01]/1M.txt
Use iftop to see achieved HTTP traffic
Use VM network monitoring in vSphere Client to see network throughput.
Results
 Write down WRK reported results (Gbps) into test results below
Write down the context - wrk latency, requests/sec, transfer/sec
Tester
Results 635.97 MBps = 5087 Mbps = 4.97 Gbps
Comments Advanced testing
 Test is uni-directional. A bi-directional test would require a Lua script for wrk.
 We do not use the taskset utility, which could be used to pin threads to logical CPUs.
 We will do advanced testing in phase 2 based on observed results.
Succeed Yes | No | Partially
4.9 VM - HTTP communication across two ESXi hosts via LoadBalancer (RSS)
Test Name VM - HTTP communication across two ESXi hosts via LoadBalancer (RSS)
Success Criteria At least 4 Gbps (~400 MB/s) throughput
Note: 4 Gbps should be achievable in pure software stack.
Test scenario /
runbook
Prepare test environment as depicted in section 3.3
Install and run Nginx (see section 5.1.11) on four servers: APP-SERVER-01, APP-SERVER-02, APP-SERVER-03, APP-SERVER-04
Configure DRS rules to keep servers and client on one ESXi host and load balancer
on another one.
Create test files on APP-SERVERs
cd /usr/share/nginx/html
dd if=/dev/urandom of=1M.txt bs=1M count=1
Install wrk and taskset on APP-CLIENT-01 (See sections 5.1.13 and 5.1.14)
Install HTTP L7 Load Balancer APP-LB-01 (See 5.1.12) with four load balancer members: APP-SERVER-01, APP-SERVER-02, APP-SERVER-03, APP-SERVER-04
Enable RSS in virtual machine APP-LB-01. See section 5.1.15
Run wrk on APP-CLIENT-01 to generate traffic from APP-LB-01
taskset -c 0-8 /root/wrk -t 8 -c 8 -d 300s http://[APP-LB-01]/1M.txt
Use iftop to see achieved HTTP traffic
Use VM network monitoring in vSphere Client to see network throughput.
Results
 Write down WRK reported results (Gbps) into test results below
 Write down the context - wrk latency, requests/sec, transfer/sec
Tester
Results 664 MBps = 5312 Mbps = 5.18 Gbps
Comments Test duration: 5 minutes
Advanced testing
 Test is uni-directional. A bi-directional test would require a Lua script for wrk.
 We do not use the taskset utility, which could be used to pin threads to logical CPUs.
 We will do advanced testing in phase 2 based on observed results.
Succeed Yes | No | Partially
5. Appendixes
5.1 Useful commands and tools for test cases
This section documents the procedures and commands used for the test cases. All commands target the RedHat 7 (or CentOS 7) Linux operating system, which is a standard Linux distribution.
5.1.1 Network settings
Information source: https://wiki.centos.org/FAQ/CentOS7
System files with network settings
 /etc/hostname
 /etc/resolv.conf
 /etc/sysconfig/network
o Common network settings
 GATEWAY=10.16.1.1
 DNS1=10.20.30.10
 DNS2=10.20.40.10
 /etc/sysconfig/network-scripts/ifcfg-eth0
o IP settings for interface eth0
 DHCP
 BOOTPROTO=dhcp
 Static IP
 BOOTPROTO=static
 IPADDR=10.16.1.106
 IPADDR1=10.16.1.107 (alias IP)
 IPADDR2=10.16.1.108 (alias IP)
 NETMASK=255.255.255.0
 /etc/hosts
o local hostname ip resolution
To apply network settings, use the following command:
systemctl restart network
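For illustration, a minimal static-IP ifcfg-eth0 assembled from the parameters above could look as follows (all addresses are placeholders; DEVICE and ONBOOT are standard ifcfg keys not listed above):
# /etc/sysconfig/network-scripts/ifcfg-eth0 - static IP example (placeholder addresses)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.16.1.106
NETMASK=255.255.255.0
GATEWAY=10.16.1.1
DNS1=10.20.30.10
DNS2=10.20.40.10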
5.1.2 NTP
To install and configure NTPD, use the following commands:
yum install ntp
systemctl start ntpd
systemctl enable ntpd
To set the timezone, create a symbolic link from /etc/localtime to /usr/share/zoneinfo/…
ln -s /usr/share/zoneinfo/Europe/Prague /etc/localtime
To check the current timezone setting, just list the symlink:
ls -la /etc/localtime
5.1.3 Open-VM-Tools
VMware tools are usually installed in Centos 7 by default but just in case, here is the install procedure.
sudo yum install -y open-vm-tools
sudo systemctl start vmtoolsd
sudo systemctl status vmtoolsd
sudo systemctl enable vmtoolsd
5.1.4 Firewall
To disable the firewall service on RedHat Linux, use the following commands:
systemctl stop firewalld.service
systemctl disable firewalld.service
and to check firewall status use
systemctl status firewalld.service
5.1.5 SElinux
To disable SELinux on RedHat Linux, edit the file /etc/selinux/config, change the SELINUX parameter to disabled, and restart the system.
vi /etc/selinux/config
SELINUX=disabled
5.1.6 EPEL
Configure CentOS or Red Hat Enterprise Linux (RHEL) 7.x to use the Fedora Extra Packages for Enterprise Linux (EPEL) repository.
yum install -y epel-release
5.1.7 Open-vm-tools
Check if open-vm-tools are installed
yum list installed | grep open-vm
If open-vm-tools is not installed, install it:
yum install open-vm-tools
5.1.8 NUTTCP performance test tool
NUTTCP is a network performance measurement tool intended for use by network and system
managers. Its most basic usage is to determine the raw TCP (or UDP) network layer throughput
by transferring memory buffers from a source system across an interconnecting network to a
destination system, either transferring data for a specified time interval, or alternatively
transferring a specified number of bytes. In addition to reporting the achieved network
throughput in Mbps, nuttcp also provides additional useful information related to the data
transfer such as user, system, and wall-clock time, transmitter and receiver CPU utilization, and
loss percentage (for UDP transfers).
Assumptions
 EPEL repository is accessible
Installation on RHEL 7
yum install --enablerepo=Unsupported_EPEL nuttcp
Installation on CENTOS 7
yum install -y epel-release
yum install -y nuttcp
Usage …
The server part is started with the following command:
nuttcp -S -N 12
The client part is started with one of the following commands:
nuttcp -t -N 12 czchoapint092
cat /dev/zero | nuttcp -t -s -N 12 czchoapint092
cat /dev/urandom | nuttcp -t -s -N 12 czchoapint092
5.1.9 IPERF performance test tool
iperf3 is a tool for performing network throughput measurements. It can test either TCP or UDP
throughput. To perform an iperf3 test the user must establish both a server and a client.
Assumptions
 EPEL repository is accessible
Installation on RHEL 7
yum install --enablerepo=Unsupported_EPEL iperf3
Installation on CENTOS 7
yum install -y epel-release
yum install -y iperf3
Usage …
The server part is started with the following command:
iperf3 -s
The client part is started with the following command:
iperf3 -c 192.168.11.51 -u -t 300 -b 25g -P 4
Parameters -P and -b can be tuned to achieve minimal packet loss.
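For the TCP tests (sections 4.2 and 4.3), the client is run without -u; a sketch mirroring the command used in test 4.3 (the server IP is an example):
# TCP client: 4 parallel streams for 300 seconds
iperf3 -c 192.168.11.51 -t 300 -b 25g -P 4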
5.1.10 IFTOP - performance monitoring tool
iftop - display bandwidth usage on an interface by host
Assumptions
 EPEL repository is accessible
Installation on Centos 7
yum install -y iftop
Installation on RHEL 7
yum install --enablerepo=Unsupported_EPEL iftop
Usage …
# show interfaces
ip link
# use desired interface for iftop
iftop -i <INTERFACE>
5.1.11 NGINX – http/https server, load balancer
Centos 7 Nginx install procedure is based on tutorials at
https://phoenixnap.com/kb/how-to-install-nginx-on-centos-7
and
https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-on-centos-7
Other resources for NGINX performance tuning
 https://www.nginx.com/blog/performance-tuning-tips-tricks/
 https://www.tweaked.io/guide/nginx-proxying/
Assumptions
 Firewall is disabled
 SElinux is disabled
 Sudo or root privileges
Installation instructions …
sudo yum -y update
sudo yum install -y epel-release
sudo yum install -y nginx
sudo systemctl start nginx
sudo systemctl status nginx
sudo systemctl enable nginx
Website content (default server root) is in the directory /usr/share/nginx/html.
The default server block configuration file is located at /etc/nginx/conf.d/default.conf.
The global configuration is in /etc/nginx/nginx.conf.
Configure SSL Certificate and enable HTTPS
mkdir /etc/ssl/private
chmod 700 /etc/ssl/private
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/nginx-selfsigned.key -out /etc/ssl/certs/nginx-selfsigned.crt
openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
vi /etc/nginx/conf.d/ssl.conf
server {
    listen 443 http2 ssl;
    listen [::]:443 http2 ssl;
    server_name server_IP_address;
    ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    root /usr/share/nginx/html;
    location / {
        autoindex on;
    }
    error_page 404 /404.html;
    location = /404.html {
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
Test if Nginx syntax is correct
nginx -t
Restart Nginx
systemctl restart nginx
5.1.12 NGINX – http/https L7 load balancer (reverse proxy)
Install the NGINX package as documented in the previous section. The load balancer function can be configured in the NGINX global configuration file /etc/nginx/nginx.conf.
The simplest load-balancing configuration with nginx looks like the following example:
http {
    upstream myapp1 {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://myapp1;
        }
    }
}
Source: http://nginx.org/en/docs/http/load_balancing.html
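For the load balancer tests in sections 4.8 and 4.9, the same pattern with the four test servers as upstream members might look like this (a sketch; the hostnames are assumed to resolve via DNS or /etc/hosts):
http {
    upstream app_servers {
        # the four Nginx backends used in tests 4.8 and 4.9
        server APP-SERVER-01;
        server APP-SERVER-02;
        server APP-SERVER-03;
        server APP-SERVER-04;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://app_servers;
        }
    }
}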
5.1.13 WRK – http benchmarking
wrk is a modern HTTP benchmarking tool capable of generating significant load when run on a
single multi-core CPU. It combines a multithreaded design with scalable event notification
systems such as epoll and kqueue.
https://github.com/wg/wrk
https://github.com/wg/wrk/wiki/Installing-Wrk-on-Linux
Install procedure on Centos
sudo yum -y update
yum groupinstall 'Development Tools'
yum install -y openssl-devel git
git clone https://github.com/wg/wrk.git wrk
cd wrk
make
cp wrk /somewhere/in/your/PATH
Advanced benchmarking with wrk
WRK supports Lua scripts for more advanced benchmarking.
Resources
 Quick Start for the http Pressure Tool wrk
o https://programmer.ink/think/quick-start-for-the-http-pressure-tool-wrk.html
 POST request with wrk?
o https://stackoverflow.com/questions/15261612/post-request-with-wrk
 Benchmark testing of OSS with ab and wrk tools
o https://www.alibabacloud.com/forum/read-497
 Intelligent benchmark with wrk
o https://medium.com/@felipedutratine/intelligent-benchmark-with-wrk-163986c1587f
5.1.14 TASKSET
The taskset tool is provided by the util-linux package. It allows administrators to retrieve and set
the processor affinity of a running process, or launch a process with a specified processor
affinity.
Install procedure on Centos
sudo yum -y update
yum install -y util-linux
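Usage … (example mirroring the wrk test cases in section 4; the CPU list and wrk command line are taken from those runbooks):
# Restrict wrk to logical CPUs 0-8 while generating HTTP load
taskset -c 0-8 /root/wrk -t 8 -c 8 -d 300s http://[APP-SERVER-01]/1M.txt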
5.1.15 VMware Virtual Machine RSS configuration
You have to configure virtual machine advanced settings to enable RSS in a virtual machine. The following advanced settings must be added into the .vmx file or the advanced configuration of the particular VM to enable multi-queue support.
Below are VM advanced settings:
 ethernetX.pnicFeatures = "4" <<< Enable multi-queue (NetQueue RSS) in particular VM
 ethernetX.ctxPerDev = "3" <<< Allow multiple TX threads for particular VM
 ethernetX.udpRSS = "1" <<< Receive Side Scaling (RSS) for UDP
Note 1: RSS has to be enabled end to end. NIC Driver (driver specific) -> VMkernel (enabled
by default) -> Virtual Machine Advanced Settings (disabled by default) -> Guest OS vNIC
(enabled by default on vmxnet3 with open-vm-tools).
Following command validates RSS is enabled in VMkernel and driver for particular physical
NIC (vmnic1):
vsish -e cat /net/pNics/vmnic1/rxqueues/info
Note 2: You have to enable and configure RSS in the guest OS in addition to the VMkernel
driver module. Multi-queuing is enabled by default in Linux guest OS when the latest VMware
tools version (version 1.0.24.0 or later) is installed or when the Linux VMXNET3 driver version
1.0.16.0-k or later is used. Prior to these versions, you were required to manually enable multi-
queue or RSS support. Be sure to check the driver and version used to verify if your Linux OS
has RSS support enabled by default.
The guest OS driver version within the Linux OS can be checked with the following command:
# modinfo vmxnet3
You can determine the number of Tx and Rx queues allocated for a VMXNET3 driver by running the ethtool console command in the Linux guest operating system:
ethtool -S ens192
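If the guest driver supports channel queries, the configured queue (channel) counts can also be listed with ethtool (the interface name is an example):
# Show preset maximum and currently configured RX/TX channel (queue) counts
ethtool -l ens192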
5.2 Diagnostic commands
This section documents diagnostic commands which should be run on each system to understand the implementation details of NIC offload capabilities and network traffic queueing.
ESXCLI commands are available at ESXCLI documentation:
https://code.vmware.com/docs/11743/esxi-7-0-esxcli-command-reference/namespace/esxcli_network.html
For further detail about the diagnostic commands, you can watch the vmkernel log during execution of the commands below, as there can be interesting output from the NIC driver.
tail -f /var/log/vmkernel.log
5.2.1 ESXi Inventory
Collect hardware and ESXi inventory details.
esxcli system version get
esxcli hardware platform get
esxcli hardware cpu global get
smbiosDump
WebBrowser https://192.168.4.121/cgi-bin/esxcfg-info.cgi
5.2.2 Driver information
NIC inventory
esxcli network nic get -n <VMNIC>
NIC device info
vmkchdev -l | grep vmnic
Document VID:DID:SVID:SDID
To list all VIB modules and understand which drivers are “Inbox” (aka native VMware) or “Async” (from partners like Intel or Marvell/QLogic):
esxcli software vib list
5.2.3 Driver module settings
Identify NIC driver module name
esxcli network nic get -n vmnic0
Show driver module parameters
esxcli system module parameters list -m <DRIVER-MODULE-NAME>
5.2.4 TSO
To verify that your pNIC supports TSO and if it is enabled on your ESXi host
esxcli network nic tso get
5.2.5 LRO
To display the current LRO configuration values
esxcli system settings advanced list -o /Net/TcpipDefLROEnabled
Check the length of the LRO buffer by using the following esxcli command:
esxcli system settings advanced list -o /Net/VmxnetLROMaxLength
To check the VMXNET3 settings in relation to LRO, the following commands (hardware LRO,
software LRO) can be issued:
esxcli system settings advanced list -o /Net/Vmxnet3HwLRO
esxcli system settings advanced list -o /Net/Vmxnet3SwLRO
5.2.6 CSO (Checksum Offload)
To verify that your pNIC supports Checksum Offload (CSO) on your ESXi host
esxcli network nic cso get
5.2.7 Net Queue Count
Get netqueue count on a nic
esxcli network nic queue count get
5.2.8 Net Filter Classes
List the netqueue supported filterclass of all physical NICs currently installed and loaded on the
system.
esxcli network nic queue filterclass list
5.2.9 List the load balancer settings
List the load balancer settings of all the installed and loaded physical NICs. (S:supported,
U:unsupported, N:not-applicable, A:allowed, D:disallowed).
esxcli network nic queue loadbalancer list
5.2.10 Details of netqueue balancer plugins
Details of netqueue balancer plugins on all physical NICs currently installed and loaded on the
system
esxcli network nic queue loadbalancer plugin list
5.2.11 Net Queue balancer state
Netqueue balancer state of all physical NICs currently installed and loaded on the system
esxcli network nic queue loadbalancer state list
5.2.12 RX/TX ring buffer current parameters
Get current RX/TX ring buffer parameters of a NIC
esxcli network nic ring current get
5.2.13 RX/TX ring buffer parameters max values
Get preset maximums for RX/TX ring buffer parameters of a NIC.
esxcli network nic ring preset get -n vmnic0
5.2.14 SG (Scatter and Gather)
Scatter and Gather (Vectored I/O) is a concept that was primarily used in hard disks and it
enhances large I/O request performance, if supported by the hardware.
esxcli network nic sg get
5.2.15 List software simulation settings
List software simulation settings of physical NICs currently installed and loaded on the system.
esxcli network nic software list
5.2.16 RSS
We do not see any RSS related driver parameters, therefore, driver i40en 1.9.5 does not support
RSS.
On top of that, we have been assured by VMware Engineering that inbox driver i40en 1.9.5 does
not support RSS.
5.2.17 VMkernel software threads per VMNIC
Show the number of VMkernel software threads per VMNIC
net-stats -A -t vW
vsish
/> cat /world/<WORLD-ID-1-IN-VMNIC>/name
/> cat /world/<WORLD-ID-2-IN-VMNIC>/name
/> cat /world/<WORLD-ID-3-IN-VMNIC>/name
…
/> cat /world/<WORLD-ID-n-IN-VMNIC>/name
5.3 ESX commands to manage NIC Offloading Capabilities
5.3.1 LRO in the ESXi host
By default, a host is configured to use hardware TSO if its NICs support the feature.
To check the LRO configuration for the default TCP/IP stack on the ESXi host, execute the
following command to display the current LRO configuration values:
esxcli system settings advanced list -o /Net/TcpipDefLROEnabled
You are able to check the length of the LRO buffer by using the following esxcli command:
esxcli system settings advanced list -o /Net/VmxnetLROMaxLength
The LRO features are functional for the guest OS when the VMXNET3 virtual adapter is used.
To check the VMXNET3 settings in relation to LRO, the following commands (hardware LRO,
software LRO) can be issued:
esxcli system settings advanced list -o /Net/Vmxnet3HwLRO
esxcli system settings advanced list -o /Net/Vmxnet3SwLRO
You can disable LRO for all VMkernel adapters on a host with the command
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0
and enable LRO with
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 1
5.3.2 Netqueue and RSS
5.3.2.1. How to validate RSS is enabled in VMkernel
If you have a running system, you can check the status of RSS with the following command from the ESXi shell:
vsish -e cat /net/pNics/vmnic1/rxqueues/info
In the figure below, you can see the command output for a 1Gb Intel NIC that does not support NetQueue; therefore RSS is logically not supported either.
Figure 1 Command to validate if RSS is enabled in VMkernel
It seems that some drivers enable RSS by default and some others do not.
5.3.2.2. How to explicitly enable Netqueue RSS
The procedure to enable RSS is always driver specific, because specific parameters have to be passed to the driver module. Information on how to enable RSS for a particular driver should be available in the specific NIC vendor documentation.
Example for Intel ixgbe driver:
vmkload_mod ixgbe RSS="4"
To enable the feature on multiple Intel 82599EB SFI/SFP+ 10Gb/s NICs, include another
comma-separated 4 for each additional NIC (for example, to enable the feature on three such
NICs, you'd run vmkload_mod ixgbe RSS="4,4,4").
Example for the Mellanox nmlx4_en driver:
For Mellanox adapters, the RSS feature can be turned on by reloading the driver with
num_rings_per_rss_queue=4.
vmkload_mod nmlx4_en num_rings_per_rss_queue=4
NOTE: After loading the driver with vmkload_mod, you should make vmkdevmgr rediscover the
NICs with the following command:
kill -HUP ID … where ID is the process ID of the vmkdevmgr process
5.3.2.3. How to disable Netqueue RSS for particular driver
Disabling Netqueue RSS is also driver specific. It can be done using driver module parameter as
shown below. The example assumes there are four qedentv (QLogic NIC) instances.
[root@host:~] esxcfg-module -g qedentv
qedentv enabled = 1 options = ''
[root@host:~] esxcfg-module -s "num_queues=0,0,0,0 RSS=0,0,0,0" qedentv
[root@host:~] esxcfg-module -g qedentv
qedentv enabled = 1 options = 'num_queues=0,0,0,0 RSS=0,0,0,0'
Reboot the system for the settings to take effect; they will apply to all NICs managed by the qedentv driver.
Source: https://kb.vmware.com/s/article/68147
5.3.2.4. How to disable Netqueue in VMkernel
NetQueue can also be completely disabled in the VMkernel:
esxcli system settings kernel set --setting="netNetqueueEnabled" --value="FALSE"
Free Complete Python - A step towards Data Science
RinaMondal9
 
Microsoft - Power Platform_G.Aspiotis.pdf
Microsoft - Power Platform_G.Aspiotis.pdfMicrosoft - Power Platform_G.Aspiotis.pdf
Microsoft - Power Platform_G.Aspiotis.pdf
Uni Systems S.M.S.A.
 
Video Streaming: Then, Now, and in the Future
Video Streaming: Then, Now, and in the FutureVideo Streaming: Then, Now, and in the Future
Video Streaming: Then, Now, and in the Future
Alpen-Adria-Universität
 
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance
 
How to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptxHow to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptx
danishmna97
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
Alan Dix
 
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
Neo4j
 
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
Neo4j
 
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
Neo4j
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
Ana-Maria Mihalceanu
 
RESUME BUILDER APPLICATION Project for students
RESUME BUILDER APPLICATION Project for studentsRESUME BUILDER APPLICATION Project for students
RESUME BUILDER APPLICATION Project for students
KAMESHS29
 
GridMate - End to end testing is a critical piece to ensure quality and avoid...
GridMate - End to end testing is a critical piece to ensure quality and avoid...GridMate - End to end testing is a critical piece to ensure quality and avoid...
GridMate - End to end testing is a critical piece to ensure quality and avoid...
ThomasParaiso2
 
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
SOFTTECHHUB
 
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
James Anderson
 
20240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 202420240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 2024
Matthew Sinclair
 
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
James Anderson
 
UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5
DianaGray10
 
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
Neo4j
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
Laura Byrne
 

Recently uploaded (20)

Artificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopmentArtificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopment
 
Free Complete Python - A step towards Data Science
Free Complete Python - A step towards Data ScienceFree Complete Python - A step towards Data Science
Free Complete Python - A step towards Data Science
 
Microsoft - Power Platform_G.Aspiotis.pdf
Microsoft - Power Platform_G.Aspiotis.pdfMicrosoft - Power Platform_G.Aspiotis.pdf
Microsoft - Power Platform_G.Aspiotis.pdf
 
Video Streaming: Then, Now, and in the Future
Video Streaming: Then, Now, and in the FutureVideo Streaming: Then, Now, and in the Future
Video Streaming: Then, Now, and in the Future
 
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
 
How to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptxHow to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptx
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
 
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
 
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
 
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
 
RESUME BUILDER APPLICATION Project for students
RESUME BUILDER APPLICATION Project for studentsRESUME BUILDER APPLICATION Project for students
RESUME BUILDER APPLICATION Project for students
 
GridMate - End to end testing is a critical piece to ensure quality and avoid...
GridMate - End to end testing is a critical piece to ensure quality and avoid...GridMate - End to end testing is a critical piece to ensure quality and avoid...
GridMate - End to end testing is a critical piece to ensure quality and avoid...
 
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
 
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
 
20240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 202420240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 2024
 
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
 
UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5
 
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
 

Network performance test plan_v0.3

  • 1. © VCDX 200 - https://www.vcdx200.com/ VMware NIC Performance Test Plan Prepared by David Pasek, VMware TAM dpasek@vmware.com
  • 2. Version History Date Rev. Author Description Reviewers 2020 Oct 29 0.1 David Pasek Initial Draft. Simple tests. More complex tests and multiple test combination can be tested in case of not seeing any performance issues. 2020 Nov 06 0.2 David Pasek NUTTCP test method fixed. In guest (vm2vm) iperf TCP test In guest (vm2vm) iperf UDP test Two Load balancer simulation tests (without RSS, with RSS) 2020 Nov 24 0.3 David Pasek Published as open source
  • 3. Contents 1. Overview..........................................................................................4 2. Requirements, Constraints and Assumptions ...............................5 2.1 Requirements ........................................................................................................5 2.2 Constraints ............................................................................................................5 2.3 Assumptions..........................................................................................................5 3. Test Lab Environments...................................................................6 3.1 ESXi hosts - Hardware Specifications......................................................................6 3.2 Virtual Machines - Hardware and App Specifications ................................................6 3.3 Lab Architecture.....................................................................................................7 4. Test Plan..........................................................................................8 4.1 VMKernel - TCP (iperf) communication between two ESXi host consoles ..................9 4.2 VM - TCP (nuttcp) communication of 2 VMs across two ESXi hosts ........................10 4.3 VM - TCP (iperf) communication of 2 VMs across two ESXi hosts ...........................11 4.4 VM - UDP (nuttcp 64 KB) communication of 2 VMs across two ESXi hosts ..............12 4.5 VM - UDP (iperf 64 KB) communication of 2 VMs across two ESXi hosts.................13 4.6 VM - HTTP (Nginx) communication of 2 VMs across two ESXi hosts .......................14 4.7 VM - HTTPS (Nginx) communication of 2 VMs across two ESXi hosts .....................15 4.8 VM - HTTP communication across two ESXi hosts via LoadBalancer (no RSS) .......16 4.9 VM - HTTP communication across two ESXi hosts via LoadBalancer (RSS) ............18 5. Appendixes................................................................................... 20 5.1 Useful commands and tools for test cases.............................................................20 5.2 Diagnostic commands ..........................................................................................29 5.3 ESX commands to manage NIC Offloading Capabilities .........................................32
  • 4. 1. Overview This document contains testing procedures to verify that the implemented design successfully addresses customer requirements and expectations. This document assumes that the person performing these tests has a basic understanding of VMware vSphere and is familiar with vSphere lab design and environment. This document is not intended for administrators or testers who have no prior knowledge of VMware vSphere concepts and terminology.
receive)
2. VM to VM Network HTTP communication across two ESXi hosts has to achieve at least 5 Gbps (~500 MB/s) transmit/receive throughput bi-directionally (5 Gbps transmit and 5 Gbps receive)
3. VM to VM Network HTTPS communication across two ESXi hosts has to achieve at least 5 Gbps (~500 MB/s) transmit/receive throughput bi-directionally (5 Gbps transmit and 5 Gbps receive)
2.2 Constraints
1. Hardware
   a. 4x HPE DL560 Gen10 (BIOS: U34)
   b. 4x Intel NIC X710 (Firmware Version: 10.51.5, Driver Version: 1.9.5)
   c. 4x QLogic FastLinQ QL41xxx
2. People
   a. Testers having access to the lab environment
3. Processes
   a. VPN access to the lab environment
2.3 Assumptions
1. Hardware
   a. We will have 2 ESXi hosts with Intel X710 NICs
   b. We will have 2 ESXi hosts with QLogic FastLinQ QL41xxx NICs
2. We will get VPN access to the lab environment
3. We will be able to use a Linux operating system as the Guest OS within the VMs and install the testing software (nuttcp, iperf, nginx, wrk, iftop)
3. Test Lab Environments
3.1 ESXi hosts - Hardware Specifications
All ESXi hosts should be on the following system baseline:
 Server Platform: HPE ProLiant DL560 Gen10
 BIOS: U34 | Date (ISO-8601): 2020-04-08
 OS/Hypervisor: VMware ESXi 6.7.0 build-16075168 (6.7 U3)
The following four ESXi host specifications are used for testing.
1. ESX01-INTEL
   a. CPU: TBD
   b. RAM: TBD
   c. NIC: Intel X710, driver i40en version 1.9.5, firmware 10.51.5
   d. STORAGE: any storage for test VMs
2. ESX02-INTEL
   a. CPU: TBD
   b. RAM: TBD
   c. NIC: Intel X710, driver i40en version 1.9.5, firmware 10.51.5
   d. STORAGE: any storage for test VMs
3. ESX01-QLOGIC
   a. CPU: TBD
   b. RAM: TBD
   c. NIC: QLogic QL41xxx, driver qedentv version 3.11.16.0, firmware mfw 8.52.9.0 storm 8.38.2.0
   d. STORAGE: any storage for test VMs
4. ESX02-QLOGIC
   a. CPU: TBD
   b. RAM: TBD
   c. NIC: QLogic QL41xxx, driver qedentv version 3.11.16.0, firmware mfw 8.52.9.0 storm 8.38.2.0
   d. STORAGE: any storage for test VMs
Note: The four ESXi hosts above can be consolidated into two ESXi hosts, where each ESXi host has both an Intel and a QLogic NIC.
3.2 Virtual Machines - Hardware and App Specifications
1. APP-SERVER-01 - Application Server
   o Hardware: 8 vCPU, 4 GB RAM, NIC (VMXNET3), MTU 1500
   o OS: Linux RHEL 7 or CentOS 7
   o App: IFTOP, NUTTCP, NGINX
2. APP-SERVER-02 - Application Server
   o Hardware: 8 vCPU, 4 GB RAM, NIC (VMXNET3), MTU 1500
   o OS: Linux RHEL 7 or CentOS 7
   o App: IFTOP, NUTTCP, NGINX
3. APP-SERVER-03 - Application Server
   o Hardware: 8 vCPU, 4 GB RAM, NIC (VMXNET3), MTU 1500
   o OS: Linux RHEL 7 or CentOS 7
   o App: IFTOP, NUTTCP, NGINX
4. APP-SERVER-04 - Application Server
   o Hardware: 8 vCPU, 4 GB RAM, NIC (VMXNET3), MTU 1500
   o OS: Linux RHEL 7 or CentOS 7
   o App: IFTOP, NUTTCP, NGINX
5. APP-CLIENT-01 - Application Client
   o Hardware: 8 vCPU, 4 GB RAM, NIC (VMXNET3), MTU 1500
   o OS: Linux RHEL 7 or CentOS 7
   o App: IFTOP, NUTTCP, WRK, TASKSET
6. APP-LB-01 - Load Balancer
   o Hardware: 8 vCPU, 4 GB RAM, NIC (VMXNET3), MTU 1500
   o OS: Linux RHEL 7 or CentOS 7
   o App: IFTOP, NUTTCP, NGINX
3.3 Lab Architecture
vSphere Cluster DRS rules are used to pin the Server, Client and LoadBalancer VMs to particular ESXi hosts.
4. Test Plan
All tests in this test plan should be executed with different NIC hardware offload methods enabled and disabled. These methods are:
 Enable/Disable LRO
 Enable/Disable RSS
 Enable/Disable NetQueue
All performance results should be analyzed and discussed within the expert group.
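To keep the enable/disable test matrix repeatable, the offload state can be captured and switched from the ESXi shell before each run. The lines below are a minimal sketch using only the esxcli commands documented in sections 5.2 and 5.3; vmnic0 is a placeholder for the uplink under test, and RSS/NetQueue toggling remains driver specific as described in section 5.3.2.
# Record the current offload state before a test run (vmnic0 is a placeholder)
esxcli network nic tso get
esxcli network nic cso get
esxcli system settings advanced list -o /Net/TcpipDefLROEnabled
# Disable LRO for the VMkernel TCP/IP stack before an "LRO off" run ...
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0
# ... and re-enable it afterwards
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 1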
4.1 VMKernel - TCP (iperf) communication between two ESXi host consoles
Test Name: VMKernel - TCP (iperf) communication between two ESXi host consoles
Success Criteria: At least 5 Gbps (~500 MB/s) throughput
Test scenario / runbook:
Run the iperf listener on the console of the ESXi host running APP-SERVER-01:
esxcli network firewall set --enabled false
/usr/lib/vmware/vsan/bin/iperf3.copy -s -B [APP-SERVER-01 vMotion IP]
Run the iperf client on the console of the ESXi host running APP-CLIENT-01:
esxcli network firewall set --enabled false
/usr/lib/vmware/vsan/bin/iperf3.copy -t 300 -c [APP-SERVER-01 vMotion IP]
Use VM network monitoring in vSphere Client to see network throughput.
Write down achieved results from the iperf utility into the Results row below.
After the test, re-enable the firewall:
esxcli network firewall set --enabled true
Tester:
Results: 5.91 Gbits/sec
Comments: Test duration: 5 minutes
Succeed: Yes | No | Partially
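The runbook above measures a single direction. If the bi-directional part of the requirements needs to be verified explicitly, a second run in the opposite direction can be added; the sketch below assumes the bundled iperf3 build supports the standard --reverse (-R) option, which should be confirmed on the host before relying on it.
# Hypothetical reverse-direction run from the client host console
/usr/lib/vmware/vsan/bin/iperf3.copy -t 300 -R -c [APP-SERVER-01 vMotion IP]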
4.2 VM - TCP (nuttcp) communication of 2 VMs across two ESXi hosts
Test Name: VM - TCP (nuttcp) communication of 2 VMs across two ESXi hosts
Success Criteria: At least 8 Gbps (~800 MB/s) throughput
Note: 20 Gbps should be achievable in a pure software stack.
Test scenario / runbook:
Run the TCP listener on APP-SERVER-01:
nuttcp -S -P 5000 -N 20
Run the TCP traffic generator on APP-CLIENT-01:
nuttcp -t -N 4 -P 5000 -T 300 APP-SERVER-01
Use iftop to see achieved traffic.
Use VM network monitoring in vSphere Client to see network throughput.
Results:
 Write down nuttcp reported results into the test results below.
Tester:
Results: 9323.1294 Mbps = 9.1 Gbps
Comments:
Succeed: Yes | No | Partially
4.3 VM - TCP (iperf) communication of 2 VMs across two ESXi hosts
Test Name: VM - TCP (iperf) communication of 2 VMs across two ESXi hosts
Success Criteria: At least 8 Gbps (~800 MB/s) throughput
Note: 20 Gbps should be achievable in a pure software stack.
Test scenario / runbook:
Run the TCP listener on APP-SERVER-01:
iperf3 -s
Run the TCP traffic generator on APP-CLIENT-01:
iperf3 -t 300 -b 25g -P 4 -c [APP-SERVER-01]
Use iftop to see achieved traffic.
Use VM network monitoring in vSphere Client to see network throughput.
Results:
 Write down iperf reported results into the test results below.
Tester:
Results: 9.39 Gbps
Comments:
Succeed: Yes | No | Partially
4.4 VM - UDP (nuttcp 64 KB) communication of 2 VMs across two ESXi hosts
Test Name: VM - UDP (nuttcp 64 KB) communication of 2 VMs across two ESXi hosts
Success Criteria: At least 8 Gbps (~800 MB/s) throughput
Note: 8 Gbps should be achievable in a pure software stack.
Test scenario / runbook:
Run the nuttcp server on APP-SERVER-01:
nuttcp -S -P 5000 -N 20
Run the UDP traffic generator on APP-CLIENT-01:
nuttcp -u -Ru -l65507 -N 4 -P 5000 -T 300 -i APP-SERVER-01
Use iftop to see achieved traffic.
Use VM network monitoring in vSphere Client to see network throughput.
Results:
 Write down nuttcp reported results into the test results below.
Tester:
Results: 9315.9779 Mbps = 9.1 Gbps
1116.6931 MB / 1.00 sec = 9367.3418 Mbps 0 / 17875 ~drop/pkt 0.00 ~%loss
1119.2545 MB / 1.00 sec = 9388.5554 Mbps 3 / 17919 ~drop/pkt 0.01674 ~%loss
333165.1325 MB / 300.00 sec = 9315.9779 Mbps 49 %TX 69 %RX 34816 / 5367818 drop/pkt 0.65 %loss
Comments:
Succeed: Yes | No | Partially
4.5 VM - UDP (iperf 64 KB) communication of 2 VMs across two ESXi hosts
Test Name: VM - UDP (iperf 64 KB) communication of 2 VMs across two ESXi hosts
Success Criteria: At least 8 Gbps (~800 MB/s) throughput
Note: 8 Gbps should be achievable in a pure software stack.
Test scenario / runbook:
Run the iperf server on APP-SERVER-01:
iperf3 -s
Run the UDP traffic generator on APP-CLIENT-01:
iperf3 -u -t 300 -b 25g -P 4 -l 65507 -c [APP-SERVER-01]
Use iftop to see achieved traffic.
Use VM network monitoring in vSphere Client to see network throughput.
Results:
 Write down iperf reported results into the test results below.
Tester:
Results: 7.91 Gbps
Comments:
[root@server-U19 /]# iperf3 -c 10.226.97.155 -u -t 300 -b 25g -P 4 -l 65507
[SUM] 0.00-300.00 sec 276 GBytes 7.91 Gbits/sec 0.062 ms 24936/4527389 (0.55%)
Succeed: Yes | No | Partially
4.6 VM - HTTP (Nginx) communication of 2 VMs across two ESXi hosts
Test Name: VM - HTTP (nginx) communication of 2 VMs across two ESXi hosts
Success Criteria: At least 8 Gbps (~800 MB/s) throughput
Note: 22 Gbps should be achievable in a pure software stack.
Test scenario / runbook:
Install and run Nginx (see section 5.1.11) on APP-SERVER-01.
Create a test file on APP-SERVER-01:
cd /usr/share/nginx/html
dd if=/dev/urandom of=1M.txt bs=1M count=1
Install wrk and taskset on APP-CLIENT-01 (see sections 5.1.13 and 5.1.14).
Run wrk on APP-CLIENT-01 to generate HTTP traffic against APP-SERVER-01:
taskset -c 0-8 /root/wrk -t 8 -c 8 -d 300s http://[APP-SERVER-01]/1M.txt
Use iftop to see achieved HTTP traffic.
Use VM network monitoring in vSphere Client to see network throughput.
Results:
 Write down WRK reported results (Gbps) into the test results below
 Write down the context - wrk latency, requests/sec, transfer/sec
Tester:
Results: 660.16MB = 5252.28 Mbps = 5.15 Gbps
Comments:
[root@server-U19 /]# taskset -c 0-8 /bin/wrk -t 8 -c 8 -d 300s http://10.226.97.155/1M.txt
198016 requests in 5.00m, 193.43GB read
Requests/sec: 659.98
Transfer/sec: 660.16MB
Advanced testing:
 The test is uni-directional. A bi-directional test would require a Lua script for wrk.
 We do not use the taskset utility, which can be used to pin threads to logical CPUs.
 We will do advanced testing in phase 2 based on observed results.
Succeed: Yes | No | Partially
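Before starting a 5-minute wrk run, it can be worth confirming that the test file is actually served and measuring a single-transfer baseline. This is an optional sanity check, not part of the original runbook; the URL placeholder mirrors the one used above.
# Optional sanity check from APP-CLIENT-01
curl -s -o /dev/null -w "HTTP %{http_code}, %{size_download} bytes in %{time_total}s\n" http://[APP-SERVER-01]/1M.txt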
4.7 VM - HTTPS (Nginx) communication of 2 VMs across two ESXi hosts
Test Name: VM - HTTPS (nginx) communication of 2 VMs across two ESXi hosts
Success Criteria: At least 8 Gbps (~800 MB/s) throughput
Note: 22 Gbps should be achievable in a pure software stack. The Intel AES-NI CPU instructions accelerate SSL, so the encryption penalty should be mitigated.
Test scenario / runbook:
Install and run Nginx (see section 5.1.11) on APP-SERVER-01.
Create a test file on APP-SERVER-01:
cd /usr/share/nginx/html
dd if=/dev/urandom of=1M.txt bs=1M count=1
Install wrk and taskset on APP-CLIENT-01 (see sections 5.1.13 and 5.1.14).
Run wrk on APP-CLIENT-01 to generate HTTPS traffic against APP-SERVER-01:
taskset -c 0-8 /root/wrk -t 8 -c 8 -d 300s https://[APP-SERVER-01]/1M.txt
Use iftop to see achieved HTTPS traffic.
Use VM network monitoring in vSphere Client to see network throughput.
Results:
 Write down WRK reported results (Gbps) into the test results below
 Write down the context - wrk latency, requests/sec, transfer/sec
Tester:
Results: 1.09 GB = 8.72 Gbps
Comments:
335140 requests in 5.00m, 327.36GB read
Requests/sec: 1116.99
Transfer/sec: 1.09GB
Succeed: Yes | No | Partially
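Because the Nginx instance uses a self-signed certificate (section 5.1.11), a quick TLS check before the benchmark can rule out certificate or cipher issues. This is an optional pre-check, not part of the original runbook; the hostname placeholder matches the one used above.
# Optional TLS sanity check from APP-CLIENT-01: confirm the self-signed certificate is being served
echo | openssl s_client -connect [APP-SERVER-01]:443 2>/dev/null | openssl x509 -noout -subject -dates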
4.8 VM - HTTP communication across two ESXi hosts via LoadBalancer (no RSS)
Test Name: VM - HTTP communication across two ESXi hosts via LoadBalancer, no RSS
Success Criteria: At least 4 Gbps (~400 MB/s) throughput
Note: 4 Gbps should be achievable in a pure software stack.
Test scenario / runbook:
Prepare the test environment as depicted in section 3.3.
Install and run Nginx (see section 5.1.11) on four servers: APP-SERVER-01, APP-SERVER-02, APP-SERVER-03, APP-SERVER-04.
Configure DRS rules to keep the servers and the client on one ESXi host and the load balancer on the other one.
Create test files on the APP-SERVERs:
cd /usr/share/nginx/html
dd if=/dev/urandom of=1M.txt bs=1M count=1
Install wrk and taskset on APP-CLIENT-01 (see sections 5.1.13 and 5.1.14).
Install the HTTP L7 Load Balancer APP-LB-01 (see section 5.1.12) with four load balancer members: APP-SERVER-01, APP-SERVER-02, APP-SERVER-03, APP-SERVER-04.
Do NOT enable RSS (this is the default configuration) in virtual machine APP-LB-01. See section 5.1.15.
Run wrk on APP-CLIENT-01 to generate traffic against APP-LB-01:
taskset -c 0-8 /root/wrk -t 8 -c 8 -d 300s http://[APP-LB-01]/1M.txt
Use iftop to see achieved HTTP traffic.
Use VM network monitoring in vSphere Client to see network throughput.
Results:
 Write down WRK reported results (Gbps) into the test results below
 Write down the context - wrk latency, requests/sec, transfer/sec
Tester:
Results: 635.97 MBps = 5087 Mbps = 4.97 Gbps
Comments:
Advanced testing:
 The test is uni-directional. A bi-directional test would require a Lua script for wrk.
 We do not use the taskset utility, which can be used to pin threads to logical CPUs.
 We will do advanced testing in phase 2 based on observed results.
Succeed: Yes | No | Partially
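A concrete load balancer configuration for this test could look like the sketch below, which simply applies the upstream example from section 5.1.12 to the four member servers. The file path, the upstream name app_servers, and the use of hostnames instead of IPs are assumptions; adjust them to the lab's naming and DNS.
# Hypothetical sketch: write a reverse-proxy config on APP-LB-01 based on section 5.1.12
cat > /etc/nginx/conf.d/lb.conf <<'EOF'
upstream app_servers {
    server APP-SERVER-01;
    server APP-SERVER-02;
    server APP-SERVER-03;
    server APP-SERVER-04;
}
server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}
EOF
# Note: the stock default server block in /etc/nginx/nginx.conf may need to be removed or adjusted
# so that this block receives the requests addressed to the load balancer IP.
nginx -t && systemctl restart nginx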
4.9 VM - HTTP communication across two ESXi hosts via LoadBalancer (RSS)
Test Name: VM - HTTP communication across two ESXi hosts via LoadBalancer, RSS
Success Criteria: At least 4 Gbps (~400 MB/s) throughput
Note: 4 Gbps should be achievable in a pure software stack.
Test scenario / runbook:
Prepare the test environment as depicted in section 3.3.
Install and run Nginx (see section 5.1.11) on four servers: APP-SERVER-01, APP-SERVER-02, APP-SERVER-03, APP-SERVER-04.
Configure DRS rules to keep the servers and the client on one ESXi host and the load balancer on the other one.
Create test files on the APP-SERVERs:
cd /usr/share/nginx/html
dd if=/dev/urandom of=1M.txt bs=1M count=1
Install wrk and taskset on APP-CLIENT-01 (see sections 5.1.13 and 5.1.14).
Install the HTTP L7 Load Balancer APP-LB-01 (see section 5.1.12) with four load balancer members: APP-SERVER-01, APP-SERVER-02, APP-SERVER-03, APP-SERVER-04.
Enable RSS in virtual machine APP-LB-01. See section 5.1.15.
Run wrk on APP-CLIENT-01 to generate traffic against APP-LB-01:
taskset -c 0-8 /root/wrk -t 8 -c 8 -d 300s http://[APP-LB-01]/1M.txt
Use iftop to see achieved HTTP traffic.
Use VM network monitoring in vSphere Client to see network throughput.
Results:
 Write down WRK reported results (Gbps) into the test results below
 Write down the context - wrk latency, requests/sec, transfer/sec
Tester:
Results: 664 MBps = 5312 Mbps = 5.18 Gbps
Comments: Test duration: 5 minutes
Advanced testing:
 The test is uni-directional. A bi-directional test would require a Lua script for wrk.
 We do not use the taskset utility, which can be used to pin threads to logical CPUs.
 We will do advanced testing in phase 2 based on observed results.
Succeed: Yes | No | Partially
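Before running the RSS variant, it may help to confirm that the advanced settings from section 5.1.15 were actually written to the APP-LB-01 configuration. The sketch below is a hypothetical check from the ESXi shell; the datastore path is a placeholder, and ethernet0 is assumed to be the benchmarked vNIC.
# Hypothetical check: confirm the RSS-related advanced settings are present in the VM configuration
grep -iE "pnicFeatures|ctxPerDev|udpRSS" /vmfs/volumes/[DATASTORE]/APP-LB-01/APP-LB-01.vmx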
5. Appendixes
5.1 Useful commands and tools for test cases
This section documents the procedures and commands used for the test cases. All commands target the RedHat 7 (or CentOS 7) Linux operating system, which is a standard Linux distribution.
5.1.1 Network settings
Information source: https://wiki.centos.org/FAQ/CentOS7
System files with network settings:
 /etc/hostname
 /etc/resolv.conf
 /etc/sysconfig/network
   o Common network settings
      GATEWAY=10.16.1.1
      DNS1=10.20.30.10
      DNS2=10.20.40.10
 /etc/sysconfig/network-scripts/ifcfg-eth0
   o IP settings for interface eth0
      DHCP
        BOOTPROTO=dhcp
      Static IP
        BOOTPROTO=static
        IPADDR=10.16.1.106
        IPADDR1=10.16.1.107 (alias IP)
        IPADDR2=10.16.1.108 (alias IP)
        NETMASK=255.255.255.0
 /etc/hosts
   o local hostname/IP resolution
To apply the network settings use the following command:
systemctl restart network
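As an illustration, the static-IP parameters listed above could be combined into a single interface file as in the minimal sketch below. The interface name eth0 and the addresses are the placeholders already used in this section.
# Example /etc/sysconfig/network-scripts/ifcfg-eth0 for a static IP (values are placeholders)
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.16.1.106
NETMASK=255.255.255.0
EOF
systemctl restart network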
5.1.2 NTP
To install and configure NTPD use the following commands:
yum install ntp
systemctl start ntpd
systemctl enable ntpd
To set the timezone, create a symbolic link from /etc/localtime to /usr/share/zoneinfo/…
ln -s /usr/share/zoneinfo/Europe/Prague /etc/localtime
To check the current timezone setting, just list the symlink:
ls -la /etc/localtime
5.1.3 Open-VM-Tools
VMware Tools are usually installed in CentOS 7 by default, but just in case, here is the install procedure:
sudo yum install -y open-vm-tools
sudo systemctl start vmtoolsd
sudo systemctl status vmtoolsd
sudo systemctl enable vmtoolsd
5.1.4 Firewall
To disable firewall services on RedHat Linux use the following commands:
systemctl stop firewalld.service
systemctl disable firewalld.service
and to check the firewall status use:
systemctl status firewalld.service
5.1.5 SElinux
To disable SElinux on RedHat Linux, edit the file /etc/selinux/config, change the parameter SELINUX to disabled and restart the system.
vi /etc/selinux/config
SELINUX=disabled
5.1.6 EPEL
Configure CentOS or Red Hat Enterprise Linux (RHEL) version 7.x to use the Fedora Extra Packages for Enterprise Linux (EPEL) repository:
yum install -y epel-release
5.1.7 Open-vm-tools
Check if open-vm-tools is installed:
yum list installed | grep open-vm
In case VMware Tools are not installed, install them:
yum install open-vm-tools
5.1.8 NUTTCP performance test tool
NUTTCP is a network performance measurement tool intended for use by network and system managers. Its most basic usage is to determine the raw TCP (or UDP) network layer throughput by transferring memory buffers from a source system across an interconnecting network to a destination system, either transferring data for a specified time interval, or alternatively transferring a specified number of bytes. In addition to reporting the achieved network throughput in Mbps, nuttcp also provides additional useful information related to the data transfer, such as user, system, and wall-clock time, transmitter and receiver CPU utilization, and loss percentage (for UDP transfers).
Assumptions
 EPEL repository is accessible
Installation on RHEL 7
yum install --enablerepo=Unsupported_EPEL nuttcp
Installation on CentOS 7
yum install -y epel-release
yum install -y nuttcp
Usage
The server part is started by the following command:
nuttcp -S -N 12
The client part is started by one of the following commands:
nuttcp -t -N 12 czchoapint092
cat /dev/zero | nuttcp -t -s -N 12 czchoapint092
cat /dev/urandom | nuttcp -t -s -N 12 czchoapint092
5.1.9 IPERF performance test tool
iperf3 is a tool for performing network throughput measurements. It can test either TCP or UDP throughput. To perform an iperf3 test the user must establish both a server and a client.
Assumptions
 EPEL repository is accessible
Installation on RHEL 7
yum install --enablerepo=Unsupported_EPEL iperf3
Installation on CentOS 7
yum install -y epel-release
yum install -y iperf3
Usage
The server part is started by the following command:
iperf3 -s
The client part is started by one of the following commands:
iperf3 -c 192.168.11.51 -u -t 300 -b 25g -P 4
Parameters -P and -b can be tuned to achieve minimal packet loss.
5.1.10 IFTOP - performance monitoring tool
iftop displays bandwidth usage on an interface by host.
Assumptions
 EPEL repository is accessible
Installation on CentOS 7
yum install -y iftop
Installation on RHEL 7
yum install --enablerepo=Unsupported_EPEL iftop
Usage
# show interfaces
ip link
# use the desired interface for iftop
iftop -i <INTERFACE>
5.1.11 NGINX - http/https server, load balancer
The CentOS 7 Nginx install procedure is based on the tutorials at
https://phoenixnap.com/kb/how-to-install-nginx-on-centos-7 and
https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-on-centos-7
Other resources for NGINX performance tuning:
 https://www.nginx.com/blog/performance-tuning-tips-tricks/
 https://www.tweaked.io/guide/nginx-proxying/
Assumptions
 Firewall is disabled
 SElinux is disabled
 Sudo or root privileges
Installation instructions
sudo yum -y update
sudo yum install -y epel-release
sudo yum install -y nginx
sudo systemctl start nginx
sudo systemctl status nginx
sudo systemctl enable nginx
Website content (default server root) is in the directory /usr/share/nginx/html
The default server block configuration file is located at /etc/nginx/conf.d/default.conf
The global configuration is in /etc/nginx/nginx.conf
Configure SSL Certificate and enable HTTPS
mkdir /etc/ssl/private
chmod 700 /etc/ssl/private
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/nginx-selfsigned.key -out /etc/ssl/certs/nginx-selfsigned.crt
openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
vi /etc/nginx/conf.d/ssl.conf
server {
    listen 443 http2 ssl;
    listen [::]:443 http2 ssl;
    server_name server_IP_address;
    ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    root /usr/share/nginx/html;
    location / {
        autoindex on;
    }
    error_page 404 /404.html;
    location = /404.html {
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
Test if the Nginx syntax is correct:
nginx -t
Restart Nginx:
systemctl restart nginx
5.1.12 NGINX - http/https L7 load balancer (reverse proxy)
Install the NGINX package as documented in the previous section.
The load balancer function can be configured in the NGINX global configuration file /etc/nginx/nginx.conf
Use the simplest configuration for load balancing with nginx, like the following example:
http {
    upstream myapp1 {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://myapp1;
        }
    }
}
Source: http://nginx.org/en/docs/http/load_balancing.html
5.1.13 WRK - http benchmarking
wrk is a modern HTTP benchmarking tool capable of generating significant load when run on a single multi-core CPU. It combines a multithreaded design with scalable event notification systems such as epoll and kqueue.
https://github.com/wg/wrk
https://github.com/wg/wrk/wiki/Installing-Wrk-on-Linux
Install procedure on CentOS
sudo yum -y update
yum groupinstall 'Development Tools'
yum install -y openssl-devel git
git clone https://github.com/wg/wrk.git wrk
cd wrk
make
cp wrk /somewhere/in/your/PATH
Advanced benchmarking with wrk
WRK supports Lua scripts for more advanced benchmarking.
Resources
 Quick Start for the http Pressure Tool wrk
   o https://programmer.ink/think/quick-start-for-the-http-pressure-tool-wrk.html
 POST request with wrk?
   o https://stackoverflow.com/questions/15261612/post-request-with-wrk
 Benchmark testing of OSS with ab and wrk tools
   o https://www.alibabacloud.com/forum/read-497
 Intelligent benchmark with wrk
   o https://medium.com/@felipedutratine/intelligent-benchmark-with-wrk-163986c1587f
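As a small illustration of the scripting mechanism the resources above describe, the sketch below creates a Lua script and passes it to wrk with the -s option. The script body uses the documented wrk Lua variables (wrk.method, wrk.body, wrk.headers); the /echo endpoint and the payload are hypothetical placeholders, not part of this test plan.
# Hypothetical sketch: drive a POST workload with wrk using a Lua script
cat > post.lua <<'EOF'
wrk.method = "POST"
wrk.body   = "payload=test"
wrk.headers["Content-Type"] = "application/x-www-form-urlencoded"
EOF
taskset -c 0-8 /root/wrk -t 8 -c 8 -d 60s -s post.lua http://[APP-SERVER-01]/echo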
5.1.14 TASKSET
The taskset tool is provided by the util-linux package. It allows administrators to retrieve and set the processor affinity of a running process, or launch a process with a specified processor affinity.
Install procedure on CentOS
sudo yum -y update
yum install -y util-linux
5.1.15 VMware Virtual Machine RSS configuration
You have to configure virtual machine advanced settings to enable RSS in a virtual machine. The following advanced settings must be added into the .vmx file or the advanced configuration of the particular VM to enable multi-queue support:
 ethernetX.pnicFeatures = "4"  <<< Enable multi-queue (NetQueue RSS) for the particular VM
 ethernetX.ctxPerDev = "3"  <<< Allow multiple TX threads for the particular VM
 ethernetX.udpRSS = "1"  <<< Receive Side Scaling (RSS) for UDP
Note 1: RSS has to be enabled end to end: NIC driver (driver specific) -> VMkernel (enabled by default) -> Virtual Machine Advanced Settings (disabled by default) -> Guest OS vNIC (enabled by default on vmxnet3 with open-vm-tools). The following command validates that RSS is enabled in the VMkernel and driver for a particular physical NIC (vmnic1):
vsish -e cat /net/pNics/vmnic1/rxqueues/info
Note 2: You have to enable and configure RSS in the guest OS in addition to the VMkernel driver module. Multi-queueing is enabled by default in a Linux guest OS when the latest VMware Tools version (version 1.0.24.0 or later) is installed or when the Linux VMXNET3 driver version 1.0.16.0-k or later is used. Prior to these versions, you were required to manually enable multi-queue or RSS support. Be sure to check the driver and version used to verify whether your Linux OS has RSS support enabled by default. The guest OS driver version within the Linux OS can be checked with the following command:
# modinfo vmxnet3
You can determine the number of Tx and Rx queues allocated for a VMXNET3 driver by running the ethtool console command in the Linux guest operating system:
ethtool -S ens192
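In addition to ethtool -S, the queue count and the interrupt spread can be checked directly in the guest. The lines below are an optional verification sketch, assuming the vmxnet3 interface is named ens192 as in the example above; ethtool -l depends on channel support in the guest's vmxnet3 driver and may report "Operation not supported" on older kernels.
# Show configured vs. maximum combined channels (queues) on the vmxnet3 vNIC, if supported
ethtool -l ens192
# Check that receive interrupts are spread across multiple queues/CPUs
grep ens192 /proc/interrupts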
5.2 Diagnostic commands
This section documents diagnostic commands which should be run on each system to understand the implementation details of NIC offload capabilities and network traffic queueing.
ESXCLI commands are described in the ESXCLI documentation:
https://code.vmware.com/docs/11743/esxi-7-0-esxcli-command-reference/namespace/esxcli_network.html
For further detail about the diagnostic commands, you can watch the vmkernel log during execution of the commands below, as there can be interesting output from the NIC driver.
tail -f /var/log/vmkernel.log
5.2.1 ESXi Inventory
Collect hardware and ESXi inventory details.
esxcli system version get
esxcli hardware platform get
esxcli hardware cpu global get
smbiosDump
Web browser: https://192.168.4.121/cgi-bin/esxcfg-info.cgi
5.2.2 Driver information
NIC inventory
esxcli network nic get -n <VMNIC>
NIC device info
vmkchdev -l | grep vmnic
Document VID:DID:SVID:SDID
To list all VIB modules and understand which drivers are "Inbox" (aka native VMware) or "Async" (from partners like Intel or Marvell/QLogic):
esxcli software vib list
5.2.3 Driver module settings
Identify the NIC driver module name:
esxcli network nic get -n vmnic0
Show driver module parameters:
esxcli system module parameters list -m <DRIVER-MODULE-NAME>
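For example, for the Intel X710 uplinks used in this lab (driver i40en per section 3.1), the two commands above could be chained as in the sketch below; vmnic0 is a placeholder for the uplink backed by the X710.
# Confirm which driver backs the uplink, then list that module's parameters
esxcli network nic get -n vmnic0 | grep -i driver
esxcli system module parameters list -m i40en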
5.2.4 TSO
To verify that your pNIC supports TSO and whether it is enabled on your ESXi host:
esxcli network nic tso get
5.2.5 LRO
To display the current LRO configuration values:
esxcli system settings advanced list -o /Net/TcpipDefLROEnabled
Check the length of the LRO buffer by using the following esxcli command:
esxcli system settings advanced list -o /Net/VmxnetLROMaxLength
To check the VMXNET3 settings in relation to LRO, the following commands (hardware LRO, software LRO) can be issued:
esxcli system settings advanced list -o /Net/Vmxnet3HwLRO
esxcli system settings advanced list -o /Net/Vmxnet3SwLRO
5.2.6 CSO (Checksum Offload)
To verify that your pNIC supports Checksum Offload (CSO) on your ESXi host:
esxcli network nic cso get
5.2.7 Net Queue Count
Get the netqueue count on a NIC:
esxcli network nic queue count get
5.2.8 Net Filter Classes
List the netqueue-supported filter classes of all physical NICs currently installed and loaded on the system:
esxcli network nic queue filterclass list
5.2.9 List the load balancer settings
List the load balancer settings of all the installed and loaded physical NICs (S: supported, U: unsupported, N: not applicable, A: allowed, D: disallowed):
esxcli network nic queue loadbalancer list
5.2.10 Details of netqueue balancer plugins
Details of netqueue balancer plugins on all physical NICs currently installed and loaded on the system:
esxcli network nic queue loadbalancer plugin list
5.2.11 Net Queue balancer state
Netqueue balancer state of all physical NICs currently installed and loaded on the system:
esxcli network nic queue loadbalancer state list
5.2.12 RX/TX ring buffer current parameters
Get the current RX/TX ring buffer parameters of a NIC:
esxcli network nic ring current get
5.2.13 RX/TX ring buffer parameters max values
Get the preset maximums for RX/TX ring buffer parameters of a NIC:
esxcli network nic ring preset get -n vmnic0
5.2.14 SG (Scatter and Gather)
Scatter and Gather (Vectored I/O) is a concept that was primarily used in hard disks; it enhances large I/O request performance, if supported by the hardware.
esxcli network nic sg get
5.2.15 List software simulation settings
List software simulation settings of physical NICs currently installed and loaded on the system:
esxcli network nic software list
5.2.16 RSS
We do not see any RSS-related driver parameters; therefore, driver i40en 1.9.5 does not support RSS. On top of that, we have been assured by VMware Engineering that the inbox driver i40en 1.9.5 does not support RSS.
5.2.17 VMkernel software threads per VMNIC
Show the number of VMkernel software threads per VMNIC:
net-stats -A -t vW
vsish
/> cat /world/<WORLD-ID-1-IN-VMNIC>/name
/> cat /world/<WORLD-ID-2-IN-VMNIC>/name
/> cat /world/<WORLD-ID-3-IN-VMNIC>/name
…
/> cat /world/<WORLD-ID-n-IN-VMNIC>/name
5.3 ESX commands to manage NIC Offloading Capabilities
5.3.1 LRO in the ESXi host
By default, a host is configured to use hardware TSO if its NICs support the feature. To check the LRO configuration for the default TCP/IP stack on the ESXi host, execute the following command to display the current LRO configuration values:
esxcli system settings advanced list -o /Net/TcpipDefLROEnabled
You are able to check the length of the LRO buffer by using the following esxcli command:
esxcli system settings advanced list -o /Net/VmxnetLROMaxLength
The LRO features are functional for the guest OS when the VMXNET3 virtual adapter is used. To check the VMXNET3 settings in relation to LRO, the following commands (hardware LRO, software LRO) can be issued:
esxcli system settings advanced list -o /Net/Vmxnet3HwLRO
esxcli system settings advanced list -o /Net/Vmxnet3SwLRO
You can disable LRO for all VMkernel adapters on a host with the command
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0
and enable LRO with
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 1
5.3.2 Netqueue and RSS
5.3.2.1 How to validate RSS is enabled in VMkernel
On a running system, you can check the status of RSS with the following command from the ESXi shell:
vsish -e cat /net/pNics/vmnic1/rxqueues/info
Figure 1: Command to validate if RSS is enabled in VMkernel (the referenced screenshot shows the command output for a 1Gb Intel NIC that does not support NetQueue, and therefore does not support RSS either).
It seems that some drivers enable RSS by default and others do not.
5.3.2.2 How to explicitly enable Netqueue RSS
The procedure to enable RSS is always dependent on the specific driver, because specific parameters have to be passed to the driver module. The information on how to enable RSS for a particular driver should be documented in the specific NIC vendor documentation.
Example for the Intel ixgbe driver:
vmkload_mod ixgbe RSS="4"
To enable the feature on multiple Intel 82599EB SFI/SFP+ 10Gb/s NICs, include another comma-separated 4 for each additional NIC (for example, to enable the feature on three such NICs, you'd run vmkload_mod ixgbe RSS="4,4,4").
Example for the Mellanox nmlx4_en driver:
For Mellanox adapters, the RSS feature can be turned on by reloading the driver with num_rings_per_rss_queue=4.
vmkload_mod nmlx4_en num_rings_per_rss_queue=4
NOTE: After loading the driver with vmkload_mod, you should make vmkdevmgr rediscover the NICs with the following command:
kill -HUP ID
… where ID is the process ID of the vmkdevmgr process.
5.3.2.3 How to disable Netqueue RSS for a particular driver
Disabling Netqueue RSS is also driver specific. It can be done using driver module parameters as shown below. The example assumes there are four qedentv (QLogic NIC) instances.
[root@host:~] esxcfg-module -g qedentv
qedentv enabled = 1 options = ''
[root@host:~] esxcfg-module -s "num_queues=0,0,0,0 RSS=0,0,0,0" qedentv
[root@host:~] esxcfg-module -g qedentv
qedentv enabled = 1 options = 'num_queues=0,0,0,0 RSS=0,0,0,0'
Reboot the system for the settings to take effect; they will apply to all NICs managed by the qedentv driver.
Source: https://kb.vmware.com/s/article/68147
5.3.2.4 How to disable Netqueue in VMkernel
Netqueue can also be totally disabled in the VMkernel:
esxcli system settings kernel set --setting="netNetqueueEnabled" --value="FALSE"
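For completeness, a minimal sketch of the reverse operation; the list command is used only to confirm the current value, and setting the option back to TRUE restores the default NetQueue behaviour.
# Check the current NetQueue kernel setting ...
esxcli system settings kernel list -o netNetqueueEnabled
# ... and re-enable NetQueue after the "NetQueue disabled" test runs
esxcli system settings kernel set --setting="netNetqueueEnabled" --value="TRUE"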