Iperf testing report
“Iperf was developed by NLANR/DAST as a modern alternative for measuring maximum TCP and
UDP bandwidth performance. Iperf allows the tuning of various parameters and UDP
characteristics. Iperf reports bandwidth, delay jitter, datagram loss.”
The above is Iperf’s summary on SourceForge. Iperf is widely used as a network performance tool and is under active development; there’s a new version, Iperf3, hosted on Google Code, which is a reimplementation from scratch, but it doesn’t seem to support UDP yet. Iperf also has a Java GUI front end, Jperf.
Iperf is a classic and widely used tool, even though it uses flooding TCP and UDP traffic, which doesn’t satisfy our needs.
The reasons for testing it are the following:
1. We need a benchmark for our testing, which should be accurate.
2. We need to ensure our environment works fine.
3. Iperf is often used to produce cross traffic in bandwidth testing.
Iperf has been used as a bandwidth measurement tool for many years by many people, so it is a good choice.
Iperf at first glance
Use Iperf to measure the bandwidth
Iperf has both client and server pieces, so it requires installation at both ends of the connection you're measuring. Iperf can send both TCP and UDP packets. For more information on how to use it, please refer to IPerf - The easy tutorial.
Test case: Two machines, no bandwidth throttling
TCP
Measure bi-directional bandwidth
Bi-directional bandwidth is measured (using the ‘-r’ parameter on the client; the two directions are measured sequentially).
Server(10.224.172.117)
D:\bw_test>iperf.exe -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1880] local 10.224.172.117 port 5001 connected with 10.224.172.186 port 2821
[ ID] Interval Transfer Bandwidth
[1880] 0.0-10.0 sec 100 MBytes 84.1 Mbits/sec
------------------------------------------------------------
Client connecting to 10.224.172.186, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1848] local 10.224.172.117 port 3273 connected with 10.224.172.186 port 5001
[ ID] Interval Transfer Bandwidth
[1848] 0.0-10.0 sec 94.5 MBytes 79.2 Mbits/sec
Client(10.224.172.186)
E:\test\bandwidth\software\jperf\release\jperf-2.0.2\bin>iperf.exe -c 10.224.172
.117 -P 1 -t 10 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.224.172.117, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1828] local 10.224.172.186 port 2821 connected with 10.224.172.117 port 5001
[ ID] Interval Transfer Bandwidth
[1828] 0.0-10.0 sec 100 MBytes 83.9 Mbits/sec
[1944] local 10.224.172.186 port 5001 connected with 10.224.172.117 port 3273
[ ID] Interval Transfer Bandwidth
[1944] 0.0-10.0 sec 94.5 MBytes 79.2 Mbits/sec
10.224.172.117 and 10.224.172.186 are in the same subnet; the theoretical bandwidth is
100 Mbit/sec.
Adjust the TCP window size
The bandwidth measured so far is nearly 82 Mbits/sec, while the TCP window size is only 8 KB; we can
increase the window size to improve the throughput.
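The reason a small window caps throughput is the bandwidth-delay product: TCP can have at most one window of unacknowledged data in flight per round trip, so throughput is bounded by window / RTT. A rough Python sketch of this bound follows; the 1 ms RTT is an assumed LAN value, not one measured in this report (with sub-millisecond RTTs, an 8 KB window can still reach the ~82 Mbits/sec observed above):

```python
# Upper bound on TCP throughput: at most one window of data per round trip.
# ASSUMPTION: ~1 ms RTT, a plausible value for a 100 Mbit LAN; the real RTT
# on this path was not measured in the report.

def max_throughput_mbps(window_bytes, rtt_s):
    """Bandwidth-delay-product bound on TCP throughput, in Mbit/s."""
    return window_bytes * 8 / rtt_s / 1e6

RTT = 1e-3  # seconds (assumed)

print(max_throughput_mbps(8 * 1024, RTT))     # 8 KB window  -> ~65.5 Mbit/s
print(max_throughput_mbps(1024 * 1024, RTT))  # 1 MB window  -> far above 100 Mbit/s,
                                              # so the link itself becomes the limit
```

With a 1 MB window the bound is no longer the bottleneck, which matches the jump to ~92 Mbits/sec measured below.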
Server(10.224.172.117)
D:\bw_test>iperf.exe -s -w 1M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[1880] local 10.224.172.117 port 5001 connected with 10.224.172.186 port 2916
[ ID] Interval Transfer Bandwidth
[1880] 0.0-10.1 sec 112 MBytes 92.8 Mbits/sec
------------------------------------------------------------
Client connecting to 10.224.172.186, TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[1848] local 10.224.172.117 port 3275 connected with 10.224.172.186 port 5001
[ ID] Interval Transfer Bandwidth
[1848] 0.0-10.1 sec 111 MBytes 92.1 Mbits/sec
Client(10.224.172.186)
E:\test\bandwidth\software\jperf\release\jperf-2.0.2\bin>iperf.exe -c 10.224.172
.117 -P 1 -t 10 -r -w 1M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.224.172.117, TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[1832] local 10.224.172.186 port 2916 connected with 10.224.172.117 port 5001
[ ID] Interval Transfer Bandwidth
[1832] 0.0-10.1 sec 112 MBytes 92.8 Mbits/sec
[1944] local 10.224.172.186 port 5001 connected with 10.224.172.117 port 3275
[ ID] Interval Transfer Bandwidth
[1944] 0.0-10.1 sec 111 MBytes 92.1 Mbits/sec
After adjusting the TCP window size, the measured bandwidth is nearly 92.5 Mbits/sec.
Use parallel TCP
Parallel TCP is supposed to improve throughput. We use 2 parallel streams below, and the
measured bandwidth is nearly 93.3 Mbit/sec.
Server(10.224.172.117)
D:\bw_test>iperf.exe -s -w 1M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[1880] local 10.224.172.117 port 5001 connected with 10.224.172.186 port 3025
[1844] local 10.224.172.117 port 5001 connected with 10.224.172.186 port 3026
[ ID] Interval Transfer Bandwidth
[1844] 0.0-10.2 sec 56.8 MBytes 46.9 Mbits/sec
[1880] 0.0-10.2 sec 56.8 MBytes 46.8 Mbits/sec
[SUM] 0.0-10.2 sec 114 MBytes 93.6 Mbits/sec
------------------------------------------------------------
Client connecting to 10.224.172.186, TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[1880] local 10.224.172.117 port 3276 connected with 10.224.172.186 port 5001
[1868] local 10.224.172.117 port 3277 connected with 10.224.172.186 port 5001
[ ID] Interval Transfer Bandwidth
[1868] 0.0-10.2 sec 56.5 MBytes 46.4 Mbits/sec
[1880] 0.0-10.2 sec 56.7 MBytes 46.6 Mbits/sec
[SUM] 0.0-10.2 sec 113 MBytes 93.1 Mbits/sec
Client(10.224.172.186)
E:\test\bandwidth\software\jperf\release\jperf-2.0.2\bin>iperf.exe -c 10.224.172
.117 -P 2 -t 10 -r -w 1M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.224.172.117, TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[1812] local 10.224.172.186 port 3026 connected with 10.224.172.117 port 5001
[1828] local 10.224.172.186 port 3025 connected with 10.224.172.117 port 5001
[ ID] Interval Transfer Bandwidth
[1812] 0.0-10.2 sec 56.8 MBytes 46.8 Mbits/sec
[1828] 0.0-10.2 sec 56.8 MBytes 46.8 Mbits/sec
[SUM] 0.0-10.2 sec 114 MBytes 93.6 Mbits/sec
[1784] local 10.224.172.186 port 5001 connected with 10.224.172.117 port 3276
[1964] local 10.224.172.186 port 5001 connected with 10.224.172.117 port 3277
[ ID] Interval Transfer Bandwidth
[1964] 0.0-10.2 sec 56.5 MBytes 46.5 Mbits/sec
[1784] 0.0-10.2 sec 56.7 MBytes 46.6 Mbits/sec
[SUM] 0.0-10.2 sec 113 MBytes 93.1 Mbits/sec
UDP
When using UDP, you should specify the send bandwidth; the default value is 1 Mbit/sec.
Server(10.224.172.117)
D:\bw_test>iperf.exe -s -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 8.00 KByte (default)
------------------------------------------------------------
[1928] local 10.224.172.117 port 5001 connected with 10.224.172.186 port 3680
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[1928] 0.0-10.0 sec 29.3 MBytes 24.5 Mbits/sec 1.727 ms 0/20902 (0%)
Client(10.224.172.186)
E:\test\bandwidth\software\jperf\release\jperf-2.0.2\bin>iperf.exe -c 10.224.172
.117 -t 10 -u -b 100M
------------------------------------------------------------
Client connecting to 10.224.172.117, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 8.00 KByte (default)
------------------------------------------------------------
[1908] local 10.224.172.186 port 3680 connected with 10.224.172.117 port 5001
[ ID] Interval Transfer Bandwidth
[1908] 0.0-10.0 sec 29.3 MBytes 24.5 Mbits/sec
[1908] Server Report:
[1908] 0.0-10.0 sec 29.3 MBytes 24.5 Mbits/sec 1.726 ms 0/20902 (0%)
[1908] Sent 20902 datagrams
Why lower than expected?
The result is 24.5 Mbits/sec, and at first there seems to be no explanation except a bug. I’ve searched for
this case and others have encountered the same problem, but there is no answer yet. The Windows version is
1.7.0, which is very old; the newest version is 2.0.5, which needs Cygwin to compile on Windows, so I’ll test it on
Linux to see whether the problem remains.
Here’s the result. As we can see, the problem remains, but we have other clues: the loss rate is
very high, 96% when the send bandwidth limit is 100M and 49% when the send bandwidth limit is
50M. Now we can explain why the bandwidth measured using UDP can’t reach 93 Mbits/sec: Iperf has no flow control when sending UDP packets, and the UDP buffer size is too
small for the sending rate, so lots of packets are dropped, which makes the measured value much
lower than we expect.
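The numbers in the Windows run above can be sanity-checked with some quick arithmetic. The Python sketch below assumes Iperf’s bandwidth figure counts only the 1470-byte UDP payloads, which matches the reports: a true 100 Mbit/sec stream would need roughly 85,000 datagrams in 10 seconds, while the 20,902 the client actually sent correspond to the reported ~24.5 Mbits/sec, i.e. the sender never reached its target rate.

```python
# Back-of-the-envelope check of the UDP figures reported by Iperf.
# ASSUMPTION: Iperf's bandwidth counts 1470-byte UDP payloads only.

DATAGRAM_BYTES = 1470
DURATION_S = 10

def datagrams_for_rate(rate_bits_per_s, duration_s=DURATION_S):
    """Datagrams needed to sustain a given bit rate for the test duration."""
    return rate_bits_per_s / (DATAGRAM_BYTES * 8) * duration_s

# A genuine 100 Mbit/sec stream needs ~85,000 datagrams in 10 seconds:
print(round(datagrams_for_rate(100e6)))

# The client above sent only 20,902 datagrams, i.e. about 24.6 Mbit/sec:
achieved_mbps = 20902 * DATAGRAM_BYTES * 8 / DURATION_S / 1e6
print(round(achieved_mbps, 1))
```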
Server (10.224.172.199)
Client (10.224.172.181)
After adjusting the buffer size, the measured bandwidth is 37.4-47.6 Mbits/sec, much
better than before.
Server (10.224.172.199)
Client (10.224.172.181)
Even though UDP has this problem, we can still use it to generate cross traffic when we measure other
tools.
Conclusion
Iperf is precise enough to be used as a benchmark in our later tests.
Iperf can be used to produce various kinds of cross traffic (TCP, UDP, parallel, bi-directional) at a specified rate (for UDP, we should adjust the buffer).
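The cross-traffic variants above can be captured in a small helper. This is only a convenience sketch, not part of Iperf itself; it merely assembles the Iperf 2.x client flags already used in this report (-c, -t, -P, -r, -u, -b, -w), with placeholder hosts and values.

```python
# Sketch: build Iperf 2.x client command lines for the cross-traffic
# scenarios above. Flags are the ones used throughout this report.

def iperf_cmd(host, seconds=10, parallel=1, bidirectional=False,
              udp=False, rate=None, window=None):
    cmd = ["iperf", "-c", host, "-t", str(seconds)]
    if parallel > 1:
        cmd += ["-P", str(parallel)]   # parallel streams
    if bidirectional:
        cmd += ["-r"]                  # measure both directions sequentially
    if udp:
        cmd += ["-u"]
        if rate:
            cmd += ["-b", rate]        # UDP send rate, e.g. "50M"
    if window:
        cmd += ["-w", window]          # TCP window / UDP buffer, e.g. "1M"
    return " ".join(cmd)

print(iperf_cmd("10.224.172.117", parallel=2, window="1M"))
print(iperf_cmd("10.224.172.117", udp=True, rate="50M", window="1M"))
```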
moremojo!
Jeromy.Fu