Iperf testing report
 “Iperf was developed by NLANR/DAST as a modern alternative for measuring maximum TCP and
UDP bandwidth performance. Iperf allows the tuning of various parameters and UDP
characteristics. Iperf reports bandwidth, delay jitter, datagram loss.”

The above is Iperf's summary on SourceForge. Iperf is widely used as a network performance
tool and is under active development. There is also a new version, Iperf3, hosted on Google
Code; it is a from-scratch reimplementation, but it does not appear to support UDP yet. Iperf
also has a Java GUI front end, jperf.

Iperf is a classic and widely used tool, even though it uses flooding TCP and UDP traffic,
which does not satisfy our needs.

The reasons for testing it are the following:

    1. We need a benchmark for our testing, which should be accurate.
    2. We need to ensure our environment works fine.
    3. Iperf is often used to produce cross traffic in bandwidth testing.

Iperf has been used for bandwidth measurement for many years by many people, so it is a good
choice.

Iperf at first glance
Use Iperf to measure the bandwidth
Iperf has both client and server pieces, so it requires installation at both ends of the
connection you're measuring. Iperf can send both TCP and UDP packets. For more information on
usage, please refer to "Iperf - The Easy Tutorial".
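
As a quick sketch of basic usage (the same options appear in the full transcripts below),
start the server on one machine and point the client at it:

    iperf.exe -s                          (server side)
    iperf.exe -c 10.224.172.117 -t 10     (client side)

'-s' runs in server mode, '-c <host>' runs the client against the given server, and '-t 10'
runs the test for 10 seconds; adding '-u' on both sides switches from TCP to UDP.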

Test case: two machines, no bandwidth throttling

TCP

Measure bi-directional bandwidth
Bi-directional bandwidth is measured (using the '-r' option on the client; the two directions
are measured sequentially).

Server(10.224.172.117)

D:\bw_test>iperf.exe -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1880] local 10.224.172.117 port 5001 connected with 10.224.172.186 port 2821
[ ID] Interval        Transfer Bandwidth
[1880] 0.0-10.0 sec 100 MBytes 84.1 Mbits/sec
------------------------------------------------------------
Client connecting to 10.224.172.186, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1848] local 10.224.172.117 port 3273 connected with 10.224.172.186 port 5001
[ ID] Interval        Transfer Bandwidth
[1848] 0.0-10.0 sec 94.5 MBytes 79.2 Mbits/sec


Client(10.224.172.186)

E:\test\bandwidth\software\jperf\release\jperf-2.0.2\bin>iperf.exe -c 10.224.172.117 -P 1 -t 10 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.224.172.117, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1828] local 10.224.172.186 port 2821 connected with 10.224.172.117 port 5001
[ ID] Interval        Transfer Bandwidth
[1828] 0.0-10.0 sec 100 MBytes 83.9 Mbits/sec
[1944] local 10.224.172.186 port 5001 connected with 10.224.172.117 port 3273
[ ID] Interval        Transfer Bandwidth
[1944] 0.0-10.0 sec 94.5 MBytes 79.2 Mbits/sec

10.224.172.117 and 10.224.172.186 are in the same subnet; the theoretical bandwidth is
100 Mbit/sec.

Adjust the TCP window size
The bandwidth measured so far is nearly 82 Mbits/sec, but the TCP window size is only 8 KB; we
can increase the window size to improve throughput.
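
The limit comes from the bandwidth-delay product: TCP can keep at most one window of
unacknowledged data in flight per round trip, so throughput is bounded by window size / RTT.
As a rough illustration (the RTT was not measured here; it is only inferred from the result):
with an 8 KB window, an RTT of about 0.8 ms already caps throughput at
8 x 1024 x 8 bits / 0.0008 s ≈ 82 Mbits/sec, which matches the figure above, while a 1 MB
window would not become the bottleneck until the RTT exceeded roughly 80 ms.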

Server(10.224.172.117)

D:\bw_test>iperf.exe -s -w 1M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[1880] local 10.224.172.117 port 5001 connected with 10.224.172.186 port 2916
[ ID] Interval        Transfer Bandwidth
[1880] 0.0-10.1 sec 112 MBytes 92.8 Mbits/sec
------------------------------------------------------------
Client connecting to 10.224.172.186, TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[1848] local 10.224.172.117 port 3275 connected with 10.224.172.186 port 5001
[ ID] Interval        Transfer Bandwidth
[1848] 0.0-10.1 sec 111 MBytes 92.1 Mbits/sec


Client(10.224.172.186)

E:\test\bandwidth\software\jperf\release\jperf-2.0.2\bin>iperf.exe -c 10.224.172.117 -P 1 -t 10 -r -w 1M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.224.172.117, TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[1832] local 10.224.172.186 port 2916 connected with 10.224.172.117 port 5001
[ ID] Interval        Transfer Bandwidth
[1832] 0.0-10.1 sec 112 MBytes 92.8 Mbits/sec
[1944] local 10.224.172.186 port 5001 connected with 10.224.172.117 port 3275
[ ID] Interval        Transfer Bandwidth
[1944] 0.0-10.1 sec 111 MBytes 92.1 Mbits/sec

After adjusting the TCP window size, the measured bandwidth is nearly 92.5 Mbits/sec.

Use parallel TCP
Parallel TCP is supposed to improve throughput. We use 2 parallel streams below, and the
measured bandwidth is nearly 93.3 Mbit/sec.

Server(10.224.172.117)

D:\bw_test>iperf.exe -s -w 1M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[1880] local 10.224.172.117 port 5001 connected with 10.224.172.186 port 3025
[1844] local 10.224.172.117 port 5001 connected with 10.224.172.186 port 3026
[ ID] Interval        Transfer Bandwidth
[1844] 0.0-10.2 sec 56.8 MBytes 46.9 Mbits/sec
[1880] 0.0-10.2 sec 56.8 MBytes 46.8 Mbits/sec
[SUM] 0.0-10.2 sec 114 MBytes 93.6 Mbits/sec
------------------------------------------------------------
Client connecting to 10.224.172.186, TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[1880] local 10.224.172.117 port 3276 connected with 10.224.172.186 port 5001
[1868] local 10.224.172.117 port 3277 connected with 10.224.172.186 port 5001
[ ID] Interval        Transfer Bandwidth
[1868] 0.0-10.2 sec 56.5 MBytes 46.4 Mbits/sec
[1880] 0.0-10.2 sec 56.7 MBytes 46.6 Mbits/sec
[SUM] 0.0-10.2 sec 113 MBytes 93.1 Mbits/sec



Client(10.224.172.186)

E:\test\bandwidth\software\jperf\release\jperf-2.0.2\bin>iperf.exe -c 10.224.172.117 -P 2 -t 10 -r -w 1M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.224.172.117, TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[1812] local 10.224.172.186 port 3026 connected with 10.224.172.117 port 5001
[1828] local 10.224.172.186 port 3025 connected with 10.224.172.117 port 5001
[ ID] Interval        Transfer Bandwidth
[1812] 0.0-10.2 sec 56.8 MBytes 46.8 Mbits/sec
[1828] 0.0-10.2 sec 56.8 MBytes 46.8 Mbits/sec
[SUM] 0.0-10.2 sec 114 MBytes 93.6 Mbits/sec
[1784] local 10.224.172.186 port 5001 connected with 10.224.172.117 port 3276
[1964] local 10.224.172.186 port 5001 connected with 10.224.172.117 port 3277
[ ID] Interval        Transfer Bandwidth
[1964] 0.0-10.2 sec 56.5 MBytes 46.5 Mbits/sec
[1784] 0.0-10.2 sec 56.7 MBytes 46.6 Mbits/sec
[SUM] 0.0-10.2 sec 113 MBytes 93.1 Mbits/sec


UDP

When using UDP, you should specify the send bandwidth with '-b'; the default is 1 Mbit/sec.

Server(10.224.172.117)

D:\bw_test>iperf.exe -s -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 8.00 KByte (default)
------------------------------------------------------------
[1928] local 10.224.172.117 port 5001 connected with 10.224.172.186 port 3680
[ ID] Interval        Transfer Bandwidth                Jitter Lost/Total Datagrams
[1928] 0.0-10.0 sec 29.3 MBytes 24.5 Mbits/sec 1.727 ms 0/20902 (0%)


Client(10.224.172.186)

E:\test\bandwidth\software\jperf\release\jperf-2.0.2\bin>iperf.exe -c 10.224.172.117 -t 10 -u -b 100M
------------------------------------------------------------
Client connecting to 10.224.172.117, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 8.00 KByte (default)
------------------------------------------------------------
[1908] local 10.224.172.186 port 3680 connected with 10.224.172.117 port 5001
[ ID] Interval        Transfer Bandwidth
[1908] 0.0-10.0 sec 29.3 MBytes 24.5 Mbits/sec
[1908] Server Report:
[1908] 0.0-10.0 sec 29.3 MBytes 24.5 Mbits/sec 1.726 ms 0/20902 (0%)
[1908] Sent 20902 datagrams

Why is the result lower than we expect?
The result is 24.5 Mbits/sec, and at first there seems to be no explanation except a bug. I
have searched for this case; others encounter the same problem, but there is no answer yet.
The Windows version is 1.7.0, which is very old; the newest version is 2.0.5, which needs
Cygwin to compile on Windows, so I will test on Linux to see whether the problem remains.

Here is the result. As we can see, the problem remains, but we have other clues: the loss rate
is very high, 96% when the send bandwidth limit is 100M and 49% when it is 50M. Now we can
explain why the bandwidth measured using UDP cannot reach 93 Mbits/sec: Iperf has no flow
control when sending UDP packets, and the UDP buffer size is too small for the sending rate,
so many packets are dropped, making the measured value much lower than we expect.

Server (10.224.172.199)
Client (10.224.172.181)

[Results omitted: shown only as screenshots in the original report.]

After adjusting the buffer size, the measured bandwidth is 37.4-47.6 Mbits/sec, much better
than before; a sketch of the adjustment follows.
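
The exact commands used for this adjustment are not shown above; as a sketch, iperf's '-w'
option sets the socket buffer size for UDP as well as TCP, so the change would look roughly
like this (using the addresses from the Linux test):

    iperf -s -u -w 1M                             (server side)
    iperf -c 10.224.172.199 -u -b 100M -w 1M      (client side)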

Server (10.224.172.199)
Client (10.224.172.181)

[Results omitted: shown only as screenshots in the original report.]

Even though UDP has this problem, we can still use it to generate cross traffic when we
measure other tools.

Conclusion
       Iperf is precise enough to be used as a benchmark in our later tests.
       Iperf can be used to produce various kinds of cross traffic (TCP, UDP, parallel
       streams, bi-directional traffic) at a specified rate (for UDP, we should adjust the
       buffer).


moremojo!

Jeromy.Fu
