The three key measures of network performance are latency (the time required to transfer data across the network), throughput (the amount of data or number of data packets that can be delivered on an IP network in a predefined timeframe), and jitter, or delay jitter (the variation in delay observed during transfers).
In this blog post, I will show you how to measure throughput using NetPerf and iPerf, two open-source network performance benchmark tools that support both the UDP and TCP protocols. Each tool also provides additional information: NetPerf, for example, offers tests for end-to-end latency (round-trip time, or RTT) and is a good replacement for ping, while iPerf reports packet loss and delay jitter, which are useful for troubleshooting network performance. Choosing one tool over the other depends on your use case and the test you plan to run. Note that, for the same input parameters, the tools can report different bandwidths, as they are not designed the same way.
I will use the default parameters and run each test for 5 minutes (300 seconds). For a good report, it is recommended to run the tests multiple times, at different times of the day, with different parameters.
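To automate repeated runs, here is a minimal sketch (the host address, pause duration, and log file names are placeholders):
$ for i in 1 2 3; do iperf3 --client 172.31.56.48 --time 300 > run_$i.log; sleep 600; done   # three 5-minute runs, 10 minutes apart; vary parameters and time of day for a fuller picture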
1. Installing NetPerf and iPerf
NetPerf and iPerf both have client and server functionality, so each tool must be installed on the server as well as on the client from which you conduct the network performance tests. For each tool I will list the most common parameters, then run tests between a client (1 GB MEM) and a server (1 GB MEM) in my LAN (Local Area Network), and between a client and a remote server (in a WAN).
The tools can be installed on different Linux distributions, on macOS, and on Windows. Refer to the following documentation for more information; a quick install sketch for Debian-based systems follows the links below.
- NetPerf: https://github.com/HewlettPackard/netperf
- iPerf: https://iperf.fr/iperf-doc.php#3doc
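On Debian/Ubuntu, for example, both tools are typically available from the package manager (package names assumed here; availability varies by release):
$ sudo apt-get install iperf3    # iperf3 client and server binaries
$ sudo apt-get install netperf   # provides netperf and netserver, if packaged for your release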
2. Most common parameters
2.1. NetPerf common parameters
- -p (--port): Port number (12865 by default)
- -H (--host): Host/Server IP address or DNS name
- -t (--testname): Specifies the test to perform (TCP_STREAM by default)
- -l (--testlen): Specifies the test duration in seconds (> 0 secs)
- -m value: Sets the local send size to value bytes (default: local socket buffer size)
- -M value: Sets the remote receive size to value bytes (default: remote receive socket buffer size)
Note that -m and -M are test-specific options and must be placed after a -- separator, as in the examples below.
For more information use: $ netperf -h
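Combining these parameters, a typical invocation could look like the following (the host address and sizes are illustrative):
$ netperf -H 172.31.56.48 -p 12865 -t TCP_STREAM -l 60 -- -m 16384   # 60-second TCP test with a 16 KB send size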
2.2. iPerf3 common parameters
- -c (--client): Run in client mode, connecting to the given host/server
- -p (--port): Port number (by default 5001 for iperf and 5201 for iperf3)
- -u (--udp): Run a UDP test (default is TCP)
- -i (--interval): Seconds between periodic bandwidth reports
- -b (--bandwidth): Target bandwidth in bits/sec (0 for unlimited); default is 1 Mbit/sec for UDP and unlimited for TCP
- -t (--time): Time in seconds to transmit for (default 10 secs)
- -P (--parallel): Number of parallel client streams to run
- --get-server-output: Get the results from the server (useful for UDP tests)
For more information use $ iperf3 -h
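As an illustration (the host and values here are placeholders), a 60-second UDP client test with periodic reports could look like this:
$ iperf3 -c 172.31.56.48 -u -b 10M -t 60 -i 10   # UDP test targeting 10 Mbit/s, reporting every 10 seconds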
3. Testing TCP throughput
3.1. Testing TCP throughput using NetPerf
Start the NetPerf server (netserver) on the server:
$ netserver
Starting netserver with host 'IN(6)ADDR_ANY' port '12865' and family AF_UNSPEC
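Before launching the client, you can verify that netserver is listening on its control port (assuming the ss utility is available):
$ ss -ltn | grep 12865   # the control port should appear in LISTEN state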
TCP Throughput in my LAN
$ netperf -H 172.31.56.48 -l 300 -t TCP_STREAM
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.31.56.48 () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  20480   20480   300.04      62.61
TCP Throughput in a WAN
$ netperf -H HOST -l 300 -t TCP_STREAM
MIGRATED TCP STREAM TEST from (null) (0.0.0.0) port 0 AF_INET to (null) () port 0 AF_INET : histogram : spin interval
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380 131072  131072   300.32       7.69
3.2. Testing TCP throughput using iPerf
Run iPerf3 in server mode on the server:
$ iperf3 --server --interval 30
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
TCP Throughput in my LAN
$ iperf3 --client 172.31.56.48 --time 300 --interval 30
Connecting to host 172.31.56.48, port 5201
[ 4] local 172.31.100.5 port 44728 connected to 172.31.56.48 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-30.00 sec 1.70 GBytes 488 Mbits/sec 138 533 KBytes
[ 4] 30.00-60.00 sec 260 MBytes 72.6 Mbits/sec 19 489 KBytes
[ 4] 60.00-90.00 sec 227 MBytes 63.5 Mbits/sec 15 542 KBytes
[ 4] 90.00-120.00 sec 227 MBytes 63.3 Mbits/sec 13 559 KBytes
[ 4] 120.00-150.00 sec 228 MBytes 63.7 Mbits/sec 16 463 KBytes
[ 4] 150.00-180.00 sec 227 MBytes 63.4 Mbits/sec 13 524 KBytes
[ 4] 180.00-210.00 sec 227 MBytes 63.5 Mbits/sec 14 559 KBytes
[ 4] 210.00-240.00 sec 227 MBytes 63.5 Mbits/sec 14 437 KBytes
[ 4] 240.00-270.00 sec 228 MBytes 63.7 Mbits/sec 14 516 KBytes
[ 4] 270.00-300.00 sec 227 MBytes 63.5 Mbits/sec 14 524 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 3.73 GBytes 107 Mbits/sec 270 sender
[ 4] 0.00-300.00 sec 3.73 GBytes 107 Mbits/sec receiver
TCP Throughput in a WAN
$ iperf3 --client HOST --time 300 --interval 30
Connecting to host HOST, port 5201
[ 5] local 192.168.1.73 port 56756 connected to HOST port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-30.00 sec 21.2 MBytes 5.93 Mbits/sec
[ 5] 30.00-60.00 sec 27.0 MBytes 7.55 Mbits/sec
[ 5] 60.00-90.00 sec 28.6 MBytes 7.99 Mbits/sec
[ 5] 90.00-120.00 sec 28.7 MBytes 8.02 Mbits/sec
[ 5] 120.00-150.00 sec 28.5 MBytes 7.97 Mbits/sec
[ 5] 150.00-180.00 sec 28.6 MBytes 7.99 Mbits/sec
[ 5] 180.00-210.00 sec 28.4 MBytes 7.94 Mbits/sec
[ 5] 210.00-240.00 sec 28.5 MBytes 7.97 Mbits/sec
[ 5] 240.00-270.00 sec 28.6 MBytes 8.00 Mbits/sec
[ 5] 270.00-300.00 sec 27.9 MBytes 7.81 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-300.00 sec 276 MBytes 7.72 Mbits/sec sender
[ 5] 0.00-300.00 sec 276 MBytes 7.71 Mbits/sec receiver
Comments on the TCP tests
In the LAN, iPerf reports an average of 107 Mbits/sec, inflated by the first 30-second interval (488 Mbits/sec); the steady-state intervals sit around 63 Mbits/sec, close to NetPerf's 62.61 Mbits/sec. In the WAN, both tools converge on roughly 7.7 Mbits/sec. On high-latency paths, running several streams in parallel can raise aggregate TCP throughput, as sketched below.
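A minimal parallel-stream sketch using the -P flag listed earlier (the host is a placeholder):
$ iperf3 --client HOST --time 300 -P 4   # four parallel TCP streams; iPerf reports per-stream results and a [SUM] line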
4. Testing UDP throughput
UDP does not provide end-to-end flow control, so when testing UDP throughput make sure you specify the size of the packets to be sent by the client. Also, always read the receive rate on the server: since UDP is an unreliable protocol, the reported send rate can be much higher than the actual receive rate.
4.1. Testing UDP throughput using NetPerf
Change the test name from TCP_STREAM to UDP_STREAM. Let's use 1024 bytes (1 KB) as the message size to be sent by the client.
If you receive the following error:
send_data: data send error: Network is unreachable (errno 101) netperf: send_omni: send_data failed: Network is unreachable
add the test-specific option -R 1, which allows NetPerf's UDP traffic to be routed (NetPerf disables routing for UDP tests by default as a safety measure).
UDP Throughput in my LAN
$ netperf -H 172.31.56.48 -t UDP_STREAM -l 300 -- -R 1 -m 1024
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.31.56.48 () port 0 AF_INET
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

212992    1024   300.00     9193386      0     251.04
212992           300.00     9131380            249.35
UDP Throughput in a WAN
$ netperf -H HOST -t UDP_STREAM -l 300 -- -R 1 -m 1024
MIGRATED UDP STREAM TEST from (null) (0.0.0.0) port 0 AF_INET to (null) () port 0 AF_INET : histogram : spin interval
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

  9216    1024   300.01    35627791      0     972.83
212992           300.01      253099              6.91
4.2. Testing UDP throughput using iPerf
For the UDP tests, it is important to read the results on the server, or to use --get-server-output to fetch the server output from the client. The server reports the data it was actually able to receive, unlike the client, which sends the amount of data specified by the -b (--bandwidth) parameter (1 Mbit/sec by default). For this parameter, it is recommended to use the maximum bandwidth the network can support (around 100 Mbits/sec for the LAN and 7.7 Mbits/sec for the remote server, per the TCP tests above).
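For instance, to fetch the server-side statistics from the client in a single run (the host is a placeholder):
$ iperf3 --client HOST --time 300 -u -b 100M --get-server-output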
UDP Throughput in my LAN
$ iperf3 --client 172.31.56.48 --time 300 --interval 30 -u -b 100MB
Accepted connection from 172.31.100.5, port 39444
[ 5] local 172.31.56.48 port 5201 connected to 172.31.100.5 port 36436
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 5] 0.00-30.00 sec 354 MBytes 98.9 Mbits/sec 0.052 ms 330/41774 (0.79%)
[ 5] 30.00-60.00 sec 355 MBytes 99.2 Mbits/sec 0.047 ms 355/41903 (0.85%)
[ 5] 60.00-90.00 sec 354 MBytes 98.9 Mbits/sec 0.048 ms 446/41905 (1.1%)
[ 5] 90.00-120.00 sec 355 MBytes 99.4 Mbits/sec 0.045 ms 261/41902 (0.62%)
[ 5] 120.00-150.00 sec 354 MBytes 99.1 Mbits/sec 0.048 ms 401/41908 (0.96%)
[ 5] 150.00-180.00 sec 353 MBytes 98.7 Mbits/sec 0.047 ms 530/41902 (1.3%)
[ 5] 180.00-210.00 sec 353 MBytes 98.8 Mbits/sec 0.059 ms 496/41904 (1.2%)
[ 5] 210.00-240.00 sec 354 MBytes 99.0 Mbits/sec 0.052 ms 407/41904 (0.97%)
[ 5] 240.00-270.00 sec 351 MBytes 98.3 Mbits/sec 0.059 ms 725/41903 (1.7%)
[ 5] 270.00-300.00 sec 354 MBytes 99.1 Mbits/sec 0.043 ms 393/41908 (0.94%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 5] 0.00-300.04 sec 3.45 GBytes 98.94 Mbits/sec 0.043 ms 4344/418913 (1%)
UDP Throughput in a WAN
$ iperf3 --client HOST --time 300 -u -b 7.7MB
Accepted connection from 45.29.190.145, port 60634
[ 5] local 172.31.56.48 port 5201 connected to 45.29.190.145 port 52586
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 5] 0.00-30.00 sec 27.4 MBytes 7.67 Mbits/sec 0.438 ms 64/19902 (0.32%)
[ 5] 30.00-60.00 sec 27.5 MBytes 7.69 Mbits/sec 0.446 ms 35/19940 (0.18%)
[ 5] 60.00-90.00 sec 27.5 MBytes 7.68 Mbits/sec 0.384 ms 39/19925 (0.2%)
[ 5] 90.00-120.00 sec 27.5 MBytes 7.68 Mbits/sec 0.528 ms 70/19950 (0.35%)
[ 5] 120.00-150.00 sec 27.4 MBytes 7.67 Mbits/sec 0.460 ms 51/19924 (0.26%)
[ 5] 150.00-180.00 sec 27.5 MBytes 7.69 Mbits/sec 0.485 ms 37/19948 (0.19%)
[ 5] 180.00-210.00 sec 27.5 MBytes 7.68 Mbits/sec 0.572 ms 49/19941 (0.25%)
[ 5] 210.00-240.00 sec 26.8 MBytes 7.50 Mbits/sec 0.800 ms 443/19856 (2.2%)
[ 5] 240.00-270.00 sec 27.4 MBytes 7.66 Mbits/sec 0.570 ms 172/20009 (0.86%)
[ 5] 270.00-300.00 sec 25.3 MBytes 7.07 Mbits/sec 0.423 ms 1562/19867 (7.9%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 5] 0.00-300.00 sec 272 MBytes 7.60 Mbits/sec 0.423 ms 2522/199284 (1.3%)
[SUM] 0.0-300.2 sec 31 datagrams received out-of-order
Comments on the UDP tests
The NetPerf WAN result shows why the receive side matters: the client reports a send rate of 972.83 Mbits/sec, while the server only received 6.91 Mbits/sec. The iPerf results tell the same story through the Lost/Total Datagrams column, with about 1% loss in the LAN and 1.3% in the WAN.
Conclusion
NetPerf and iPerf measure comparable TCP and UDP throughput in both the LAN and the WAN, as long as the results are read on the correct side of the connection. Repeat the tests at different times and with different parameters before drawing conclusions about a network path.
Abstract
Netperf is a network performance benchmarking tool that can be used to measure the data throughput rate of both TCP and UDP communications across the network. This article describes the steps necessary to properly install and set up Netperf on Linux.
Introduction
Network performance monitoring is critical for today's high-performance computing clusters. Many HPC Linux clusters need their installed network adapters to transfer data at the rate they were designed for (e.g., 100 Mb/s, 1000 Mb/s). Without a network benchmarking tool, however, the node-to-node performance of these adapters cannot be properly determined. As a result, manufacturer-specific tunable parameters for the network adapters go unused, and the operating system's network protocol settings may be left at minimum or default values, causing the system to bottleneck and network performance to remain poor.
Obtaining and Installing Netperf
Netperf can be obtained from http://www.netperf.org/netperf/. The latest version, currently 2.4.3, is recommended for networks configured to use IPv4 and IPv6. Once you have obtained the Netperf source code, extract it into a temporary directory and build the binaries. The following steps accomplish this task (a consolidated sketch follows the list):
1. Download Netperf to a staging area on your Linux system.
2. Unzip and untar the compressed source file: $ tar -xzvf netperf-x.x.x.tar.gz, where x.x.x is the current version number.
3. Change to the directory where the Netperf source files were extracted: $ cd netperf-x.x.x
4. Run ./configure
5. Run make
6. Run make install to install the Netperf program binaries in /usr/local/bin. Note that you must be logged in as root to write to the /usr/local/bin directory.
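Put together, and assuming the netperf-2.4.3 tarball mentioned above has already been downloaded into the current directory:
$ tar -xzvf netperf-2.4.3.tar.gz
$ cd netperf-2.4.3
$ ./configure
$ make
$ sudo make install   # installs netperf and netserver into /usr/local/bin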
Preparing to use Netperf
Netperf can be run as a standalone daemon or installed as a service daemon under the /etc/xinetd.d directory. Two programs are created when the Netperf source is compiled. The first is the Netperf server, 'netserver', which must be running as a daemon in order to measure data throughput. The second is 'netperf', the client-side program used to communicate with the Netperf server. The Netperf client sends streams of data to the Netperf server unidirectionally and reports the rate of transfer back to the user.
To run Netperf as a standalone daemon, simply invoke the 'netserver' program, then run 'netperf' to observe the rate of transfer. For example, to see the data throughput on a node running both the server and the client programs, do the following:
1. Invoke the Netperf server program
$ netserver
Starting netserver at port 12865
Starting netserver at hostname 0.0.0.0 port 12865 and family AF_UNSPEC
2. Run the Netperf client program and observe the output
$ netperf
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  87380   87380   10.01      634.75
Notice that the data throughput is approximately 635 Mbits/s (roughly 79 MB/s) when the client program is run on the same node as the server. For a more detailed description of how to interpret the output, refer to the Netperf manual at http://www.netperf.org/svn/netperf2/tags/netperf-2.4.3/doc/netperf.html.
In order to run Netperf as a server daemon, you must be logged in as root to edit the /etc/services file and to add a service file in the /etc/xinetd.d/ directory. Assuming you are logged in as root, edit the /etc/services file and add an entry such as the following to the end of the file.
# Add to end of /etc/services
netperf 12865/tcp # Network performance monitoring
netperf 12865/udp # Network performance monitoring
Next, change to the /etc/xinetd.d directory and create a service daemon file called netperf with the following content:
# Netperf server program service daemon
service netperf
{
disable = no
socket_type = stream
protocol = tcp
wait = no
user = root
server = /usr/local/bin/netserver
}
Restart the xinetd daemon for the Netperf server program to be added as a service daemon.
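The exact restart command depends on the distribution's init system; on a SysV-style setup, one of the following typically works:
$ /etc/init.d/xinetd restart   # via the init script
$ service xinetd restart       # equivalent on many distributions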
To test communication between the Netperf service daemon and a remote client, do the following:
1. Ensure that the Netperf client program is available on the node from which it is to be invoked
2. Run the Netperf client program on that node with the -H option: $ netperf -H <server>, where <server> is the name of the machine running the Netperf server service daemon. For example:
$ netperf -H tracker
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to tracker.mydomain.com (172.168.0.1) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  87380   87380   10.02       92.04
Notice that the data transfer rate is now approximately 92 Mbits/s (roughly 11.5 MB/s). Also, the default receive socket size set on the Netperf server is 87380 bytes, while the send socket and send message sizes are equivalent. For a more detailed description of the flags that can be used with Netperf, plus other examples, please refer to the Netperf manual located at http://www.netperf.org/svn/netperf2/tags/netperf-2.4.3/doc/netperf.html.
Summary
This article has demonstrated how Netperf can be used as a network benchmark tool for evaluating the performance of your HPC network. Not only is the tool useful in itself, it can also help you decide what values to set for the Linux TCP send and receive socket buffers to increase data throughput for node-to-node communications. A sketch of the relevant kernel settings follows.
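As an illustration only (the values below are placeholders rather than recommendations; appropriate sizes depend on the bandwidth-delay product of your network), the socket buffer limits can be adjusted through sysctl:
$ sudo sysctl -w net.core.rmem_max=4194304                # maximum receive socket buffer (bytes)
$ sudo sysctl -w net.core.wmem_max=4194304                # maximum send socket buffer (bytes)
$ sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"   # min/default/max TCP receive buffer
$ sudo sysctl -w net.ipv4.tcp_wmem="4096 16384 4194304"   # min/default/max TCP send buffer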