Netperf Windows

Sabel Kantah

Aug 5, 2024, 2:41:55 PM
to newstitudi
Netperf is a benchmark that can be used to measure the performance of many different types of networking. It provides tests for both unidirectional throughput and end-to-end latency. The environments currently measurable by netperf include TCP and UDP via BSD sockets for both IPv4 and IPv6, among others.

Here are some of the netperf services available via this page:

Download Netperf - Clone or download various revisions of the Netperf benchmark.
Netperf Numbers - Submit and retrieve Netperf results from the Netperf database.
Netperf Training - View the Netperf manual or whitepapers on using Netperf.
Netperf Feedback - Provide feedback on the benchmark or the pages.
Other Resources - The network performance world does not live on netperf alone.

Happy Benchmarking!


The three key measures of network performance are latency (the time required to transfer data across the network), throughput (the amount of data or number of packets that can be delivered on an IP network in a given timeframe), and jitter or delay jitter (the variation in delay over the course of a transfer).


In this blog post, I will show you how to measure throughput using NetPerf and iPerf, two open source network performance benchmark tools that support both UDP and TCP. Each tool also provides additional information: NetPerf, for example, provides tests for end-to-end latency (round-trip time, or RTT) and is a good replacement for Ping; iPerf reports packet loss and delay jitter, which are useful for troubleshooting network performance. Choosing one tool over the other depends on your use case and the test you plan to run. Note that for the same input parameters, the tools can report different bandwidths, as they are not designed the same way.


I will use the default parameters and run each test for 5 minutes (300 seconds). For a good report, it is recommended to run the tests multiple times, at different times of the day, with different parameters.


NetPerf and iPerf each have client and server functionality, and must be installed on both the server and the client from which you are conducting network performance tests. For each tool I will list the most common parameters, and conduct tests between a client (1 GB MEM) and a server (1 GB MEM) in my LAN (Local Area Network), and between a client and a remote server (in a WAN).
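Before running any client-side test, the corresponding server process must be running on the remote host. A minimal sketch, assuming netperf and iperf3 are already installed on both machines:

```shell
# On the server: start the netperf daemon (default control port 12865)
netserver

# ...or, for iPerf tests, start the iperf3 server
iperf3 -s
```

The client-side commands shown below then point at this host with -H (netperf) or -c (iperf).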


$ netperf -H HOST -l 300 -t TCP_STREAM

MIGRATED TCP STREAM TEST from (null) (0.0.0.0) port 0 AF_INET to (null) () port 0 AF_INET : histogram : spin interval

Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec


UDP does not provide end-to-end flow control, so when testing UDP throughput make sure you specify the size of the packets the client sends. Always read the receive rate on the server: since UDP is an unreliable protocol, the reported send rate can be much higher than the actual receive rate.
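A UDP stream test along those lines might look as follows. The -- separates global options from test-specific ones, and -m sets the message (packet payload) size; 1400 bytes is an illustrative choice that keeps each packet below a typical 1500-byte Ethernet MTU:

```shell
# 300-second UDP stream test with 1400-byte messages
netperf -H HOST -l 300 -t UDP_STREAM -- -m 1400
```

Remember to read the throughput from the receive-side line of the output, not the send side.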


It's probably because one of the machines is using a poorly tuned TCP window (netperf apparently calls this the Recv Socket Size). During the TCP three-way handshake that opens a TCP connection, each host advertises what size TCP window it can handle on receive, so the other host knows how much data it may put in flight before waiting for a TCP ACK.


To calculate a proper TCP window to use, you need to calculate your "Bandwidth * Delay Product" (BDP). Ping one machine from the other and note the round-trip time. On my busy GigE LAN, it's about 1 ms right now. I think that's a bit high for GigE, but let's go with it, since one end of your link is only 100BASE-TX.
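As a worked example, the BDP is the link bandwidth multiplied by the round-trip time, converted to bytes. For the 100BASE-TX end of the link (100 Mbit/s) and the 1 ms RTT measured above:

```shell
# BDP = bandwidth (bits/s) * RTT (s) / 8 (bits per byte)
BANDWIDTH=100000000   # 100 Mbit/s (100BASE-TX)
RTT=0.001             # 1 ms round-trip time from ping
echo "$BANDWIDTH $RTT" | awk '{printf "BDP = %d bytes\n", $1 * $2 / 8}'
# prints: BDP = 12500 bytes
```

So a receive window of at least roughly 12.5 Kbyte is needed to keep this link full; anything smaller caps the throughput below the line rate.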


Then again, if you're using a modern OS like Windows 8.x, this answer shouldn't apply, because your hosts should have automatic TCP window tuning, so the initially reported values might not be trustworthy. If you're using an ancient OS like Windows XP, or if automatic TCP window tuning is disabled or not working for some reason, then this applies.


Not sure if this is the right place to ask, but I am having issues building netperf 2.7.0 under Cygwin. Based on what I have read, many people have had success compiling netperf with Cygwin, so I am hoping that I am simply missing some libraries I forgot to install, and that someone can shed some light on this.


I am testing the network performance of VMs in Xen and Hyper-V using iPerf and Netperf. In both hypervisors I found that a Linux guest VM, in various modes, performs noticeably better than a Windows one. Even a fully virtualized Linux guest VM showed better performance than a Windows guest VM with PV drivers.


So for each virtual machine I used the loopback address to create a client-server model on the same virtual machine. I ran the same tests on all virtual machines, didn't specify any buffer size or window size, and left those for the tools to decide.
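A loopback test of this kind can be sketched as follows, assuming iperf3 inside each guest (the original tests may have used classic iperf or netperf, which work analogously):

```shell
# Inside the guest VM: start the server in the background
iperf3 -s -D

# Run the client against the loopback address with default settings
iperf3 -c 127.0.0.1
```

Note that such a test never touches the virtual NIC, which matters for interpreting the results, as the answer below points out.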


All your tests may be measuring is the performance of CPU scheduling and memory bandwidth. Linux and Windows have very different network stacks, and I'm not sure anyone has paid much attention to performance optimization of the loopback device driver (or whatever the equivalent is in the Windows kernel).


Is there any particular application you use that is transferring large amounts of data through local TCP/IP connections? If so, that's one of the most inefficient methods of transferring data locally.


Regarding actual network performance, be aware that Windows and Linux have different network stacks with different default settings. You should investigate what your application will be using in terms of buffers, TCP window, etc., and adjust iperf/netperf accordingly to mimic that behavior. Then transfer data between different VMs on the same host, VMs on different hosts, VMs and physical servers, and so on. Pay attention to network port settings, uplink saturation, etc.


In this section the TCP and UDP performance test tools that have been used are described. In general these are programs written in the C language and/or C++. The tools are described in the following subsections; the modifications, when applied, are also mentioned there.


The Netperf tool is in principle a TCP and UDP benchmark. However, no shaping algorithms have been implemented. The value of the UDP test type is therefore limited: due to the lack of shaping, the sender will often overflow the receiver, because sending is easier than receiving. In fact, various TCP and UDP traffic types can be defined. See the manual for more information.


In fact netserver is a true server in the sense that all relevant data should be specified via the netperf client. This feature also makes netserver suited to being started from the Unix inetd net services daemon, so that in principle all security features supplied by the TCP wrapper tool are in effect here as well.


Between the netserver daemon and the netperf client, two socket connections are always opened: a communication socket that is used for all internal communication, including the handing over of the netserver options, and a data socket that is used for the actual performance benchmark. The advantage of this procedure is clearly that the netserver daemon can be completely controlled by the netperf client. The disadvantage, however, is that it is not possible to directly specify the port of the data socket, which may be a drawback, for instance, for port-based TOS-bit settings.


To Netperf version 2.2p12 the following most important modifications have been applied: The comparison of the return value of getaddrinfo() has been corrected. Otherwise, on some platforms (among others Linux), the program would sometimes continue to run after a failed getaddrinfo() call, resulting in a segmentation fault. Please note that getaddrinfo() is only used when IPv6 has been enabled. In the netserver program usage message, the IPv6-related options have also been included, when enabled. The IPv6-related options have been added to the man pages, when enabled.


The current distribution can be downloaded from "The Public Netperf Homepage". Our modified tar-gzip archive can also be downloaded from this site. See the file README_MOD in the archive for more information about the modifications.


After unpacking the tar-gzip archive, the appropriate directives in the makefile contained in the archive should be edited. Concerning these make directives, there is one remark to be made: the netserver program uses a log file that is defined in the LOG_FILE directive. By default that file is located in the /tmp directory. However, that implies that one user blocks the usage of netserver for all other users, because they are not allowed to overwrite the log file opened by the first user. A better strategy in this situation is therefore to use a user-dependent log file. When netserver is used from inetd, the default log file is fine.
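A user-dependent log file could be configured along these lines; the exact directive syntax shown here is an assumption and may differ between Netperf versions, so check the comments in the shipped makefile:

```
# Hypothetical LOG_FILE directive in the netperf makefile:
# use a per-user file name instead of one shared file in /tmp,
# so a second user is not blocked by the first user's log file.
LOG_FILE = -DDEBUG_LOG_FILE="\"/tmp/netperf.$(USER).debug\""
```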


In the following example, a TCP stream test has been defined from host gwgsara3 to host gwgsara2 with a duration of 10 seconds and with 256 Kbyte socket and buffer sizes. The server is listening at port 22113. All options besides the port option are specified at the client. The socket and window size options are stream-type specific and should therefore be specified after the argument --.
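The invocation described above could look roughly as follows. The flag names are an assumption based on common Netperf usage (-p for the control port, -s/-S for the local/remote socket sizes, -m for the message size), so consult the manual of your version:

```shell
# On gwgsara2: netserver listening at control port 22113
netserver -p 22113

# On gwgsara3: 10-second TCP stream test, 256 Kbyte (262144-byte)
# socket and buffer sizes, given after the -- separator
netperf -H gwgsara2 -p 22113 -l 10 -t TCP_STREAM -- -s 262144 -S 262144 -m 262144
```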


The Iperf tool is also a TCP and UDP benchmark. Because shaping has been implemented in Iperf, the tool is also usable for UDP. Among other protocols, multicast is also supported. See also the User Docs for more information.


In contrast to Netperf, the Iperf toolkit consists of a combined server/client program named iperf. This implies that, unlike with Netperf, the server-side options should be specified directly to the server instance of the program. Server-oriented output is not sent back to the client either, but remains at the server console. This also implies that only a test socket is opened and no control socket.


This approach has the following advantages: the implementation of the server is relatively simple, and the output of the client and the server are independent, so the output can be adjusted more flexibly to the desired type of performance traffic.
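A matching Iperf session might look as follows, reusing the host names from the Netperf example above (classic iperf2 option names; iperf3 differs slightly):

```shell
# On the server: UDP server on port 22113
# (server-side options are given at the server itself)
iperf -s -u -p 22113

# On the client: 10-second UDP test at a 10 Mbit/s target rate
iperf -c gwgsara2 -u -p 22113 -b 10M -t 10
```

Note that the UDP report, including packet loss and jitter, appears at the server console rather than being sent back to the client.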
