TCP Receive Window Size Auto-Tuning

Pablo Barjavel

Apr 27, 2024, 5:40:45 AM
to fisogasapp

I'm trying to find a setting that would force my client to advertise a fixed receive window. I tried giving the same value to net.core.rmem_max (/proc/sys/net/core/rmem_max) and net.core.rmem_default (/proc/sys/net/core/rmem_default), as well as net.ipv4.tcp_rmem, but when I check the advertised window in Wireshark, nothing changes at all.
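One likely explanation, assuming a Linux client: the rmem sysctls only bound the auto-tuner, they don't pin the window. Per the tcp(7) man page, receive-buffer auto-tuning is disabled for a socket when the application sets SO_RCVBUF explicitly before connecting, so the fix is usually per-socket rather than system-wide. A minimal sketch with Python's standard socket module:

```python
import socket

# Sketch: pinning the receive buffer on a Linux client socket.
# Per tcp(7), setting SO_RCVBUF before connect() disables receive-buffer
# auto-tuning for that socket; the advertised window is then derived from
# this fixed buffer instead of being grown dynamically. net.core.rmem_max
# still caps the value the kernel will accept.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65536)

# The Linux kernel doubles the requested value to leave room for its own
# bookkeeping, so reading the option back typically shows 131072 here.
actual = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(actual)
sock.close()
```

If the application can't be changed, there is no sysctl that forces a fixed advertised window for all sockets; the sysctls only set the floor and ceiling the auto-tuner works within.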

The Receive Window Auto-Tuning feature lets the operating system continually monitor conditions such as bandwidth, network delay, and application delay. The operating system can then configure connections by scaling the TCP receive window to maximize network performance. To determine the optimal receive window size, the feature measures the bandwidth-delay product and the rate at which the application retrieves data. It then adapts the receive window size of the ongoing transmission to take advantage of any unused bandwidth.

In Windows Vista, Windows Server 2008, and later versions of Windows, the Windows network stack uses a feature named TCP receive window autotuning level to negotiate the TCP receive window size. This feature can negotiate a defined receive window size for every TCP connection during the TCP handshake.

In earlier versions of Windows, the Windows network stack used a fixed-size receive window (65,535 bytes), which capped the potential throughput of every connection and limited network usage in high-bandwidth scenarios. TCP receive window autotuning enables these scenarios to fully use the network.

For example, for a connection that has a latency of 10 ms (typical within a large corporate network infrastructure), a fixed 65,535-byte window limits the total achievable throughput to only about 51 Mbps. By using autotuning to grow the receive window, the same connection can achieve the full line rate of a 1-Gbps link.
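That ~51 Mbps figure follows directly from the window-limited throughput bound: a sender can have at most one receive window of data in flight per round trip. A quick check (the exact figure quoted depends on whether decimal or binary megabits are used):

```python
# Window-limited TCP throughput: at most one receive window per round trip.
window_bytes = 65535   # the pre-Vista fixed receive window
rtt_seconds = 0.010    # the 10 ms latency from the example above

throughput_bps = window_bytes * 8 / rtt_seconds
print(f"{throughput_bps / 1e6:.1f} Mbit/s")  # ~52.4 Mbit/s, i.e. the ~51 Mbps ballpark
```

To saturate a 1-Gbps link at 10 ms RTT, the window would instead need to cover the bandwidth-delay product, about 1.25 MB, which is exactly what autotuning grows it toward.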

This feature also makes full use of other features to improve network performance. These include the other TCP options defined in RFC 1323. By using these options, Windows-based computers can advertise receive window values that are smaller on the wire but scaled by a negotiated factor, depending on the configuration. This behavior makes the sizes easier for networking devices to handle.

Starting with Windows 10 and Windows Server 2019, you can no longer use the registry to configure the TCP receive window size as you could in earlier versions of Windows. For more information about the deprecated settings, see Deprecated TCP parameters.

Down at the end of the document there's a section on doing some TCP testing with Server 2008. They use a specific utility, NTttcp, to do benchmarking. One of the options on that tool sets the window size.



This article describes how to disable or enable the transmission control protocol (TCP) window autotuning diagnostic tool in Vista. The autotuning diagnostic tool diagnoses and fixes problems related to autotuning. However, in some cases (for example, network performance testing) you may want to disable the diagnostic tool. This article describes the autotuning feature, the diagnostic tool, and how they help prevent problems.


The TCP receive window autotuning feature in Windows Vista lets the operating system continually monitor conditions such as bandwidth, network delay, and application delay. Based on these parameters, the operating system can configure connections by scaling the TCP receive window to maximize network performance. To determine the optimal size for the receive window, the feature measures the product of bandwidth and network delay, and also looks at the rate at which the application retrieves data. Then it changes the receive window size of the ongoing transmission to take advantage of any unused bandwidth.



When the receive window autotuning feature is enabled, older routers, firewalls, and operating systems that are incompatible with it may sometimes cause slow data transfer or a loss of connectivity between Windows Vista clients. When this occurs, users may experience slow performance, or applications may crash. These older devices do not comply with the RFC 1323 standard. Some device manufacturers provide software that works around the hardware limitations.


Note: Contact the device manufacturer to determine whether this kind of software is available.


If the incompatible devices are outside your organization, and you cannot change the devices, this issue will remain. In order to determine whether the issue is caused by faulty firewalls, Vista SP1 contains an autotuning diagnostic tool that determines whether a faulty device is in the path from your computer. If the diagnostic tool detects a faulty device on the network, it reduces the degree of optimization performed by the receive window autotuning feature. The user may not notice that the window autotuning feature was turned down and may continue to use the computer as before. However, the state of the computer is changed. In order to control the state of the autotuning feature, the user may want to disable the diagnostic tool. Doing this lets them control the state of the receive window and lets them know what throughput to expect.

HKEY_LOCAL_MACHINE\Comm\Tcpip\Parms
TcpAutoTuningLevel

This registry value configures the window scaling strategy. When this value is set to 0 (TcpAutoTuningOff), the window scaling feature is disabled. This limits the maximum TCP receive window to 65,535 bytes. If you want the system to use a TCP receive window larger than 65,535 bytes, set this value to a value greater than 0. The default value is 3 (TcpAutoTuningNormal).


TcpWindowSize

This registry value specifies the initial TCP receive window size. Be aware that this is not the final receive window size. The final receive window size is calculated by considering other parameters, such as network adapter link speed, and may be 10 times the initial receive window size or more.

Clearly the link can sustain this high throughput, but I have to explicitly set the window size to make any use of it, which most real-world applications won't let me do. The TCP handshakes use the same starting points in each case, but the forced one scales up.


The existing algorithms that prevent a sending TCP peer from overwhelming the network are known as slow start and congestion avoidance. These algorithms increase the number of segments that the sender can send, known as the send window, when initially sending data on the connection and when recovering from a lost segment. Slow start increases the send window by one full TCP segment for each acknowledgement segment received (TCP in Windows XP and Windows Server 2003) or for each segment acknowledged (TCP in Windows Vista and Windows Server 2008). Congestion avoidance increases the send window by one full TCP segment for each full window of data that is acknowledged.

These algorithms work well for LAN media speeds and smaller TCP window sizes. However, when you have a TCP connection with a large receive window size and a large bandwidth-delay product (high bandwidth and high delay), such as replicating data between two servers located across a high-speed WAN link with a 100 ms round trip time, these algorithms do not increase the send window fast enough to fully utilize the bandwidth of the connection. For example, on a 1 Gigabit per second (Gbps) WAN link with a 100 ms round trip time (RTT), it can take up to an hour for the send window to initially increase to the large window size being advertised by the receiver and to recover when there are lost segments.
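The scale of the problem is easy to see from the numbers in that example. A sketch of the arithmetic, assuming a 1460-byte MSS (a common Ethernet value, used here only for illustration):

```python
# Bandwidth-delay product for the WAN example above: the amount of data
# that must be in flight to keep a 1 Gbps link with a 100 ms RTT full.
bandwidth_bps = 1e9
rtt = 0.100
bdp_bytes = bandwidth_bps / 8 * rtt
print(f"BDP: {bdp_bytes / 1e6:.1f} MB")  # 12.5 MB

# Rough time for classic congestion avoidance alone to grow the send
# window from a 64 KB starting point to the BDP, at +1 MSS per RTT.
mss = 1460  # assumed MSS, illustrative only
rtts_needed = (bdp_bytes - 65536) / mss
print(f"~{rtts_needed * rtt / 60:.0f} minutes")  # roughly 14 minutes
```

Additive increase alone already takes on the order of a quarter of an hour to open the window; once you account for losses along the way (each of which halves the window and restarts the climb), the hour-scale figure quoted above is plausible.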

To better utilize the bandwidth of TCP connections in these situations, the Next Generation TCP/IP stack includes Compound TCP (CTCP). CTCP more aggressively increases the send window for connections with large receive window sizes and large bandwidth-delay products. CTCP attempts to maximize throughput on these types of connections by monitoring delay variations and losses. CTCP also ensures that its behavior does not negatively impact other TCP connections.

My analysis is that the sender isn't sending fast enough because the send window (i.e., the congestion window) isn't opening up enough to satisfy the RWIN of the receiver. So, in short, the receiver says "give me more," but when Windows is the sender, it isn't sending fast enough.

There's been some great info here by @Pat and @Kyle. Definitely pay attention to @Kyle's explanation of the TCP receive and send windows; I think there has been some confusion around that. To confuse matters further, iperf uses the term "TCP window" for its -w setting, which is an ambiguous term with regard to the receive, send, or overall sliding window. What it actually does is set the socket send buffer for the -c (client) instance and the socket receive buffer on the -s (server) instance. In src/tcp_window_size.c:
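In other words, -w boils down to a setsockopt() call on the appropriate buffer before the connection is established. A sketch of that behavior in Python (illustrative only; the function name and structure here are not iperf's actual code):

```python
import socket

def set_tcp_window(sock: socket.socket, requested: int, sending: bool) -> int:
    """Sketch of what iperf's -w flag does: set the socket send buffer on
    the client (-c) side or the receive buffer on the server (-s) side,
    then read back what the kernel actually granted (Linux doubles the
    requested value for bookkeeping overhead). Illustrative only."""
    opt = socket.SO_SNDBUF if sending else socket.SO_RCVBUF
    sock.setsockopt(socket.SOL_SOCKET, opt, requested)
    return sock.getsockopt(socket.SOL_SOCKET, opt)

# e.g. the client side of "iperf -c <host> -w 128K" would correspond to:
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
granted = set_tcp_window(client, 128 * 1024, sending=True)
print(granted)
client.close()
```

This is also why iperf often reports a different window than the one you asked for: it prints the value the kernel granted, not the value you requested.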
