HTTP persistent connection, also called HTTP keep-alive, or HTTP connection reuse, is the idea of using a single TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new connection for every single request/response pair. The newer HTTP/2 protocol uses the same idea and takes it further to allow multiple concurrent requests/responses to be multiplexed over a single connection.
Since at least late 1995,[2] developers of popular products (browsers, web servers, etc.) using HTTP/1.0 started to add an unofficial extension to the protocol named "keep-alive" in order to allow the reuse of a connection for multiple request/response exchanges.[3][4]
When the server receives a request carrying this header and generates a response, if it supports keep-alive it adds the same Connection: keep-alive header to the response. The connection is then kept open rather than dropped, and the client sends its next request over the same connection.
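The negotiation described here, together with the HTTP/1.1 default discussed later in this document, can be sketched as a small predicate. This is a hypothetical helper for illustration, not part of any real HTTP library:

```python
def connection_persists(version, request_headers, response_headers):
    """Decide whether the TCP connection stays open after this exchange.

    HTTP/1.0 requires both sides to opt in with "Connection: keep-alive";
    HTTP/1.1 persists by default unless either side sends
    "Connection: close".
    """
    req = request_headers.get("Connection", "").lower()
    resp = response_headers.get("Connection", "").lower()
    if version == "HTTP/1.0":
        return req == "keep-alive" and resp == "keep-alive"
    if version == "HTTP/1.1":
        return req != "close" and resp != "close"
    return False  # HTTP/0.9 and unknown versions: always close
```

For example, an HTTP/1.0 exchange with no headers closes the connection, while a bare HTTP/1.1 exchange keeps it open.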
Since 1997, the various versions of HTTP/1.1 specifications acknowledged the usage of this unofficial extension and included a few caveats regarding the interoperability between HTTP/1.0 (keep-alive) and HTTP/1.1 clients / servers.[5]
In HTTP 1.1, all connections are considered persistent unless declared otherwise.[5] HTTP persistent connections do not use separate keepalive messages; they simply allow multiple requests to share a single connection. However, the default connection timeout of Apache httpd 1.3 and 2.0 is as little as 15 seconds[6][7] and just 5 seconds for Apache httpd 2.2 and above.[8][9] The advantage of a short timeout is the ability to deliver multiple components of a web page quickly while not consuming resources to run multiple server processes or threads for too long.[10]
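For Apache httpd, the relevant directives look like this. The values are illustrative; 5 seconds matches the post-2.2 default mentioned above:

```apache
# httpd.conf -- keep-alive tuning (values are illustrative)
KeepAlive On              # allow persistent connections
MaxKeepAliveRequests 100  # requests allowed per connection before closing
KeepAliveTimeout 5        # seconds to wait for the next request
```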
Keepalive makes it difficult for the client to determine where one response ends and the next response begins, particularly during pipelined HTTP operation.[11] This is a serious problem when Content-Length cannot be used due to streaming.[12] To solve this problem, HTTP 1.1 introduced chunked transfer coding, which terminates each message with a special zero-length "last chunk".[13] When the client reads the last chunk, it knows the current response has ended and the next response can begin.
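The mechanism can be illustrated with a minimal decoder for the chunked format (a sketch, not a production parser): each chunk is a hexadecimal size line followed by that many bytes, and a chunk of size zero marks the end of the message.

```python
def decode_chunked(data: bytes) -> bytes:
    """Decode an HTTP/1.1 chunked-encoded body.

    Each chunk is "<hex size>\r\n<size bytes>\r\n"; a chunk of size 0
    terminates the message, so the reader knows the response has ended.
    Trailers and chunk extensions are ignored in this sketch.
    """
    body = bytearray()
    pos = 0
    while True:
        line_end = data.index(b"\r\n", pos)
        size = int(data[pos:line_end].split(b";")[0], 16)
        pos = line_end + 2
        if size == 0:          # the "last chunk": end of this response
            break
        body += data[pos:pos + size]
        pos += size + 2        # skip chunk data and its trailing CRLF
    return bytes(body)
```

Decoding `b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"` yields `b"Wikipedia"`; the `0` chunk tells the client that anything following belongs to the next response.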
If the client does not close the connection when all of the data it needs has been received, the resources needed to keep the connection open on the server will be unavailable for other clients. How much this affects the server's availability and how long the resources are unavailable depend on the server's architecture and configuration.
Keep-alives were added to HTTP primarily to reduce the significant overhead of rapidly creating and closing a socket connection for each new request. The following is a summary of how it works in HTTP 1.0 and 1.1:
There is overhead in establishing a new TCP connection (DNS lookup, TCP handshake, SSL/TLS handshake, etc.). Without keep-alive, every HTTP request has to establish a new TCP connection and then close it once the response has been sent and received. Keep-alive allows an existing TCP connection to be reused for multiple requests/responses, thus avoiding all of that overhead. That is what makes the connection "persistent".
In HTTP 0.9 and 1.0, by default the server closes its end of a TCP connection after sending a response to a client. The client must close its end of the TCP connection after receiving the response. In HTTP 1.0 (but not in 0.9), a client can explicitly ask the server not to close its end of the connection by including a Connection: keep-alive header in the request. If the server agrees, it includes a Connection: keep-alive header in the response, and does not close its end of the connection. The client may then re-use the same TCP connection to send its next request.
In HTTP 1.1, keep-alive is the default behavior, unless the client explicitly asks the server to close the connection by including a Connection: close header in its request, or the server decides to include a Connection: close header in its response.
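A quick way to observe this default is with Python's standard library: start a throwaway local HTTP/1.1 server and send two requests through one `http.client` connection. The handler and addresses below are invented for the demo; the point is that the underlying socket object is reused:

```python
import threading
import http.client
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"        # persistent connections by default

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):        # silence request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/")
first = conn.getresponse().read()        # must drain before reusing
sock_after_first = conn.sock             # still open: kept alive
conn.request("GET", "/")                 # reuses the same TCP connection
second = conn.getresponse().read()
same_socket = conn.sock is sock_after_first
conn.close()
server.shutdown()
```

Because neither side sent Connection: close, `same_socket` is true: both requests travelled over one TCP connection.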
Making a phone call and ending it takes time and resources. Control data (like the phone number) must transit over the network. It would be more efficient to make a single phone call to get the page and the two images. That is what keep-alive allows: the separate calls above collapse into a single call.
Most other Internet client-server protocols (HTTP, Telnet, SSH, SMTP) are layered on top of TCP. A client opens a connection (a socket), writes its request to it (transmitted as one or more packets in the underlying IP layer), and reads the response from the socket (which may likewise span multiple IP packets). Then there is a choice: keep the connection open for the next request, or close it. Pre-keep-alive HTTP always closed the connection; newer clients and servers can keep it open.
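At the socket level, the sequence in that paragraph looks roughly like this sketch. The local server and the ad hoc `read_response` helper are invented for the demo; note that both requests are written to, and both responses read from, one socket:

```python
import socket
import threading
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler

class OkHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass

def read_response(reader):
    """Read one HTTP response: status line, headers, then the body."""
    status = reader.readline()
    length = 0
    while True:
        line = reader.readline()
        if line in (b"\r\n", b""):       # blank line ends the headers
            break
        if line.lower().startswith(b"content-length:"):
            length = int(line.split(b":", 1)[1])
    return status, reader.read(length)

server = ThreadingHTTPServer(("127.0.0.1", 0), OkHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

sock = socket.create_connection(server.server_address)  # one TCP connection
reader = sock.makefile("rb")
request = b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n"
sock.sendall(request)                    # write first request
status1, body1 = read_response(reader)   # read first response
sock.sendall(request)                    # second request, same connection
status2, body2 = read_response(reader)
sock.close()
server.shutdown()
```

With a pre-keep-alive server, the second `sendall` would fail because the server would already have closed its end after the first response.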
Keep-alive exemplifies The Law of Leaky Abstractions. While HTTP is intentionally designed as a stateless protocol, it is built upon TCP, which is inherently stateful. As a result, we must make certain compromises to prevent performance drawbacks.
I ran into this issue while communicating with a Siemens PLC over OPC UA, but it can be generalized, since the problem comes from how Ignition handles the session timeout and the keep-alive timeout.
I was periodically getting disconnected by the PLC with a SessionInvalid error, so I did some investigation and found that the PLC was setting the revised session timeout to a value different from the one requested (which is standard OPC UA behavior), but that revised value was nonetheless greater than the Keep-Alive Timeout value set in the UA Connection settings.
So Ignition requests a session with a specified timeout, and the PLC creates it with a revised timeout; if you are not aware of this revised value, you may set the keep-alive timeout to an inconsistent value, which can lead to frequent disconnections because the keep-alive is issued after the session has already expired.
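The failure mode can be expressed as a simple invariant. The function names are hypothetical, and the 0.75 safety factor is an arbitrary illustration, not an Ignition or OPC UA constant:

```python
def effective_keep_alive_ms(revised_session_timeout_ms):
    """Derive a keep-alive timeout from the server's *revised* session
    timeout, never from the value the client originally requested.

    0.75 is an arbitrary safety factor for illustration: the keep-alive
    must fire with some margin before the session expires.
    """
    return int(revised_session_timeout_ms * 0.75)

def keep_alive_is_safe(keep_alive_ms, revised_session_timeout_ms):
    """If this is False, the keep-alive fires only after the session has
    already expired, producing SessionInvalid disconnections."""
    return keep_alive_ms < revised_session_timeout_ms
```

For example, if the client requests a 60 s session but the server revises it down to 30 s, a keep-alive timeout derived from the requested value (say 40 s) violates the invariant, while one derived from the revised value (22.5 s) does not.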
I solved the issue by setting the keep-alive timeout according to the session timeout, but wouldn't it be better to have Ignition (Milo, in this case) set the keep-alive timeout according to the revised session timeout (as Kepware does, for example), or to offer a choice between a fixed and a pre-calculated value?
The default settings are set with 1 and 2 in mind, and since 3 isn't really a goal and the defaults will keep any reasonably configured session alive, changing them in response to a revised session timeout doesn't really make sense IMO.
I'm trying to understand how the keep-alive timeout settings interact with each other. I'm running an Apache website on two app servers living behind a haproxy load balancer. Right now I don't have keep-alive enabled, but I've been experimenting with enabling it because I think it would help optimize the site. My goal was to enable keep-alive for connections between the browser and haproxy, but disable keep-alive between haproxy and Apache. I accomplished this with
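The configuration itself is not shown in the post; one common way to get client-side keep-alive with server-side close in haproxy is `option http-server-close`. A sketch, with illustrative section contents:

```haproxy
# haproxy.cfg -- keep-alive toward the browser, close toward Apache
defaults
    mode http
    option http-server-close   # keep the client connection open,
                               # close the server-side connection
                               # after each response
```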
Now I'm looking into setting up the keep-alive timeouts. I've been studying the haproxy manual for the timeout http-request option, timeout http-keep-alive option, and timeout server option. If I'm understanding the manual correctly, timeout http-keep-alive is the time to keep the connection open between new requests and timeout http-request is the time to wait for the response's headers before closing the connection. But what I can't seem to figure out is what timeout server dictates. I want to say that timeout server is the time to wait for the full response, but can anyone confirm that? If I'm right that timeout server is the time to wait for the full response, then am I correct that it shouldn't have any bearing on the keep-alive timeout settings?
timeout http-request: This is the time from the first client byte received until the last byte sent to the client (regardless of keep-alive). So if your backend is too slow or the client is sending its request too slowly, the whole exchange may take longer than this, and the request is dropped (and a timeout sent to the client).
timeout http-keep-alive: This is the time to keep a connection open between haproxy and the client (after the client's response is sent out). It has nothing to do with the backend response time, and nothing to do with the length of a single request (i.e. the http-request timeout). It allows faster responses when the user requests multiple resources (e.g. html, img, and js files), because with keep-alive the individual requests can reuse the same TCP connection. This reduces the load time for a full web page.
timeout server: This is the timeout for your backend servers. When it is reached, haproxy replies with a 504 (gateway timeout). It also has nothing to do with keep-alive, as it only concerns the connection between the proxy and the backend.
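Put together, the three timeouts discussed above might be configured like this (the values are illustrative, not recommendations):

```haproxy
defaults
    mode http
    timeout http-request    10s   # max time to receive a complete request
    timeout http-keep-alive 4s    # idle time allowed between requests
    timeout server          30s   # max wait on the backend before a 504
```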
With this setup, we would be able to use all of the interfaces for both the peer link and keep-alive, which seems more logical and safe. As far as I can tell, the keep-alive packet must be just a small TCP packet, nothing more, sent about once a second or so, I suppose. So I really do not understand why we have to use a separate keep-alive link. What is the logic behind this?
Regarding the VPC keep-alive link, it is recommended to have a separate link or links dedicated solely for VPC keep-alive traffic. The purpose of the keep-alive link is to provide a reliable and independent communication path between the two Nexus switches for the VPC peer-link heartbeat and VPC consistency checks.
The keep-alive link is crucial for detecting failures and ensuring proper coordination between the switches in the VPC. It helps prevent scenarios where the VPC peers might incorrectly assume that the other peer is down due to other network issues affecting the data traffic. By having a separate link for keep-alive, it helps maintain the integrity and stability of the VPC operation.
It is generally recommended to dedicate separate links specifically for this purpose. This separation ensures that the keep-alive traffic is isolated and does not interfere with, or get affected by, the regular data traffic flowing over the VPC links. Dedicated keep-alive links also provide better control and visibility over the VPC keep-alive traffic, making troubleshooting and monitoring easier. It is therefore best practice to use separate links for VPC keep-alive traffic, even if that means consuming additional ports or interfaces on the switches.