Hello,
I'm currently simulating a TCP-based application under heavy packet loss. After a while, many nodes start disconnecting (with error ERROR_NOTERROR) because they have exhausted the allowed number of TCP retransmission timeouts (m_dataRetrCount == 0 in tcp-socket-base.cc).
While this is probably the expected behavior, I wonder whether the ns-3 defaults are sensibly chosen, or at least what the rationale behind them is.
Currently, the 'DataRetries' Attribute is initialized to 6 and 'ConnTimeout' to 3 seconds in tcp-socket.cc. At the point of the disconnect, the current m_rto value is 8 seconds. Am I mistaken to assume that this would correspond to a maximum total timeout of 6*8 = 48 seconds?
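To make my arithmetic explicit, here is a small sketch (plain Python, not ns-3 code; the second variant assumes RFC 6298-style doubling of the RTO per retry, which may or may not apply once m_rto has already grown to 8 seconds):

```python
# Hypothetical illustration of the total retransmission timeout,
# using the defaults mentioned above: DataRetries = 6, current RTO = 8 s.

def total_timeout_constant(retries, rto):
    """Total wait if the RTO stays fixed across all retries."""
    return retries * rto

def total_timeout_backoff(retries, rto):
    """Total wait if the RTO doubles after each retransmission
    (RFC 6298 exponential backoff)."""
    return sum(rto * 2**i for i in range(retries))

print(total_timeout_constant(6, 8.0))  # 48.0 seconds
print(total_timeout_backoff(6, 8.0))   # 504.0 seconds
```

Even the backoff variant would only reach ~8.4 minutes, still well below the Linux default quoted below.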
If that is correct, what is the rationale behind it? RFC 1122 (https://tools.ietf.org/html/rfc1122#page-101) seems to suggest a total value ('R2') of at least 100 seconds, and the tcp(7) manual page under Linux even states:
    tcp_retries2 (integer; default: 15; since Linux 2.2)
        The maximum number of times a TCP packet is retransmitted in
        established state before giving up. The default value is 15,
        which corresponds to a duration of approximately between 13 to
        30 minutes, depending on the retransmission timeout. The
        RFC 1122 specified minimum limit of 100 seconds is typically
        deemed too short.
So, are my observations correct, or am I missing something? Does ns-3 use overly conservative parameters here, or is there a good rationale behind them?
Thanks & Kind Regards,
Elias Rohrer