Hello All,
I am tracking the congestion window of a TCP Reno source. After the initial slow-start phase, the source enters the congestion avoidance phase. Theoretically, on detecting a packet loss the congestion window is halved, and during congestion avoidance it grows by one segment every RTT. But the congestion window graph obtained experimentally (shown below) suggests something else. After every loss the source re-enters slow start (visible as the spikes in the graph), and congestion avoidance resumes only around the next packet loss. Why is there a difference between the theory and the implementation? Am I missing some important detail here?
Experimental observation (cwnd vs. time): [graph omitted]
Ideal congestion window profile, taken from the internet: [graph omitted]
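To make sure we are talking about the same model, here is a minimal sketch (Python; the function name, parameters, and loss positions are my own, purely for illustration) of the idealized Reno behaviour I expected: halve the window on each loss, add one segment per loss-free RTT.

def ideal_reno_cwnd(rtts, loss_rtts, initial_cwnd=1.0):
    """Return the idealized cwnd (in segments), sampled once per RTT.

    Assumes pure AIMD congestion avoidance: multiplicative decrease
    on loss, additive increase otherwise. Slow start is ignored.
    """
    cwnd = initial_cwnd
    trace = []
    for rtt in range(rtts):
        if rtt in loss_rtts:
            cwnd = max(cwnd / 2.0, 1.0)   # halve the window on a loss
        else:
            cwnd += 1.0                   # +1 segment per loss-free RTT
        trace.append(cwnd)
    return trace

if __name__ == "__main__":
    # Losses at arbitrary RTTs, just to show the expected sawtooth shape.
    print(ideal_reno_cwnd(30, loss_rtts={10, 20}))

Plotting this trace gives the classic sawtooth, which is what I expected to see instead of the slow-start spikes in my experimental graph.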
Any pointers in this regard would be a great help!
Thanks in advance,
Vineet