Unexpected behavior of TCP Reno in INET


John Blue

Dec 13, 2021, 10:44:49 AM
to OMNeT++ Users

Hi all,


Recently I decided to try to recreate some TCP diagrams from Emory University (site), more specifically the congestion window of TCP Tahoe and TCP Reno over time. In the simulations I use small numbers in order to better understand the behavior of each TCP variant. My setup consists of two connected standard hosts with a 10 ms link delay, a queue capacity of Ppp.queue.packetCapacity = 12, and no packet errors on the link. Also, in TcpTahoeRenoFamily I change the ssthresh to 16 packets times the tcp.mss that I have set in the .ini file. A simplified sketch of this configuration is shown below.
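Roughly, the relevant part of my omnetpp.ini looks like the sketch below (the network name and module paths are simplified here and may not match the attached files exactly; the 10 ms delay is set on the channel in the NED file, and the ssthresh change is a modification in the TcpTahoeRenoFamily C++ code, not an ini parameter):

    [General]
    network = SimpleTcpNetwork                  # placeholder name for the two-host network
    **.ppp[*].ppp.queue.packetCapacity = 12     # limit the PPP queue as described above
    **.tcp.mss = 536                            # example value only; 16 * mss is used as the initial ssthresh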

(In my diagrams, the in-flight data is the cwnd.)

TCP_Tahoe.png

Simple_TCP_Exp-1.png

TCP_Reno.png

Simple_TCP_Reno_Exp-1.png

(Raw data from OMNeT++, without cleaning)

Simple_TCP_Reno_Exp_Clean-1.png

(Cleaned data from OMNeT++. More specifically, for cwnd I keep the maximum value at each time a measurement is taken.)


For the two experiments I use the same setup that I mention above. TCP Tahoe shows the expected behavior, as you can see from the attached diagrams. On the other hand, TCP Reno behaves strangely. TCP Reno implements the Fast Recovery algorithm, so when it loses a packet it reduces ssthresh to half of the current cwnd, retransmits the lost packet, and enters the congestion avoidance algorithm. As you can see from the diagrams, TCP Reno shows some weird spikes above the initial ssthresh whenever it loses a packet. I can't understand why TCP Reno repeatedly sends more data than the initial ssthresh. Can someone explain what is going on? I have attached all the necessary files for the above experiments together with the result files. (I don't believe that I have done anything wrong.) My OMNeT++ version is 5.6 and my INET version is inet-4.2.2-5537bcd677.
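The only difference between the two experiments is the TCP flavour; a minimal sketch of what I mean (the config names here are only illustrative, the real files are in the attached archive):

    [Config Tahoe]
    **.tcp.tcpAlgorithmClass = "TcpTahoe"

    [Config Reno]
    **.tcp.tcpAlgorithmClass = "TcpReno"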

I would appreciate it if someone could check it.

Best regards,
Ioannis Aggelis

TCP_reno.rar