Hi all,
I'm trying to send a given number of bytes via TCP over a lossy link. I borrowed tutorial/fifth.cc and modified it to use RateErrorModel::ERROR_UNIT_PACKET; the modified script is attached as baseline-tcp.cc. The simulation sends 200 packets of 1200 bytes each (i.e., 240,000 bytes) over a link with a packet loss rate of 0.1. However, GetTotalRx() of the PacketSink shows that only 144,000 bytes are received. I also verified this with the Rx trace of the PacketSink: summing the sizes of the received packets still gives 144,000 bytes. Even more oddly, when I set the ErrorRate to 0.0, I still cannot get 240,000 bytes; I get 158,400 with the attached script.
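For reference, the error-model setup in my script follows the pattern from tutorial/fifth.cc. This is just a sketch of the relevant lines, assuming `devices` is the NetDeviceContainer of the point-to-point link:

```cpp
// Sketch of the error-model setup (following tutorial/fifth.cc).
// Assumption: "devices" is the NetDeviceContainer of the
// point-to-point link; the error model is installed on the
// receiving NetDevice so it drops incoming packets.
Ptr<RateErrorModel> em = CreateObject<RateErrorModel>();
em->SetUnit(RateErrorModel::ERROR_UNIT_PACKET);   // drop whole packets
em->SetAttribute("ErrorRate", DoubleValue(0.1));  // 10% packet loss
devices.Get(1)->SetAttribute("ReceiveErrorModel", PointerValue(em));
```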
I then tried examples/tcp/tcp-bulk-send.cc and added the RateErrorModel there as well. The modified script is attached as lossy-tcp-bulk-send.cc. Again, I cannot get 240,000 bytes by running
./waf --run "lossy-tcp-bulk-send --maxBytes=240000"
with the ErrorRate set to 0.1. However, I do get 240,000 bytes when I set the ErrorRate to 0.0 in that script.
The results confuse me. I thought TCP was a reliable protocol, so introducing the ErrorModel should only affect the completion time and the cwnd evolution, not the total bytes delivered. Do I have a misunderstanding of the RateErrorModel (or of other parts of ns-3), or are my scripts incorrect? Can anyone please help me figure this out? Thanks a lot.
Best Regards