TCP SegmentSize problem


PN

Nov 20, 2015, 9:54:27 PM
to ns-3-users
I've written a net device module for my research and I'm trying to run TCP simulations on it (the module has been tested in many scenarios to make sure it has no bugs). I'm running the following simulation: I have one access point (AP) and one station. The BulkSend application is installed on the AP and sends packets over a TCP socket. The segment size for the TCP socket has been set to 1024 bytes. However, instead of receiving 1024-byte segments, the station most of the time receives segments of 512 bytes and only sometimes 1024 bytes. So essentially a single 1024-byte segment is split into two segments of 512 bytes, and this does not happen all the time. In my simulation there is no channel error and no packet loss, so basically TCP's window size never falls (I've verified this). Is this normal TCP behaviour? Also, the packets enqueued at the access point are already 512 bytes (plus headers), so my module is not breaking the segments.

I tried to track down where the error might be coming from. I checked two different versions of ns-3: 3.20 and 3.21. Interestingly, 3.20 does not have this problem but 3.21 does. The one thing that changed between the two versions is TCP's window scaling feature (I found this in the release notes). Looking further, I found the following: in tcp-socket-base.cc there is a line where the available window 'w' is calculated. Following this, there is an if condition:

if (w < m_segmentSize && m_txBuffer.SizeFromSequence (m_nextTxSequence) > w)
  {
    break; // No more
  }

Now in 3.21, w's value increases from 0 to 1, 10, 19, etc., whereas in ns-3.20 it only seems to increase in multiples of the segment size, i.e. 1024. When a 512-byte segment is generated, m_txBuffer.SizeFromSequence (m_nextTxSequence) is 512 and w is also 512, so it makes sense that TCP never enters the if block and sends a 512-byte packet. Even if the segment size is set to 1500, segments are still split into 512 bytes. w is calculated by a call to AvailableWindow(), which calls Window(), which returns m_rWnd. In DoForwardUp(), m_rWnd is changed by the following operation:

m_rWnd <<= m_rcvScaleFactor;

Between ns-3.20 and 3.21 there is no change to the BulkSend application, so this is coming from tcp-socket-base itself. Also, the above change to m_rWnd is not present in ns-3.20.

Also, from the release notes it does not seem that this problem was reported or fixed in any subsequent version.

I'm having some difficulties using my module with ns-3.24 because some of the syntax I'm using gives errors in 3.24. I'm working on it, but if you could provide me with some insight into what's going on, or how I can fix this (if it is a bug), that would be great and would save me a lot of time on changing the syntax! Many thanks!

Nat P

Nov 21, 2015, 5:19:03 AM
to ns-3-users
On Saturday, November 21, 2015 at 3:54:27 AM UTC+1, PN wrote:
So essentially a single 1024 byte segment is split into two segments of 512 bytes. And this does not happen all the time. Now in my simulation, there is no channel error, no packet loss, and so basically tcp's window size never falls (I've verified this). So is this normal tcp behaviour?

Fast response: yes. MSS indicates the "maximum" segment size you can send, but nothing prevents TCP from sending smaller segments. This has an impact on performance, of course, so TCP tries to send segments of the maximum size, except when... see below.
 
I tried to track down where the error might be coming from. I checked two different versions of ns-3: 3.20 and 3.21. Interestingly, 3.20 does not have this problem but 3.21 does. The one thing that changed between the two versions is TCP's window scaling feature (I found this in the release notes). Looking further, I found the following: in tcp-socket-base.cc there is a line where the available window 'w' is calculated. Following this, there is an if condition:

if (w < m_segmentSize && m_txBuffer.SizeFromSequence (m_nextTxSequence) > w)
  {
    break; // No more
  }

Now in 3.21, w's value increases from 0 to 1, 10, 19, etc., whereas in ns-3.20 it only seems to increase in multiples of the segment size, i.e. 1024. When a 512-byte segment is generated, m_txBuffer.SizeFromSequence (m_nextTxSequence) is 512 and w is also 512, so it makes sense that TCP never enters the if block and sends a 512-byte packet.

Your analysis is correct, but the last sentence contains an error: TCP doesn't generate the segment size; that is the application's behaviour. Let me explain with a diagram:

-> App sends down 512 bytes with Send()
-> TCP has cWnd = 1024 and unacked = 0, so w = 1024
-> the if condition is not true, so we continue
-> the buffer holds 512 bytes, which is less than 1024
-> given that, TCP sends 512 bytes

Then the ACK will acknowledge 512 bytes, so the window will advance by 512 bytes, and sometimes (delayed ACK is in place, so one cumulative ACK covers 2 segments) by 1024.

This is perfectly fine, and it is funny that the first thing that comes to a researcher's mind (mine too, very often) is "I FOUND A BUG". Simply stated, we very often have only academic knowledge of how a protocol works; in reality everything is much more subtle and many concurrent factors come into play.

How to resolve this? By sending, from the application, larger chunks of data (> 1024 bytes).

I hope I have explained it; if you have other doubts, feel free to ask.

Nat

Tommaso Pecorella

Nov 21, 2015, 6:24:08 AM
to ns-3-users
I could add: test your system with 3.24.1. Nat did an enormous amount of work on the TCP refactoring, and these issues could have been removed, or there could be other causes. Still, you should always use the latest version.

Moreover, I'd strongly suggest you think twice about how you're using TCP. TCP is a stream protocol. No matter how you generate your data, there is always a chance that it will split the data into two or more packets, or that it will join data from multiple sends.
L7 PDU boundary detection must not be based on the assumption that the TCP receive call is symmetric with the send call, because this is plainly wrong.

Translated: you must always have the L7 (application-level) receive call do something like:
while (the PDU is not complete)
{
  receive some data and append it to the previously received data
}
process the complete PDU and clear the L7 rx buffer
put the extra data from the last receive call into the buffer (because they belong to the next PDU)

Of course the above will have to cope with the fact that the socket might not have more data available, etc. Nothing dramatic, though.
It's funny that most courses don't teach such a basic thing, isn't it?

Cheers,

T.