Tracing TCP throughput


Antti Mäkelä

Feb 10, 2010, 7:46:43 AM
to ns-3-users
Hey,

How would I trace TCP session throughput? There's a tracesource for
congestion window, but nothing else TCP-specific. I'd like to see
stuff like retransmits due to drops, sequence numbers of packets (and
therefore bytes) as they are being sent (not just inserted into
transmit buffer), and so on. Do I have to start adding Tracesources to
tcp-socket-impl.cc or is there something more readily built-in?

Also, the example tcp-large-transfer.cc notes that the FIN exchange
isn't quite compliant with the TCP spec. I couldn't find the referred
release notes. How would you make it compliant? Right now it seems
that the PacketSink never sends out a FIN. Is it a simple matter of
editing PacketSink's

void PacketSink::HandlePeerClose (Ptr<Socket> socket)
{
  NS_LOG_INFO ("PktSink, peerClose");
}

and adding socket->Close() (and erasing the socket from m_socketList -
although I'd probably change it to a std::set instead of a list for
this..)?
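
Something like this is what I have in mind (just a sketch, assuming the
per-connection socket is kept in m_socketList; I haven't tested it):

void PacketSink::HandlePeerClose (Ptr<Socket> socket)
{
  NS_LOG_INFO ("PktSink, peerClose");
  // Answer the peer's FIN by closing our side as well, so the sender
  // can get out of FIN_WAIT and the connection can fully close.
  socket->Close ();
  // Stop tracking the per-connection socket.
  m_socketList.remove (socket);
}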

Tom Henderson

Feb 11, 2010, 10:01:44 AM
to ns-3-...@googlegroups.com
On 2/10/10 4:46 AM, Antti Mäkelä wrote:
> Hey,
>
> How would I trace TCP session throughput? There's a tracesource for
> congestion window, but nothing else TCP-specific. I'd like to see
> stuff like retransmits due to drops, sequence numbers of packets (and
> therefore bytes) as they are being sent (not just inserted into
> transmit buffer), and so on. Do I have to start adding Tracesources to
> tcp-socket-impl.cc or is there something more readily built-in?

Did you look whether FlowMonitor (in src/contrib) could help?

Also, there are probably several external tools that might apply for
working with the pcap traces. I would look at argus and tcptrace.

>
> Also, example tcp-large-transfer.cc notes that the FIN exchange
> isn't quite compliant with TCP spec. I couldn't find the referred
> release notes. How would you make it compliant (right now it seems
> that the PacketSink never sends out a FIN). Is it a simple matter of
> editing PacketSink's
>
> void PacketSink::HandlePeerClose (Ptr<Socket> socket)
> {
> NS_LOG_INFO("PktSink, peerClose");
> }
>
> and add socket->Close() (and erase the socket from m_socketlist -
> although I'd probably change it to a std::set instead of list for
> this..)?
>

Josh Pelkey and George Riley are working on this. I think that things
have been fixed to some extent since tcp-large-transfer was written; the
FIN is generated in both directions if the applications call close
explicitly. However, there are a few substates in the TCP state machine
that are not supported right now, for certain socket call combinations.
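
For the sending side in a script like tcp-large-transfer, that just means
closing the socket explicitly at some point. A minimal sketch (localSocket
being whatever sending socket the script created; the time is arbitrary):

// Helper so the close can be scheduled as an event
static void
CloseSocket (Ptr<Socket> socket)
{
  socket->Close ();
}

// Schedule an explicit close once the transfer should be finished
Simulator::Schedule (Seconds (10.0), &CloseSocket, localSocket);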

Antti Mäkelä

Feb 11, 2010, 2:09:04 PM
to ns-3-users
On Feb 11, 5:01 pm, Tom Henderson <t...@tomh.org> wrote:
> On 2/10/10 4:46 AM, Antti Mäkelä wrote:
> >    How would I trace TCP session throughput? There's a tracesource for
> > congestion window, but nothing else TCP-specific. I'd like to see
> Did you look whether FlowMonitor (in src/contrib) could help?

No - but it seems to fit. Is the paper referred to in the manual (http://
www.nsnam.org/docs/release/manual.html#SEC183) available somewhere - I
couldn't find it in IEEE Xplore, at least. The only other docs seem to be
the stuff in Doxygen, so examples would be good..

> Also, there are probably several external tools that might apply for
> working with the pcap traces.  I would look at argus and tcptrace.

I'd rather avoid pcaps for this one since the dumps tend to become
huge.

> >    Also, example tcp-large-transfer.cc notes that the FIN exchange
> > isn't quite compliant with TCP spec. I couldn't find the referred

> Josh Pelkey and George Riley are working on this.  I think that things
> have been fixed to some extent since tcp-large-transfer was written; the
> FIN is generated in both directions if the applications call close
> explicitly.  However, there are a few substates in the TCP state machine
> that are not supported right now, for certain socket call combinations.

Yes, but right now PacketSink does NOT call Close upon close, so the
sending socket gets stuck in FIN_WAIT_1 (and I guess Packetsink socket
that got allocated for the flow gets stuck in CLOSE_WAIT and that
Close is never called). So based upon this, simply adding the Close()
call WOULD probably fix the issue.

I used tcp-large-transfer as the basis for my TCP app, which basically
sends out multiple short bursts of data (creating a new flow for each
burst), and I had to change quite a few bits to make it work.

Anyway, thanks for the pointer to FlowMonitor.

to...@tomh.org

Feb 11, 2010, 2:19:49 PM
to ns-3-...@googlegroups.com
On Thu, 11 Feb 2010 11:09:04 -0800 (PST), Antti Mäkelä <zar...@gmail.com>
wrote:

> On Feb 11, 5:01 pm, Tom Henderson <t...@tomh.org> wrote:
>> On 2/10/10 4:46 AM, Antti Mäkelä wrote:
>> >    How would I trace TCP session throughput? There's a tracesource for
>> > congestion window, but nothing else TCP-specific. I'd like to see
>> Did you look whether FlowMonitor (in src/contrib) could help?
>
> No - but it seems to fit. Is the paper referred to in the manual (http://
> www.nsnam.org/docs/release/manual.html#SEC183) available somewhere - I
> couldn't find it in IEEE Xplore, at least. The only other docs seem to be
> the stuff in Doxygen, so examples would be good..

I am not aware of a public version of that paper, unfortunately. We need
to generate some project documentation for it and move it out of contrib.

>
>> Also, there are probably several external tools that might apply for
>> working with the pcap traces.  I would look at argus and tcptrace.
>
> I'd rather avoid pcaps for this one since the dumps tend to become
> huge.
>
>> >    Also, example tcp-large-transfer.cc notes that the FIN exchange
>> > isn't quite compliant with TCP spec. I couldn't find the referred
>> Josh Pelkey and George Riley are working on this.  I think that things
>> have been fixed to some extent since tcp-large-transfer was written; the
>> FIN is generated in both directions if the applications call close
>> explicitly.  However, there are a few substates in the TCP state machine
>> that are not supported right now, for certain socket call combinations.
>
> Yes, but right now PacketSink does NOT call Close upon close, so the
> sending socket gets stuck in FIN_WAIT_1 (and I guess Packetsink socket
> that got allocated for the flow gets stuck in CLOSE_WAIT and that
> Close is never called). So based upon this, simply adding the Close()
> call WOULD probably fix the issue.

I will look into your suggestion of making packet sink sockets close
themselves when they receive a close.

Antti Mäkelä

Feb 12, 2010, 4:04:29 AM
to ns-3-users
Ok, after studying this a bit further - it doesn't seem to address
transport-protocol-specific issues, even though it works at the "flow"
level. So if the transport protocol has congestion control and
retransmissions, those are still hidden - there's no awareness of TCP.
I'm interested in checking how badly end-user traffic is affected when
a router switches to a backup link because the primary one goes down.
Thus I'd mostly like to see how fast TCP can recover and, if the
application is e.g. web browsing, how the end user perceives it - so
I'm really looking to trace stuff like:

packet drops due to link being down
TCP times out on server, re-sends packet
If handover not complete, times out again with longer period, re-
sends packet
If handover complete, ack is received from client, slow-start occurs
again, transmission continues

But Flowmonitor doesn't seem to be able to distinguish between re-
transmits and completely new data. Of course if I know that the
session is, say, 10000 bytes and Flowmonitor shows 10000 bytes
received, 16000 sent I know that the extra 6000 was for retransmits..

and in case of UDP (probably VoIP), where there is no congestion
control, I'm hoping to find out not the total number of lost packets
as such, but the total number of lost packets in a single *burst* due
to the handover processing. Actually, would be interesting experiment
if I could, say, encode some sound as G.711 or other codec used in
VoIP, run it through the simulation and play it back after it has lost
some of the stuff..

For the UDP case I probably could use FlowMonitor since it also
produces the jitter/delay histograms and those are quite important for
VoIP.
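
Roughly what I have in mind for the UDP/VoIP case (a sketch based only on
the Doxygen - the endpoint node names and bin widths are placeholders, and
I haven't actually run this):

#include "ns3/flow-monitor-module.h"  // or the individual flow-monitor headers, depending on the ns-3 version

// Monitor only the two VoIP endpoints (clientNode/serverNode are placeholders)
FlowMonitorHelper flowmonHelper;
Ptr<FlowMonitor> monitor = flowmonHelper.Install (NodeContainer (clientNode, serverNode));
monitor->SetAttribute ("DelayBinWidth", DoubleValue (0.001));
monitor->SetAttribute ("JitterBinWidth", DoubleValue (0.001));

Simulator::Stop (Seconds (60.0));
Simulator::Run ();

// Dump per-flow stats, including the delay and jitter histograms, to XML
monitor->SerializeToXmlFile ("voip-flows.xml", true, true);
Simulator::Destroy ();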

On Feb 11, 9:19 pm, <t...@tomh.org> wrote:
> I am not aware of a public version of that paper, unfortunately.  We need
> to generate some project documentation for it and move it out of contrib.

Too bad, then I can't really cite it :)

Anyway, based on the Doxygen, I really only have one question right
now, about the member

std::vector<uint32_t> ns3::FlowMonitor::FlowStats::packetsDropped,
which is described as:

This attribute also tracks the number of lost packets and bytes, but
discriminates the losses by a _reason code_. This reason code is
usually an enumeration defined by the concrete FlowProbe class, and
for each reason code there may be a vector entry indexed by that code
and whose value is the number of packets or bytes lost due to this
reason. For instance, in the Ipv4FlowProbe case the following reasons
are currently defined: DROP_NO_ROUTE (no IPv4 route found for a
packet), DROP_TTL_EXPIRE (a packet was dropped due to an IPv4 TTL
field decremented and reaching zero), and DROP_BAD_CHECKSUM (a packet
had bad IPv4 header checksum and had to be dropped).

So...what's actually stored in the vector? Reason codes? Ok then, what
is the significance of indices?

Also, do I need to install FlowMonitor on *all* nodes in the
network, or only the endpoints - or the endpoints plus the points where
drops can happen?

Gustavo Carneiro

Feb 12, 2010, 9:33:34 AM
to ns-3-...@googlegroups.com
On Thu, Feb 11, 2010 at 7:09 PM, Antti Mäkelä <zar...@gmail.com> wrote:
On Feb 11, 5:01 pm, Tom Henderson <t...@tomh.org> wrote:
> On 2/10/10 4:46 AM, Antti Mäkelä wrote:
> >    How would I trace TCP session throughput? There's a tracesource for
> > congestion window, but nothing else TCP-specific. I'd like to see
> Did you look whether FlowMonitor (in src/contrib) could help?

 No - but it seems to fit. Is the paper referred to in the manual (http://
www.nsnam.org/docs/release/manual.html#SEC183) available somewhere - I
couldn't find it in IEEE Xplore, at least. The only other docs seem to be
the stuff in Doxygen, so examples would be good..

You should do a Google (Scholar) search for "FlowMonitor - a network monitoring framework for the Network Simulator 3 (NS-3)".  Maybe you'll get lucky.  For copyright reasons I can say no more.

--
Gustavo J. A. M. Carneiro
INESC Porto, UTM, WiN, http://win.inescporto.pt/gjc
"The universe is always one step beyond logic." -- Frank Herbert

Gustavo Carneiro

Feb 12, 2010, 10:08:27 AM
to ns-3-...@googlegroups.com
On Fri, Feb 12, 2010 at 9:04 AM, Antti Mäkelä <zar...@gmail.com> wrote:
 Ok, after studying this a bit further - it doesn't seem to address
transport-protocol-specific issues, even though it works at the "flow"
level. So if the transport protocol has congestion control and
retransmissions, those are still hidden - there's no awareness of TCP.
I'm interested in checking how badly end-user traffic is affected when
a router switches to a backup link because the primary one goes down.
Thus I'd mostly like to see how fast TCP can recover and, if the
application is e.g. web browsing, how the end user perceives it - so
I'm really looking to trace stuff like:

 packet drops due to link being down
 TCP times out on server, re-sends packet
 If handover not complete, times out again with longer period, re-
sends packet
 If handover complete, ack is received from client, slow-start occurs
again, transmission continues

 But Flowmonitor doesn't seem to be able to distinguish between re-
transmits and completely new data.

You are correct.  FlowMonitor uses an L4 flow classifier, but it is essentially measuring IPv4 packets.

Of course if I know that the
session is, say, 10000 bytes and Flowmonitor shows 10000 bytes
received, 16000 sent I know that the extra 6000 was for retransmits..

I think it would be interesting to create a new FlowProbe that measures packets at the application layer (OnOffApplication and PacketSink).  That would allow us to measure TCP goodput.  As it is now, we are also measuring the overhead of ACKs and packet retransmissions.  I may end up needing this for my research too; the problem is that I still use ns-3.2 :-/
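
Until such a probe exists, one crude way to get goodput is to ask the sink application directly after the run (just a sketch; sinkApps and simulationTime are whatever names you use in your own script):

// Payload bytes actually delivered to the application by TCP
Ptr<PacketSink> sink = DynamicCast<PacketSink> (sinkApps.Get (0));
double goodputMbps = sink->GetTotalRx () * 8.0 / simulationTime / 1e6;
std::cout << "Goodput: " << goodputMbps << " Mbit/s" << std::endl;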
 

 and in case of UDP (probably VoIP), where there is no congestion
control, I'm hoping to find out not the total number of lost packets
as such, but the total number of lost packets in a single *burst* due
to the handover processing. Actually, would be interesting experiment
if I could, say, encode some sound as G.711 or other codec used in
VoIP, run it through the simulation and play it back after it has lost
some of the stuff..

 For the UDP case I probably could use FlowMonitor since it also
produces the jitter/delay histograms and those are quite important for
VoIP.

On Feb 11, 9:19 pm, <t...@tomh.org> wrote:
> I am not aware of a public version of that paper, unfortunately.  We need
> to generate some project documentation for it and move it out of contrib.

 Too bad, then I can't really cite it :)

Feel free to cite it, even if to say bad things about it! ;-)
 

 Anyway, based on the Doxygen, I really only have one question right
now, about the member

std::vector<uint32_t> ns3::FlowMonitor::FlowStats::packetsDropped,
which is described as:

This attribute also tracks the number of lost packets and bytes, but
discriminates the losses by a _reason code_. This reason code is
usually an enumeration defined by the concrete FlowProbe class, and
for each reason code there may be a vector entry indexed by that code
and whose value is the number of packets or bytes lost due to this
reason. For instance, in the Ipv4FlowProbe case the following reasons
are currently defined: DROP_NO_ROUTE (no IPv4 route found for a
packet), DROP_TTL_EXPIRE (a packet was dropped due to an IPv4 TTL
field decremented and reaching zero), and DROP_BAD_CHECKSUM (a packet
had bad IPv4 header checksum and had to be dropped).

So...what's actually stored in the vector? Reason codes? Ok then, what
is the significance of indices?

Let's see.  We have:

struct FlowStats
{
[...]
    std::vector<uint32_t> packetsDropped; // packetsDropped[reasonCode] => number of dropped packets
[...]
};

FlowStats::packetsDropped is a vector indexed by reason code.  For example, packetsDropped[DROP_BAD_CHECKSUM] is a value that gives you the number of packets dropped due to bad checksum.  Of course packetsDropped is a variable-length vector, so you should check packetsDropped.size () before accessing it.
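
For example, something like this (a sketch; 'monitor' is the Ptr<FlowMonitor> you got back from FlowMonitorHelper, and the reason codes are the ones defined in Ipv4FlowProbe):

// Walk all measured flows and print per-reason drop counts
std::map<FlowId, FlowMonitor::FlowStats> stats = monitor->GetFlowStats ();
for (std::map<FlowId, FlowMonitor::FlowStats>::const_iterator it = stats.begin ();
     it != stats.end (); ++it)
  {
    const FlowMonitor::FlowStats &fs = it->second;
    std::cout << "Flow " << it->first << ": tx=" << fs.txPackets
              << " rx=" << fs.rxPackets << std::endl;
    // Only index the vector if that reason code was ever recorded
    if (fs.packetsDropped.size () > Ipv4FlowProbe::DROP_NO_ROUTE)
      {
        std::cout << "  dropped (no route): "
                  << fs.packetsDropped[Ipv4FlowProbe::DROP_NO_ROUTE] << std::endl;
      }
  }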


 Also, do I need to install FlowMonitor on *all* nodes in the
network, or only the endpoints - or the endpoints plus the points where
drops can happen?

I guess in your case you only need to install it in the end points, not intermediate nodes.
 

Antti Mäkelä

Feb 15, 2010, 6:15:25 AM
to ns-3-users
Thanks for all the comments. For now, I'll try adding to TCP's trace
sources.

Like I said, I'd want to at least keep an eye on retransmissions. Looks
like there are already NS_LOG_LOGIC statements in tcp-socket-impl.cc at
the appropriate points, so I can probably just insert some traced values
there without too much trouble.

So I think I should just convert the appropriate variables to
TracedValue<>'s. However, not all of the structure is that well
commented, and I'd appreciate pointers on what to trace. Right now, I
need to trace the following over time:

- Last sent sequence number, and possibly the packet "number" - I guess
  this could simply be seqno/MSS. Is this just m_nextTxSequence?
- Last received ack number - m_highestRxAck?
- Number of retransmitted packets - possibly a new variable, calculated
  in Retransmit() from m_nextTxSequence - m_firstPendingSequence?
- Some sort of "state" tracing - in NewAck(), lines 1518 onwards - set
  when the socket is in slow start vs. congestion avoidance. I should
  also know when the user calls connect() or a new connection is
  accept()ed.

Roughly, I'm thinking of something like the sketch below.
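
(The member and trace-source names here are just placeholders I made up,
modelled on how the existing CongestionWindow source is wired up - nothing
that exists in tcp-socket-impl.cc today.)

// tcp-socket-impl.h: a new traced counter, next to the existing m_cWnd
TracedValue<uint32_t> m_retransCount;   // segments retransmitted so far

// TcpSocketImpl::GetTypeId (): expose it the same way as "CongestionWindow"
.AddTraceSource ("Retransmits",
                 "Number of segments retransmitted by this socket",
                 MakeTraceSourceAccessor (&TcpSocketImpl::m_retransCount))

// TcpSocketImpl::Retransmit (): bump it next to the existing NS_LOG_LOGIC
m_retransCount = m_retransCount + 1;

// In the simulation script, once the socket exists:
static void
RetransmitsChanged (uint32_t oldValue, uint32_t newValue)
{
  std::cout << Simulator::Now ().GetSeconds ()
            << "s retransmissions so far: " << newValue << std::endl;
}

socket->TraceConnectWithoutContext ("Retransmits",
                                    MakeCallback (&RetransmitsChanged));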

I'll see how this works out...
