udp-client-server packet loss counters


Antti Mäkelä

Feb 24, 2010, 8:51:16 AM
to ns-3-users
Hey,

I'm interested in checking burstiness and packet loss for constant
rate UDP traffic (to simulate voice application). UDP-client-server
seems to fit the bill relatively well. However, I'm a bit baffled by
the packet-loss-counter.cc

The implementation basically has a table of booleans (I'm wondering
why the author didn't use vector<bool> here to avoid all the modulus
trickery) indicating whether a packet was received or not.
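
For reference, I assume the bit accessors boil down to something like
the sketch below (my own reconstruction with made-up names, not the
actual ns-3 code):

#include <cstdint>

class BitWindow
{
public:
  BitWindow (uint16_t bytes)
    : m_bitMapSize (bytes),
      m_bits (new uint8_t[bytes] ())   // zero-initialized
  {}
  ~BitWindow () { delete [] m_bits; }

  uint8_t GetBit (uint32_t seqNum) const
  {
    uint32_t pos = seqNum % (m_bitMapSize * 8);   // wrap into the window
    return (m_bits[pos / 8] >> (pos % 8)) & 0x01;
  }

  void SetBit (uint32_t seqNum, uint8_t value)
  {
    uint32_t pos = seqNum % (m_bitMapSize * 8);
    if (value)
      {
        m_bits[pos / 8] |= (1u << (pos % 8));
      }
    else
      {
        m_bits[pos / 8] &= ~(1u << (pos % 8));
      }
  }

private:
  uint16_t m_bitMapSize;   // window size in bytes => m_bitMapSize * 8 packets
  uint8_t *m_bits;
};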

Anyway, I can't quite wrap my head around how this thing works:

void
PacketLossCounter::NotifyReceived (uint32_t seqNum)
{
  if (seqNum > (uint32_t)(m_bitMapSize * 8) - 1)
    {
      for (uint32_t i = m_lastMaxSeqNum + 1; i <= seqNum; i++)
        {
          if (GetBit (i) != 1)
            {
              NS_LOG_INFO ("Packet lost: " << i - (m_bitMapSize * 8));
              m_lost++;
            }
          SetBit (i, 0);
        }
    }
  SetBit (seqNum, 1);
  if (seqNum > m_lastMaxSeqNum)
    {
      m_lastMaxSeqNum = seqNum;
    }
}

I mean, bitMapSize is the size of the array of booleans (even
though it's implemented as a uint8_t[]). If the sequence number isn't
larger than that, then you just set the value for the appropriate
sequence number to "true".

Anyway, after you receive the first packet whose sequence number is
higher than what fits into the table (so the first if condition is
true), I'm not sure I understand the logic.

My understanding is that the existence of the "reception table"
instead of simple sequence checking is there so that out-of-order
reception doesn't register as packet loss, and the bitmapsize
basically gives the size of the "window" in which to receive out-of-
order stuff.

Anyway, something goes wrong once you have filled the array. Suppose
the bitmap size is 8 packets (1 byte) and I start a transmission where
the first packet is received properly, and the rest, up to packet
number 8, arrive out of order in the sequence 8,1,2,3,4,5,6,7 (indexing
starts at zero):

Thus m_lastMaxSeqNum = 0
NotifyReceived gets called with 8.

Since 8 > (bitmapsize*8)-1, we execute the for-loop.

This checks EVERY packet's status from 1 to 8. Obviously we get
packet loss from every one of them, but why, when their window hasn't
really passed yet?

Anyway, m_lost gets incremented.

Then the rest of the packets arrive. Their bits get SetBit()'d to true
just fine and m_lastMaxSeqNum does not increase. However, they have
already been counted as lost.
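
To make that concrete, a driver along these lines should show it (I'm
assuming the constructor takes the bitmap size in bytes and that
GetLost () exposes m_lost; check packet-loss-counter.h for the exact
signatures):

#include "ns3/packet-loss-counter.h"
#include <cstdint>
#include <iostream>

int
main ()
{
  ns3::PacketLossCounter counter (1);   // 1 byte => 8-packet window

  counter.NotifyReceived (0);           // packet 0 arrives in order
  counter.NotifyReceived (8);           // packet 8 arrives early: the for-loop
                                        // scans 1..8 and counts the still-
                                        // outstanding packets 1..7 as lost
  for (uint32_t seq = 1; seq <= 7; seq++)
    {
      counter.NotifyReceived (seq);     // the "lost" packets then show up
    }

  // With the code quoted above this should report 7 losses even though
  // nothing was actually dropped.
  std::cout << "reported losses: " << counter.GetLost () << std::endl;
  return 0;
}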

So, is there some grander logic going on in here or can these be
considered bugs?

I'm going to need this sort of thing anyway, since I'm going to add
functionality to measure the sizes of packet loss bursts (probably a
map<uint32_t, uint32_t> of (burst size in packets, number of bursts)),
so I'm wondering if I should fix other things while I'm at it. Mostly
I'd just like to change things so that m_lost is only incremented when
packets drop off the end of the window (and probably redesign around
vector<bool>).
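
Roughly the kind of burst bookkeeping I have in mind (names are mine,
nothing from the tree):

#include <cstdint>
#include <map>

class LossBurstTracker
{
public:
  // Call once per sequence number that is declared lost, in increasing order.
  void NotifyLost (uint32_t seqNum)
  {
    if (m_inBurst && seqNum == m_lastLost + 1)
      {
        m_currentBurst++;                 // loss continues the current burst
      }
    else
      {
        Flush ();                         // previous burst (if any) is over
        m_currentBurst = 1;
        m_inBurst = true;
      }
    m_lastLost = seqNum;
  }

  // Call when a packet is received, or when the application stops,
  // to close any burst still in progress.
  void Flush ()
  {
    if (m_inBurst)
      {
        m_bursts[m_currentBurst]++;       // burst length -> number of bursts
        m_inBurst = false;
        m_currentBurst = 0;
      }
  }

  const std::map<uint32_t, uint32_t> &GetHistogram () const { return m_bursts; }

private:
  std::map<uint32_t, uint32_t> m_bursts;  // (burst size in packets, number of bursts)
  bool m_inBurst = false;
  uint32_t m_currentBurst = 0;
  uint32_t m_lastLost = 0;
};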

Ismail Amine

Feb 24, 2010, 9:56:02 AM
to ns-3-...@googlegroups.com, zar...@gmail.com
Antti Mäkelä wrote:
> My understanding is that the existence of the "reception table"
> instead of simple sequence checking is there so that out-of-order
> reception doesn't register as packet loss, and the bitmapsize
> basically gives the size of the "window" in which to receive out-of-
> order stuff.
>
You are right

> Anyway, something goes wrong once you have filled the array. Suppose
> the bitmap size is 8 packets (1 byte) and I start a transmission where
> the first packet is received properly, and the rest, up to packet
> number 8, arrive out of order in the sequence 8,1,2,3,4,5,6,7 (indexing
> starts at zero):
>
> Thus m_lastMaxSeqNum = 0
> NotifyReceived gets called with 8.
>
> Since 8 > (bitmapsize*8)-1, we execute the for-loop.
>
> This checks EVERY packet's status from 1 to 8. Obviously we get
> packet loss from every one of them, but why, when their window hasn't
> really passed yet?
>
There is a bug here. The packets reported as lost are the ones with
sequence numbers (-7, -6, -5, ..., -1), not packets 1 to 7:

NS_LOG_INFO ("Packet lost: " << i-(m_bitMapSize*8));

This happens only for the first window of packets (the first
m_bitMapSize*8 sequence numbers).

To fix this bug you just need to fill the bit map with 1s at
initialization. Replace memset (m_receiveBitMap,0,m_bitMapSize); in the
SetBitMapSize function with memset (m_receiveBitMap,1,m_bitMapSize);
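
For illustration, a standalone snippet of that initialization idea (a
sketch, not the ns-3 function itself). Since memset writes whole bytes,
0xFF is the value that sets every bit in the window; a fill value of 1
only sets the low bit of each byte, which matters if GetBit tests
individual bits:

#include <cstdint>
#include <cstring>
#include <iostream>

int
main ()
{
  const uint16_t bitMapSize = 1;              // 1 byte => 8-packet window
  uint8_t receiveBitMap[bitMapSize];

  // Pre-fill the window so every slot reads as "already received".
  std::memset (receiveBitMap, 0xFF, bitMapSize);

  for (uint32_t seq = 0; seq < bitMapSize * 8u; seq++)
    {
      uint8_t bit = (receiveBitMap[seq / 8] >> (seq % 8)) & 0x01;
      std::cout << "seq " << seq << ": " << unsigned (bit) << std::endl;
    }
  return 0;
}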

Antti Mäkelä

Feb 24, 2010, 11:04:44 AM
to ns-3-users
On Feb 24, 4:56 pm, Ismail Amine <Amine.Ism...@sophia.inria.fr> wrote:
> To fix this bug you just need to fill the bit map with 1s at
> initialization. Replace memset (m_receiveBitMap,0,m_bitMapSize); in the
> SetBitMapSize function with memset (m_receiveBitMap,1,m_bitMapSize);

Ok - seems to be so indeed. Will you commit this to the tree as well,
or should I file a bug report?

I should be able to include a burst tracker then - seems simple
enough.

Ismail Amine

Feb 24, 2010, 11:13:24 AM
to ns-3-...@googlegroups.com
Antti Mäkelä wrote:
> Ok - seems to be so indeed. Will you commit this to the tree as well,
> or should I file a bug report?
>
Please report a bug

Antti Mäkelä

Mar 1, 2010, 4:55:28 AM
to ns-3-users
On Feb 24, 6:13 pm, Ismail Amine <Amine.Ism...@sophia.inria.fr> wrote:
> Antti Mäkelä wrote:
> >   Ok - seems to be so indeed. Will you commit this to the tree as well,
> > or should I file a bug report?
>
> Please report a bug

Filed. http://www.nsnam.org/bugzilla/show_bug.cgi?id=825

Another thing I noticed is that when the server is stopped, the loss
window is not "flushed", i.e. at that point any packets still missing
should really be counted. I had a breakdown configured to happen from
1 to 5 seconds (simulation time) after the application started, the app
stopped at 30 seconds, the window was the default 32, and packets were
sent once a second => no losses were recorded. I'm not sure of the best
way to hook this in, though.
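
One possibility would be a flush step run when the server stops, along
these lines (a sketch with a stand-in struct and made-up names, not
code from the tree):

#include <cstdint>
#include <vector>

struct WindowState
{
  uint32_t bitMapSize;           // window size in bytes
  uint32_t lastMaxSeqNum;        // highest sequence number seen so far
  std::vector<bool> received;    // one flag per window slot (size bitMapSize*8)
  uint32_t lost;
};

void
FlushWindow (WindowState &w)
{
  uint32_t windowSize = w.bitMapSize * 8;
  // Only sequence numbers that were actually expected can still be missing.
  uint32_t first = (w.lastMaxSeqNum + 1 > windowSize)
                     ? w.lastMaxSeqNum + 1 - windowSize
                     : 0;
  for (uint32_t seq = first; seq <= w.lastMaxSeqNum; seq++)
    {
      if (!w.received[seq % windowSize])
        {
          w.lost++;                            // never arrived before the stop
        }
      w.received[seq % windowSize] = false;    // leave the window clean
    }
}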
