OLSR traffic spikes and long startup time


Zack Weinberg

Apr 28, 2012, 3:38:26 PM
to ns-3-...@googlegroups.com
I'm seeing suboptimal traffic behavior with OLSR on a 10x10 grid
(created with point-to-point-grid, per-link capacity 10Mbps). The
attached graph shows the average per-node bytes per second *received*
(that is, packets inbound whose ultimate destination is not some other
node) and *delivered* (application data sent to the packet sink). The
difference between these curves is approximately the per-node overhead
due to OLSR control packets. The application traffic generator (which
is sending UDP packets at a fixed rate; each node is sending
fixed-size packets to random other nodes) turns on ten seconds into
the simulation.

There are two problems: first, I expected routing overhead to be
roughly the same before and after the traffic generator kicks in, but
you can clearly see that it is much, much larger after traffic
generation starts. Second, if I enable the traffic generator at
simulation time zero instead of ten seconds in, it still takes roughly
ten seconds before the 'delivered' line comes up to 100000 kB/s/node.
Logging tells me that the routing tables take ten seconds to converge,
and before that point, lots of sendto() calls fail with 'destination
unreachable' errors.

Questions, then:

1) Is there any way to get the routing table to converge faster?
2) Is there any way to *know* from inside the simulation when the
routing table has converged?
3) How do I reduce the routing overhead after traffic generation is up
and running?

The simulation program is also attached. Kibitzing on how to do
things better in general would also be appreciated.

Thanks,
zw
olsr-traffic-spikes.png
cf-grid-olsr.cc

Konstantinos

Apr 29, 2012, 7:36:51 AM
to ns-3-...@googlegroups.com
Dear Zack,

OLSR is mainly used for wireless ad hoc networks.
Since you have a fixed point-to-point network, why not use GlobalRouting? It introduces no overhead, since no packets are broadcast, and there is no convergence time.
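As a sketch of that suggestion (this drops OLSR entirely and is only illustrative; the grid dimensions, data rate, and address blocks below are assumptions standing in for whatever the attached script actually uses):

```cpp
// Sketch: a point-to-point grid using ns-3's static GlobalRouting instead
// of OLSR. Routes are computed offline once, so there are no control
// packets on the wire and no convergence delay.
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/internet-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/point-to-point-layout-module.h"

using namespace ns3;

int main (int argc, char *argv[])
{
  PointToPointHelper p2p;
  p2p.SetDeviceAttribute ("DataRate", StringValue ("10Mbps"));
  PointToPointGridHelper grid (10, 10, p2p);   // 10x10 grid, as in the post

  // The default InternetStackHelper list routing includes Ipv4GlobalRouting.
  InternetStackHelper stack;
  grid.InstallStack (stack);
  grid.AssignIpv4Addresses (Ipv4AddressHelper ("10.1.0.0", "255.255.255.0"),
                            Ipv4AddressHelper ("10.2.0.0", "255.255.255.0"));

  // Fill every node's routing table before the simulation starts.
  Ipv4GlobalRoutingHelper::PopulateRoutingTables ();

  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}
```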


On Saturday, 28 April 2012 20:38:26 UTC+1, Zack Weinberg wrote:

Questions, then:

> 1) Is there any way to get the routing table to converge faster?

Do not use OLSR, since you have a fixed p2p network. If you "MUST" use OLSR, then reduce the interval at which it sends HELLOs, and convergence will be faster.
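A sketch of that, using ns-3's attribute defaults (the values chosen here are illustrative, not recommendations; set them before the stack is installed):

```cpp
// Shrink the OLSR timer defaults before InternetStackHelper installs the
// routing protocol. Shorter intervals converge faster but cost more
// control traffic while they are in effect.
Config::SetDefault ("ns3::olsr::RoutingProtocol::HelloInterval",
                    TimeValue (Seconds (0.5)));   // default is 2 s
Config::SetDefault ("ns3::olsr::RoutingProtocol::TcInterval",
                    TimeValue (Seconds (1.0)));   // default is 5 s
```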
 
> 2) Is there any way to *know* from inside the simulation when the
> routing table has converged?

For a fixed network I think you can calculate it and know it a priori.
 
> 3) How do I reduce the routing overhead after traffic generation is up
> and running?


By increasing the HELLO interval: get a pointer to the OLSR routing protocol on each node and set a larger interval. (You would shrink the interval to converge quickly, then enlarge it to cut steady-state overhead.)
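A sketch of the per-node runtime change (assuming OLSR is the node's routing protocol directly; if it sits inside an Ipv4ListRouting you would walk that list instead, and the chosen interval values are placeholders):

```cpp
// Sketch: once the tables have converged, fetch each node's
// olsr::RoutingProtocol and raise its timer intervals to cut overhead.
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/internet-module.h"
#include "ns3/olsr-module.h"

using namespace ns3;

void RelaxOlsrTimers (NodeContainer &nodes)
{
  for (uint32_t i = 0; i < nodes.GetN (); ++i)
    {
      Ptr<Ipv4> ipv4 = nodes.Get (i)->GetObject<Ipv4> ();
      Ptr<olsr::RoutingProtocol> olsrRp =
          DynamicCast<olsr::RoutingProtocol> (ipv4->GetRoutingProtocol ());
      if (olsrRp != nullptr)
        {
          olsrRp->SetAttribute ("HelloInterval", TimeValue (Seconds (5)));
          olsrRp->SetAttribute ("TcInterval", TimeValue (Seconds (10)));
        }
    }
}
// Call it from a scheduled event after the convergence point, e.g. at t=10s.
```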
 

Regards,
Konstantinos

Zack Weinberg

Apr 29, 2012, 11:31:12 AM
to ns-3-...@googlegroups.com
On Sun, Apr 29, 2012 at 4:36 AM, Konstantinos <dinos.k...@gmail.com> wrote:
> Dear Zack,
>
> OLSR is mainly used for wireless ad hoc networks.
> Since you have a fixed point-to-point network, why not use GlobalRouting?
> It introduces no overhead, since no packets are broadcast, and there is no
> convergence time.

The experiment is about modifications to OLSR. The fixed p2p physical
layer is to give me precise control over which nodes can talk to which
other nodes. I may go back and do it over 802.11 once it's working
properly.

>> 1) Is there any way to get the routing table to converge faster?
>
> Do not use OLSR, since you have a fixed p2p network. If you "MUST" use OLSR,
> then reduce the interval at which it sends HELLOs, and convergence will be
> faster.
>
>> 3) How do I reduce the routing overhead after traffic generation is up
>> and running?
>
> By increasing the HELLO interval: get a pointer to the OLSR routing protocol
> on each node and set a larger interval.

You mean, adjust it down until the routing table converges, and then
back up, yes? That makes sense, but it does not explain why the routing
overhead is larger once there is application traffic. That seems
like a bug.

>> 2) Is there any way to *know* from inside the simulation when the
>> routing table has converged?
>
> For fixed network I think that you can calculate it and know a-priori.

That's not going to work for me; my topologies are too varied. I need
a way to measure it.
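[Editor's note] One way to measure this in-simulation: ns-3's olsr::RoutingProtocol exposes a "RoutingTableChanged" trace source that fires with the new table size (and a GetRoutingTableEntries() accessor for polling). A sketch that declares convergence once every node holds a route to all N-1 others; that threshold is an assumption that fits a fully connected grid and would need adapting for other topologies:

```cpp
// Sketch: hook "RoutingTableChanged" on every node's OLSR instance and
// report the time at which all nodes reach the expected table size.
#include <iostream>
#include <set>
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/internet-module.h"
#include "ns3/olsr-module.h"

using namespace ns3;

static std::set<uint32_t> g_full;     // nodes whose table is currently complete
static uint32_t g_numNodes = 100;     // 10x10 grid

void
TableChanged (uint32_t nodeId, uint32_t tableSize)
{
  if (tableSize >= g_numNodes - 1)
    g_full.insert (nodeId);
  else
    g_full.erase (nodeId);            // table shrank again; not converged
  if (g_full.size () == g_numNodes)
    std::cout << "converged at "
              << Simulator::Now ().GetSeconds () << " s" << std::endl;
}

void
HookConvergenceTrace (NodeContainer &nodes)
{
  for (uint32_t i = 0; i < nodes.GetN (); ++i)
    {
      Ptr<olsr::RoutingProtocol> rp = DynamicCast<olsr::RoutingProtocol> (
          nodes.Get (i)->GetObject<Ipv4> ()->GetRoutingProtocol ());
      if (rp != nullptr)
        rp->TraceConnectWithoutContext (
            "RoutingTableChanged", MakeBoundCallback (&TableChanged, i));
    }
}
```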

zw