New to ns-3 - how to set packet delays


Eric Swankoski

Nov 10, 2017, 11:10:09 AM
to ns-3-users
I am new to ns-3, having never used it or its predecessor. I have used GloMoSim and QualNet in the past, so I am coming from a very different development perspective.

My dissertation research was on deny-by-default in MANETs, which required that all communication between nodes be authorized. This was accomplished by piggybacking capability requests onto regular data packets; packets carrying capability-related administrative data therefore incur delays both at intermediate nodes (if they are involved in capability setup) and at destination nodes. I now need to recreate this work in ns-3, since I no longer have access to a student license for the commercial software.

I am trying to learn as much as I can about ns-3, and I have been able to set up file-based mobility, use different MANET routing protocols, and so on. What I need to do is the following:

1) Set up capability structure information, which I think can be accomplished in the simulation program itself rather than necessarily having to modify the ns-3 source;
2) Modify packet headers and sizes to simulate the inclusion of cryptographic data, which I think I can do based on a few tutorials I've found using tags (correct me if this is a bad idea);
and
3) Modify the retransmission times at intermediate nodes if the packet contains certain tags (simulating a verification of cryptographic signatures, MACs, etc.) and incur a processing delay at the receiving node. This is the step I really have no idea how to do.

In any case, I'm open to all suggestions and would love any input if any of this has an obvious or intuitive solution. It's probably just a matter of me being unfamiliar with new software, and that part I'm actively working on.

pdbarnes

Nov 10, 2017, 5:00:46 PM
to ns-3-users
2) is a bad idea. Tags are a simulator-only feature and constitute out-of-band information. The only time a receiving node should look at tag data from a sending node is for data related to the simulation study itself, e.g. the packet send time, in order to compute end-to-end delay as a performance metric.

You want to represent information (crypto/auth headers) which is in band in the system you are simulating, with all the performance implications: extra bytes, latency, possible fragmentation...

Instead, you could add explicit headers with that data to your packets. You’ll have to decide how far down the stack to go to wrap your end-to-end data while still allowing one-hop packet transmission, and possibly routing, to work.

For 3) the way to approximate processing delays is to schedule the packet forwarding for some time in the future (rather than sending it immediately once you’ve updated any headers and routing information).

Peter

Eric Swankoski

Nov 13, 2017, 7:59:41 AM
to ns-3-users
Future scheduling sounds like a plausible idea. If I can create and manipulate the headers appropriately, that would make the most sense.

What is the proper way to do this? Ideally I would want this done in a method that already has access to the packet header. In IPv4 (L3) I could see doing this before sending to the socket (this would let me handle all incoming packets regardless of whether they are intended for this node or another), but I don't see any real way to "sleep" the simulator. It also doesn't look like sending the packet and header to the socket is intended to be a scheduled event.

QualNet had an adjustable delay built in that could be either hard-coded or specified in a config file that would be incurred prior to its main function (which I think was called RoutePacketAndSendToMAC()). I guess this stack is quite different.

pdbarnes

Nov 14, 2017, 11:32:49 AM
to ns-3-users
“I don't see any real way to "sleep" the simulator. It also doesn't look like sending the packet and header to the socket is intended to be a scheduled event.”

You don’t want to sleep the simulator: your packet processor isn’t the only work the simulator has to do. You want to model your process sleeping (or taking time to do some internal work). The way to do this is to split the processing into two parts, linked by a Schedule call that implements the delay. Often all the work is in the first part, and the second part just retrieves the result from an internal queue/cache and hands it off to the rest of the model.

In your case put the packet on an internal queue, then Schedule a function to run after the appropriate delay. This function will remove the packet from the queue and hand it to the socket.

Alternatively, store the packet and its delay in the queue. On adding the first packet to your queue, schedule your function. When it runs, it passes the first packet to the socket, then schedules itself for the delay stored with the next packet in the queue.

The first way sends each packet after its own processing delay (as if packets were processed in parallel). The second way accumulates the delays, so later packets incur their own delay *after* all previous packets have been processed.

Peter