Manipulating DataRate and Delay in Realtime (csma-module)


Bernhard Blieninger

Mar 10, 2015, 9:42:03 AM
to ns-3-...@googlegroups.com
Hey guys,

I am working on a realtime interface for ns-3, so I came up with a simple test that manipulates the DataRate and Delay of a CSMA example.
I took the first.cc example and edited it (below).

I set up two threads and tried to change the DataRate from the second thread after 4 seconds.
I found that the Delay is changed immediately, but the DataRate is not.

I had to add one line to the TransmitStart function in csma-net-device.cc so that the DataRate is updated during the simulation as well:

m_bps = m_channel->GetDataRate ();


My problem now is that I don't know exactly where the DataRate is stored. Can I write a patch that every module can use, or is there no "global" place where my patch/interface can hook in?

For the example/test I created, my questions are: Is it right to interact with the simulation like this, or does someone know a better method? And is there a reason why the DataRate is not updated/queried during the simulation?


Thanks for your help.


Best Regards,

Bernhard.



/* -*- Mode:C++; c-file-style:"gnu"; indent-tabs-mode:nil; -*- */
/*
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation;
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
 */
#include <fstream>
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/internet-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/applications-module.h"
#include "ns3/csma-helper.h"
#include <pthread.h>
#include <unistd.h>

using namespace ns3;

NS_LOG_COMPONENT_DEFINE ("FirstScriptExample");

void *
runSim (void *p)
{
  Simulator::Stop (Seconds (15));
  Simulator::Run ();
  return NULL;
}

NetDeviceContainer* handle = NULL;

void *
inject (void *p)
{
  sleep (4);
  handle->Get (0)->GetChannel ()->SetAttribute ("Delay", TimeValue (Seconds (0.5)));
  handle->Get (0)->GetChannel ()->SetAttribute ("DataRate", StringValue ("2Mbps"));

  std::cout << "Changed." << std::endl;
  return NULL;
}

int
main (int argc, char *argv[])
{

  GlobalValue::Bind ("SimulatorImplementationType",
                     StringValue ("ns3::RealtimeSimulatorImpl"));
  Time::SetResolution (Time::NS);
  LogComponentEnable ("UdpEchoClientApplication", LOG_LEVEL_INFO);
  LogComponentEnable ("UdpEchoServerApplication", LOG_LEVEL_INFO);

  NodeContainer nodes;
  nodes.Create (2);


  CsmaHelper csma;
  csma.SetChannelAttribute ("DataRate", StringValue("5Mbps"));
  csma.SetChannelAttribute ("Delay", TimeValue (MilliSeconds (2)));
  csma.SetDeviceAttribute ("Mtu", UintegerValue (1400));

  NetDeviceContainer devices;
  devices = csma.Install (nodes);
  handle = &devices;

  InternetStackHelper stack;
  stack.Install (nodes);

  Ipv4AddressHelper address;
  address.SetBase ("10.1.1.0", "255.255.255.0");

  Ipv4InterfaceContainer interfaces = address.Assign (devices);

  UdpEchoServerHelper echoServer (9);

  ApplicationContainer serverApps = echoServer.Install (nodes.Get (1));
  serverApps.Start (Seconds (1.0));
  serverApps.Stop (Seconds (10.0));

  UdpEchoClientHelper echoClient (interfaces.GetAddress (1), 9);
  echoClient.SetAttribute ("MaxPackets", UintegerValue (1024));
  echoClient.SetAttribute ("Interval", TimeValue (Seconds (1.0)));
  echoClient.SetAttribute ("PacketSize", UintegerValue (1024));

  ApplicationContainer clientApps = echoClient.Install (nodes.Get (0));
  clientApps.Start (Seconds (2.0));
  clientApps.Stop (Seconds (10.0));

  pthread_t p, injector;
  if (pthread_create (&p, NULL, runSim, NULL))
    {
      std::cout << "Error" << std::endl;
      return -1;
    }

  if (pthread_create (&injector, NULL, inject, NULL))
    {
      std::cout << "Error" << std::endl;
      return -1;
    }

  pthread_join(injector, NULL);
  pthread_join(p, NULL);

  std::cout << "joined thread" << std::endl;

  Simulator::Destroy ();

  std::cout << "simulator destroyed" << std::endl;
  return 0;
}


Tommaso Pecorella

Mar 11, 2015, 1:08:00 PM
to ns-3-...@googlegroups.com
Hi,

the answer (unfortunately) is that there's no standardized, global approach: you'll have to write specialized code for every NetDevice / Channel.
The problem is that what you're looking for is heavily model-dependent. Some models allow a dynamic data rate change, some don't. Csma, for example, doesn't foresee a runtime change of these parameters, and changing them may lead to horrible bugs (like overlapping packets) that have never been addressed simply because, as the code stands, the bug cannot occur.

So the answer is: no, you'll have to write special code for each NetDevice. E.g., for csma the DataRate is a Channel property, while for PointToPoint it is a NetDevice property.

Cheers,

T.

Bernhard Blieninger

Mar 11, 2015, 3:11:10 PM
to ns-3-...@googlegroups.com
Hi,

thanks for your fast and very helpful reply; I already suspected as much.
The fact that there is no generic way to develop a realtime interface in ns-3 is a pity, but for our project it's not that bad, as we just want to use csma at first and will maybe develop some realtime Ethernet protocol later.

I already saw that point-to-point devices and csma devices handle their data rates a little differently, but I thought that since they all get "DataRate" from ns3 classes, I could maybe somehow ask ns3 or the Simulator to change the DataRate wherever a DataRate value is found. (I just tried using Config::Set as advised here: https://groups.google.com/forum/#!searchin/ns-3-users/config$3A$3Aset$20simulation$20time/ns-3-users/gXEXhjeNy_o/cIgaqHQzgEkJ, but it didn't change anything. It seems the only thing that works is the approach I first described :-( )

I also saw that many other modules depend on the csma module. Is this correct, and do you know if they all use the csma approach to DataRate? That would mean that by changing the csma module I could help change the DataRate of a lot of other modules, too.

Maybe you, or somebody else who is more into the csma module than I am at the moment, could tell me whether my approach of changing the data rate during simulation time is wrong or could be hazardous.
What I figured out is that the delay is re-read with every simulated event, but the DataRate is given to the devices once, when they are attached, and is never refreshed afterwards. So all I did was refresh the value of m_bps with the current channel value when the device calls TransmitStart and calculates the time at which the next event should be scheduled.

I also want to realize my realtime interface, where the user can change the data rate, delay, and which nodes are active or inactive during simulation time, by creating a new module using a client-server approach, much like the VSimRTI people did.
What do you think about it: is it a good idea, and how should such a thing be realized to be most useful for ns3 and its users?


Thanks a lot for all the help.

Best Regards,

Bernhard.

Tommaso Pecorella

Mar 11, 2015, 6:15:00 PM
to ns-3-...@googlegroups.com
Hi,

so many questions... answers in-line.


On Wednesday, March 11, 2015 at 8:11:10 PM UTC+1, Bernhard Blieninger wrote:
Hi,

thanks for your fast and very helpful reply; I already suspected as much.
The fact that there is no generic way to develop a realtime interface in ns-3 is a pity, but for our project it's not that bad, as we just want to use csma at first and will maybe develop some realtime Ethernet protocol later.

Good. Please consider that the csma model is a very basic and simple CSMA model. One might think it's an Ethernet-like model, but it's not quite. It's more like an old bus-based Ethernet (let's say 10BASE-T), and even for that there are some important differences.
We discussed a lot about developing a "true" Ethernet model, including a hub and a switch, and it's still something we're interested in having. So if you plan to develop an Ethernet model, my suggestion is to coordinate with the other interested people on the ns-dev mailing list.
 
I already saw that point-to-point devices and csma devices handle their data rates a little differently, but I thought that since they all get "DataRate" from ns3 classes, I could maybe somehow ask ns3 or the Simulator to change the DataRate wherever a DataRate value is found. (I just tried using Config::Set as advised here: https://groups.google.com/forum/#!searchin/ns-3-users/config$3A$3Aset$20simulation$20time/ns-3-users/gXEXhjeNy_o/cIgaqHQzgEkJ, but it didn't change anything. It seems the only thing that works is the approach I first described :-( )

I guess you mean that you want to allow only a given subset of DataRates, i.e., something like "set the standard to 1000BASE-T". That could indeed be useful for a true Ethernet model, where it doesn't make sense to have a 42 Mbps DataRate.
This may be done in a number of ways, but I'd suggest an approach similar to the Wi-Fi one, i.e., define the standards as enumerations / literals and use them to initialize the particular bits. The user may be more interested in saying 1000BASE-T than in having to cope with finer details like DataRate, full duplex and so on.
 
I also saw that many other modules depend on the csma module. Is this correct, and do you know if they all use the csma approach to DataRate? That would mean that by changing the csma module I could help change the DataRate of a lot of other modules, too.

The csma dependencies are (mostly) due to examples or tests. Some particular modules (e.g., LTE) use csma links to connect various model pieces; as a consequence there's a dependency.
When a module depends on csma, it means there's a csma link somewhere in the module.
Anyway, the only modules dependent on csma (in 3.22) are:
- csma-layout (obviously)
- LTE (as noted above)
- test (they're just tests)
- netanim (it depends on almost all the devices; it's the visualization module...)
Overall I'd say it's not really widespread among other modules :)
 
Maybe you, or somebody else who is more into the csma module than I am at the moment, could tell me whether my approach of changing the data rate during simulation time is wrong or could be hazardous.
What I figured out is that the delay is re-read with every simulated event, but the DataRate is given to the devices once, when they are attached, and is never refreshed afterwards. So all I did was refresh the value of m_bps with the current channel value when the device calls TransmitStart and calculates the time at which the next event should be scheduled.

Changing either Delay or DataRate may cause problems.
The issue is this. When a packet is transmitted, the sequence of functions is:
T0 - TransmitStart (assume it finds the channel free)
T1 - TransmitEnd (T0 + a delay dependent on the DataRate)
T2 - Receive (T1 + the propagation delay)

By the way, one of the csma problems is that the channel is "BUSY" between T0 and T2, while it shouldn't be. It's busy for a time that depends on the propagation delay, while each device should "sense" it as busy only for the time needed to transmit the packet. In other terms, there's no collision possibility AND the channel is busy for too long. This is a problem when T2-T1 is much longer than T1-T0.

Anyway, if you change the delay or the DataRate while a packet is being transmitted... what will happen?
Right now I'm not sure whether this will cause something horrible (e.g., a Cthulhu awakening). However, one thing is sure: the model wasn't developed to have varying attributes. The proof is that you shouldn't be able to change the DataRate or the delay in the middle of a transmission, and that the delay variation should be a continuous function.
 
I also want to realize my realtime interface, where the user can change the data rate, delay, and which nodes are active or inactive during simulation time, by creating a new module using a client-server approach, much like the VSimRTI people did.
What do you think about it: is it a good idea, and how should such a thing be realized to be most useful for ns3 and its users?

It would be useful indeed. The problem may be to make it generic enough.
Another (big) issue is the realtime requirement. It's a design tradeoff (not all the modules can work in real time) and I guess that only you can decide if it's more useful to satisfy the real-time users or to allow the most complex models.

Cheers,

T.

Bernhard Blieninger

Mar 11, 2015, 11:02:00 PM
to ns-3-...@googlegroups.com
Hi,

thanks for all the answers they help a lot, I will just point some things out a litte better, so that you know what we want to achieve for my thesis and our project. (also in-line)


On Wednesday, 11 March 2015 23:15:00 UTC+1, Tommaso Pecorella wrote:
Hi,

so many questions... answers in-line.

On Wednesday, March 11, 2015 at 8:11:10 PM UTC+1, Bernhard Blieninger wrote:
Hi,

thanks for your fast and very helpful reply; I already suspected as much.
The fact that there is no generic way to develop a realtime interface in ns-3 is a pity, but for our project it's not that bad, as we just want to use csma at first and will maybe develop some realtime Ethernet protocol later.

Good. Please consider that the csma model is a very basic and simple CSMA model. One might think it's an Ethernet-like model, but it's not quite. It's more like an old bus-based Ethernet (let's say 10BASE-T), and even for that there are some important differences.
We discussed a lot about developing a "true" Ethernet model, including a hub and a switch, and it's still something we're interested in having. So if you plan to develop an Ethernet model, my suggestion is to coordinate with the other interested people on the ns-dev mailing list.


What we want to build in the future is maybe a realtime Ethernet protocol, something like this: http://en.wikipedia.org/wiki/Ethernet_Powerlink . So it's not just a normal Ethernet protocol, because we want to guarantee that a packet is transmitted within a short time period; this can be a requirement for some problems in our car environment. The basic model for all of this is my mailing-list entry here: https://groups.google.com/forum/#!topic/ns-3-users/a1WNV1Wt8zw
What I have done so far is configure QEMU VMs via TAP bridges, and now I want to see how I can manipulate the testbed during the simulation ;-)
 
 
I already saw that point-to-point devices and csma devices handle their data rates a little differently, but I thought that since they all get "DataRate" from ns3 classes, I could maybe somehow ask ns3 or the Simulator to change the DataRate wherever a DataRate value is found. (I just tried using Config::Set as advised here: https://groups.google.com/forum/#!searchin/ns-3-users/config$3A$3Aset$20simulation$20time/ns-3-users/gXEXhjeNy_o/cIgaqHQzgEkJ, but it didn't change anything. It seems the only thing that works is the approach I first described :-( )

I guess you mean that you want to allow only a given subset of DataRates, i.e., something like "set the standard to 1000BASE-T". That could indeed be useful for a true Ethernet model, where it doesn't make sense to have a 42 Mbps DataRate.
This may be done in a number of ways, but I'd suggest an approach similar to the Wi-Fi one, i.e., define the standards as enumerations / literals and use them to initialize the particular bits. The user may be more interested in saying 1000BASE-T than in having to cope with finer details like DataRate, full duplex and so on.

This is a good idea, but my use case is somewhat different: I want to have these strange data rates because I want to simulate a cable that is broken or half-broken. As I said, we want to connect the nodes in a car with each other via a mesh network.
The nodes have to manage themselves. If, for example, a car crashes into our car's right front side and the cable there is damaged and no longer functions, or there is heat somewhere next to the cables, the nodes have to switch to other, good cables that operate stably and are not broken or damaged. That's why we don't want a fixed subset: we want to see, for example, at which point of damage the connection automatically switches to a new cable.
 
I also saw that many other modules depend on the csma module. Is this correct, and do you know if they all use the csma approach to DataRate? That would mean that by changing the csma module I could help change the DataRate of a lot of other modules, too.

The csma dependencies are (mostly) due to examples or tests. Some particular modules (e.g., LTE) use csma links to connect various model pieces; as a consequence there's a dependency.
When a module depends on csma, it means there's a csma link somewhere in the module.
Anyway, the only modules dependent on csma (in 3.22) are:
- csma-layout (obviously)
- LTE (as noted above)
- test (they're just tests)
- netanim (it depends on almost all the devices; it's the visualization module...)
Overall I'd say it's not really widespread among other modules :)

OK, so there is really no good generic approach. I will just drop the idea, at least for now ;-)

 
Maybe you, or somebody else who is more into the csma module than I am at the moment, could tell me whether my approach of changing the data rate during simulation time is wrong or could be hazardous.
What I figured out is that the delay is re-read with every simulated event, but the DataRate is given to the devices once, when they are attached, and is never refreshed afterwards. So all I did was refresh the value of m_bps with the current channel value when the device calls TransmitStart and calculates the time at which the next event should be scheduled.

Changing either Delay or DataRate may cause problems.
The issue is this. When a packet is transmitted, the sequence of functions is:
T0 - TransmitStart (assume it finds the channel free)
T1 - TransmitEnd (T0 + a delay dependent on the DataRate)
T2 - Receive (T1 + the propagation delay)

By the way, one of the csma problems is that the channel is "BUSY" between T0 and T2, while it shouldn't be. It's busy for a time that depends on the propagation delay, while each device should "sense" it as busy only for the time needed to transmit the packet. In other terms, there's no collision possibility AND the channel is busy for too long. This is a problem when T2-T1 is much longer than T1-T0.

Anyway, if you change the delay or the DataRate while a packet is being transmitted... what will happen?
Right now I'm not sure whether this will cause something horrible (e.g., a Cthulhu awakening). However, one thing is sure: the model wasn't developed to have varying attributes. The proof is that you shouldn't be able to change the DataRate or the delay in the middle of a transmission, and that the delay variation should be a continuous function.

OK, yes, indeed, I read before in some manual that csma handles collision detection via the channel being busy. But what I still don't get is how this "receive" is actually done. The Point-To-Point documentation states that no bits are actually sent (https://www.nsnam.org/docs/release/3.9/doxygen/group___point_to_point_model.html), so I assumed that packets are just handed to the nodes by passing them the packet's location, and the nodes can then do something with them. At least for realtime scheduling I think that with simulated nodes there is no great need to forward the actual packets, but when a VM (real hardware) is involved, the packets have to be forwarded ;-)

Is this correct?
As far as I can see, the manual and the comments in the code say that the events (T0 to T2) just schedule other events: the channel gets blocked first, then the time it takes to serialize the packet is calculated and the next event is scheduled after that simulated time, then the delay time is simulated, and then the event where the packet is handed to the other node is called.

Anyway, the delay gets changed immediately, without any modifications by me to the csma module, and both "logs" (before and after the in-simulation change of data rate and/or delay) of the sent packets look very good (as expected) when I run them. So the only things I am worried about right now are maybe the interframe gap, or something else that depends on these values and was not also modified by me, and that other nodes might ignore the busy signal on the channel because they just use a short wait period and then send their packets out. In my opinion, there could be very wrong simulation results if there is something like a too-short waiting time or the busy channel is ignored. But as I said, I just ask the channel for the latest data rate within the TransmitStart function, before the next function gets scheduled and before anything is calculated on the basis of the data rate.
So the data rate is not changed during T0-T2; it is just updated at T0 (every time it gets called, instead of being written to the device only at attachment) and stays in each device until the device calls a new T0. The delay is, as said, not modified by me, so it should be fine, hopefully ;-)

As we want to build a mesh network and will realize it with different channels between the nodes, another question is whether to take csma or point-to-point. I favoured csma because it is more similar to real Ethernet than PtP.
But for now we just need ns3 to be our flexible Ethernet cable, which will be unplugged, break, get damaged, or be plugged into another (new) node. So I think both would be fine, and having one channel for only two devices will also help decrease the risk of dangerous behaviour of channel and nodes ;-)

 
 
I also want to realize my realtime interface, where the user can change the data rate, delay, and which nodes are active or inactive during simulation time, by creating a new module using a client-server approach, much like the VSimRTI people did.
What do you think about it: is it a good idea, and how should such a thing be realized to be most useful for ns3 and its users?

It would be useful indeed. The problem may be to make it generic enough.
Another (big) issue is the realtime requirement. It's a design tradeoff (not all the modules can work in real time) and I guess that only you can decide if it's more useful to satisfy the real-time users or to allow the most complex models.

Hmm, you're right. At first I will just make sure I get everything running with our use case and the csma module. Another problem of this realtime interface is that there are no reproducible results anymore, unless the interface records all your changes and gives you a "rerun" function for the inputs you made during the last simulation run. It kind of kills the idea of a reproducible simulation, turning it into a more flexible, interactive "showcase" emulation, but it could be good for testing real devices ;-)

Sorry for the long text and the many questions and explanations, but this really helps me a lot in understanding ns3 and developing our interface.

Thanks in advance.

Best Regards,

Bernhard.

Tommaso Pecorella

Mar 12, 2015, 2:58:39 AM
to ns-3-...@googlegroups.com
Hi,

the post is becoming long, so I'll reply NOT in-line (hoping not to forget anything).

Ethernet Powerlink: nice idea, but ns-3 doesn't support it. It would be a nice addition, but you'd also need a good Ethernet model to start from.
More below.

Strange DataRates: I beg to differ. An Ethernet cable is either broken or working. If it gets damaged (e.g., heat, partial cut, etc.), the NICs may downgrade to a lower "standard", but the options are fairly limited, i.e., the available DataRates to choose from are very coarse. This is, of course, unless you have your own Ethernet-like standard. See for example this: http://www.lightreading.com/google-wants-variable-rate-ethernet/d/d-id/701991
More below (again).

And now the long part: PointToPoint or CSMA. Spoiler alert: use PointToPoint.
First things first, though: the Internet never forgets. We don't either, and we keep our past manuals for a looong time. You were looking at the 3.9 manual while we're at 3.22... it's like checking the Windows XP manual for hints about Windows 10.

The manual is kind of cryptic on this point. It's not that the PointToPoint model doesn't send bits. The point is: bits are never sent on a "simulated wire", because ns-3 always works with groups of bytes. The lowest block we consider is a chunk, which may be a segment of a datagram (or a whole datagram). The PointToPoint and CSMA models work on a whole datagram at a time.
This is important for the delay and error models: the same delay is applied to the whole chunk, and the error model applies to the chunk. The channel gives the chunk to the receiver (applying a delay), and the receiver decides (applying a statistical model) whether the chunk contains erroneous bits.
However, bits are never actually "sent"... that's what the manual was trying to say. This is valid for all ns-3 models, not only PointToPoint.

And now, should you use PointToPoint or csma? Dunno (but I'd use PointToPoint).
What I don't understand is whether you plan to model each link in your mesh network as a separate csma (or P2P) link, or to model the whole mesh as a single link.
In the first case, P2P makes more sense as it's closer to Ethernet (counterintuitive!). The reason is: P2P is full duplex, while csma is half duplex (!!!).
In the second case it doesn't really matter, as long as the aggregate behaviour is respected. Still... full duplex.
In short, I'd use P2P.

For future readers: THIS OPINION MAY CHANGE IF AND WHEN WE HAVE A PRECISE ETHERNET MODEL.
(I'm writing in bold, italics, and all-caps on purpose: if anybody comes along in 2 years saying they have read this, I'll be able to classify them as brainless idiots :) )

The last point you may need to think about is the actual ns-3 behaviour.
In a discrete-event simulator, time advances when an event is executed. During the event's execution, time doesn't advance.
By comparison, in a real system time advances continuously. There is a time cost for checksum calculation (for example), while ns-3 doesn't consider this.
As a rule of thumb, ns-3 only advances time when there's something to transmit. CPU processing time is never modeled (the discussion on this point is very long; there are good reasons not to model it).
The real-time feature in ns-3 "only" ties an event's start time to the real-time clock. The event's end time (which depends on CPU, memory, disk, etc.) is not considered.
If you schedule two events, let's say at T1 and T2 = T1 + x, and your computer needs more than x to complete the first event, the T2 deadline will not be met and there will be a slip in the simulation. If the scheduler is "hard" this will raise an error; if it isn't, only a warning.
Anyway, the point is: mind when you change a parameter; ns-3 time works differently.

About the tradeoffs between real-time, simulations and reproducibility, I totally agree with you (and with the fact that stakeholders are more impressed by nice-looking interfaces).

Cheers,

T.