Explanation of packet transmission duration calculation mechanism over wifi Module


Kevin Tewouda

Apr 27, 2015, 3:35:36 AM4/27/15
to ns-3-...@googlegroups.com
Hello,
I ran a series of simulations with the Wi-Fi module, increasing the number of nodes each time. At the end I compute the average packet transmission time from the MAC layer of a transmitter node to the physical layer of a receiver node. I plotted these averages on a graph (image attached) and noticed that the duration increases with the number of nodes and then decreases. Could someone explain this phenomenon?
Sincerely.
img1.png

Kevin Tewouda

Apr 27, 2015, 3:41:51 AM4/27/15
to ns-3-...@googlegroups.com
Sorry, I attached the wrong picture; here is the right one (it shows the same phenomenon).
img3.png

Konstantinos

Apr 27, 2015, 4:39:26 AM4/27/15
to ns-3-...@googlegroups.com
I understand that you calculate this average time over the packets that were actually received. You see that at some point you hit a maximum, and then it starts to decrease.
Check the number of received packets: does it decrease (or at least does its growth rate drop, since you may have more packets in total but you also have more nodes transmitting)?
Then I think you can answer your question yourself.

Kevin Tewouda

Apr 27, 2015, 5:20:09 AM4/27/15
to ns-3-...@googlegroups.com
All nodes transmit in my simulations.
Indeed, the number of received packets decreases from the 17th simulation onward.
Besides, I have a new question. I found that with more than 255 nodes, the nodes no longer emit BSM messages. I use the BSM application class. I first thought it was a problem of IP addresses, but a priori it should not be, given the following lines:
Ipv4AddressHelper addressAdhoc;
addressAdhoc.SetBase ("10.1.0.0", "255.255.0.0");
Do you have an idea of what the problem is?
Sincerely.



Konstantinos

Apr 27, 2015, 5:47:51 AM4/27/15
to ns-3-...@googlegroups.com
There is no limit on the number of nodes for which the BSM application generates traffic.
Look for the problem elsewhere, potentially in the synchronous start times of the applications (something discussed at length on this list for wireless scenarios).

Kevin Tewouda

Apr 27, 2015, 8:27:17 AM4/27/15
to ns-3-...@googlegroups.com
"The synchronous start time of several applications": I have only one application (BSM). Do you have another idea? I don't.
Best regards.



Konstantinos

Apr 27, 2015, 8:41:11 AM4/27/15
to ns-3-...@googlegroups.com
But you have several nodes... You have to add some randomization to the start times of the applications.

Kevin Tewouda

Apr 29, 2015, 11:03:59 AM4/29/15
to ns-3-...@googlegroups.com
Hello,
in fact I have understood part of the problem. Let me explain: I modified my BSM application class to add a tag carrying the ID of the transmitting node.
...
MyTag tag2;
tag2.SetSimpleValue ((uint8_t) txNodeId);
packet->AddPacketTag (tag2);
...
 
In my ns-3 script, I watch all the nodes that send a packet via this command:
Config::Connect ("/NodeList/*/DeviceList/*/Phy/State/Tx", MakeCallback (&WifiPhyStats::PhyTxTrace, m_wifiPhyStats));
Then in my trace method, I write something like this:
void
WifiPhyStats::PhyTxTrace (std::string context, Ptr<const Packet> packet, WifiMode mode, WifiPreamble preamble, uint8_t txpower)
{
...
  out3 << context << " Sender:" << (int) tag2.GetSimpleValue ()
       << " actual_time:" << Simulator::Now ().GetSeconds ()
       << " send_time:" << timestamp.GetDouble ()
       << " packet_id:" << (int) tag3.GetIdValue ()
       << "\n" << packet->ToString () << "\n";
}
When I look at the file, I realize that I have something like this:
/NodeList/259/DeviceList/0/Phy/State/Tx  Sender:3 actual_time:1.12441 send_time:1.10606e+09 packet_id:89

How is it possible that I do not have the same value after NodeList and at Sender?



Konstantinos

Apr 29, 2015, 11:16:38 AM4/29/15
to ns-3-...@googlegroups.com
I can't answer without the source code.
In the printing method, how are tag2 and tag3 initialized?

Kevin Tewouda

Apr 30, 2015, 5:34:00 AM4/30/15
to ns-3-...@googlegroups.com
Hello,
here is the code in my class MyTag:

TypeId
MyTag::GetTypeId (void)
{
  static TypeId tid = TypeId ("ns3::MyTag")
    .SetParent<Tag> ()
    .AddConstructor<MyTag> ()
    .AddAttribute ("SimpleValue",
                   "A simple value",
                   EmptyAttributeValue (),
                   MakeUintegerAccessor (&MyTag::GetSimpleValue),
                   MakeUintegerChecker<uint16_t> ())
  ;
  return tid;
}
TypeId
MyTag::GetInstanceTypeId (void) const
{
  return GetTypeId ();
}
uint32_t
MyTag::GetSerializedSize (void) const
{
  return 2;
}
void
MyTag::Serialize (TagBuffer i) const
{
  i.WriteU16 (m_simpleValue);
}
void
MyTag::Deserialize (TagBuffer i)
{
  m_simpleValue = i.ReadU16 ();
}
void
MyTag::Print (std::ostream &os) const
{
  os << "v=" << (uint32_t)m_simpleValue;
}
void
MyTag::SetSimpleValue (uint16_t value)
{
  m_simpleValue = value;
}
uint16_t
MyTag::GetSimpleValue (void) const
{
  return m_simpleValue;
}

and this is how I use it:
MyTag tag2;
tag2.SetSimpleValue((uint16_t)txNodeId);
packet->AddPacketTag(tag2);

I have actually understood the reason for the different numbering. Before, I used a uint8_t variable to store the ID, but a uint8_t can only hold 256 distinct values (0-255), which is why the value after NodeList and the Sender value differed. I switched to uint16_t, and now the values match.
However, I still do not understand why the time a packet takes from the application layer to the physical layer first increases and then decreases; I really need an explanation, please.

Best regards.


Konstantinos

Apr 30, 2015, 5:51:09 AM4/30/15
to ns-3-...@googlegroups.com
Hi,

Please explain the problem more clearly. What do you mean by the time from the application layer to the PHY?
You said you are using the BSM application. If you read the code of BsmApplication, you will find some randomized delays introduced:

// every BSM must be scheduled with a tx time delay
// of +/- (5) ms. See comments in StartApplication().
// we handle this as a tx delay of [0..10] ms
// from the start of the pktInterval boundary
uint32_t d_ns = static_cast<uint32_t> (m_txMaxDelay.GetInteger ());
Time txDelay = NanoSeconds (m_unirv->GetInteger (0, d_ns));

// do not want the tx delay to be cumulative, so
// deduct the previous delay value. thus we adjust
// to schedule the next event at the next pktInterval,
// plus some new [0..10] ms tx delay
Time txTime = pktInterval - m_prevTxDelay + txDelay;
m_prevTxDelay = txDelay;

In addition, between the application and the PHY there are several layers that can introduce delay, most notably the MAC layer with its random channel-access delay.

Regards,
K.