Veins EDCA results


Marie Lisle

May 28, 2012, 11:19:58 AM
to omn...@googlegroups.com
Hi guys!

I am using Veins 2.0-RC2 and OMNeT++ 4.2.2, and I am working with Veins' 802.11p project. I am trying to test the EDCA mechanism. To that end, I am running the Veins example. However, I have several doubts:

- When is the data sent? Analyzing the code, it seems that each time a node receives a beacon, a data packet is sent. I do not know if it is possible to configure the data channels.
- What's the meaning of beacon priority? 
- I have analyzed the results of the article presented at VTC Spring 2012, "On the Necessity of Accurate IEEE 802.11p Models for IVC Protocol Simulation", but I'm not able to reproduce the same results. How can I measure the delay and the throughput? Should I measure the difference between simTime() and msg->getCreationTime() at the application layer?


On the other hand, I have tried to measure the delay when setting different beacon priorities, but the results do not make any sense (see attached file). I do not know if I'm measuring the delay correctly. Is there any version of the MAC 1609.4 where the delay and throughput are recorded?

Thanks! 
beacon_delay.png

Marie Lisle

May 28, 2012, 12:05:25 PM
to omn...@googlegroups.com
I forgot to attach the configuration file I'm using :P

*.node[*].applType = "TestWaveApplLayer"
*.node[*].appl.debug = false

*.node[*].appl.headerLength = 256 bit
*.node[*].appl.sendBeacons = true
*.node[*].appl.dataOnSch = true
*.node[*].appl.sendData = true
*.node[*].appl.dataLengthBits = 1000 bit
*.node[*].appl.beaconInterval = 0.01s
*.node[*].appl.maxOffset = 0.005s

*.node[0].appl.beaconPriority = 0
*.node[0].appl.dataPriority = 0
*.node[1].appl.beaconPriority = 1
*.node[1].appl.dataPriority = 1
*.node[2].appl.beaconPriority = 2
*.node[2].appl.dataPriority = 2
*.node[3].appl.beaconPriority = 3
*.node[3].appl.dataPriority = 3

David Eckhoff

May 28, 2012, 3:08:47 PM
to omn...@googlegroups.com
Hello Marie,

> - When is the data sent? Analyzing the code, it seems that each time
> a node receives a beacon, a data packet is sent. I do not know if
> it is possible to configure the data channels.

This is only an example in Veins. If and when data packets are sent
depends on the application used. We added a TestApplicationLayer so
there's at least something happening. A 'default' data channel is
configurable, and the data channel can be changed at runtime (see the
NED file for the WAVE application layer and the AppToMac interface).

> - What's the meaning of beacon priority?

The beacon priority is an application-layer priority that is mapped to
an access category (AC_VO, AC_VI, AC_BK, AC_BE) at the MAC layer. Right
now, it's just a 1:1 mapping of priorities 0-3 to the access categories
(see the mapPriority function in the MAC layer).
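In isolation, such a 1:1 mapping might look like the sketch below. Which number ends up in which category is an assumption here; check the actual mapPriority function in Mac1609_4 for the real assignment.

```cpp
#include <cassert>

// EDCA access categories (names as used in the thread).
enum t_access_category { AC_BK = 0, AC_BE = 1, AC_VI = 2, AC_VO = 3 };

// Sketch of a 1:1 priority-to-AC mapping like the one described above.
// The direction of the mapping (0 = background ... 3 = voice) is an
// assumption for illustration, not taken from the Veins source.
t_access_category mapUserPriority(int prio) {
    switch (prio) {
        case 0: return AC_BK;  // background
        case 1: return AC_BE;  // best effort
        case 2: return AC_VI;  // video
        case 3: return AC_VO;  // voice
        default: return AC_BE; // fallback for out-of-range values (assumption)
    }
}
```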

> - I have analyzed the results of the article presented at VTC Spring
> 2012, "On the Necessity of Accurate IEEE 802.11p Models for IVC
> Protocol Simulation", but I'm not able to reproduce the same
> results. How can I measure the delay and the throughput? Should I
> measure the difference between simTime() and
> msg->getCreationTime() at the application layer?

simTime() and msg->getCreationTime() are good for measuring end-to-end
delays, but please be careful: delay is heavily dependent on channel
load. In order to reproduce the same results you'd need the same channel
load and internal queue states. In the paper you cited, we basically
showed that there's a problem with beacon frequencies > 10 Hz (delays of
<= 54 ms) and that the alternating access scheme heavily impacts
throughput (data and beacons can each only be sent in 50% of the
slots). This has been confirmed and independently shown in different
publications.
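The 54 ms bound follows from the IEEE 1609.4 channel timing: a 100 ms sync interval is split into a 50 ms CCH and a 50 ms SCH interval, each beginning with a roughly 4 ms guard interval. A simplified back-of-the-envelope sketch (not Veins code):

```cpp
#include <cassert>

// IEEE 1609.4 timing, in milliseconds (standard values).
const int INTERVAL_MS = 50; // length of one CCH or SCH interval
const int GUARD_MS = 4;     // guard interval at the start of each interval

// Simplified model: a packet destined for the currently *inactive*
// channel, created t ms into the current interval (0 <= t < 50),
// must wait out the rest of the interval plus the next guard interval.
// The worst case, t = 0, gives 50 + 4 = 54 ms.
int worstCaseWaitMs(int t) {
    return (INTERVAL_MS - t) + GUARD_MS;
}
```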

> On the other hand, I have tried to measure the delay when
> setting different beacon priorities, but the results do not
> make any sense (see attached file). I do not know if I'm measuring
> the delay correctly. Is there any version of the MAC 1609.4 where
> the delay and throughput are recorded?

This looks strange. Are you sure the labels are right (maybe shifted)?
Also, a delay of almost 5 ms seems too high. When are the messages
created? If they are created during the guard interval of 1609.4, you're
introducing additional delays. If you enable the debug output (#define
DBG..) in the MAC layer you can see exactly what's happening, i.e. the
EDCA timings and so on.

Greetings, David

>
> Thanks!

Marie Lisle

May 31, 2012, 4:54:13 PM
to omn...@googlegroups.com
Thanks David! I really appreciate your help. However, I'm still struggling with the delay measurements. I haven't been able to obtain valid results. I've checked the labels and the mapPriority function thousands of times, but I haven't found my error yet. I'm starting to think that I'm not measuring the delay where I'm supposed to.

To measure the delay, I've added the following to the onData method in TestWaveApplLayer.cc:

if (std::string(wsm->getName()) == "data") {
    switch (wsm->getPriority()) {
        case 0:
            dataDelayVector0.record((simTime() - wsm->getCreationTime()).dbl());
            break;
        case 1:
            dataDelayVector1.record((simTime() - wsm->getCreationTime()).dbl());
            break;
        case 2:
            dataDelayVector2.record((simTime() - wsm->getCreationTime()).dbl());
            break;
        case 3:
            dataDelayVector3.record((simTime() - wsm->getCreationTime()).dbl());
            break;
    }
}
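The same per-priority recording could also be written without the repeated switch by indexing into an array. A self-contained sketch, where a plain std::vector stands in for OMNeT++'s cOutVector (that substitution is purely for illustration):

```cpp
#include <cassert>
#include <vector>

const int NUM_PRIORITIES = 4;

// One delay series per priority; std::vector stands in for cOutVector.
std::vector<double> dataDelay[NUM_PRIORITIES];

// Record one end-to-end delay sample (in seconds) for a given priority,
// silently dropping out-of-range priorities.
void recordDelay(int priority, double delaySeconds) {
    if (priority >= 0 && priority < NUM_PRIORITIES)
        dataDelay[priority].push_back(delaySeconds);
}
```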

I've been checking the way the delay is recorded for 802.11e in the INETMANET framework. Should I measure the delay in the Mac1609_4.cc file? I've tried to do it in this MAC layer, but I'm not able to access either the priority or the name fields there. How did you measure the delay? Right now the delay I'm getting is way too high (it even exceeds 54 ms), and the ECDF I'm getting does not look like the ones you plotted in your paper (I attach my results for 10 Hz).

Thanks again!!!
delay_ecdf.png

David Eckhoff

Jun 4, 2012, 4:56:46 AM
to omn...@googlegroups.com
Hello Marie,

On 05/31/2012 10:54 PM, Marie Lisle wrote:
> To measure the delay, I've added the following to the onData method
> in TestWaveApplLayer.cc:
> [...]

That looks about right. It doesn't matter whether you check it in the
MAC or application layer, as packets are handed up without any
additional delay. However, you are measuring data packets, not beacons.
When are the data packets created? If they are created upon reception
of a beacon message, you will always see high delays, because at those
times the control channel is active and the MAC has to wait for the SCH
interval to become active.

If you really want to debug, you can either enable the DBG output in the
MAC or add some debug output yourself: print the time when a beacon
message is created, the time when the MAC layer accesses the channel,
the time when the PHY layer receives a packet, and the time when the MAC
receives one. Received beacons will cause the onBeacon method in the
application layer to run.
For an easy setup, just add two hosts and make one host send only a
single packet, while the other does nothing but try to receive it.
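Such a minimal setup could be approximated with the parameters already used in this thread, e.g. (a sketch, not a verified configuration; in particular, sending exactly one packet would need a small code change rather than just a parameter):

```ini
# Two-node debug setup (sketch; parameter names taken from the
# configuration posted earlier in this thread)
*.node[*].applType = "TestWaveApplLayer"
*.node[*].appl.sendData = false
*.node[*].appl.sendBeacons = false
*.node[0].appl.sendBeacons = true   # only node[0] transmits
*.node[0].appl.beaconInterval = 1s  # few packets, easy to trace
*.node[0].appl.beaconPriority = 0
```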

You can access all fields in the MAC layer by decapsulating the MAC
packet there, but as I said, there is no need to.


> Right now the delay I'm getting is way too high (it
> even exceeds 54 ms) and the ECDF I'm getting does not look like
> the ones you plotted in your paper (I attach my results for 10 Hz).

Your plot has an x-axis from 0.02 ms to 0.045 ms (very low delays). I'm
not sure how you plotted it and with which priorities (it looks like a
mixture of 4 priorities), but when you have packets with 50 ms delays
(from creating them in the wrong channel interval) the plot gets pushed
together, causing it to look more like the one from the paper.


--
Dipl.-Inf. Univ. David Eckhoff
Computer Networks and Communication Systems
University of Erlangen, Germany
Phone: +49 9131 85-27627 / Fax: +49 9131 85-27409
mailto:eck...@cs.fau.de
http://www7.cs.fau.de/~eckhoff/

Marie Lisle

Jun 7, 2012, 12:19:03 PM
to omn...@googlegroups.com
Thanks David. I'm trying to debug the application, but still without success. It would be awesome if you could help me with two doubts that I have:

1) Where and how did you record the delay? I'm trying to record it in the Mac1609_4 class, in the handleSelfMsg method, but I'm getting zero delay for all the ACs, no matter the beacon frequency.

2) It seems to me that there is some kind of mistake in the call to the createQueue method. Right now it reads
     myEDCA[type_CCH]->createQueue(9, CWMIN_11P, CWMAX_11P, AC_BK);
when I would have expected:
    myEDCA[type_CCH]->createQueue(9, CWMAX_11P, CWMIN_11P, AC_BK);
Is that a bug?

Thanks again for your support!!
Marie


David Eckhoff

Jun 7, 2012, 10:03:28 PM
to omn...@googlegroups.com
Hello Marie,

On 06/07/2012 6:19 PM, Marie Lisle wrote:
> Thanks David. I'm trying to debug the application but still no success.
> It would be awesome if you could help me with two doubts that I have:
>
> 1) Where and how did you record the delay? I'm trying to record the
> delay in the Mac1609_4 class in the handleSelfMsg method, but I'm
> getting 0s delay for all the ACs, no matter the beacon frequency.

Please refer to my previous mails, in which I explained where to record
the delay of messages. handleSelfMsg is probably not the place where you
want to record it. Just record it in the application layer and you're fine.

> 2) It seems to me like there is some kind of mistake when using the
> createQueue method. Now it's like
> myEDCA[type_CCH]->createQueue(9,CWMIN_11P,CWMAX_11P,AC_BK);
> when I'd put it like:
> myEDCA[type_CCH]->createQueue(9,CWMAX_11P,CWMIN_11P,AC_BK);
> is that a bug?

No, this is not a bug, but there is something wrong here that doesn't
affect the model, only the readability of the code. I seem to have mixed
up the order of the parameters, but the error gets cancelled out by
line 476. I will fix this in the next version; the contention window
sizes still get set correctly right now. Thank you for pointing it out.
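The cancellation David describes can be shown in isolation: if a function declaration swaps the names of two parameters and the body swaps the assignments the same way, values passed in the "intuitive" order still land in the right fields. A self-contained illustration (the signature and struct below are made up for this sketch, not the actual Veins code; 15 and 1023 are the standard 802.11p contention window bounds):

```cpp
#include <cassert>

const int CWMIN_11P = 15;   // standard 802.11p aCWmin
const int CWMAX_11P = 1023; // standard 802.11p aCWmax

struct EdcaQueue { int cwMin; int cwMax; };

// The parameter names are in the wrong order (cwMax before cwMin),
// mirroring the mix-up discussed in the thread (illustration only).
EdcaQueue createQueue(int aifsn, int cwMax, int cwMin) {
    (void)aifsn; // unused in this sketch
    EdcaQueue q;
    // The assignments are swapped as well, so the two mistakes cancel:
    q.cwMin = cwMax; // actually receives CWMIN_11P from the call
    q.cwMax = cwMin; // actually receives CWMAX_11P from the call
    return q;
}
```

So calling createQueue(9, CWMIN_11P, CWMAX_11P) still yields a queue with cwMin = 15 and cwMax = 1023, even though each value passes through a misnamed parameter.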

Greetings,
David

--
Dipl.-Inf. Univ. David Eckhoff
Computer Networks and Communication Systems
University of Erlangen, Germany
Phone: +49 9131 85-27627 / Fax: +49 9131 85-27409
mailto:eck...@informatik.uni-erlangen.de
http://www7.informatik.uni-erlangen.de/~eckhoff/