AFDX End-to-End Transmission Time - Additional network hops show no delay... why? :/


Shelly_wick

Nov 9, 2019, 5:48:42 AM
to OMNeT++ Users
Hi Everyone:

I would really appreciate some help understanding why the end-to-end transmission time does not increase when additional modules are inserted to act on the packets passing through the network.

Please see the attachments for the timing reference. The transmission ends at the "Traffic Sink", and as the attachments show, the transmission time is the same after 21 hops as after 27 hops. I am not sure why this is so; surely, by submitting the network traffic to additional processing, the transmission time must be longer, no?

At each module, the traffic is "held" until a MAC is calculated, then released afterwards; this is the delay that I was hoping would show up in the logs. The calculation is as below:

void MACVerification::handleMessage(cMessage *msg)
{
    AFDXMessage *afdxMsg = check_and_cast<AFDXMessage *>(msg);

    // Key schedule: derive the two Chaskey subkeys from the master key
    subkeys_swin(subkey1_swin, subkey2_swin, masterKey_swin);
    msgLen_swin = 20;  // Chaskey input is 20 bytes
    // Compute the Chaskey MAC over the plaintext; the result is written to hash_swin
    chaskey_swin(hash_swin, plaintext_swin, masterKey_swin, subkey1_swin, subkey2_swin);

    std::cout << "\nMAC VERIFIED." << std::endl;

    // The message is forwarded immediately, with no simulation-time delay
    send(msg, "out");
}

Thanks for your assistance!
Attachments: AFDX_NormalHops.PNG, AFDX_MoreHops.PNG, AFDX_NewModules.PNG

Rudolf Hornig

Nov 11, 2019, 8:01:10 AM
to OMNeT++ Users
In a discrete event simulator, you have to explicitly advance the simulation clock and state when a sent message should arrive at its destination. In your code, you receive a message, do some MAC calculation, and then send the message out without specifying any delay. Since you are not stating how long the processing took (in simulation time) and you are sending the message out immediately, you will not see any delay in the overall message transmission time either. In effect, you are modeling the MAC processor as if it had infinite processing speed. This may or may not be what you want.

To model the actual processing time in a node, you should use the sendDelayed() function instead of send().
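For reference, here is a minimal sketch of what that change could look like in the handler above. The "macDelay" module parameter is a hypothetical name for the assumed MAC computation time (it would be declared in the NED file and set in omnetpp.ini); it is not part of the original code:

void MACVerification::handleMessage(cMessage *msg)
{
    // ... Chaskey subkey schedule and MAC calculation as before ...

    // "macDelay" is a hypothetical module parameter holding the assumed
    // MAC computation time; forwarding is postponed by that amount of
    // simulation time, so the processing shows up as a per-hop delay.
    simtime_t macDelay = par("macDelay");
    sendDelayed(msg, macDelay, "out");
}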

Venesa Watson

Nov 11, 2019, 10:17:22 AM
to OMNeT++ Users

Hi Rudolf:

Thanks. I did end up using the sendDelayed() function, but I was trying to avoid this, as I wanted a true value for the time taken to calculate the MAC. I thought this would have been reflected in the elog file, but alas, that is not the case.

I guess that now I will have to find arguments to support the delay value that I have used to represent this MAC calculation.

Thanks!

Rudolf Hornig

Nov 13, 2019, 6:09:01 AM
to OMNeT++ Users
It is always your decision what to include in your model and what to leave out. You could also implement a queue/server pair inside the MAC module and parametrize it to mirror the MAC computation time.

On the other hand, CPU computations are usually a few orders of magnitude faster than delays caused by communication, so they are usually not worth modeling. (Check the samples/queueingnet example to see how to create queueing models.)
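As a rough illustration of that queue/server pattern (not the actual samples/queueingnet code), here is a minimal sketch; the module name "MACServer" and the "serviceTime" parameter are assumptions. One message is "in service" at a time while the rest wait in a FIFO queue:

#include <omnetpp.h>
using namespace omnetpp;

class MACServer : public cSimpleModule
{
    cQueue queue;                          // jobs waiting for the server
    cMessage *endServiceEvent = nullptr;   // self-message marking end of service

  protected:
    virtual void initialize() override {
        endServiceEvent = new cMessage("endService");
    }
    virtual void handleMessage(cMessage *msg) override {
        if (msg == endServiceEvent) {
            // Service done: forward the finished job, then start the next one
            send(check_and_cast<cMessage *>(queue.pop()), "out");
            if (!queue.isEmpty())
                scheduleAt(simTime() + par("serviceTime").doubleValue(), endServiceEvent);
        }
        else {
            // New arrival: enqueue it; kick off service if the server is idle
            queue.insert(msg);
            if (!endServiceEvent->isScheduled())
                scheduleAt(simTime() + par("serviceTime").doubleValue(), endServiceEvent);
        }
    }
    virtual ~MACServer() {
        cancelAndDelete(endServiceEvent);
    }
};

Define_Module(MACServer);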