I am new to AADL modeling and am using OSATE to run latency analysis on a multicore configuration. My model is simple: two cores (C1, C2) instantiated from the same processor implementation, each running one process (P1, P2) containing one thread (T1, T2). Both threads are periodic at the same rate (R1). The cores are connected by a generic bus implementation. I created data ports on both threads, sending data from T1 to T2, and an end-to-end flow T1 -> P1 -> C1 -> generic bus -> C2 -> P2 -> T2. The model instantiates without errors, but I am running into issues with the OSATE latency analysis tool. I selected the SS, MF, ET, EQ, and DQL options, and I am not modeling ARINC653 partitions.
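For concreteness, here is a minimal sketch of the topology described above. All names, periods, and execution times are hypothetical placeholders, and data types and the receiver process body are elided for brevity:

```aadl
package mc_latency
public
  bus generic_bus
  end generic_bus;

  bus implementation generic_bus.impl
  end generic_bus.impl;

  processor core
    features
      ba: requires bus access generic_bus;
  end core;

  processor implementation core.impl
  end core.impl;

  thread sender
    features
      tx_data: out data port;
    flows
      f_src: flow source tx_data;
    properties
      Dispatch_Protocol => Periodic;
      Period => 100 ms;                       -- R1 (placeholder)
      Compute_Execution_Time => 1 ms .. 5 ms; -- placeholder
  end sender;

  thread receiver
    features
      rcv_data: in data port;
    flows
      f_snk: flow sink rcv_data;
    properties
      Dispatch_Protocol => Periodic;
      Period => 100 ms;                       -- same rate R1
      Compute_Execution_Time => 1 ms .. 5 ms;
  end receiver;

  process p_sender
    features
      tx_data: out data port;
    flows
      f_src: flow source tx_data;
  end p_sender;

  process implementation p_sender.impl
    subcomponents
      t1: thread sender;
    connections
      c: port t1.tx_data -> tx_data;
    flows
      f_src: flow source t1.f_src -> c -> tx_data;
  end p_sender.impl;

  -- p_receiver / p_receiver.impl mirror p_sender with an in
  -- port and a flow sink wrapping receiver.f_snk

  system top
  end top;

  system implementation top.impl
    subcomponents
      c1: processor core.impl;
      c2: processor core.impl;
      p1: process p_sender.impl;
      p2: process p_receiver.impl;
      b:  bus generic_bus.impl;
    connections
      conn: port p1.tx_data -> p2.rcv_data { Timing => delayed; };
      bac1: bus access b <-> c1.ba;
      bac2: bus access b <-> c2.ba;
    flows
      etef: end to end flow p1.f_src -> conn -> p2.f_snk;
    properties
      Actual_Processor_Binding => (reference (c1)) applies to p1;
      Actual_Processor_Binding => (reference (c2)) applies to p2;
      Actual_Connection_Binding => (reference (b)) applies to conn;
  end top.impl;
end mc_latency;
```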
My goal is to model latency when critical data is sent from core1 to core2 in a synchronous system.
The latency report provides the following information:
1) First sampling time for T1, which is defined as a flow source, is zero.
2) Processing time for T1, taken from its Compute_Execution_Time property.
3) Generic bus transmission time (calculated correctly)
4) Generic bus queuing and sampling protocol times (both zero, as no data was provided).
5) The connection P1.t1.tx_data -> P2.t2.rcv_data has no sampling or queuing latency. (I tried declaring the connection as immediate, delayed, and sampled, but nothing changed; queuing latency is always zero since I disabled queues.)
6) Sampling time for T2, defined as a flow sink, is non-zero, and this is where the problem lies:
a) With the port connection's Timing => sampled: Min method is sampling, Min Actual is zero, and Max Actual is an unexpected value. After reading Section 4.2 of the latency paper, I expected T2's sampling latency to be T1's processing time plus any connection delay, but the value in the report is not what I expected.
b) With Timing => delayed: Min method is delayed sampling, and Min Actual is greater than Max Actual. How can Min Actual exceed Max Actual? I was expecting a frame-delayed value, i.e., T1's period plus the bus transmission time, but Min Actual is less than T1's period (R1) and Max Actual is less than half of T2's period (also R1).
c) With Timing => immediate: no issue.
7) Processing time for T2, taken from its Compute_Execution_Time property, is correct.
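In case my property declarations are the issue: for items 3), 4), and 6) I set the standard Transmission_Time and Latency properties on the bus and Timing on the system-level connection. A sketch with placeholder values follows (note that the enumeration literals in the predeclared Communication_Properties set are lowercase: sampled, immediate, delayed):

```aadl
bus implementation generic_bus.impl
  properties
    -- placeholder values; Fixed covers per-message overhead,
    -- PerByte scales with message size
    Transmission_Time => [ Fixed => 10 us .. 20 us;
                           PerByte => 1 us .. 2 us; ];
    Latency => 5 us .. 10 us;  -- propagation/arbitration delay
end generic_bus.impl;

-- On the system-level connection (names are hypothetical):
-- conn: port p1.tx_data -> p2.rcv_data { Timing => delayed; };
```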
I would appreciate it if someone could explain how the OSATE tool calculates the sampling latency for the T2 thread. I tried changing the Compute_Execution_Time property for both T1 and T2 but got similar results.
What am I doing wrong in the model, or is this a bug in OSATE?
Thanks