Hello,
I have been researching latency in LTE networks with ns-3 for quite a while. The scheduler seems to play one of the most important roles, so I experimented with a few parameters and noticed some strange behaviour that I would like to mention. I used the RR scheduler for both uplink and downlink:
1. Question:
When there is an uplink request, the uplink RR scheduler checks the CQI value of every RB that is available for a UE. If I understood correctly, a single RB with CQI = 0 makes the scheduler drop all RBs, since the modulation scheme for all RBs is fixed to the worst CQI among them. Uplink resource allocation then stops for this UE until an SRS update reports new CQI values, which can take up to 320 ms. Am I right that the buffer size is not taken into account in uplink RB scheduling? And does it really make sense to drop all RBs if only one RB has a bad CQI value?
2. Question:
From what I know about LTE, I would expect the uplink timing in ns-3 to look like this: between the UL or DL scheduling command and the corresponding user data on PUSCH or PDSCH there is a delay of 4 ms. I can confirm this in ns-3, although I don't understand why it takes an extra 5 ms from the uplink request to the buffer status report. With HARQ and ARQ switched on, I would also expect an ACK or NACK 4 ms after data reception. I cannot confirm this in ns-3. I studied the code, and it seems that ACK/NACK is scheduled like any other user data, incurring the same uplink and downlink latency, which adds up to 11 ms in the uplink instead of 4 ms.
Can anybody clarify these issues or help me understand the theory behind them?
Thank you in advance!
Hi Nicola,
I already tried to implement some improvements to the RR uplink scheduler (see attached file). The changes are small: keep the UE's buffer size in mind and stop allocating RBs once their number is sufficient. This seems to improve the scheduler a lot (see the attached plots), but it creates new problems when HARQ is enabled. After a while I get the error:
assert failed. cond="m_rxonBuffer[ m_vrR.GetValue () ].m_byteSegments.size () == 1", msg="Too many segments. PDU Reassembly process didn't work", file=../src/lte/model/lte-rlc-am.cc, line=816
Since I had no idea why my changes would influence the HARQ process, I stopped improving the scheduler.
As far as I know, the RR and PF downlink schedulers have the same problem: they do not take the buffer size into account.
I also wonder why the PF scheduler does not update a user's lastAveragedThroughput while RBs are being allocated. This can cause one UE to get all RBs in a TTI and, as a consequence, a large increase in latency for all users.
Regarding HARQ: yes, I read everything I could find about HARQ in the ns-3 documentation, but I never saw any timing diagrams. Let me give you an example:
One UE sends a packet (100 bytes) at time 2051 ms to a server on the internet. The server reflects the packet (adding no latency of its own). This is a summary of the logs (lines marked with * are missing when ARQ and HARQ are disabled):
Uplink:
2051 Application logs Uplink request
2056 UlMacStats entry (mcs = 2, size = 137)
2062 UlRxPhyStats entry
2062 Server logs reception
Downlink:
2062 Server logs transmission
2063 DlMacStats entry (mcs = 18, size = 967)
*2064 DlMacStats entry (mcs = 18, size = 967)
2065 DlTxPhyStats entry
2065 Application logs reception
*2066 DlTxPhyStats entry
*2072 UlMacStats entry (mcs = 6, size = 325)
*2078 UlTxPhyStats entry
This is my interpretation, please correct me if I am wrong: at 2056 the uplink scheduler decides on the uplink resources and sends the uplink grant (DCI 0) at 2058 (observed in the debugger); 4 subframes later, at 2062, the UE sends the uplink data. At 2063 the downlink data is scheduled, at 2064 the downlink ACK is scheduled, at 2065 the user data is sent, at 2066 the ACK is sent, at 2072 the uplink ACK from the UE is scheduled, and at 2078 that ACK is sent.
Here is my problem: it seems that the ACK goes through the uplink scheduler instead of being sent automatically 4 subframes later. This would be against the specification and causes latency to increase. What do you think?
Hi Nicola,
I am referring to the ACK/NACK timing of the HARQ process, defined in section 10.2 "Uplink HARQ-ACK timing" of TS 136.213:
For FDD, the UE shall upon detection of a PDSCH transmission in subframe n-4 intended for the UE and for which an HARQ-ACK shall be provided, transmit the HARQ-ACK response in subframe n.
Uplink ACKs are therefore not scheduled at the MAC layer but are sent autonomously 4 subframes later. Section 10.1.2.1 also states that the PUCCH is used for ACK/NACK. So I don't understand why ACKs are scheduled and handled as user data.
Sure, you are right that RR and PF are not latency-optimized schedulers, but ignoring the buffer size entirely is far from reality, I guess.
Do you have an idea what could cause the error produced by my RR changes? If I can, I would really like to fix it.
Hi Nicola,
I reworked my timing analysis and can now confirm that HARQ ACKs are handled correctly: they are sent over the control channel 4 ms after the data.
The delayed, scheduled ACKs I saw in the logs were created by the ARQ process in the RLC layer. To sum up: HARQ ACKs are sent automatically after 4 ms over the control channel, while ARQ ACKs are scheduled and transmitted as user data and thus share radio resources with it.
I will now work on a modified uplink RR scheduler. If I can solve the problem, I will post the modifications.