Hello everyone,
I'm part of a research group working with the nr module to develop a beam management framework. Our implementation now differs significantly from the current version of the NR module. During high-throughput simulations with multiple UEs in a multi-gNB environment, we observed some anomalies.
I was able to reproduce the anomaly with the latest ns-3 release (3.36.1)
and the NR module (v2.2), using the cttc-nr-demo.cc example. I modified the
example to the following setup:
- 20 gNBs, each with 1 UE attached
- udpPacketSize = 1400
- lambda = 89000, which results in approximately 1 Gb/s offered load per UE
- numerology 3 with 400 MHz bandwidth in the 28 GHz band
The simulation results in the following throughput values:
Flow 1 Throughput: 613.145120 Mbps
Flow 2 Throughput: 987.204960 Mbps
Flow 3 Throughput: 991.051040 Mbps
Flow 4 Throughput: 747.034400 Mbps
Flow 5 Throughput: 748.728960 Mbps
Flow 6 Throughput: 991.051040 Mbps
Flow 7 Throughput: 991.051040 Mbps
Flow 8 Throughput: 991.032000 Mbps
Flow 9 Throughput: 491.841280 Mbps
Flow 10 Throughput: 114.773120 Mbps
Flow 11 Throughput: 111.441120 Mbps
Flow 12 Throughput: 114.735040 Mbps
Flow 13 Throughput: 111.403040 Mbps
Flow 14 Throughput: 114.716000 Mbps
Flow 15 Throughput: 111.403040 Mbps
Flow 16 Throughput: 114.754080 Mbps
Flow 17 Throughput: 111.422080 Mbps
Flow 18 Throughput: 114.735040 Mbps
Flow 19 Throughput: 111.441120 Mbps
Flow 20 Throughput: 114.716000 Mbps
UE1–UE8 show a throughput above 600 Mbps, some of them close to the specified target value. I'm aware that the maximum throughput per gNB is limited to about 1.6 Gb/s, which is why I attached only 1 UE per gNB. UE9 achieves a throughput of only 491 Mbps. We do not know why some UEs (UE1, UE4, UE5) fail to reach the target throughput, and we have not observed similar behavior in our own framework.
UE10–UE20 only reach slightly more than 110 Mbps. This matches the behavior we observed with our framework. Increasing the point-to-point data rate to 500 Gb/s raises the throughput for UE9 and UE10, but lowers it for UE11–UE20 to around 35 Mbps. Lowering the P2P data rate decreases the throughput for the lower-numbered UEs and increases it to about 395 Mbps for the higher-numbered UEs. So there seems to be some relation to the P2P link.
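In case anyone wants to reproduce the P2P dependency: to my understanding, the relevant knob in cttc-nr-demo.cc is the device data rate of the remote-host link, assuming the example still builds that link with a PointToPointHelper (helper and attribute names as in the stock example):

```cpp
// Sketch of the modification, assuming cttc-nr-demo.cc's remote-host link
// is built with a PointToPointHelper as in the stock example.
PointToPointHelper p2ph;
p2ph.SetDeviceAttribute("DataRate", DataRateValue(DataRate("500Gb/s"))); // varied in our tests
p2ph.SetDeviceAttribute("Mtu", UintegerValue(2500));
p2ph.SetChannelAttribute("Delay", TimeValue(Seconds(0.000)));
```

Varying only the "DataRate" string was enough to shift which UEs reach the target in our runs.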
When the target throughput is reduced to around 400 Mbps, all UEs maintain the target, including UE10–UE20. Testing 50 UEs at 400 Mbps each leads to similar behavior: in such a simulation, the throughput for UE23 and above falls significantly below the target.
I tried to trace down the behavior but got stuck at certain points. It seems that in NrMacSchedulerNs3::ComputeActiveUe the buffer is not as full as it is for the high-throughput UEs. The buffer is filled in LteRlcUm::DoTransmitPdcpPdu, and this function is called less frequently for the affected UEs. On the packet-origin side (the remote host), packets appear to be sent regularly to all UEs. We suspected that a buffer overflow causes this issue. Setting:
Config::SetDefault ("ns3::LteRlcUm::MaxTxBufferSize", UintegerValue (999999999));
however, does not seem to prevent the issue. We investigated this because it solved another throughput-related question in this forum.
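One thing worth double-checking to rule it out: Config::SetDefault only affects objects created afterwards, so the override has to be issued before the NR devices (and thus the RLC entities) are instantiated. A minimal sketch of the ordering, with helper names assumed to match the example:

```cpp
// Must run before InstallGnbDevice()/InstallUeDevice(); otherwise the
// RLC entities are created with the stock MaxTxBufferSize.
Config::SetDefault("ns3::LteRlcUm::MaxTxBufferSize", UintegerValue(999999999));

NetDeviceContainer gnbDevs = nrHelper->InstallGnbDevice(gnbNodes, allBwps);
NetDeviceContainer ueDevs = nrHelper->InstallUeDevice(ueNodes, allBwps);
```

In our runs the default was set before installation, so the buffer size should indeed have been applied.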
I would be glad for any hints/tips if you have experienced something similar or found a solution to this issue. I'm currently trying to reproduce this in an LTE-only setup and will share my conclusions. :)