
Rx Packets become 0 when there are more than 10 UEs


Hans

Jul 19, 2021, 5:57:20 AM
to 5G-LENA-users
Hi, sorry for asking this question.
I ran the code cttc-nr-demo.cc.
I set the number of UEs to 24, which means 12 UEs for the lowlat flow and 12 for the voice flow. I didn't change any other parameters.
I noticed that the lowlat UEs don't receive any packets. May I know why?

Flow 1 (1.0.0.2:49153 -> 7.0.0.2:1234) proto UDP
  Tx Packets: 6000
  Tx Bytes:   768000
  TxOffered:  10.240000 Mbps
  Rx Bytes:   0
  Throughput:  0 Mbps
  Mean delay:  0 ms
  Mean jitter: 0 ms
  Rx Packets: 0
Flow 2 (1.0.0.2:49154 -> 7.0.0.3:1234) proto UDP
  Tx Packets: 6000
  Tx Bytes:   768000
  TxOffered:  10.240000 Mbps
  Rx Bytes:   0
  Throughput:  0 Mbps
  Mean delay:  0 ms
  Mean jitter: 0 ms
  Rx Packets: 0
Flow 3 (1.0.0.2:49155 -> 7.0.0.4:1234) proto UDP
  Tx Packets: 6000
  Tx Bytes:   768000
  TxOffered:  10.240000 Mbps
  Rx Bytes:   0
  Throughput:  0 Mbps
  Mean delay:  0 ms
  Mean jitter: 0 ms
  Rx Packets: 0
Flow 4 (1.0.0.2:49156 -> 7.0.0.5:1234) proto UDP
  Tx Packets: 6000
  Tx Bytes:   768000
  TxOffered:  10.240000 Mbps
  Rx Bytes:   0
  Throughput:  0 Mbps
  Mean delay:  0 ms
  Mean jitter: 0 ms
  Rx Packets: 0
Flow 5 (1.0.0.2:49157 -> 7.0.0.6:1234) proto UDP
  Tx Packets: 6000
  Tx Bytes:   768000
  TxOffered:  10.240000 Mbps
  Rx Bytes:   0
  Throughput:  0 Mbps
  Mean delay:  0 ms
  Mean jitter: 0 ms
  Rx Packets: 0
Flow 6 (1.0.0.2:49158 -> 7.0.0.7:1234) proto UDP
  Tx Packets: 6000
  Tx Bytes:   768000
  TxOffered:  10.240000 Mbps
  Rx Bytes:   0
  Throughput:  0 Mbps
  Mean delay:  0 ms
  Mean jitter: 0 ms
  Rx Packets: 0
Flow 7 (1.0.0.2:49159 -> 7.0.0.8:1234) proto UDP
  Tx Packets: 6000
  Tx Bytes:   768000
  TxOffered:  10.240000 Mbps
  Rx Bytes:   0
  Throughput:  0 Mbps
  Mean delay:  0 ms
  Mean jitter: 0 ms
  Rx Packets: 0
Flow 8 (1.0.0.2:49160 -> 7.0.0.9:1234) proto UDP
  Tx Packets: 6000
  Tx Bytes:   768000
  TxOffered:  10.240000 Mbps
  Rx Bytes:   0
  Throughput:  0 Mbps
  Mean delay:  0 ms
  Mean jitter: 0 ms
  Rx Packets: 0
Flow 9 (1.0.0.2:49161 -> 7.0.0.10:1234) proto UDP
  Tx Packets: 6000
  Tx Bytes:   768000
  TxOffered:  10.240000 Mbps
  Rx Bytes:   0
  Throughput:  0 Mbps
  Mean delay:  0 ms
  Mean jitter: 0 ms
  Rx Packets: 0
Flow 10 (1.0.0.2:49162 -> 7.0.0.11:1234) proto UDP
  Tx Packets: 6000
  Tx Bytes:   768000
  TxOffered:  10.240000 Mbps
  Rx Bytes:   0
  Throughput:  0 Mbps
  Mean delay:  0 ms
  Mean jitter: 0 ms
  Rx Packets: 0
Flow 11 (1.0.0.2:49163 -> 7.0.0.12:1234) proto UDP
  Tx Packets: 6000
  Tx Bytes:   768000
  TxOffered:  10.240000 Mbps
  Rx Bytes:   0
  Throughput:  0 Mbps
  Mean delay:  0 ms
  Mean jitter: 0 ms
  Rx Packets: 0
Flow 12 (1.0.0.2:49164 -> 7.0.0.13:1234) proto UDP
  Tx Packets: 6000
  Tx Bytes:   768000
  TxOffered:  10.240000 Mbps
  Rx Bytes:   0
  Throughput:  0 Mbps
  Mean delay:  0 ms
  Mean jitter: 0 ms
  Rx Packets: 0
Flow 13 (1.0.0.2:49165 -> 7.0.0.14:1235) proto UDP
  Tx Packets: 6000
  Tx Bytes:   7680000
  TxOffered:  102.400000 Mbps
  Rx Bytes:   2385920
  Throughput: 31.812267 Mbps
  Mean delay:  208.111786 ms
  Mean jitter:  0.220467 ms
  Rx Packets: 1864
Flow 14 (1.0.0.2:49166 -> 7.0.0.15:1235) proto UDP
  Tx Packets: 6000
  Tx Bytes:   7680000
  TxOffered:  102.400000 Mbps
  Rx Bytes:   1669120
  Throughput: 22.254933 Mbps
  Mean delay:  236.635553 ms
  Mean jitter:  0.357707 ms
  Rx Packets: 1304
Flow 15 (1.0.0.2:49167 -> 7.0.0.16:1235) proto UDP
  Tx Packets: 6000
  Tx Bytes:   7680000
  TxOffered:  102.400000 Mbps
  Rx Bytes:   2384640
  Throughput: 31.795200 Mbps
  Mean delay:  208.180064 ms
  Mean jitter:  0.220505 ms
  Rx Packets: 1863
Flow 16 (1.0.0.2:49168 -> 7.0.0.17:1235) proto UDP
  Tx Packets: 6000
  Tx Bytes:   7680000
  TxOffered:  102.400000 Mbps
  Rx Bytes:   2384640
  Throughput: 31.795200 Mbps
  Mean delay:  208.162207 ms
  Mean jitter:  0.220505 ms
  Rx Packets: 1863
Flow 17 (1.0.0.2:49169 -> 7.0.0.18:1235) proto UDP
  Tx Packets: 6000
  Tx Bytes:   7680000
  TxOffered:  102.400000 Mbps
  Rx Bytes:   2384640
  Throughput: 31.795200 Mbps
  Mean delay:  208.144350 ms
  Mean jitter:  0.220505 ms
  Rx Packets: 1863
Flow 18 (1.0.0.2:49170 -> 7.0.0.19:1235) proto UDP
  Tx Packets: 6000
  Tx Bytes:   7680000
  TxOffered:  102.400000 Mbps
  Rx Bytes:   2384640
  Throughput: 31.795200 Mbps
  Mean delay:  208.126493 ms
  Mean jitter:  0.220505 ms
  Rx Packets: 1863
Flow 19 (1.0.0.2:49171 -> 7.0.0.20:1235) proto UDP
  Tx Packets: 6000
  Tx Bytes:   7680000
  TxOffered:  102.400000 Mbps
  Rx Bytes:   2385920
  Throughput: 31.812267 Mbps
  Mean delay:  208.218928 ms
  Mean jitter:  0.220467 ms
  Rx Packets: 1864
Flow 20 (1.0.0.2:49172 -> 7.0.0.21:1235) proto UDP
  Tx Packets: 6000
  Tx Bytes:   7680000
  TxOffered:  102.400000 Mbps
  Rx Bytes:   2385920
  Throughput: 31.812267 Mbps
  Mean delay:  208.201071 ms
  Mean jitter:  0.220467 ms
  Rx Packets: 1864
Flow 21 (1.0.0.2:49173 -> 7.0.0.22:1235) proto UDP
  Tx Packets: 6000
  Tx Bytes:   7680000
  TxOffered:  102.400000 Mbps
  Rx Bytes:   2385920
  Throughput: 31.812267 Mbps
  Mean delay:  208.183214 ms
  Mean jitter:  0.220467 ms
  Rx Packets: 1864
Flow 22 (1.0.0.2:49174 -> 7.0.0.23:1235) proto UDP
  Tx Packets: 6000
  Tx Bytes:   7680000
  TxOffered:  102.400000 Mbps
  Rx Bytes:   2385920
  Throughput: 31.812267 Mbps
  Mean delay:  208.165357 ms
  Mean jitter:  0.220467 ms
  Rx Packets: 1864
Flow 23 (1.0.0.2:49175 -> 7.0.0.24:1235) proto UDP
  Tx Packets: 6000
  Tx Bytes:   7680000
  TxOffered:  102.400000 Mbps
  Rx Bytes:   2385920
  Throughput: 31.812267 Mbps
  Mean delay:  208.147500 ms
  Mean jitter:  0.220467 ms
  Rx Packets: 1864
Flow 24 (1.0.0.2:49176 -> 7.0.0.25:1235) proto UDP
  Tx Packets: 6000
  Tx Bytes:   7680000
  TxOffered:  102.400000 Mbps
  Rx Bytes:   2385920
  Throughput: 31.812267 Mbps
  Mean delay:  208.129643 ms
  Mean jitter:  0.220467 ms
  Rx Packets: 1864


  Mean flow throughput: 15.505067
  Mean flow delay: 105.266924
   

Hans

Jul 20, 2021, 7:28:56 AM
to 5G-LENA-users
Does anyone know?

Katerina Koutlia

Jul 20, 2021, 7:39:14 AM
to Hans, 5G-LENA-users
Hi,

I think you have too many UEs, and the low-latency flows most probably cannot be scheduled because there are not enough resources for all these UEs.
Please reduce the number of UEs and try to find the limit.

BR,
Kat


Katerina Koutlia

Jul 22, 2021, 3:55:35 AM
to 5G-LENA-users
Hi again,

Since I see you have 12 UEs as LowLat and 12 as voice, I don't think the problem is the number of UEs.
Could you please check that the LowLat configuration doesn't have any errors?

If you want, you can share your script with us so that we can have a look.

BR,
Kat


Eoin O'Reilly

Jul 23, 2021, 12:06:26 PM
to 5G-LENA-users
  Hi All,

I have found similar behaviour; see the attached file. I am trying to find the limits by running with a varying number of UEs: for 20 UEs all flows have Rx packets, for 40 UEs 12 flows have Rx packets, and for 80 UEs 11 flows have Rx packets. I am running the latest NR version, and the only other change is that I have increased the SrsPeriodicity value to 320 in the lte-enb-rrc.cc file.
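As a side note for anyone reproducing this: if I understand the LTE module correctly, the same SrsPeriodicity change can also be made from the script through the attribute system instead of editing lte-enb-rrc.cc. A sketch only, not verified against every nr release:

  // Sketch: raise the SRS periodicity (which, in the LTE RRC code reused by nr,
  // caps how many UEs a cell can keep connected) before the gNB devices are
  // installed. Only the standardized values (2, 5, 10, 20, 40, 80, 160, 320)
  // are accepted.
  Config::SetDefault ("ns3::LteEnbRrc::SrsPeriodicity", UintegerValue (320));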

./waf --run "cttc-nr-demo --ueNumPergNb=80 --packetSizeBe=100" is the command being used, with the UE value being changed. I am running with 60 at the moment to determine when the 0 Rx packets issue occurs.

When I ran with both ULL and BE, I found the following:

UEs = 35 - no RX packets for ULL, 12 of the BE flows have RX packets
UEs = 40 - no RX packets for ULL, 12 BE with RX packets
UEs = 250 - no RX packets for ULL, 25 BE with RX packets.  (./waf --run "cttc-nr-demo --ueNumPergNb=250")  

Thanks, Eoin.
LowLat Only_UE 20_40 and 80.txt

Eoin O'Reilly

Jul 23, 2021, 3:56:22 PM
to 5G-LENA-users
Hi All,

Just to follow on from my last post: if I have more than 22 UEs, it results in flows with no Rx packets. All flows have Rx packets up to a value of 22 UEs (running with the default gNbNum = 1); for 23 UEs only 11 have Rx packets.

It seems the maximum number of flows with Rx packets is 25, which I found with 250 UEs; for the majority of my other tests the value was 12.

For tests with both ULL and BE, the ULL flows all had 0 Rx packets once the UE count was greater than 22, while the BE flows would still have some Rx packets. So if it is resource based, it appears to be biased towards BE in mixed cases, and limited even when the BE packet size is reduced to the ULL size (100).

Thanks, Eoin.  


Hans

Jul 25, 2021, 5:12:03 AM
to 5G-LENA-users
Hi Kat, thank you for answering my question.
I don't think I changed anything except the number of UEs.
But I have attached the code here. Thank you for your time.

cttc-nr-demo.cc

Eoin O'Reilly

Aug 3, 2021, 4:08:48 AM
to 5G-LENA-users
Hi All,

Just curious to see if there are any suggested resolutions to this issue. I changed my approach slightly and am now running across multiple gNbs, and have successfully run the following:
./waf --run "cttc-nr-demo --gNbNum=5 --ueNumPergNb=20 --packetSizeBe=100"

When I upped it to 40 UEs/gNb I once again see flows with 0 Rx packets. So it is a slight improvement, in that I got 100 flows with Rx packets, but there is still no indication of why I then see flows with 0 Rx packets.

Is this a limitation of the model or an issue arising from how I am running the code? Again, I am running the latest NR code and have increased the SrsPeriodicity value to 320 in the lte-enb-rrc.cc file.

The reason I am trying to run with as many UEs as possible is that I am trying to model mMTC traffic, which is also why I am only running with a single packet size.

Many thanks, Eoin

Sandra Lagén

Aug 3, 2021, 5:25:28 AM
to Eoin O'Reilly, 5G-LENA-users
Hi Eoin,
Can you try to use the PF scheduler?
nrHelper->SetSchedulerTypeId (TypeId::LookupByName ("ns3::NrMacSchedulerTdmaPF"));
RR has no memory, so if all UEs are active and there are many UEs, it allocates resources (within each slot) to only part of the UEs. PF, instead, has memory and schedules all UEs across different slots.
Anyway, as you use only a single packet size per UE, RR should also work, since once one part of the UEs is served, the remaining UEs should be scheduled... But I don't know, maybe it is worth trying the scheduler type. Another option is to randomize the application start times.
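For the start-time randomization, a minimal sketch of what I mean (untested; the names clientApps and udpAppStartTime are the ones used in cttc-nr-demo, adapt as needed):

  // Spread the client application start times over ~100 ms instead of
  // starting them all at exactly udpAppStartTime.
  Ptr<UniformRandomVariable> startJitter = CreateObject<UniformRandomVariable> ();
  for (auto it = clientApps.Begin (); it != clientApps.End (); ++it)
    {
      (*it)->SetStartTime (udpAppStartTime + MilliSeconds (startJitter->GetInteger (0, 100)));
    }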
Let us know.
BR,
Sandra


Eoin O'Reilly

Aug 3, 2021, 5:32:23 AM
to 5G-LENA-users
Hi Sandra,

That makes sense, but unfortunately my coding skills are limited and I have been running everything from the command line (bar the simple change to SrsPeriodicity in the lte-enb-rrc.cc file). Is there an easy way (a Dummy's guide, if you will) to implement your suggestion?

Thanks, Eoin. 

Sandra Lagén

Aug 3, 2021, 5:47:49 AM
to Eoin O'Reilly, 5G-LENA-users
Add this line: nrHelper->SetSchedulerTypeId (TypeId::LookupByName ("ns3::NrMacSchedulerTdmaPF"));
after the NrHelper is created. In the cttc-nr-demo example you can do this where the comment "Case (i): Attributes valid for all the nodes" appears.
BR,
Sandra



Eoin O'Reilly

Aug 3, 2021, 5:52:48 AM
to 5G-LENA-users
Thank you Sandra,

Sorry, just to be clear (as history shows my ability to break things if I don't ask!): I am adding the line to the cttc-nr-demo.cc file and then running as normal via ./waf?

Thanks, Eoin

Sandra Lagén

Aug 3, 2021, 5:54:51 AM
to Eoin O'Reilly, 5G-LENA-users
yes!


Eoin O'Reilly

Aug 3, 2021, 8:14:50 AM
to 5G-LENA-users
Hi Sandra,

Thank you for confirming. I added the line as instructed:

  /*
   *  Case (i): Attributes valid for all the nodes
   */

  // suggested change to address 0 RX packets issue with UE > 23
  nrHelper->SetSchedulerTypeId (TypeId::LookupByName ("ns3::NrMacSchedulerTdmaPF"));
  // Beamforming method
  idealBeamformingHelper->SetAttribute ("BeamformingMethod", TypeIdValue (DirectPathBeamforming::GetTypeId ()));

  // Core latency
  epcHelper->SetAttribute ("S1uLinkDelay", TimeValue (MilliSeconds (0)));

The updated code compiles without error, but unfortunately when I ran the demo again with >22 UEs I am still seeing no more than 12 UEs with Rx packets.

I just ran ./waf --run "cttc-nr-demo --ueNumPergNb=23" to check, as that gets the quickest result.

I assume the logic remains the same in the case of ./waf --run "cttc-nr-demo --gNbNum=5 --ueNumPergNb=40 --packetSizeBe=100": we still face the same issue with RR having no memory.

Is there something else I need to change? 

Thanks, Eoin. 

Eoin O'Reilly

Aug 5, 2021, 1:48:09 AM
to 5G-LENA-users
Hi All,

The issue, as originally reported by Hans, seems to appear when there are more than 22 UEs. I can increase the number of flows by varying the number of gNbs, but if the number of UEs is greater than 22 there will always be flows with 0 Rx packets.

Is this by accident or design? 

Thanks, Eoin.

Eoin O'Reilly

Aug 6, 2021, 3:23:07 PM
to 5G-LENA-users
Hi All,

I made further changes to the code to see if that would resolve the issue. I set the packet size to 50 for both ULL and BE, and I also set the numerology to 4 for both. I then ran again with 23 UEs (./waf --run "cttc-nr-demo --ueNumPergNb=23"), but the result is the same, with the first 12 flows showing 0 Rx bytes and the remaining flows (13-23) around 467766 bytes each.

  // Traffic parameters (that we will use inside this script):
  // changing packet size from 100 to 50
  uint32_t udpPacketSizeULL = 50;
  // changing packet size from 1252 to 50
  uint32_t udpPacketSizeBe = 50;
  uint32_t lambdaULL = 10000;
  uint32_t lambdaBe = 10000;

  // Simulation parameters. Please don't use double to indicate seconds; use
  // ns-3 Time values which use integers to avoid portability issues.
  Time simTime = MilliSeconds (1000);
  Time udpAppStartTime = MilliSeconds (400);

  // NR parameters. We will take the input from the command line, and then we
  // will pass them inside the NR module.
  // changing to 4 to ensure same on both flows
  uint16_t numerologyBwp1 = 4;
  double centralFrequencyBand1 = 28e9;
  double bandwidthBand1 = 100e6;
  uint16_t numerologyBwp2 = 4;
  double centralFrequencyBand2 = 28.2e9;
  double bandwidthBand2 = 100e6;
  double totalTxPower = 4;
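A quick check of those numbers, using the IP-level sizes that FlowMonitor reports (payload plus 28 B of UDP/IP headers, consistent with the 128 B per packet in the first post), suggests the flows that do get scheduled receive essentially everything:

  // Offered bytes per flow with packetSize = 50 and lambda = 10000 over the
  // 0.6 s the applications run:
  uint32_t bytesPerFlow = 6000 * (50 + 28);  // = 468000 B
  // Flows 13-23 report ~467766 B received (nearly 100 %), flows 1-12 report 0 B,
  // so at these settings it does not look like a plain capacity problem.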

What is different within the ULL configuration that would cause this?

Thanks, Eoin.

Qun Wang

Jul 13, 2022, 11:41:45 AM
to 5G-LENA-users
Hi All,

I encountered a similar problem. I found that after I change the bandwidth from 100e6 to 400e6, the number is no longer 0. I also found that for 18 users the minimum bandwidth is around 140e6. Have you found the reason behind this?

Thanks, Claud.

Katerina Koutlia

Aug 5, 2022, 9:40:57 AM
to 5G-LENA-users
Try to confirm that you are not saturating the network. If the generated data rate is higher than the one supported by the bandwidth in use, then there will be flows with 0 Rx packets.
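For the defaults of cttc-nr-demo you can check this against the numbers in the first post of this thread (rough numbers only; the exact capacity depends on the MCS actually used):

  // Offered load per flow at the defaults, IP-level (payload + 28 B headers):
  //   LowLat: 10000 pkt/s * 128 B * 8 bit  =  10.24 Mbps  -> 12 flows ~  123 Mbps
  //   Voice:  10000 pkt/s * 1280 B * 8 bit = 102.40 Mbps  -> 12 flows ~ 1229 Mbps
  // The voice flows above only achieved ~32 Mbps each (~380 Mbps in total),
  // so that bandwidth part is clearly saturated at these settings.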

BR,
Kat

Kent S Huns

Jul 7, 2024, 11:56:21 AM
to 5G-LENA-users
Hello,

I found that the priorities determined by the PF scheduler all keep the same value over the whole simulation time.
This is because the time-averaged throughput, the denominator of the PF priority, is re-initialized in every TTI.

As a result, the algorithm acts like Round Robin (PDSCH symbols, up to 14 per TTI, are distributed in the order of the UEs with the largest numerator values).
So I'd recommend commenting out that initialization.
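To make the effect concrete, the usual PF metric looks roughly like the sketch below (a simplified illustration, not the actual nr classes):

  // Simplified PF metric per UE:
  //   metric_i = r_i / T_i
  // where r_i is the rate achievable in this TTI and T_i is the exponentially
  // averaged served throughput, e.g. T_i <- (1 - 1/tau) * T_i + (1/tau) * served_i.
  // If T_i is reset to its initial value at the start of every TTI, the
  // denominator becomes identical for all UEs and the ranking degenerates to
  // "largest r_i first", so UEs with poorer channels may never be served.
  double PfMetric (double achievableRate, double avgThroughput)  // needs <algorithm>
  {
    return achievableRate / std::max (avgThroughput, 1e-9);  // guard against /0
  }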

Thanks, Kent
PFscheduler.jpg

Kent Huns

Nov 20, 2024, 10:55:05 AM
to 5G-LENA-users
Hello, 

Let me share my code on this topic.
I modified the RR/PF/QoS schedulers so that they do not reset each UE's priority.
This makes it possible to distribute resources to all UEs over time, even when there are very many UEs.

* Round Robin is based on how many times each UE has been selected. I'm afraid it will break down when the count (uint64_t) overflows.
* Proportional Fair & QoS-aware no longer initialize "m_avgTputDL/UL", so the priority is not forgotten.
myRRandPF_on_5G-LENAv3.2.zip

Gabriel Ferreira

Nov 21, 2024, 5:26:35 AM
to 5G-LENA-users
Hi Kent. Thanks for the patches. I'm going to check them, update tests and hopefully merge them by nr-3.4 or 3.5. It is better if you open a merge request.

But we actually need to handle overflow in RR, otherwise the overflowed UE could monopolize the resources (since it magically has 0 allocated resources when others have a ton).
The easiest way is to reset counters for all UEs. Could cause a slight unfairness in the long-term, but nothing too crazy.
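Something along these lines, for example (a sketch only, with made-up names; the real counters live in the scheduler's UE info objects):

  // Rebase all per-UE counters at once when one of them gets close to the
  // uint64_t limit. Subtracting the common minimum keeps the relative order;
  // a hard reset to 0, as suggested above, is even simpler at the cost of a
  // slight long-term unfairness. Needs <algorithm>, <cstdint>, <limits>, <vector>.
  void RebaseRrCounters (std::vector<uint64_t> &counters)
  {
    if (counters.empty () ||
        *std::max_element (counters.begin (), counters.end ()) <
            std::numeric_limits<uint64_t>::max () - 1)
      {
        return;  // nothing to do yet
      }
    uint64_t minCount = *std::min_element (counters.begin (), counters.end ());
    for (auto &c : counters)
      {
        c -= minCount;
      }
  }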

PF and QoS are easier.

Kent Huns

Nov 21, 2024, 9:36:37 AM
to 5G-LENA-users
Hi Gabriel,
Thank you for the quick response. I sent a merge request.
Yes, now each UE has an "m_dlRRcount" and they are counted up individually.
When resetting them, they would all need to be set to 0 at the same time, but I couldn't think that process through.

This is my first time using GitLab, and I found it more difficult than the patch itself...