[LENA] non-linear relationship between "DL rate" and "# of allocated PRBs"


Xing Xu

May 10, 2016, 5:17:05 PM
to ns-3-users
Dear all,

I have made an interesting (and confusing) observation, though I suspect this is an easy question for an LTE expert. At a high level, I would expect the number of PRBs allocated to a UE to be linear in the achieved DL rate, but that is not what I observed in the experiment below.

Experiment configuration:
In LENA, I set up one BS and one UE (there is no interference or fading). There is one remote host (RH) with a UDP sender, and a P2P link connects the RH and the BS. I vary and control the UE's downlink requirement (DL rate) by having the RH's UDP sender transmit far more than enough data while configuring the P2P link's capacity to throttle the flow, so the capacity value is effectively the DL requirement for the UE.
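
Roughly, the setup looks like the following sketch, based on the stock lena-simple-epc example (this is not my exact script; the rates, positions, and port are placeholders, and the P2P link actually attaches to the EPC's PGW node):

#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/internet-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/mobility-module.h"
#include "ns3/applications-module.h"
#include "ns3/lte-module.h"

using namespace ns3;

int main (int argc, char *argv[])
{
  Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
  Ptr<PointToPointEpcHelper> epcHelper = CreateObject<PointToPointEpcHelper> ();
  lteHelper->SetEpcHelper (epcHelper);
  lteHelper->SetSchedulerType ("ns3::PfFfMacScheduler"); // proportional fair

  // Remote host (RH) that will run the UDP sender
  NodeContainer remoteHostContainer;
  remoteHostContainer.Create (1);
  Ptr<Node> remoteHost = remoteHostContainer.Get (0);
  InternetStackHelper internet;
  internet.Install (remoteHostContainer);

  // The P2P link whose DataRate caps the UE's DL demand
  PointToPointHelper p2ph;
  p2ph.SetDeviceAttribute ("DataRate", DataRateValue (DataRate ("500kb/s")));
  NetDeviceContainer internetDevices = p2ph.Install (epcHelper->GetPgwNode (), remoteHost);
  Ipv4AddressHelper ipv4h;
  ipv4h.SetBase ("1.0.0.0", "255.0.0.0");
  ipv4h.Assign (internetDevices);
  Ipv4StaticRoutingHelper routingHelper;
  routingHelper.GetStaticRouting (remoteHost->GetObject<Ipv4> ())
    ->AddNetworkRouteTo (Ipv4Address ("7.0.0.0"), Ipv4Mask ("255.0.0.0"), 1);

  // One eNB and one UE, 1000 m apart, no fading configured
  NodeContainer enbNodes, ueNodes;
  enbNodes.Create (1);
  ueNodes.Create (1);
  MobilityHelper mobility;
  mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
  mobility.Install (enbNodes);
  mobility.Install (ueNodes);
  ueNodes.Get (0)->GetObject<MobilityModel> ()->SetPosition (Vector (1000.0, 0.0, 0.0));

  NetDeviceContainer enbDevs = lteHelper->InstallEnbDevice (enbNodes);
  NetDeviceContainer ueDevs = lteHelper->InstallUeDevice (ueNodes);
  internet.Install (ueNodes);
  Ipv4InterfaceContainer ueIpIface = epcHelper->AssignUeIpv4Address (ueDevs);
  lteHelper->Attach (ueDevs.Get (0), enbDevs.Get (0));

  // UDP sender on the RH pushing far more than the P2P link can carry
  uint16_t port = 9; // placeholder
  OnOffHelper onoff ("ns3::UdpSocketFactory",
                     InetSocketAddress (ueIpIface.GetAddress (0), port));
  onoff.SetConstantRate (DataRate ("10Mb/s"));
  onoff.Install (remoteHost).Start (Seconds (0.1));

  Simulator::Stop (Seconds (10.0));
  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}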

For each capacity configuration, I count the number of PRBs allocated to that UE per second. I did this by modifying the proportional fair scheduler (pf-ff-mac-scheduler.cc), maintaining a counter for each UE (RNTI) and logging each UE's number of allocated PRBs.
(More detail: in pf-ff-mac-scheduler.cc, for each PRB the scheduler calculates the maximum rcqi, and the PRB goes to the UE with the highest rcqi. I add 1 to the counter of the UE with the maximum rcqi for every PRB, and I clear the counters every second to get PRBs per second. A sketch of the change is below.)
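
Roughly, the change looks like this (a sketch, not my exact diff; itMax is the iterator the stock scheduler uses for the flow with the highest rcqi):

// In PfFfMacScheduler (pf-ff-mac-scheduler.cc), a hypothetical new member:
std::map<uint16_t, uint32_t> m_prbCounter; // RNTI -> allocations this second

// Inside the RBG loop of DoSchedDlTriggerReq, after the UE with the
// maximum rcqi has been selected for the current resource block group:
m_prbCounter[(*itMax).first] += 1;

// Caveat: the stock loop iterates over RBGs (groups of rbgSize PRBs), not
// single PRBs, so each increment covers rbgSize PRBs; with rbgSize = 2 and
// 12 RBGs per 1 ms TTI, this matches the 12000 counts per second noted below.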

Result & question:
Here is my question. I would expect the number of PRBs allocated to the UE to be linear in the DL rate (the capacity of the P2P link): apart from the requirement, all other settings are identical, so each PRB should support the same rate (the same bits per PRB per second). But the result is not linear, even though the rate performance is as expected and equals the configured capacity. See the table below: for different link capacities (i.e., different rates), the number of allocated PRBs stops being linear above 0.5 Mbit/s (from 0.1 to 0.4 it is linear). Why, for example, does the BS suddenly have to allocate many more PRBs to this UE at the 0.5 Mbit/s setting?

[Table: configured link capacity (Mbit/s) vs. allocated PRBs per second; per the follow-up below, 0.1 Mbit/s -> 144 PRBs/s and 1.0 Mbit/s -> 5400 PRBs/s.]

Some other information (not sure if it is relevant):
 - The distance between this UE and the BS is only 1000 m.
 - For all configurations, the requirement cannot congest the BS (i.e., the achieved DL rate is as expected and equals the capacity of the P2P link), because the capacity (the DL rate requirement) is low and the MCS/SINR is good.
 - I found that there are 12000 PRBs in total every second.

Any clues?

Thanks,
Xing

Marco Miozzo

May 11, 2016, 11:44:02 AM
to ns-3-...@googlegroups.com
Hi,

the non-linearity might be due to the non-linearity of the TB size as a function of the number of RBs; see Table 7.1.7.2.1-1 of TS 36.213.
Of course, it also depends on the scheduler you are using.
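
You can probe this directly with LENA's AMC model (a standalone sketch; MCS 28 is just an example, and note that later ns-3 releases rename GetTbSizeFromMcs to GetDlTbSizeFromMcs):

#include "ns3/core-module.h"
#include "ns3/lte-amc.h"
#include <iostream>

using namespace ns3;

// Print the TB size from Table 7.1.7.2.1-1 for a fixed MCS and a growing
// number of PRBs: bits per PRB is not constant across allocation sizes.
int main ()
{
  Ptr<LteAmc> amc = CreateObject<LteAmc> ();
  uint8_t mcs = 28; // example value; use the MCS you see in your logs
  for (uint8_t nPrb = 2; nPrb <= 24; nPrb += 2)
    {
      int tbBits = amc->GetTbSizeFromMcs (mcs, nPrb); // TB size in bits
      std::cout << "PRBs = " << (int) nPrb
                << ", TB = " << tbBits << " bits"
                << ", bits/PRB = " << tbBits / nPrb << std::endl;
    }
  return 0;
}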

My two cents,
marco.




Xing Xu

May 11, 2016, 2:20:13 PM
to ns-3-users
Thanks, Marco.

But I just checked, by logging rbgSize at the line in pf-ff-mac-scheduler.cc that computes the achievable rate:

achievableRate += ((m_amc->GetTbSizeFromMcs (mcs, rbgSize) / 8) / 0.001); // = TB size / TTI

rbgSize is constant: it is always 2 in all my configurations, and thus it should not affect the per-PRB rate.

I'm fairly sure about this because, in the past, I used this PRB-counting method to tune users' rates, and it worked well without doing anything about rbgSize.

Any other reasons you can think of?

Best,
Xing

Marco Miozzo

May 12, 2016, 3:23:00 AM
to ns-3-...@googlegroups.com
achievableRate is calculated on a per-RBG basis, just for the sake of evaluating fairness. The actual TB size is computed later in the code, according to the final number of RBGs assigned to a UE:

int tbSize = (m_amc->GetTbSizeFromMcs (newDci.m_mcs.at (j), RgbPerRnti * rbgSize) / 8); // (size of TB in bytes according to table 7.1.7.2.1-1 of 36.213)

Here, depending on RgbPerRnti, you may run into the non-linearity of Table 7.1.7.2.1-1 of TS 36.213.
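
To make the point concrete (a standalone sketch with assumed values; rbgPerRnti = 6 and MCS 28 are hypothetical):

#include "ns3/core-module.h"
#include "ns3/lte-amc.h"
#include <iostream>

using namespace ns3;

// Compare the scheduler's per-RBG fairness estimate with the final TB size
// for the whole allocation; they differ because the table is not linear.
int main ()
{
  Ptr<LteAmc> amc = CreateObject<LteAmc> ();
  uint8_t mcs = 28;   // example MCS
  int rbgSize = 2;    // constant in your runs, as you observed
  int rbgPerRnti = 6; // assumed: RBGs finally assigned to this UE in one TTI
  int perRbgBytes = amc->GetTbSizeFromMcs (mcs, rbgSize) / 8;
  int tbBytes = amc->GetTbSizeFromMcs (mcs, rbgPerRnti * rbgSize) / 8;
  std::cout << "linear estimate = " << rbgPerRnti * perRbgBytes
            << " B, actual TB = " << tbBytes << " B" << std::endl;
  return 0;
}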

Best,
marco.

Xing Xu

May 12, 2016, 11:14:42 AM
to ns-3-...@googlegroups.com
Thanks, Marco, I really appreciate your help.

I'm new to cellular networks/schedulers/ns-3. I have experimented with proportional fair a lot, but I only understand the PF formula that determines which UE is assigned the current PRB, so my apologies if I have posted a lot of nonsense.

I'll look into the code you mentioned, but if we focus on my counter, it still shows something strange: at the very least, the scheduler *tries* to assign PRBs to users with different rate requirements, and *that number of PRBs is not linear in the required rate*.

This is counter-intuitive, right? If you look at my table, a requirement of 0.1 got 144 PRBs, but a requirement of 1.0 got 5400 PRBs, which is not linear at all! Assuming the same rate per PRB, the 1.0 requirement used nearly four times as many PRBs per unit of rate as the 0.1 requirement (5400 versus the 1440 that linear scaling would predict). I wonder what the reason for that is?

I did some other experiments (though I didn't summarize the data) and observed that this PRB counter also depends on the congestion level of the BS (the number of active users). For example, it is currently very inefficient to allocate 5400 PRBs to a requirement of 1.0, but when the BS is more congested (more UEs), that number drops, and I believe it would eventually reach 144*10, the linear case, giving a consistent rate per PRB. I wonder whether this non-linearity is related to inefficient PRB allocation when the congestion level is low?

Thanks again; I'll look at the code you pointed to.

Xing Xu

May 12, 2016, 8:20:10 PM
to ns-3-users
Hi Marco, I ran some new experiments and wrote a new post; it explains things from another perspective, but it is essentially the same issue: https://groups.google.com/forum/#!topic/ns-3-users/Wg7xjVQN-uc

I just realized something that may help explain the problem: I don't know whether this issue comes from the fact that the BS is not congested at all. In the experiments in both posts, the traffic does not congest the BS; the BS can support a much higher rate. Is it possible that, when there is no congestion, the BS uses PRBs inefficiently, so that the rate per PRB differs between cases? In other words, if I somehow congested the BS and redid the experiment, would the rate per PRB become a fixed value, because the BS would fully utilize each PRB under congestion?

A related question: with my counting method there are always 12000 PRBs per second. However, in my new post I found that, when the BS is not congested (meaning it could support a much higher rate), those 12000 PRBs are already all allocated to the 2 UEs. Does this make sense, and does it help explain the problem?

Thanks,

Xing Xu

May 16, 2016, 4:56:14 PM
to ns-3-users
Hi Marco, I figured out the issue. In short, the scheduler will try to schedule and serve a UE whenever that UE's TX queue size is greater than 0; but if the allocated PRBs can carry more than the UE's TX queue holds, the allocation becomes relatively inefficient. E.g., for a light user, if its queue size is 1000 B but, given its MCS, each allocation (TB) can carry 2000 B, then its efficiency is quite low, ~50%.
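
In numbers (a toy illustration, not output from the simulation):

#include <algorithm>
#include <iostream>

// Padding inefficiency: the allocation can carry more bytes than the UE's
// RLC TX queue actually holds, and the remainder of the TB is wasted.
int main ()
{
  int tbBytes = 2000;    // what the allocated PRBs can carry at this MCS
  int queueBytes = 1000; // what is actually waiting in the TX queue
  int sent = std::min (queueBytes, tbBytes);
  std::cout << "efficiency = " << 100.0 * sent / tbBytes << " %" << std::endl; // ~50 %
  return 0;
}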

