Hi,
I've been working with ns-3 for a while on a project with an industrial partner, and one of the project's tasks was to validate the ns-3 LTE module against a set of reference 3GPP scenarios (in particular Case 1 from TS 36.814). For this task I used a modified version of the lena-dual-stripe.cc code (available on GitHub
here - it needs to be renamed to lena-dual-stripe.cc for the attached Python code to work), adapted to the 3GPP requirements/parameters (if you're interested, you can find more information about the validation process in an article
here), as we needed the EPC and TCP/UDP functionality of ns-3 in order to fully replicate the required scenario. As an end result, we obtained results that quite closely match the 3GPP values (which represent an aggregate of 17 industrial simulators) with regard to UE SINR and throughput.
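For context, our EPC and traffic setup follows the same pattern as the stock lena-simple-epc.cc example; below is a minimal self-contained sketch of that pattern (the topology, link parameters, traffic rates and node counts here are illustrative placeholders - the real values are set in our modified lena-dual-stripe.cc):

#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/internet-module.h"
#include "ns3/mobility-module.h"
#include "ns3/applications-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/lte-module.h"

using namespace ns3;

int
main (int argc, char *argv[])
{
  // LTE + EPC helpers, as in the stock lena-simple-epc.cc example
  Ptr<LteHelper> lteHelper = CreateObject<LteHelper> ();
  Ptr<PointToPointEpcHelper> epcHelper = CreateObject<PointToPointEpcHelper> ();
  lteHelper->SetEpcHelper (epcHelper);

  // Remote host behind the PGW, reachable over a fat point-to-point link
  Ptr<Node> pgw = epcHelper->GetPgwNode ();
  NodeContainer remoteHostContainer;
  remoteHostContainer.Create (1);
  Ptr<Node> remoteHost = remoteHostContainer.Get (0);
  InternetStackHelper internet;
  internet.Install (remoteHostContainer);

  PointToPointHelper p2ph;
  p2ph.SetDeviceAttribute ("DataRate", DataRateValue (DataRate ("100Gb/s")));
  p2ph.SetChannelAttribute ("Delay", TimeValue (MilliSeconds (10)));
  NetDeviceContainer internetDevices = p2ph.Install (pgw, remoteHost);
  Ipv4AddressHelper ipv4h;
  ipv4h.SetBase ("1.0.0.0", "255.0.0.0");
  ipv4h.Assign (internetDevices);

  // Route towards the UE subnet (7.0.0.0/8 is the EPC helper default)
  Ipv4StaticRoutingHelper routingHelper;
  routingHelper.GetStaticRouting (remoteHost->GetObject<Ipv4> ())
      ->AddNetworkRouteTo (Ipv4Address ("7.0.0.0"), Ipv4Mask ("255.0.0.0"), 1);

  // One eNB and one UE, both static (placeholder topology)
  NodeContainer enbNodes, ueNodes;
  enbNodes.Create (1);
  ueNodes.Create (1);
  MobilityHelper mobility;
  mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
  mobility.Install (enbNodes);
  mobility.Install (ueNodes);

  NetDeviceContainer enbDevs = lteHelper->InstallEnbDevice (enbNodes);
  NetDeviceContainer ueDevs = lteHelper->InstallUeDevice (ueNodes);

  // IP stack on the UE so TCP/UDP applications can run end-to-end
  internet.Install (ueNodes);
  Ipv4InterfaceContainer ueIpIfaces =
      epcHelper->AssignUeIpv4Address (NetDeviceContainer (ueDevs));
  routingHelper.GetStaticRouting (ueNodes.Get (0)->GetObject<Ipv4> ())
      ->SetDefaultRoute (epcHelper->GetUeDefaultGatewayAddress (), 1);
  lteHelper->Attach (ueDevs.Get (0), enbDevs.Get (0));

  // Saturating downlink UDP flow: remote host -> UE
  uint16_t dlPort = 10000;
  PacketSinkHelper dlSink ("ns3::UdpSocketFactory",
                           InetSocketAddress (Ipv4Address::GetAny (), dlPort));
  dlSink.Install (ueNodes.Get (0));
  UdpClientHelper dlClient (ueIpIfaces.GetAddress (0), dlPort);
  dlClient.SetAttribute ("Interval", TimeValue (MilliSeconds (1)));
  dlClient.SetAttribute ("MaxPackets", UintegerValue (1000000));
  dlClient.Install (remoteHost);

  Simulator::Stop (Seconds (5.0));
  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}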
This validation was only the starting point of our project, whose main focus is on frequency reuse algorithms (another feature of ns-3's LTE module). After obtaining some rather counterintuitive results in terms of UE throughput performance, we went back to the validation scenario (3GPP Case 1 from TS 36.814). It turns out that while the UE SINR for frequency reuse-based algorithms is better (e.g., see the attached SINR output for hard reuse vs. full reuse in fig0.tif), the actual UE throughput is worse for the frequency reuse-based algorithms (e.g., the throughput output for hard reuse vs. full reuse in fig1.tif), even for edge UEs, where frequency reuse should bring the highest gain.
In essence, there is no point on the CDF we obtained (fig1.tif) where the hard reuse algorithm performs better than the full frequency reuse algorithm (note that these results were obtained with the proportional fair scheduler, which is the only scheduler compatible with the frequency reuse algorithms in ns-3, as opposed to the validation results from the paper, which were obtained with the round-robin scheduler; see the configuration sketch below). Basically, the results show no coverage/throughput tradeoff between the hard reuse and full reuse schemes, which contradicts the literature (and, in fact, the entire purpose of frequency reuse): at least some edge users should achieve better throughput under hard frequency reuse than under full reuse.
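For reference, and building on the sketch above, the scheduler and frequency reuse selection boils down to the following LteHelper calls (the FrCellTypeId rotation below is illustrative; our actual per-cell assignment and sub-band split are done in the modified lena-dual-stripe.cc):

// Proportional fair scheduler, used for all the FR results above
lteHelper->SetSchedulerType ("ns3::PfFfMacScheduler");

// Either full reuse - LteFrNoOpAlgorithm is the ns-3 default,
// i.e. every cell schedules over the whole band...
lteHelper->SetFfrAlgorithmType ("ns3::LteFrNoOpAlgorithm");

// ...or hard reuse 3: rotate FrCellTypeId (1, 2, 3) across neighbouring
// cells before installing each eNB, so each cell is scheduled on a
// disjoint third of the band (the Dl/UlSubBandOffset and
// Dl/UlSubBandwidth attributes allow a manual split instead)
lteHelper->SetFfrAlgorithmType ("ns3::LteFrHardAlgorithm");
NetDeviceContainer enbDevs;
for (uint32_t i = 0; i < enbNodes.GetN (); ++i)
  {
    lteHelper->SetFfrAlgorithmAttribute ("FrCellTypeId",
                                         UintegerValue (1 + (i % 3)));
    enbDevs.Add (lteHelper->InstallEnbDevice (enbNodes.Get (i)));
  }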
I was wondering if you could explain this behaviour, or point me towards the source of a potential error/bug in these simulations. I've also attached the Python code we use to configure parameters and trigger the simulations (it needs to be placed in the ns-3 root folder, the same one waf is run from). Note that some of the scenario parameters are configured directly in the modified version of lena-dual-stripe.cc.
Would really appreciate your help!
Andrei