I have included my simulation code below, in case anyone with experience can spot any logical or implementation errors.
Since I'm here, I'll give you a preview of the tests I've done, in case you're interested. I added an option to enable beamforming, because I wanted to see whether it would reduce the number of lost packets, and spoiler alert: when I enable it, I no longer have any losses at all. I explained this to myself with the large increase in gain, which raises the SINR; since the packet drops are tied to the SINR, the packets are probably no longer dropped. Another thing I am unsure about is the cause of the drops: whether the packets are lost in the air or whether they are received and then discarded.
Changing the distance of one UE only changes the SINR of that UE, and not that of the other, as if the interference were not being seen. Depending on how the distance is changed, the packet drop of that UE also changes, which makes sense if the free-space path loss is working.
I hope I have been as clear as possible and that someone can help me in some way. But regardless, thank you for taking the time to read this!
I look forward to any updates, and in the meantime, I will continue to work on it.
First of all, thank you for your reply.
By desynchronised antennas, I mean antennas whose TDD patterns are not aligned in time, with possible partial slot overlaps and, consequently, the generation of CLI. What I want to show in my final result is how even small desynchronisations lead to degradation of the 5G transmission.
But before thinking about the CLI situation, I thought it would be better to understand how the ICI situation was being handled.
As for my simulation, I think I took the approach you suggested, i.e. saturating the PDSCH and PUSCH; I did this by adding a remote host that sends and receives 1000 packets through UDP applications. My idea was then to build the simplest possible scenario: one gNB with a single UE attached (gNB1 - UE1) and a second gNB with its own UE (gNB2 - UE2). This was done in a UMa scenario, without BF, with isotropic antennas and the 3GPP channel model. The two gNBs are 200 m apart (gNB1 at (0, 0, 0), gNB2 at (0, 200, 0)). In a first test I placed both UEs at the same spot (0, 100, 0), and their SINRs are −12.558 dB and −12.3736 dB. If I then move UE1 towards gNB1, so that UE1 is at (0, 50, 0) and UE2 stays at (0, 100, 0), I expect both to improve: UE1 gets closer, reducing its path loss, and UE2 suffers less ICI. The values I obtain are −11.205 dB and −12.3721 dB.
So there has been an improvement, but on the ICI side it is practically imperceptible: UE2's SINR barely moves even though the interfering UE (UE1) has moved farther away from gNB2. I was therefore wondering whether it makes sense for the improvement to be so small.
Another test I did was to place UE1 exactly where gNB2 is, so that all of its transmit power interferes with UE2. What I got is that UE2's SINR dropped, as expected, to −21 dB, while UE1's obviously dropped even further, to −40 dB, because it moved away from its own gNB.
So I was also wondering whether these values make sense to you, and whether there is anything you would configure differently to visualise them better. I should also point out that all of these figures refer to the UL.
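For reference, the placement used in these tests corresponds to something like the sketch below. The mobility part is standard ns-3; the NR-specific part (UMa channel, isotropic antennas, device installation, saturating UDP traffic) follows the 5G-LENA examples and is only hinted at in the comments, so please treat this as a simplified sketch rather than my full script.

#include "ns3/core-module.h"
#include "ns3/mobility-module.h"
#include "ns3/network-module.h"

using namespace ns3;

int main()
{
    // Two gNB/UE pairs on a line, 200 m between the gNBs.
    NodeContainer gnbNodes;
    gnbNodes.Create(2);
    NodeContainer ueNodes;
    ueNodes.Create(2);

    Ptr<ListPositionAllocator> positions = CreateObject<ListPositionAllocator>();
    positions->Add(Vector(0.0, 0.0, 0.0));    // gNB1
    positions->Add(Vector(0.0, 200.0, 0.0));  // gNB2
    positions->Add(Vector(0.0, 100.0, 0.0));  // UE1 (first test: co-located with UE2)
    positions->Add(Vector(0.0, 100.0, 0.0));  // UE2

    MobilityHelper mobility;
    mobility.SetMobilityModel("ns3::ConstantPositionMobilityModel");
    mobility.SetPositionAllocator(positions);
    mobility.Install(gnbNodes);  // consumes the first two positions
    mobility.Install(ueNodes);   // consumes the next two

    // NR part omitted: UMa channel + isotropic antennas + NrHelper device installation,
    // attachment, and the saturating UDP applications, as in the 5G-LENA examples.

    Simulator::Run();
    Simulator::Destroy();
    return 0;
}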
I also tried enabling BF, and when I do, the situation improves, bringing the SINR above 8 dB. Here too I am not fully satisfied, because the ideal BF works for me, but when I switch to the realistic one it gives me "died <signal.SIGSEGV: 11>".
Thank you again for your time!
Hi! Thanks again for the reply.
So, regarding the SINR, in the end I enabled the traces through the PHY-layer helper, and through them I was able to observe the SINR I was referring to. From what I understand, the SINR is calculated at the slot level, not at the RB level. Also, from what I have seen in the calculation, what happens is that we have:
SINR = (useful received power) / (total received power − useful received power + noise), all evaluated per slot.
The useful received power already takes the scenario into account, including its path loss.
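Put differently, my reading of the per-slot calculation is something like the following paraphrase (my own reconstruction from the traces, not the simulator's actual code; the powers in the example are made up):

#include <cmath>
#include <iostream>
#include <vector>

// Paraphrase of the per-slot SINR as I read it from the PHY traces:
// SINR = S / (totalRx - S + N), with all powers in linear units, accumulated over the slot.
double SlotSinrDb(double usefulW, const std::vector<double>& allRxPowersW, double noiseW)
{
    double totalRxW = 0.0;
    for (double p : allRxPowersW)
    {
        totalRxW += p;  // includes the useful signal itself
    }
    const double interferenceW = totalRxW - usefulW;
    return 10.0 * std::log10(usefulW / (interferenceW + noiseW));
}

int main()
{
    // Made-up powers, just to exercise the formula.
    const std::vector<double> allRx = {1e-10, 2e-9};  // useful signal + one interferer
    std::cout << SlotSinrDb(1e-10, allRx, 1e-12) << " dB" << std::endl;
    return 0;
}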
However, once the positions are set, the SINR remains fixed. So, when you talk about stochastic behaviour, do you mean that, depending on where the nodes are placed, the SINR changes in an unpredictable way under the same settings (or rather, only partially predictable from how the parameters and positions are chosen), and that at a fine-grained level you cannot know exactly how a movement will affect the SINR itself?
However, once a fixed position is assigned to all nodes, the SINR should no longer vary, because at that point everything becomes deterministic. Is that correct? If so, it would make sense that my SINR values are constant, since my nodes are fixed.
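Just to be sure I have the deterministic part right: with the positions fixed, I expect a run to be bit-for-bit reproducible as long as the RNG seed and run number stay the same, and to get a different (but statistically equivalent) realisation only if I change the run number, i.e. the standard ns-3 mechanism:

#include "ns3/core-module.h"

using namespace ns3;

int main()
{
    // Same seed + same run + fixed node positions => identical results on every execution.
    // Changing the run number draws an independent realisation of the random variables
    // (for example the 3GPP channel), while keeping the same average behaviour.
    RngSeedManager::SetSeed(1);
    RngSeedManager::SetRun(3);  // try 1, 2, 3, ... to see how much the SINR spreads

    // ... build and run the scenario as usual ...
    return 0;
}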
As for desynchronization, I am aware that, by default, the simulator assumes this issue does not need to be handled: all antennas are assumed to have their TDD patterns aligned, and that is fine for me. I am not so much interested in simulating desynchronized antennas during the actual synchronization procedure, but rather in studying only the effects of poor synchronization. So, for example, one antenna transmits perfectly synchronized with all nearby antennas, while another, instead of starting its slots at time 0, starts them with a delay of some number of nanoseconds, thereby generating ICI + CLI as interference.
Now, I do not know whether it is feasible to introduce some kind of offset that makes an antenna always start its slots slightly later; I have not yet really dug into this. For now, I was mainly interested in seeing whether both CLI and ICI are actually observed, and it seems that both are handled: I compared a run where the two gNBs use the same pattern with a run where one uses DDDUD and the other DDDDU. With aligned patterns I had −12 dB SINR, while with the shifted patterns I obtained −18 dB. This is probably because the interfering signal is no longer the one from the transmitting UE but the one from the other gNB, which naturally transmits more power and thus overwhelms the useful signal.
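For completeness, my understanding is that the per-gNB pattern is set through the "Pattern" attribute of the gNB PHY, roughly as below (fragment only: gnbDevs and nrHelper come from the usual device installation, and helper/attribute names may differ between nr-module versions):

// Fragment: assign a different TDD pattern to each gNB (5G-LENA style).
// gnbDevs is the NetDeviceContainer returned by NrHelper::InstallGnbDevice().
nrHelper->GetGnbPhy(gnbDevs.Get(0), 0)->SetAttribute("Pattern",
                                                     StringValue("DL|DL|DL|UL|DL|"));
nrHelper->GetGnbPhy(gnbDevs.Get(1), 0)->SetAttribute("Pattern",
                                                     StringValue("DL|DL|DL|DL|UL|"));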
So, at this point, I would mainly be interested in understanding whether, in your opinion, it is feasible to add a temporal offset that makes one antenna start transmitting and receiving later (effectively shifting its TDD pattern in time).
Additionally, do you have any idea why ideal beamforming works for me, whereas the realistic one generates the error mentioned above? I thought the two would operate in more or less the same way.
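For context, when I say ideal vs realistic beamforming I mean the two helpers from the 5G-LENA examples, configured roughly like this (fragment; class and attribute names may vary with the nr version, so treat it as a sketch rather than my exact code):

// Ideal beamforming: direct-path method, as in the cttc-nr-demo example.
Ptr<IdealBeamformingHelper> idealBf = CreateObject<IdealBeamformingHelper>();
idealBf->SetAttribute("BeamformingMethod", TypeIdValue(DirectPathBeamforming::GetTypeId()));
nrHelper->SetBeamformingHelper(idealBf);

// Realistic beamforming: swap in the realistic helper instead (this is the variant
// that ends with the SIGSEGV in my runs).
// Ptr<RealisticBeamformingHelper> realBf = CreateObject<RealisticBeamformingHelper>();
// nrHelper->SetBeamformingHelper(realBf);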
Thanks again in advance for your time!
Stefano Biccari