I have included my simulation code below, in case anyone with experience can spot any logical or implementation errors.
Since I'm here, I'll give you a preview of the tests I've done, in case you're interested. I added an option to enable beamforming, because I wanted to see whether it would reduce the number of lost packets, and spoiler alert: it does. When I enable it, I no longer have any losses. I explained this to myself by the large increase in gain, which raises the SINR, and since the packet drops are tied to the SINR, the packets are probably simply no longer dropped. Another thing I am unsure about is where exactly the drop happens: whether the packets are lost over the air or whether they are received and then discarded.
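One thing I thought of trying, to tell those two cases apart, is combining FlowMonitor (end-to-end losses at the UDP level) with the NR PHY traces. Just a sketch of the idea, assuming the ns-3 NR (5G-LENA) module, where nrHelper is my NrHelper instance and simTime a placeholder:

```cpp
#include "ns3/flow-monitor-module.h"

// End-to-end view: FlowMonitor counts packets that never reach the UDP sink.
FlowMonitorHelper flowmonHelper;
Ptr<FlowMonitor> monitor = flowmonHelper.InstallAll();

// PHY view: as far as I understand, the NR module can dump per-TB RX traces
// via EnableTraces(), including the TB SINR and whether the error model marked
// it as corrupt, so a packet that is received but fails decoding shows up there.
nrHelper->EnableTraces();

Simulator::Stop(Seconds(simTime));
Simulator::Run();

monitor->CheckForLostPackets();
for (const auto& kv : monitor->GetFlowStats())
{
    std::cout << "Flow " << kv.first
              << " tx=" << kv.second.txPackets
              << " rx=" << kv.second.rxPackets
              << " lost=" << kv.second.lostPackets << std::endl;
}
Simulator::Destroy();
```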
Changing the distance of one UE only changes that UE's SINR, and not the other's, as if the interference were not seen at all. Depending on how the distance is changed, that UE's packet loss also changes, which makes sense if the free-space path loss is working.
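(To be explicit about the loss I have in mind, the free-space formula is

PL_FS(d) = 32.45 + 20*log10(d_km) + 20*log10(f_MHz)  [dB]

so moving one UE should change its own path-loss term, but in principle it should also change the interference it causes on the other link, since that distance changes too.)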
I hope I have been as clear as possible and that someone can help me in some way. Either way, thank you for taking the time to read this!
I look forward to any updates, and in the meantime, I will continue to work on it.
First of all, thank you for your reply.
By desynchronised antennas, I mean antennas whose TDD patterns are not aligned, with possible partial slot overlaps and, consequently, the generation of cross-link interference (CLI). What I want to see in my final result is how even small desynchronisations degrade the 5G transmission.
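Concretely, what I have in mind for that later step is giving the two gNBs shifted TDD patterns, roughly like this (just a sketch; I am assuming the 5G-LENA "Pattern" attribute on NrGnbPhy, and the exact accessor names may differ from what I actually have):

```cpp
// Sketch: misaligned TDD patterns on the two gNBs, so that DL slots of one cell
// overlap UL slots of the other and generate CLI. Assumes 5G-LENA's NrGnbPhy
// "Pattern" attribute; names and syntax may differ between releases.
nrHelper->GetGnbPhy(gnbNetDev.Get(0), 0)
    ->SetAttribute("Pattern", StringValue("DL|DL|DL|DL|DL|UL|UL|UL|UL|UL|"));
// Same pattern shifted by one slot: a "small" desynchronisation.
nrHelper->GetGnbPhy(gnbNetDev.Get(1), 0)
    ->SetAttribute("Pattern", StringValue("UL|DL|DL|DL|DL|DL|UL|UL|UL|UL|"));
```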
But before tackling the CLI scenario, I thought it would be better to first understand how the inter-cell interference (ICI) situation is being handled.
As for my simulation, I think I took the approach you suggested, i.e. saturating the PDSCH and PUSCH, and I did this by adding a remote host that sends and receives 1000 packets through UDP applications. From there, my idea was to build the simplest possible scenario: one gNB with a UE attached to it (gNB1 - UE1) and another gNB with its own UE (gNB2 - UE2). This runs in a UMa scenario, without beamforming, with isotropic antennas and a 3GPP channel model. The gNBs are 200 m apart (gNB1 at 0;0;0, gNB2 at 0;200;0). I first placed the two UEs at the same spot (0;100;0), and they get SINRs of -12.558 and -12.3736 dB. If I then move UE1 towards gNB1, so that UE1 is at 0;50;0 and UE2 at 0;100;0, I expect both to improve, because one gets closer (less path loss) and the other suffers less ICI. The values they obtain are -11.205 and -12.3721 dB.
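For reference, the relevant part of my setup looks roughly like this (a trimmed sketch, not my full script: the NrHelper/channel/antenna installation, EPC, IP stack and the attachment of UE1 to gNB1 and UE2 to gNB2 are omitted, and names such as gnbNodes, ueNodes and the UE address are placeholders):

```cpp
#include "ns3/core-module.h"
#include "ns3/mobility-module.h"
#include "ns3/internet-module.h"
#include "ns3/applications-module.h"

using namespace ns3;

// Two gNBs 200 m apart, two UEs between them, one remote host; all static.
NodeContainer gnbNodes, ueNodes, remoteHostContainer;
gnbNodes.Create(2);
ueNodes.Create(2);
remoteHostContainer.Create(1);

Ptr<ListPositionAllocator> pos = CreateObject<ListPositionAllocator>();
pos->Add(Vector(0.0, 0.0, 0.0));    // gNB1
pos->Add(Vector(0.0, 200.0, 0.0));  // gNB2
pos->Add(Vector(0.0, 50.0, 0.0));   // UE1 (0;100;0 in the co-located test)
pos->Add(Vector(0.0, 100.0, 0.0));  // UE2

MobilityHelper mobility;
mobility.SetMobilityModel("ns3::ConstantPositionMobilityModel");
mobility.SetPositionAllocator(pos);
mobility.Install(gnbNodes);
mobility.Install(ueNodes);

// ... NrHelper installation (UMa scenario, 3GPP channel, isotropic antennas,
//     no beamforming), EPC, IP stack, UE1->gNB1 / UE2->gNB2 attachment ...

// Saturating traffic: 1000 UDP packets per direction per UE.
uint16_t dlPort = 1234;
UdpServerHelper dlServer(dlPort);
ApplicationContainer serverApps = dlServer.Install(ueNodes);

UdpClientHelper dlClient(Ipv4Address("7.0.0.2"), dlPort);  // placeholder UE address
dlClient.SetAttribute("MaxPackets", UintegerValue(1000));
dlClient.SetAttribute("Interval", TimeValue(MicroSeconds(100)));
dlClient.SetAttribute("PacketSize", UintegerValue(1024));
ApplicationContainer clientApps = dlClient.Install(remoteHostContainer.Get(0));
// (the UL direction is the mirror image: UDP clients on the UEs, server on the host)
```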
So, coming back to those SINR values: there is an improvement, but the ICI part of it is practically imperceptible. I was wondering whether it makes sense that, even though the interfering UE has now moved further away from the other cell, the improvement is so tiny.
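To put rough numbers on my expectation: with the move, UE1's distance to gNB2 goes from 100 m to 150 m, so its interference contribution at gNB2 should drop by about 20*log10(150/100) ≈ 3.5 dB in free space (≈ 3.9 dB with a 22*log10(d) UMa LOS slope), assuming no UL power control. Yet UE2's SINR only moves from -12.3736 to -12.3721 dB, i.e. about 0.0015 dB.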
Another test I did was to place UE1 exactly where gNB2 is, so that all of its transmit power would interfere with UE2's link. What I got was that UE2's SINR dropped, as expected, to -21 dB, while UE1's obviously dropped even further, to -40 dB, because it is now much farther from its serving gNB.
So I was also wondering whether these values make sense to you, and whether there is anything you would configure to visualise them better. I should also point out that all of these figures are from the UL.
I also tried enabling beamforming, and when I do, the situation improves, bringing the SINR above 8 dB. Here too I am not entirely satisfied, because the ideal BF works for me, but when I switch to the realistic one the run dies with “died <signal.SIGSEGV: 11>”.
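For context, the switch between the two is essentially this (roughly; I am assuming the 5G-LENA beamforming helper names here, so my actual script may differ slightly):

```cpp
// Ideal beamforming: this configuration runs fine for me.
Ptr<IdealBeamformingHelper> idealBf = CreateObject<IdealBeamformingHelper>();
idealBf->SetAttribute("BeamformingMethod",
                      TypeIdValue(DirectPathBeamforming::GetTypeId()));
nrHelper->SetBeamformingHelper(idealBf);

// Realistic (SRS-based) beamforming: this is the variant that dies with SIGSEGV.
Ptr<RealisticBeamformingHelper> realBf = CreateObject<RealisticBeamformingHelper>();
nrHelper->SetBeamformingHelper(realBf);
```

If it would help, I can rerun the crashing case under gdb (I believe ./ns3 run <scenario> --gdb does it, if I remember the flag correctly) and post the backtrace.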
Thank you again for your time!