REG : lteChannelAccessImpl in LTE/Wi-Fi coexistence module


saumil shah

Apr 13, 2018, 10:50:32 AM
to ns-3-users
Dear All,

I have been working on the LTE/Wi-Fi coexistence module, and I have come to understand the two different lteChannelAccessImpl options based on the post below.


As per the above post, I understand both channel access implementations: in the default implementation, the MAC always schedules, and the resulting packets are stored in queues defined at the PHY layer, as per the code. In the other implementation, the MAC schedules only when the channel is available.

Both implementations are designed around how the MAC schedules, not how the PHY grabs the channel.
When I checked the code for both implementations in the lte-enb-phy.cc file, I found that the default channel access implementation gets more transmit opportunities than when we set lteChannelAccessImpl=1. This happens because, in the default implementation, when we already hold a channel grant, we check whether that grant is enough to transmit a CTRL or DATA frame, and if it is not, we request channel access again. The code below is responsible for this operation.

              // channel access granted
              if (m_grantTimeout > m_ttiBegin)
                {
                  // there is data, check if grant is enough to transmit CTRL or/and DATA, if not ask for new grant
                  if ((IsNonEmptyPacketBurst() && m_grantTimeout - m_ttiBegin < Seconds(GetTti())) ||
                      (!IsNonEmptyPacketBurst() && IsNonEmptyCtrMessage() && (m_grantTimeout - m_ttiBegin < DL_CTRL_DELAY_FROM_SUBFRAME_START)))
                    {
                      m_grantTimeout = m_ttiBegin;
                      RequestChannelAccess();
                    }
                }
But when we access the channel with the other implementation, we do not check any condition like this. In that implementation, if we have a DATA or CTRL frame to send but the grant is not enough to transmit it, we do not request channel access again. We request channel access only when we have no grant at all. The default implementation also does this, which seems right to me.

So, because of that extra channel access request, the two implementations get different numbers of transmit opportunities. If we comment out the highlighted RequestChannelAccess() in the code above, both implementations get almost the same number of transmit opportunities, and we still get good LAA performance because of how the MAC schedules.

It is one thing that the two channel access implementations implement different MAC scheduling schemes, as you explained very nicely in the earlier post.
But why are we giving more transmit opportunities in the default implementation? It looks like this gives an advantage to the LAA cell using LBT.

Can you provide some comments over this?

Best Regards
Saumil

Zoraze Ali

Apr 13, 2018, 12:06:50 PM
to ns-3-users
Hi Saumil,

The right behavior is to ask for a new grant when the current one is not sufficient, and to release the channel; impl1 should do the same. If the LAA node kept transmitting in spite of an insufficient grant, it would exceed its allotted TxOP, which is not correct. The extra TxOP request is well justifiable because once LAA releases the channel, another LAA or Wi-Fi node can take it.

Kind regards,
Zoraze 

saumil shah

Apr 13, 2018, 12:38:14 PM
to ns-3-users
Hi Zoraze,

If I understood correctly, you are saying the above behavior is correct. Let's say we use txop=8 ms. It is very likely that the LAA node receives its grant at some time x in the middle of subframe n. First it will send a reservation signal (if enabled) to hold the channel until subframe n+1, so as to respect frame boundaries, but this time counts against the 8 ms grant. So, when the transmission in subframe n+8 is about to start, the condition in the code above will be true, since m_grantTimeout - m_ttiBegin will be less than 1 ms, and the node will request the channel again.

Since the LAA node has been transmitting for the last 7 ms, the Wi-Fi node has not been transmitting, and when LAA requests the channel again, Wi-Fi will most likely still be in its backoff stage. So LAA will sense the channel idle and be granted another 8 ms.

So the LAA node basically gets two consecutive grants, which I found strange, and this happens because of the extra RequestChannelAccess(). In my understanding, if the LAA node does not have enough time for the last subframe, it should not transmit and should release the channel; that would be friendlier to other LAA or Wi-Fi nodes.

I am saying this because it is correct that the MAC schedules differently in the two implementations, and the efficient one performs better since the 2 ms delay is avoided. But the LAA node should not get an equal number of TxOP opportunities in both implementations, which should not be the case as per my understanding.

Best Regards
Saumil Shah

saumil shah

Apr 13, 2018, 12:40:56 PM
to ns-3-users
--correction
But in both implementations the LAA node should get an equal number of TxOPs.

saumil shah

Apr 13, 2018, 1:15:13 PM
to ns-3-users
Hi Zoraze,

Continuing from the above post: for lteChannelAccessImpl=1, the code below is used.

          if (m_grantTimeout - m_ttiBegin >=  Seconds(GetTti()))
            {
              TransmitSubFrame();
              // because there is a delay between MAC and PHY, check whether channel access will
              // still be granted at the future timestamp for which the MAC would be scheduling
              // if SubframeIndication is triggered
              if (m_grantTimeout - m_ttiBegin >= MilliSeconds(m_macChTtiDelay) + Seconds(GetTti()))
                {
                  // trigger the MAC
                  m_enbPhySapUser->SubframeIndication (m_nrFrames, m_nrSubFrames);
                }
            }
          else // no grant, continue to wait for a grant
            {
              // if there is nothing to be transmitted shift the queue
              if (!IsNonEmptyCtrMessage() && (!IsNonEmptyPacketBurst()))
                {
                  GetControlMessages ();
                  GetPacketBurst();
                }
            }

I believe this else branch is taken when m_grantTimeout - m_ttiBegin < Seconds(GetTti()), the same condition checked in the default implementation. But here, if we have data or a control message, we neither release the channel nor request it again, which the default implementation does. I think this difference, apart from the MAC scheduling part, is why the numbers of TxOPs differ between the two implementations.

Does this mean we need to modify the else branch in the code above to release the channel and request it again?

Best Regards
Saumil Shah

zoraze ali

Apr 13, 2018, 2:29:50 PM
to ns-3-...@googlegroups.com
Hi Saumil,

Please see my comments inline.

On Fri, Apr 13, 2018 at 6:38 PM, saumil shah <saum...@gmail.com> wrote:
Hi Zoraze,

So the LAA node basically gets two consecutive grants, which I found strange, and this happens because of the extra RequestChannelAccess(). In my understanding, if the LAA node does not have enough time for the last subframe, it should not transmit and should release the channel; that would be friendlier to other LAA or Wi-Fi nodes.

   After releasing the channel, LAA will perform a random backoff, i.e., DIFS (43 usec) plus some random backoff slots; if, at the end of this random backoff, Wi-Fi is still not occupying the channel, then it is fine for LAA to win the channel again.



saumil shah

Apr 13, 2018, 2:48:59 PM
to ns-3-users
Hi Zoraze,

Yes, I completely agree with what you said, and I realize the implementation is correct: the LAA node releases the channel and then requests it again, giving a fair chance to other LAA or Wi-Fi nodes.

But then it seems this behavior is also required when we don't use the default ChannelAccessManager. As I mentioned in my last post, it is not implemented there. Am I right?

Best Regards
Saumil Shah

zoraze ali

Apr 13, 2018, 2:49:41 PM
to ns-3-...@googlegroups.com
Yes, if control reaches the else branch and there is still data or control in the queue, you should release the channel and ask for a new grant without shifting the queues. If the queue is empty, simply shift the queue and release the channel. It is the same logic as implementation 2.

As a side note, since the code uses implementation 2 by default, it is more mature than implementation 1. So, please update implementation 1.

Kind regards,
Zoraze

zoraze ali

Apr 13, 2018, 2:56:55 PM
to ns-3-...@googlegroups.com
On Fri, Apr 13, 2018 at 6:40 PM, saumil shah <saum...@gmail.com> wrote:
--correction
But in both implementations the LAA node should get an equal number of TxOPs.

I don't think so. I would assume that LAA with implementation 1 needs more TxOPs than implementation 2 to transmit an equal amount of data, since it wastes more of each TxOP due to the 2 msec delay, and even more if the reservation signal is enabled.

saumil shah

Apr 13, 2018, 3:38:34 PM
to ns-3-users
Hi Zoraze,

Thank you very much for clearing things up for me. I will update implementation 1 with the same logic as implementation 2. I had realized before that, beyond how they schedule, there is another difference between the two implementations, but I took the right implementation to be wrong and the wrong one to be right.

I explored the non-default implementation because, if I use UDP saturated traffic with implementation 2 (the default), I get very high delays for the LAA nodes. In that implementation, the MAC always schedules, and the packets are stored in a queue defined as a vector at the PHY layer, which acts as an infinite buffer since its capacity is very high. So very old packets are delivered first rather than being dropped at the RLC layer, as would happen in implementation 1, and we get very high delays.

Best Regards
Saumil Shah

Zoraze Ali

Apr 13, 2018, 4:27:36 PM
to ns-3-users


On Friday, April 13, 2018 at 9:38:34 PM UTC+2, saumil shah wrote:
Thank you very much for clearing things up for me. I will update implementation 1 with the same logic as implementation 2. I had realized before that, beyond how they schedule, there is another difference between the two implementations, but I took the right implementation to be wrong and the wrong one to be right.
 
I would not say that implementation 1 is wrong, but yes, it is affected by this bug. Once you make the change, it would be good if you could provide a patch to help others.
 

I explored the non-default implementation because, if I use UDP saturated traffic with implementation 2 (the default), I get very high delays for the LAA nodes. In that implementation, the MAC always schedules, and the packets are stored in a queue defined as a vector at the PHY layer, which acts as an infinite buffer since its capacity is very high. So very old packets are delivered first rather than being dropped at the RLC layer, as would happen in implementation 1, and we get very high delays.

Yes, I understand. It is a matter of preference and of the KPI you are interested in. I know you don't have control over the queue at the PHY, but you do have control over the RLC buffer.

Kind regards,
Zoraze
 

saumil shah

Apr 13, 2018, 4:51:54 PM
to ns-3-users
Yes, once I fix this, I will surely provide the patch :)