[LENA] Flow Monitor issue when counting UDP throughput in LTE


Xing Xu

Apr 17, 2016, 9:38:56 PM
to ns-3-users
Dear all,

I noticed an issue with Flow Monitor when counting UDP throughput in LTE.

I configured 1 BS and 30 UEs (with 30 corresponding RHs and 30 PointToPointHelper links). The UEs are identical (same location, same traffic controlled by the P2P helper), but they join the network one by one (a new UE joins every 3 seconds), so the BS serves 1 UE, 2 UEs, 3 UEs, ... 30 UEs, and becomes congested at some point. See the figure below (also attached): each curve is one UE's throughput (so there are 30 curves), plotted from each UE's Flow Monitor results. The start time of each curve differs, indicating each UE's joining time. Their sending rate (controlled by the bandwidth of the P2P helper) is the highest throughput you see at the very beginning (~220000 B/s), when the BS is not congested. After ~30 s, the throughput drops as more UEs join and congest the BS. (BTW, the special black dotted curve is the sum of all UEs' throughputs; its scale is on the right-hand Y-axis.)


Up to this point, the experiment works fine. The "Lost Packets" count from Flow Monitor is always 0, even though Tx packets far exceed Rx packets. (A quick question here: what is the definition of a "lost packet" in Flow Monitor? A packet not delivered within 10 seconds?) The issue: after ~60 s, the throughputs suddenly drop to 0 (a vertical line for each curve), one by one. The throughput goes to 0 because Flow Monitor's output shows that the "received packets" count for each flow stops increasing; instead, packets are counted as losses (if those losses were counted as received packets, there would be no issue). Basically, Flow Monitor says no new packets are received for any flow. I don't think this is a UDP issue: if some UE's UDP flow died, the other UEs would get more resources, and you wouldn't see such a clean trend in the figure. I suspect that Flow Monitor somehow counts these packets as lost, stops increasing the "received packets" count, and that leads to this behavior.

I don't know the reason for this issue. If I use TCP instead of UDP, there is no such issue.

Thank you in advance for your help. I attached the source code and the Flow Monitor output.

I also want to know: is there another way to track UDP throughput in LENA? Currently I parse the Flow Monitor output file and plot it myself, and I'd really like to know if there are other methods to do this...
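For context, the per-flow post-processing is essentially this (a simplified sketch, not the actual script; `Sample`, `Throughput`, and `Series` are illustrative names):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified sketch of the post-processing: each flow yields cumulative
// rxBytes samples over time (as serialized from Flow Monitor), and the
// plotted throughput is the byte delta over each sampling interval.
struct Sample { double t; uint64_t rxBytes; };  // t in seconds

// Bytes per second between two consecutive samples (assumes b.t > a.t).
double Throughput(const Sample& a, const Sample& b) {
  return static_cast<double>(b.rxBytes - a.rxBytes) / (b.t - a.t);
}

// Throughput series for one flow's chronological samples.
std::vector<double> Series(const std::vector<Sample>& s) {
  std::vector<double> out;
  for (std::size_t i = 1; i < s.size(); ++i) {
    out.push_back(Throughput(s[i - 1], s[i]));
  }
  return out;
}
```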

Thanks,
Xing

exp_base_enter_cong_ori.cc
plot_flow_mon.png
FlowMonitor.out

Tommaso Pecorella

Apr 18, 2016, 5:44:58 AM
to ns-3-users
Hi Xing,

a quick answer (but I'd need to dig deeper into your simulation results).
The "packet loss" event in FlowMonitor is either an explicit drop (signalled by IP) or an "I haven't seen it in 10 seconds". Consider that if a packet isn't seen for 10 seconds, it's usually dropped at the application level anyway.

Now, a possible explanation for your graph is that the packets are experiencing a HUGE delay (more than 10 seconds) in a queue below IP (i.e., at the NetDevice level), and FlowMonitor considers them dropped.
TCP wouldn't do that, simply because it self-adapts to the congested system.

You could try making the FlowMonitor timeout larger. It won't fix everything, but you'll see whether this hypothesis is right.
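For reference, the timeout in question should be FlowMonitor's MaxPerHopDelay attribute (the attribute name is taken from the ns-3 FlowMonitor docs; a config sketch, check your version):

```cpp
// Packets unseen for longer than this are counted as lost (default 10 s).
// Set it before installing the monitor.
Config::SetDefault("ns3::FlowMonitor::MaxPerHopDelay", TimeValue(Seconds(60.0)));
```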

Now, about the data and how to collect them: you could use the UdpClient/UdpServer method, i.e., count the received packets at the application level with the help of the SeqTs header. Of course the delay is not meaningful under saturation (it won't be stable), but at least you'll have the throughput.
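A toy sketch of that application-level accounting (only the sequence-number arithmetic; the real ns-3 classes are UdpServer and SeqTsHeader, and `SeqStats` here is an illustrative name):

```cpp
#include <cstdint>

// Each packet carries a sequence number (as SeqTsHeader does); with
// monotonically increasing seqs starting at 0, the loss estimate is
// (highest seq + 1) - packets received. Assumes at least one packet.
struct SeqStats {
  uint32_t received = 0;
  uint32_t highestSeq = 0;
  uint64_t rxBytes = 0;

  void OnPacket(uint32_t seq, uint32_t bytes) {
    ++received;
    rxBytes += bytes;
    if (seq > highestSeq) highestSeq = seq;
  }
  uint32_t Lost() const { return (highestSeq + 1) - received; }
};
```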

Cheers,

T.

Xing Xu

Apr 18, 2016, 1:21:20 PM
to ns-3-users
Your explanation is absolutely right! I enlarged that 10-second timer and the issue is gone. Thanks!

Now, about UDP throughput: I guess I can just enlarge the 10-second timer to 10000 seconds and then use FlowMonitor to calculate downlink throughput. What do you think?

Thanks again.

Tommaso Pecorella

Apr 18, 2016, 1:50:59 PM
to ns-3-users
Hi,

well, it's a bit unorthodox, but if you know what you're doing then it's all OK.
Just one warning: the 10-second rule also exists to avoid memory issues. FlowMonitor keeps a record of every sent packet until it is 1) received, 2) dropped explicitly, or 3) purged because it's too old (10 seconds originally).
When you increase the 10-second rule, you're also increasing the memory needed by FlowMonitor. So... just keep an eye on that.
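To illustrate the bookkeeping (a toy sketch, not FlowMonitor's actual code; `Tracker` is an illustrative name):

```cpp
#include <cstdint>
#include <map>

// Per-packet record kept from send until receive, explicit drop, or purge:
// packets unseen for longer than the timeout are counted as lost. The map
// is why a larger timeout means more memory under congestion.
struct Tracker {
  double timeout = 10.0;                // seconds (FlowMonitor's default)
  std::map<uint32_t, double> inFlight;  // packet id -> time first seen
  uint32_t rx = 0, lost = 0;

  void Sent(uint32_t id, double now) { inFlight[id] = now; }
  void Received(uint32_t id) {
    if (inFlight.erase(id)) ++rx;
  }
  // Called periodically: purge records older than the timeout as "lost".
  void Purge(double now) {
    for (auto it = inFlight.begin(); it != inFlight.end();) {
      if (now - it->second > timeout) { ++lost; it = inFlight.erase(it); }
      else ++it;
    }
  }
};
```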

Cheers,

T.

Xing Xu

Apr 18, 2016, 2:09:11 PM
to ns-3-users
About "unorthodox":
 - currently I'm doing research on providing different services to different users (e.g., I want to allocate more resources/PRBs to some *premium* users than to others), so I want to focus on downlink throughput. For that I want to use UDP, because it gives *pure* downlink throughput. TCP is more widely used, but I'll focus on TCP later, because TCP throughput is not exactly downlink throughput (its uplink may affect the sending rate). If the current method counts UDP throughput nicely, I'll stick to it.

Memory issue:
 - ACKed, thanks.

I have a related question, though: do you know how to simulate "bufferbloat" in ns-3? That is, I want to use aggressive link-layer retransmission to hide network-layer packet loss, so the TCP sender never reduces its congestion window and lots of packets get buffered at the BS. This is what current cellular systems do: hide network-layer packet loss so that the TCP sender sends more and utilizes wireless resources more efficiently.

Thanks.

Tommaso Pecorella

Apr 18, 2016, 2:51:21 PM
to ns-3-users
The "unorthodox" isn't about using UDP; it's about extending the FlowMonitor packet deadline. As I said, if you know what you're doing, it's all right.

Bufferbloat... no idea in your case, mainly because you don't have losses, you have "just" delayed packets :)
What you want to do isn't to simulate bufferbloat, but to simulate how to avoid bufferbloat (hence, how to maximize network utilization) and more "wireless-friendly" TCP variants.

Bufferbloat - check the latest TrafficControl module. I don't know if and how it could be integrated with the LTE module, but it's definitely the way to go. ARED and CoDel are available.
Wireless-friendly TCP... well, Natale just pushed some more models, so now you can choose between Vegas, NewReno, Veno, Westwood, Westwood+, HighSpeed, BIC, Hybla, and Scalable (I may have forgotten some). I think CUBIC will be next...
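For completeness, selecting one of those variants is a one-line default (a config sketch; the TypeId string follows ns-3's naming, check your version):

```cpp
// Use the TCP Westwood model for all sockets (set before creating apps).
Config::SetDefault("ns3::TcpL4Protocol::SocketType",
                   StringValue("ns3::TcpWestwood"));
```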

Cheers,

T.

Nat P

Apr 19, 2016, 3:31:49 AM
to ns-3-users


On Monday, April 18, 2016 at 8:09:11 PM UTC+2, Xing Xu wrote:

I have a related question, though: do you know how to simulate "bufferbloat" in ns-3? That is, I want to use aggressive link-layer retransmission to hide network-layer packet loss, so the TCP sender never reduces its congestion window and lots of packets get buffered at the BS. This is what current cellular systems do: hide network-layer packet loss so that the TCP sender sends more and utilizes wireless resources more efficiently.


If you have more than 10 s of delay, you're already experiencing bufferbloat :-D

For LTE, you can just use AM_MODE, and you have an infinite buffer between the eNB and the UE. That's all for bufferbloat!
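As a config sketch (the attribute and enum names are taken from the LENA LteEnbRrc model; check your ns-3 version):

```cpp
// Map all EPS bearers to RLC Acknowledged Mode; its retransmissions give
// the "infinite buffer" effect between the eNB and the UE.
Config::SetDefault("ns3::LteEnbRrc::EpsBearerToRlcMapping",
                   EnumValue(LteEnbRrc::RLC_AM_ALWAYS));
```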

Nat 

Xing Xu

Apr 19, 2016, 1:10:56 PM
to ns-3-users
Thanks (for your multiple responses!), Nat.

Yeah, for the 10 s delay case I'm using UDP instead of TCP, where I control the sender, so sending enough packets will generate bufferbloat. However, for TCP it's trickier: I don't directly control the sender, and if there are packet losses (which actual cellular systems hide), we won't see bufferbloat, and to me that's not a realistic simulation.

Tommaso Pecorella

Apr 19, 2016, 1:42:55 PM
to ns-3-users
I beg to differ.

Actual wireless systems don't hide packet losses; they avoid them altogether. I know that you know this; I write it to avoid misunderstandings.

Actual wireless systems do two things:
1) HARQ (Hybrid ARQ) at the MAC level, and
2) AMC (Adaptive Modulation and Coding).
The first allows fast retransmission of lost packets (at the expense of some delay); the second tries to keep the PER (Packet Error Rate) below a target threshold, typically extremely low.

The net effect at L3/L4 is that you see a channel with a variable bitrate, a variable delay, and a negligible error rate.

Cheers,

T.