Dear All,
I would like to measure the aggregate uplink throughput in a simple BSS (one AP and several STAs) as a function of the number of stations. Sources generating identical saturation traffic are therefore installed on the STAs (each starting with a small delay relative to the previous one, to bypass the known bug), and sinks listen on different ports on the server. From what is known about 802.11, the aggregate uplink throughput is expected to drop as the number of stations increases (Aruba, for instance, reports a 50% loss in aggregate TCP throughput going from 10 STAs to 100 STAs). However, I could not see a meaningful drop with TCP in ns-3 (with UDP the drop is observed, even though it is not 50%). I tried different TCP traffic-generation scenarios (BulkSend instead of OnOff, raw sockets instead of source applications, etc.), but still no meaningful drop is observed, and I may even see a higher throughput with 50 STAs than with a single STA.
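For reference, a minimal sketch of the staggered-start setup described above (this is illustrative, not my exact code; names such as apAddress, staNodes, nSTAs and simulationTime are assumed to be defined elsewhere):

```cpp
// Sketch: install a saturating OnOff TCP source on each STA, with each
// application starting 1 ms after the previous one so that all sources
// do not begin transmitting in the same slot.
uint16_t port = 9;
ApplicationContainer clientApps;
for (uint32_t i = 0; i < nSTAs; ++i)
  {
    OnOffHelper onoff ("ns3::TcpSocketFactory",
                       InetSocketAddress (apAddress, port + i)); // one sink port per STA
    onoff.SetAttribute ("OnTime",  StringValue ("ns3::ConstantRandomVariable[Constant=1]"));
    onoff.SetAttribute ("OffTime", StringValue ("ns3::ConstantRandomVariable[Constant=0]"));
    onoff.SetAttribute ("DataRate", DataRateValue (DataRate ("75Mbps"))); // above PHY rate, to saturate
    onoff.SetAttribute ("PacketSize", UintegerValue (1448));
    ApplicationContainer app = onoff.Install (staNodes.Get (i));
    app.Start (Seconds (1.0 + i * 0.001)); // staggered start
    app.Stop (Seconds (1.0 + simulationTime));
    clientApps.Add (app);
  }
```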
It may be related to the internal mechanics of TCP in ns-3, so I would like to know whether you have any comment on that, or whether you have any piece of code that shows a drop in the aggregate uplink throughput versus the number of stations. Thank you in advance.
Regards
Arash
Dear Nat
Thank you for the follow-up.
Yes, I measure the application-layer throughput (let's call it goodput) at the sink, summing all STAs' throughput (arriving on different ports), as you can see below:
double totalThroughput = 0;
for (int jStas = 0; jStas < nSTAs; jStas++)
  {
    // GetTotalRx returns the total bytes (not packets) received by this sink
    uint64_t totalBytesThrough = DynamicCast<PacketSink> (serverApps.Get (jStas))->GetTotalRx ();
    double throughput = (double) totalBytesThrough * 8 / (simulationTime * 1000000.0); // Mbit/s
    totalThroughput += throughput;
  }
Note that instead of putting all the sinks on different ports of the access point, I also tried a scenario where the sinks are installed on different nodes of the CSMA network (bridge) connected to the AP, but the results were the same.
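As a cross-check on the GetTotalRx sums, per-flow goodput can also be read from FlowMonitor; a sketch (simulationTime is assumed to be defined, and note that FlowMonitor's rxBytes counts IP payload, so it includes TCP headers and will read slightly higher than application-layer goodput):

```cpp
// Sketch: measure per-flow received bytes with FlowMonitor.
FlowMonitorHelper flowmonHelper;
Ptr<FlowMonitor> monitor = flowmonHelper.InstallAll ();

// ... build topology, install applications ...

Simulator::Run ();
monitor->CheckForLostPackets ();
for (auto const &kv : monitor->GetFlowStats ())
  {
    // kv.first is the FlowId, kv.second the per-flow statistics
    double goodput = kv.second.rxBytes * 8.0 / (simulationTime * 1e6); // Mbit/s
    std::cout << "Flow " << kv.first << ": " << goodput << " Mbit/s, lost "
              << kv.second.lostPackets << " packets\n";
  }
```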
Thank you.
Arash
Hello Nat,
I increased the initial congestion window using
Config::SetDefault ("ns3::TcpSocket::InitialCwnd", UintegerValue (15));
and also increased the simulation time from 5 seconds up to 50 seconds. Still, the same results are observed, as you can see below. I am not sure, but it seems that TCP with only one STA does not really saturate the link, so when the number of stations increases, instead of an aggregate throughput degradation we see an increase.
Regarding the retransmissions that you mentioned, my first guess was that GetTotalRx mistakenly counts even duplicated received packets (some packets would thus be counted twice in the throughput calculation), and that the number of received packets increases with more STAs. We would therefore see a higher aggregate uplink throughput while some of it is just repeated packets. However, looking at the PCAP file, I did not see any duplicated packet received by the AP (sink).
I appreciate your follow-up and I am looking forward to your comments.
Number of STAs: 1
  Total throughput (using GetTotalRx):     24.7664 Mbit/s
  Mean throughput (per STA, using flows):  25.6559 Mbit/s
Number of STAs: 60
  Total throughput (using GetTotalRx):     25.4153 Mbit/s
  Mean throughput (per STA, using flows):  0.447434 Mbit/s
Hello Nat,
Thank you very much for the instructive comments.
Regarding saturation: I meant to ask whether the link has a capacity of 25-26 Mbps (observed with 60 to 100 STAs) while one STA does not reach that throughput and reports, for instance, 24 Mbps.
It is a single 20 MHz channel operating at a PHY rate of 65 Mbps. Therefore, a throughput of around 30 Mbps for UDP and 25 Mbps for TCP is expected for one STA. I did not set any delay (RTT), so it should be the default value coming from the fixed-loss propagation model, with all STAs located 1 m from the AP.
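For completeness, a 65 Mbps PHY rate on a 20 MHz channel corresponds to HT MCS 7 (single spatial stream, long guard interval, no aggregation); a configuration sketch, assuming ConstantRateWifiManager is used (the exact standard enum name varies across ns-3 versions):

```cpp
// Sketch: pin the PHY rate to 65 Mbps by fixing HT MCS 7.
WifiHelper wifi;
wifi.SetStandard (WIFI_STANDARD_80211n); // older ns-3: WIFI_PHY_STANDARD_80211n_2_4GHZ
wifi.SetRemoteStationManager ("ns3::ConstantRateWifiManager",
                              "DataMode",    StringValue ("HtMcs7"),  // 65 Mbps data rate
                              "ControlMode", StringValue ("HtMcs0")); // robust rate for control frames
```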
In fact, my analysis is not about TCP versus UDP. What I would like to see is a significant drop in the aggregate uplink throughput when the number of STAs increases (supposed to be a basic fact of 802.11), due to collisions and longer contention periods. I should be able to see this trend for both TCP and UDP; the problem is that I cannot observe such a trend, especially for TCP.
Could you kindly have a look at Figure EC2-2 on page 16 of the following document, which reports a throughput drop from 60 Mbps to 30 Mbps going from 10 STAs to 100 STAs.
http://www.arubanetworks.com/assets/vrd/Aruba_VHD_VRD_Engineering_Configuration_Guide.pdf
Coming back to your example: if one STA has a saturation throughput of 24 Mbps, then with 100 STAs each should get (much) less than 24/100 Mbps, because some airtime is now wasted on the increased collisions.
I have set the MSS to 1448 bytes instead of the default 536 bytes, and I have attached my code in case you would like to look at the other parameters. I will further investigate the loss at the Wi-Fi and TCP levels, as you advised.
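To investigate the loss at the Wi-Fi level, one possibility is to hook the PhyRxDrop trace source as a rough proxy for collisions; a sketch (note that in recent ns-3 versions the callback carries an additional failure-reason argument, so the signature below may need adjusting):

```cpp
// Sketch: count PHY-level receive failures across all Wi-Fi devices.
static uint64_t g_phyRxDrops = 0;

static void
PhyRxDropSink (Ptr<const Packet> packet)
{
  ++g_phyRxDrops; // a large count with many STAs would point to collisions
}

// Connect to every WifiNetDevice's PHY (call this after devices are installed):
Config::ConnectWithoutContext (
    "/NodeList/*/DeviceList/*/$ns3::WifiNetDevice/Phy/PhyRxDrop",
    MakeCallback (&PhyRxDropSink));
```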
Best regards,
Arash
Dear Nat,
Thank you very much. I spent the whole day playing with TCP parameters. I changed the buffer size as you advised and also used BulkSendHelper. It did not help: the same throughput is still reported for one STA, and thus there is no degradation in the aggregate uplink throughput when the number of STAs increases.
I am not sure whether this helps, but the throughput naturally also depends on the sender's (STA's) congestion window. I manually set it to a higher initial value, but the PCAP capture shows that there is always one ACK (AP->STA) for every two transmitted TCP segments (STA->AP). I am not sure how to force TCP to send more segments while there is no packet loss (the buffer size and InitialCwnd are, of course, chosen to be high enough).
Config::SetDefault ("ns3::TcpSocket::SegmentSize", UintegerValue (payloadSize));
Config::SetDefault ("ns3::TcpSocket::InitialCwnd", UintegerValue (20));
Config::SetDefault ("ns3::TcpSocket::SndBufSize", UintegerValue (65535));
Config::SetDefault ("ns3::TcpSocket::RcvBufSize", UintegerValue (65535));
Anyway, thank you again and I appreciate your time and your kind consideration.
Best regards,
Arash
Dear Nat,
Thank you very much for following up. Here is a summary of what I did and what I observed.
I plotted the cWnd and, except for an initial drop, it was always increasing (almost linearly over 10-20 ms of simulation).
First, I played with TcpSocket::DelAckCount. I saw that the value '1' decreases the throughput. I tried different values, and the best result was with 8, which increased the throughput from 24.5 Mbps to 27.5 Mbps. I thus set
Config::SetDefault("ns3::TcpSocket::DelAckCount", UintegerValue (8));
I observed that the RTT is around 50 ms (which was surprising, because in a real Wi-Fi network I can get an RTT of 2 ms!). As the data rate cannot exceed the 65 Mbps PHY rate, I found that the best value for the buffer size is 128K. In fact, I tried larger and much larger values (increasing the cwnd at the same time), but they either did not improve the results or even worsened the throughput.
Config::SetDefault("ns3::TcpSocket::SndBufSize", UintegerValue (128*1024)); // 128K
Config::SetDefault("ns3::TcpSocket::RcvBufSize", UintegerValue (128*1024)); // 128K
With these values, the throughput increased to 27.8 Mbps.
Finally, based on the value selected for the buffer size (128K / 1448 bytes), I tried different values for the cwnd; any value between 60 and 100 gave the same result, improving the throughput from 27.8 to 28.6 Mbps.
It is encouraging to see this throughput improvement without MAC aggregation. However, the trend I am looking for, a drop in the aggregate uplink throughput as the number of STAs increases, still cannot be seen. Even with all these modifications, I see the same aggregate throughput of 25 Mbps whether the number of STAs is 50 or 100 (in other words, those improvements are seen for every STA).
I am not sure, but maybe the limitation is not in TCP. Perhaps the Wi-Fi implementation has some limitation that does not capture the increased backoffs and contention in scenarios with a large number of STAs.
Thank you again for your advice, and please let me know if you have any comments on the results.
Regards,
Arash