high speed kernel protocols using DCE


Siddharth

Apr 16, 2014, 12:25:10 PM4/16/14
to ns-3-...@googlegroups.com
Hi all,

I was wondering if anyone has been working with the high-speed TCP protocols (CUBIC, Scalable, HighSpeed, ...) available in the Linux kernel using DCE.

I have been trying to test the different protocols at speeds in the range of 400 Mbps and have been seeing very low goodput values in the presence of multiple flows. The details of the scenario that reproduces the low results are as follows.

I have been using bake's dce-linux-dev module. To make the different congestion-control protocols selectable from the test script, I apply protocols_enabler.patch, which is a diff against the kernel's arch/sim/defconfig. Once patched, a bake.py build is performed to rebuild and link the changes.
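The protocol is then selected at runtime through DCE's sysctl interface rather than at compile time. A minimal, self-contained sketch of that pattern follows (the "liblinux.so" library name and the node setup are assumptions that may differ between DCE versions; see the attached test_script.patch for the full version used here):

#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/dce-module.h"

using namespace ns3;

// Minimal sketch: run the DCE Linux kernel stack on two nodes and pick the
// congestion control via sysctl. "scalable" can be swapped for "cubic",
// "highspeed", etc. once the corresponding CONFIG_TCP_CONG_* option is
// enabled in arch/sim/defconfig (which is what protocols_enabler.patch does).
int main (int argc, char *argv[])
{
  NodeContainer nodes;
  nodes.Create (2);

  DceManagerHelper dceManager;
  dceManager.SetNetworkStack ("ns3::LinuxSocketFdFactory",
                              "Library", StringValue ("liblinux.so"));
  LinuxStackHelper stack;
  stack.Install (nodes);
  dceManager.Install (nodes);

  // Select the kernel congestion control on every node.
  stack.SysctlSet (nodes, ".net.ipv4.tcp_congestion_control", "scalable");

  Simulator::Stop (Seconds (1.0));
  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}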

The test script is attached as a patch to dce-tcp-ns3-nsc-comparison.cc, and the simulation parameters used are summarized below. Socket buffers were set to high values to accommodate such high speeds, tcp_adv_win_scale was set to 8, and SACK was disabled.
Access speed: 1 Gbps
Bottleneck speed: 400 Mbps
Queue size: 2 MB
Simulation time: 600 seconds
Access delay: 10 ms
Bottleneck delay: 10 ms
Congestion control: scalable
Traffic type: bulk send

With the above values, a goodput of ~5 Mbps was seen for each flow, which seems unusually low for a 400 Mbps bottleneck. Is there something I am missing, or has anybody else run into similar issues?

Thanks,
Siddharth
protocols_enabler.patch
test_script.patch

Hajime Tazaki

Apr 27, 2014, 9:06:03 AM4/27/14
to ns-3-...@googlegroups.com

Hi, sorry for the late reply.

The DataRate parameter for the right-side links is still 5Mbps with your diff. Could this be the cause?

http://code.nsnam.org/ns-3-dce/file/311753953c54/example/dce-tcp-ns3-nsc-comparison.cc#l182
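A minimal sketch of the change being pointed at ("rights" and "routers" follow the example's naming; "devRight" and the loop index "i" are just illustrative):

// Sketch of the fix: the right-side access links in the example still use
// the 5Mbps default, so raise them to match the 1 Gbps left-side access links.
pointToPoint.SetDeviceAttribute ("DataRate", StringValue ("1Gbps"));
pointToPoint.SetChannelAttribute ("Delay", StringValue ("10ms"));
NetDeviceContainer devRight = pointToPoint.Install (NodeContainer (routers.Get (1), rights.Get (i)));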

-- Hajime

At Wed, 16 Apr 2014 09:25:10 -0700 (PDT),
Siddharth wrote:
>
> protocols_enabler.patch:
> diff --git a/arch/sim/defconfig b/arch/sim/defconfig
> index be021a2..f4de831 100644
> --- a/arch/sim/defconfig
> +++ b/arch/sim/defconfig
> @@ -81,10 +81,18 @@ CONFIG_INET_XFRM_UDP_ENCAP_NATT=y
> CONFIG_INET_DIAG=m
> CONFIG_INET_TCP_DIAG=m
> CONFIG_TCP_CONG_ADVANCED=y
> -CONFIG_TCP_CONG_BIC=m
> -CONFIG_TCP_CONG_CUBIC=m
> -CONFIG_TCP_CONG_WESTWOOD=m
> -CONFIG_TCP_CONG_HTCP=m
> +CONFIG_TCP_CONG_BIC=y
> +CONFIG_TCP_CONG_CUBIC=y
> +CONFIG_TCP_CONG_WESTWOOD=y
> +CONFIG_TCP_CONG_HTCP=y
> +CONFIG_TCP_CONG_HYBLA=y
> +CONFIG_TCP_CONG_VEGAS=y
> +CONFIG_TCP_CONG_VENO=y
> +CONFIG_TCP_CONG_ILLINOIS=y
> +CONFIG_TCP_CONG_YEAH=y
> +CONFIG_TCP_CONG_LP=y
> +CONFIG_TCP_CONG_SCALABLE=y
> +CONFIG_TCP_CONG_HSTCP=y
> # CONFIG_DEFAULT_BIC is not set
> # CONFIG_DEFAULT_CUBIC is not set
> # CONFIG_DEFAULT_HTCP is not set
> @@ -93,6 +101,7 @@ CONFIG_TCP_CONG_HTCP=m
> # CONFIG_DEFAULT_VENO is not set
> # CONFIG_DEFAULT_WESTWOOD is not set
> CONFIG_DEFAULT_RENO=y
> +# CONFIG_DEFAULT_VENO=y
> CONFIG_DEFAULT_TCP_CONG="reno"
> CONFIG_IPV6=y
> CONFIG_IPV6_PRIVACY=y
> test_script.patch:
> diff -r 73285fae30f9 example/dce-tcp-ns3-nsc-comparison.cc
> --- a/example/dce-tcp-ns3-nsc-comparison.cc Sun Nov 10 00:37:22 2013 +0900
> +++ b/example/dce-tcp-ns3-nsc-comparison.cc Tue Apr 15 17:36:06 2014 -0500
> @@ -35,12 +35,14 @@
> std::string sock_factory = "ns3::LinuxTcpSocketFactory";
> int m_seed = 1;
> double startTime = 4.0;
> -double stopTime = 20.0;
> +double stopTime = 604.0;
> int m_nNodes = 2;
> bool enablePcap = false;
> -std::string m_pktSize = "1024";
> +std::string m_pktSize = "1500";
> bool m_frag = false;
> -bool m_bulk = false;
> +bool m_bulk = true;
> +uint32_t queue_size = 2000000;
> +std::string transProt = "scalable";
>
> int
> main (int argc, char *argv[])
> @@ -143,8 +145,8 @@
> }
>
> PointToPointHelper pointToPoint;
> - pointToPoint.SetDeviceAttribute ("DataRate", StringValue ("5Mbps"));
> - pointToPoint.SetChannelAttribute ("Delay", StringValue ("1ns"));
> + pointToPoint.SetDeviceAttribute ("DataRate", StringValue ("1Gbps"));
> + pointToPoint.SetChannelAttribute ("Delay", StringValue ("10ms"));
>
> Ipv4AddressHelper address;
> Ipv4InterfaceContainer interfaces;
> @@ -160,9 +162,14 @@
> }
>
> // bottle neck link
> - pointToPoint.SetDeviceAttribute ("DataRate", StringValue ("2Mbps"));
> - pointToPoint.SetChannelAttribute ("Delay", StringValue ("100ms"));
> - dev1 = pointToPoint.Install (NodeContainer (routers.Get (0), routers.Get (1)));
> + PointToPointHelper pointToPoint_bottleneck;
> + pointToPoint_bottleneck.SetDeviceAttribute ("DataRate", StringValue ("400Mbps"));
> + pointToPoint_bottleneck.SetChannelAttribute ("Delay", StringValue ("10ms"));
> + pointToPoint_bottleneck.SetQueue ("ns3::DropTailQueue",
> + "Mode", StringValue ("QUEUE_MODE_BYTES"),
> + "MaxBytes", UintegerValue (queue_size));
> +
> + dev1 = pointToPoint_bottleneck.Install (NodeContainer (routers.Get (0), routers.Get (1)));
> if (m_frag)
> {
> dev1.Get (0)->SetMtu (1000);
> @@ -171,11 +178,11 @@
> // bottle neck link
> Ptr<RateErrorModel> em1 =
> CreateObjectWithAttributes<RateErrorModel> ("RanVar", StringValue ("ns3::UniformRandomVariable[Min=0.0,Max=1.0]"),
> - "ErrorRate", DoubleValue (0.05),
> + "ErrorRate", DoubleValue (0.0),
> "ErrorUnit", EnumValue (RateErrorModel::ERROR_UNIT_PACKET)
> );
> dev1.Get (1)->SetAttribute ("ReceiveErrorModel", PointerValue (em1));
> -
> +
> address.SetBase ("10.1.0.0", "255.255.255.0");
> address.Assign (dev1);
>
> @@ -193,11 +200,32 @@
>
> Ipv4GlobalRoutingHelper::PopulateRoutingTables ();
>
> +
> if (m_stack.find ("dce") != std::string::npos)
> {
> LinuxStackHelper::PopulateRoutingTables ();
> dceManager.Install (nodes);
> + for (int i=0;i<m_nNodes;i++){
> stack.SysctlSet (nodes, ".net.ipv4.conf.default.forwarding", "1");
> + stack.SysctlSet(lefts.Get(i), ".net.ipv4.tcp_congestion_control", transProt);
> + stack.SysctlSet (lefts.Get(i), ".net.ipv4.tcp_sack", "0");
> + stack.SysctlSet (lefts.Get(i), ".net.ipv4.tcp_dsack", "0");
> + stack.SysctlSet (rights.Get(i), ".net.ipv4.tcp_sack", "0");
> + stack.SysctlSet (rights.Get(i), ".net.ipv4.tcp_dsack", "0");
> + stack.SysctlSet (lefts.Get(i), ".net.ipv4.tcp_adv_win_scale", "8");
> + stack.SysctlSet (lefts.Get(i), ".net.ipv4.tcp_rmem", "4096 87830 500108864");
> + stack.SysctlSet (lefts.Get(i), ".net.ipv4.tcp_wmem", "4096 65536 500108864");
> + stack.SysctlSet (lefts.Get(i), ".net.core.wmem_max", "500108864");
> + stack.SysctlSet (lefts.Get(i), ".net.ipv4.tcp_mem", "8388608 8388608 8388608");
> + stack.SysctlSet (lefts.Get(i), ".net.core.rmem_max","500108864");
> + stack.SysctlSet (lefts.Get(i), ".net.core.netdev_max_backlog", "25000000");
> + stack.SysctlSet (rights.Get(i), ".net.ipv4.tcp_rmem", "4096 87830 500108864");
> + stack.SysctlSet (rights.Get(i), ".net.ipv4.tcp_wmem", "4096 65536 500108864");
> + stack.SysctlSet (rights.Get(i), ".net.core.wmem_max", "500108864");
> + stack.SysctlSet (rights.Get(i), ".net.ipv4.tcp_mem", "8388608 8388608 8388608");
> + stack.SysctlSet (rights.Get(i), ".net.core.rmem_max","500108864");
> + stack.SysctlSet (rights.Get(i), ".net.core.netdev_max_backlog", "25000000");
> + }
> }
>
> // dceManager.RunIp (lefts.Get (0), Seconds (0.2), "route add default via 10.0.0.2");

Siddharth

Apr 29, 2014, 12:59:48 AM4/29/14
to ns-3-...@googlegroups.com
Hello Hajime,

Oops, I had left that out, and it would certainly contribute to the low throughput with the attached patch; sorry about that. However, my issue is not related to this, since I was actually using a different test scenario for my simulations and had only used the above patch to report the issue.

After running more tests between the time I reported the issue and now, I think I may need to recheck my post-processing, as I see fairly good goodput when using the packet sink's GetTotalRx().
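For reference, a minimal sketch of that kind of post-processing (assuming a standard PacketSink on the receiver; "sinkApps", "startTime" and "stopTime" are placeholders for the installing code):

// Sketch: goodput from the receiver's byte count.
// "sinkApps" is the ApplicationContainer returned by PacketSinkHelper::Install;
// startTime/stopTime bound the interval during which the flow was active.
Ptr<PacketSink> sink = DynamicCast<PacketSink> (sinkApps.Get (0));
double goodputMbps = sink->GetTotalRx () * 8.0 / (stopTime - startTime) / 1e6;
std::cout << "goodput = " << goodputMbps << " Mbps" << std::endl;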

I will investigate further and report back if I still think there might be an issue on the DCE side.

Cheers,
Siddharth