MPTCP ns-3-dce multiple interfaces on server and client


Lawrence

Nov 3, 2016, 12:26:49 PM11/3/16
to ns-3-users
Hello,

I have set up mptcp with ns-3-dce.
I want to simulate traffic for a very simple topology using dce-iperf-mptcp :


                  ____________________
         (eth0) /                      \ (eth0)
  client ------|        ROUTER          |------ server
         (eth1) \______________________/ (eth1)

I would like to have at least two interfaces on each side of the communication.

With the default configurations, I only get one interface on each side.

I saw the code at ns-3-dce/example/dce-iperf-mptcp.cc

I understand that I need to handle the devices on each node :

https://github.com/direct-code-execution/ns-3-dce/blob/master/example/dce-iperf-mptcp.cc#L55  :

      devices1 = pointToPoint.Install (nodes.Get (0), routers.Get (i));

How do I get multiple devices on each node ?
Do I add more nodes and, for each node added, call "Install" as shown in the code ?

Is it possible to do it with something like LinuxStackHelper::SysctlSet()  ?

N.B. I'm not very good at C programming.

Lawrence

Nov 7, 2016, 8:00:52 AM11/7/16
to ns-3-users
Hello,

I have verified my mptcp configurations :

net.mptcp.mptcp_path_manager = "fullmesh"
net.mptcp.mptcp_scheduler = "default"

I have also forced these in the dce-iperf-mptcp.cc file :

  // debug
  stack.SysctlSet (nodes, ".net.mptcp.mptcp_debug", "1");

  // mptcp full-mesh path-manager
  stack.SysctlSet (nodes, ".net.mptcp.mptcp_path_manager", "fullmesh");

  // mptcp default scheduler
  stack.SysctlSet (nodes, ".net.mptcp.mptcp_scheduler", "default");

As you can see, the debug is also activated.
However, after running the simulation with dce-iperf-mptcp, the generated pcap files do not show a full mesh. For example,
I see 10.1.0.1 talking to 10.2.0.1 and 10.1.1.1 talking to 10.2.1.1.
What about the pairs (10.1.0.1, 10.2.1.1) and (10.1.1.1, 10.2.0.1) ?
Basically, I don't see a full mesh.
I also see two other IP addresses, 10.1.0.2 and 10.1.1.2, but they cannot reach the network.
I guess that's because no routes have been defined for them; but again, what about (10.1.0.1, 10.2.1.1) and (10.1.1.1, 10.2.0.1) ?
Here is the log generated in $HOME/ns-3-dce/files-0/var/log/messages :

<5>Linux version 4.1.0+ (lawrence@lawrence) (gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) ) #0 Thu Oct 20 18:33:51 CEST 2016
<6>NET: Registered protocol family 16
<6>NET: Registered protocol family 2
<6>default registered
<6>default registered
<6>MPTCP: Stable release v0.89.0-rc
<6>TCP established hash table entries: 512 (order: 0, 4096 bytes)
<6>TCP bind hash table entries: 512 (order: 0, 4096 bytes)
<6>TCP: Hash tables configured (established 512 bind 512)
<6>UDP hash table entries: 128 (order: 0, 4096 bytes)
<6>UDP-Lite hash table entries: 128 (order: 0, 4096 bytes)
<6>NET: Registered protocol family 10
<6>nsc: IPv6 over IPv4 tunneling driver
<6>fullmesh registered
<6>ndiffports registered
<6>binder registered
<6>roundrobin registered
<3>net/mptcp/mptcp_ctrl.c: mptcp_alloc_mpcb: created mpcb with token 0x811aef13
<3>net/mptcp/mptcp_ctrl.c: mptcp_add_sock: token 0x811aef13 pi 1, src_addr:10.1.0.1:37655 dst_addr:10.2.0.1:5001, cnt_subflows now 1
<3>net/mptcp/mptcp_ctrl.c: mptcp_add_sock: token 0x811aef13 pi 2, src_addr:0.0.0.0:0 dst_addr:0.0.0.0:0, cnt_subflows now 2
<3>net/mptcp/mptcp_ipv4.c: mptcp_init4_subsockets: token 0x811aef13 pi 2 src_addr:10.1.1.1:0 dst_addr:10.2.0.1:5001
<3>net/mptcp/mptcp_ctrl.c: mptcp_add_sock: token 0x811aef13 pi 3, src_addr:0.0.0.0:0 dst_addr:0.0.0.0:0, cnt_subflows now 3
<3>net/mptcp/mptcp_ipv4.c: mptcp_init4_subsockets: token 0x811aef13 pi 3 src_addr:10.1.0.1:0 dst_addr:10.2.1.1:5001
<3>net/mptcp/mptcp_ctrl.c: mptcp_add_sock: token 0x811aef13 pi 4, src_addr:0.0.0.0:0 dst_addr:0.0.0.0:0, cnt_subflows now 4
<3>net/mptcp/mptcp_ipv4.c: mptcp_init4_subsockets: token 0x811aef13 pi 4 src_addr:10.1.1.1:0 dst_addr:10.2.1.1:5001
<3>net/mptcp/mptcp_ctrl.c: mptcp_del_sock: Removing subsock tok 0x811aef13 pi:2 state 7 is_meta? 0
<3>net/mptcp/mptcp_ctrl.c: mptcp_del_sock: Removing subsock tok 0x811aef13 pi:3 state 7 is_meta? 0
<3>net/mptcp/mptcp_ctrl.c: mptcp_close: Close of meta_sk with tok 0x811aef13
<3>net/mptcp/mptcp_ctrl.c: mptcp_del_sock: Removing subsock tok 0x811aef13 pi:4 state 7 is_meta? 0
<3>net/mptcp/mptcp_ctrl.c: mptcp_del_sock: Removing subsock tok 0x811aef13 pi:1 state 7 is_meta? 0
<3>net/mptcp/mptcp_ctrl.c: mptcp_sock_destruct destroying meta-sk

So the log does seem to show subflows being initialized for (10.1.1.1, 10.2.0.1) and (10.1.0.1, 10.2.1.1).

What should I do to have a full mesh ?

Lawrence

Matt Anonyme

Nov 7, 2016, 10:05:29 AM11/7/16
to ns-3-users
The way the topology is designed, you can't have a full mesh: Linux will try to create the subflows, but not all of them will succeed. For instance,
the client's eth0 can't reach the server's eth1 in this context, so you should see a SYN sent several times without an answer, and then the subflow dropped.


On Thursday, 3 November 2016 at 17:26:49 UTC+1, Lawrence wrote:

Lawrence

Nov 7, 2016, 10:57:24 AM11/7/16
to ns-3-users
With my simple topology of two VMs (running the MPTCP Linux kernel implementation) and two different routing tables configured as described here : http://multipath-tcp.org/pmwiki.php/Users/ConfigureRouting , I was able to get a full-mesh topology (I implemented it in GNS3 and got nice results in Wireshark :D ).
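For reference, I imagine the same per-interface routing setup could be expressed inside DCE with LinuxStackHelper::RunIp, which the example already uses for its route commands. A rough sketch (untested; the addresses, gateways, table numbers, and device names are purely illustrative):

```cpp
// Hypothetical sketch: replicate the multipath-tcp.org per-source
// routing setup inside DCE via LinuxStackHelper::RunIp.
// 'nodes' is assumed to be the NodeContainer from dce-iperf-mptcp.cc;
// all addresses, gateways and table ids below are illustrative only.
Ptr<Node> client = nodes.Get (0);

// Per-source routing table for the first interface (10.1.0.1)
LinuxStackHelper::RunIp (client, Seconds (0.1),
                         "rule add from 10.1.0.1 table 1");
LinuxStackHelper::RunIp (client, Seconds (0.1),
                         "route add 10.1.0.0/24 dev sim0 scope link table 1");
LinuxStackHelper::RunIp (client, Seconds (0.1),
                         "route add default via 10.1.0.2 dev sim0 table 1");

// Per-source routing table for the second interface (10.1.1.1)
LinuxStackHelper::RunIp (client, Seconds (0.1),
                         "rule add from 10.1.1.1 table 2");
LinuxStackHelper::RunIp (client, Seconds (0.1),
                         "route add 10.1.1.0/24 dev sim1 scope link table 2");
LinuxStackHelper::RunIp (client, Seconds (0.1),
                         "route add default via 10.1.1.2 dev sim1 table 2");
```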

I believe the routing rules in dce-iperf-mptcp.cc are doing the same thing. What topology is implemented ? Where do I need to change the code to get my topology ?

Sorry if I sound too demanding ^^

Matt

Nov 7, 2016, 11:05:03 AM11/7/16
to ns-3-users
That's because in the DCE topology, each router is directly linked
to only one client interface and one server interface. You should add links
between the router and all the interfaces, then.
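Concretely, in dce-iperf-mptcp.cc that could look something like this (just a sketch, untested; it replaces the per-path routers with a single shared router, and address assignment and routing for each new link still have to be added):

```cpp
// Sketch (untested): use one router reachable from every interface,
// instead of one isolated router per path.
// nodes.Get (0) is the client and nodes.Get (1) the server, as in
// dce-iperf-mptcp.cc; 'pointToPoint' is the usual PointToPointHelper.
Ptr<Node> router = CreateObject<Node> ();

// Two client-side links: client eth0 / eth1 <-> router.
// Each Install () call creates one new device (interface) per node.
NetDeviceContainer c0 = pointToPoint.Install (nodes.Get (0), router);
NetDeviceContainer c1 = pointToPoint.Install (nodes.Get (0), router);

// Two server-side links: router <-> server eth0 / eth1.
NetDeviceContainer s0 = pointToPoint.Install (router, nodes.Get (1));
NetDeviceContainer s1 = pointToPoint.Install (router, nodes.Get (1));

// The client and server each end up with two interfaces, and the
// router can forward between any (client, server) interface pair,
// which is what fullmesh needs. Assign one subnet per link and set
// up the routes (e.g. via LinuxStackHelper) before running iperf.
```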

NB: if you have a setup working with 2 VMs, I am curious why you use
DCE ? For comparison ? Or do you do throughput testing ? (In which
case VMs might give bad results.)

Lawrence

Nov 7, 2016, 11:43:08 AM11/7/16
to ns-3-users
Actually, my goal is to be able to choose, among the available subflows, the one on which the communication takes place, and also to be able to switch back to the default full mesh. This API https://irtf.org/anrw/2016/anrw16-final16.pdf is the answer, but it will only be available later.

I am trying to see how I can exploit socket options to do something similar by changing the source code. Recompiling the kernel every time would be very tedious and slow; using DCE is faster for debugging.

I now understand the topology implemented in DCE (I should have read the code earlier :P ).

So, as you said, to get a full mesh I need to add links between the router and the interfaces. I'll try that by changing the dce-iperf-mptcp.cc code.

Lawrence