Mesh protocol with IPv6 not working.


vahid Saber

Feb 20, 2018, 5:01:47 AM
to ns-3-users
Hello everyone,
I copied mesh.cc from the example folder to scratch, ran it, and it works.
Then I changed only the addressing scheme to use IPv6 instead of IPv4, and it stopped working. I tried to trace the control flow and couldn't detect any anomaly. In theory, it should work out of the box, independent of the addressing scheme.
Could you please share any hints that would give me a lead for further analysis?
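For reference, the change I made was roughly of this shape (a sketch using the standard ns-3 helpers; the attached diff is authoritative, and `meshDevices` stands for the device container mesh.cc already creates):

```cpp
// IPv4 (original mesh.cc):
//   Ipv4AddressHelper address;
//   address.SetBase ("10.1.1.0", "255.255.255.0");
//   Ipv4InterfaceContainer interfaces = address.Assign (meshDevices);

// IPv6 (mesh6.cc): assign addresses from a /64 prefix instead.
Ipv6AddressHelper address;
address.SetBase (Ipv6Address ("2001:db8::"), Ipv6Prefix (64));
Ipv6InterfaceContainer interfaces = address.Assign (meshDevices);
```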

Attaching original test file, the modified file and the diff, if helps.

Thank you
Vahid
mesh6.cc
mesh.cc
Screenshot from 2018-02-20 10-57-30.png

Tommaso Pecorella

Feb 20, 2018, 10:14:30 PM
to ns-3-users
Hi,

you're right, it should, but it doesn't.

I can't dig into it much right now, but the problem seems to be in how the devices are (not) forwarding broadcasts.
The issue is that, while IPv4 uses multicast (group) MAC addresses in the range 01-00-5E-00-00-00 through 01-00-5E-7F-FF-FF, IPv6 uses 33-33-00-00-00-00 through 33-33-FF-FF-FF-FF.
These are not passed to the MeshNetDevice - don't ask me why.

I'd be extremely happy if you could dig further into this and help us fix the issue.

I opened a bug on our tracker:

Thanks,

T.

vahid Saber

Feb 21, 2018, 2:55:14 AM
to ns-3-...@googlegroups.com
Hi Tommaso,
OK, I will try to find a workaround for my current problem and free up some time to work on this bug.
thanks,
Vahid


--
Posting to this group should follow these guidelines https://www.nsnam.org/wiki/Ns-3-users-guidelines-for-posting
---
You received this message because you are subscribed to a topic in the Google Groups "ns-3-users" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/ns-3-users/cZgGJYc4LAY/unsubscribe.
To unsubscribe from this group and all its topics, send an email to ns-3-users+unsubscribe@googlegroups.com.
To post to this group, send email to ns-3-...@googlegroups.com.
Visit this group at https://groups.google.com/group/ns-3-users.
For more options, visit https://groups.google.com/d/optout.

vahid Saber

Feb 21, 2018, 3:32:40 AM
to ns-3-...@googlegroups.com
There is another problem with the mesh.cc example that I'm not sure is a bug or a matter of settings. It has to do with packet loss and severe slowness, which could be due to signalling. I'll explain it in another thread.

vahid Saber

Feb 25, 2018, 7:23:17 AM
to ns-3-...@googlegroups.com
Hello Tommaso,
I am looking at the mesh implementation. I did a few runs with some logging enabled (attached). Here is what I checked so far; I need some direction on where to look:

1. The key place to compare the working IPv4 version with IPv6 is IpvXInterface::Send: when a UDP packet is sent, the program checks whether address resolution is required. For IPv4 (unless the destination is multicast) it makes an ARP lookup; for IPv6 it does an ND lookup. The point is that the ND lookup first checks an ND cache that is empty, so it fails. Therefore, unlike IPv4's successful ARP, the IPv6 lookup is never resolved.

From here, should I dig down (into the mesh protocol implementation) or look up into the IPv6 protocol?
I work as a programmer in the financial sector; this is sort of my afternoon hobby and I will be happy to do it. Just give me more insight and let me know where to look.

2. To give more information, I also looked into why the ND cache was empty. The cache is populated in places like HandleNS and HandleRS (Icmpv6L4Protocol), which I believe are callbacks triggered, indirectly, by methods like SendNS and SendRS. These handlers are invoked via Icmpv6L4Protocol::Receive(). From the logs, I can see this Receive method is called only once during my 10-second simulation. So it is no surprise that the cache stays empty the whole time and no ND lookup attempt ever succeeds.

If you can give me a direction where to look further, I will be happy to do it.

Thank you.
Vahid



log6.txt
mesh6.cc

vahid Saber

Mar 1, 2018, 5:27:20 PM
to ns-3-...@googlegroups.com
A short update and a question:

- In the mesh implementation there is an HWMP routing protocol implementation. Among other things, it has a reactive routing table: a map whose key is the plain Mac48Address of the destination ("plain" is my invented term, please correct me), for example 00:00:00:00:00:01 through 00:00:00:00:00:09. When using IPv6, many lookups are made with key addresses which, based on Tommaso's comment, look to be multicast, for example 33:33:ff:00:00:01. This causes the packet to bypass all the if cases and eventually fall through to the else branch: a unicast forward to a multicast destination!
My C++ is better than my ns-3, but this is most likely a direct or indirect result of a bug that needs to be fixed.

Now a question:
- In the implementation (hwmp-protocol.cc for instance) I see a lot of if (destination == Mac48Address::GetBroadcast ()). Can this ever be true in the case of IPv6? Since IPv6 uses multicast rather than broadcast (and IPv4 uses ARP while IPv6 uses ND), shouldn't we differentiate how the destination address is compared against broadcast/multicast for the two versions?

Note:
The journey of an echo packet has been logged and attached for your reference. 
 
log1.txt

Tommaso Pecorella

Mar 4, 2018, 1:03:52 AM
to ns-3-users
Hi,

yes, you're right about the cache being empty. As a matter of fact, that is the main issue: ICMPv6 packets are not sent/received successfully.

I'd suggest starting by checking whether they are sent. Activate the ASCII traces and they should tell you if they're sent.
Next, check whether they are received. That could be the main issue.
As a matter of fact, ICMPv6 ND packets (the ones you're looking for) are sent to multicast MAC addresses, and those could be mis-filtered by the mesh NetDevice.
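Activating the traces in mesh.cc is a couple of lines (a fragment, not a full program; `wifiPhy` refers to the YansWifiPhyHelper the example already declares):

```cpp
// ASCII traces: one line per PHY-level tx/rx event, ICMPv6 included.
AsciiTraceHelper ascii;
wifiPhy.EnableAsciiAll (ascii.CreateFileStream ("mesh6.tr"));

// Pcap dumps are often easier to inspect (e.g. in Wireshark):
wifiPhy.EnablePcapAll ("mesh6");
```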

If you have more insights (or questions), don't hesitate to write. I'm quite busy these days but I look at the group when I can.

Cheers,

T.

vahid Saber

Mar 13, 2018, 5:22:32 PM
to ns-3-...@googlegroups.com
Hi Again,
I did some more work:

Problem:
I reconfirmed that the key (Mac48Address) used in HWMP's reactive routing table is the issue. Although the mesh layer (L2) is IP-agnostic, in this implementation the data coming from the higher layers makes a difference. In the IPv4 scenario, the net device's address is used as the key both to store and to retrieve routing-table entries. In IPv6, however, the net device address is used to store the entries, while the multicast-mapped solicited-node address (originating from the ICMPv6 layer) is eventually used for retrieval. This naturally causes the lookup to fail every time.

Solution:
We need a Mac48Address-derived key that is independent of whether the address is a net device address or the multicast-mapped version of some other address. I noticed that the lower three bytes of all such addresses are always the same, and unique to an acceptable degree. So I used those bytes as the key, and both versions of the mesh example (IPv4 and IPv6) now work correctly.
This solution requires minimal change and is not very invasive.

Kindly find the diff sketch attached. If you like the approach, I can initiate a code review.

Regards
Vahid


key.patch