Sending messages to the mesh-local prefix from the border router

Stuart Longland

May 22, 2018, 2:11:16 AM
to openthre...@googlegroups.com
Hi all,

So in my networking system I've been sending messages to ff02::1, with
NTP traffic going to ff02::101 (as per the standard), thinking that
would reach all nodes as it does on a conventional Ethernet-based IPv6
network.

It seems that only goes as far as the nodes that can directly hear the
NCP, which is a problem for broadcasting NTP traffic across the mesh.
The correctly scoped addresses I should be using are in ff03::/16.

It appears that I am able to contact the border router via ff03::2 from
the nodes just fine (my resource discovery registration and syslog code
now use it)… but when I try to send to any mesh-local multicast address
from the border router, either the request is rejected (unknown host),
or, if I use `wpantund add-route -l 16 ff03::`, my frames just go into
the bit bucket.

Is there a way to expose the mesh-local multicast scope to the border
router so that it can be used by programs like ntp to keep the mesh time
in sync?
--
_ ___ Stuart Longland - Systems Engineer
\ /|_) | T: +61 7 3535 9619
\/ | \ | 38b Douglas Street F: +61 7 3535 9699
SYSTEMS Milton QLD 4064 http://www.vrt.com.au

Yakun Xu

May 25, 2018, 1:13:57 AM
to openthread-users
Hi Stuart,

Sorry for the late reply. ff02::/16 is link-local scope, so it's reasonable that only direct neighbors receive the NTP traffic. I think you should use a realm-local scope multicast address (i.e. in ff03::/16), and I don't think you should add a route in wpantund. Could you please share more details about the network topology, as well as the addresses you are using for testing?

Stuart Longland

May 25, 2018, 1:43:51 AM
to openthre...@googlegroups.com
On 25/05/18 15:13, 'Yakun Xu' via openthread-users wrote:
> Sorry for the late. ff02::/16 is link local scope, it's reasonable
> only direct neighbors receive the NTP traffic. I think we should use
> realm local scope multicast address (i.e. ff03::/16).

Yeah, I guess this is a misunderstanding on my part… I was thinking the
mesh network behaved (from the IP perspective) like a typical L2 network
(much as IEEE 802.11 does), thus all nodes would see ff02::/16 messages.

> And I don't think
> we should add route in wpantund. Could you please share more details
> about the network topology, as well as the addresses you are using for
> testing?

Turns out, it was just me getting confused about `ping6` refusing to
ping those addresses… if I do:

$ ping6 -I wpan0 ff03::2

it works. As for NTP, it again came down to me misunderstanding the
configuration file format… I had

broadcast ff02::101 ttl 1
broadcast ff03::101 ttl 1

thinking NTP would send to both… but it ignored the latter option. If I
remove the ff02::101 line from the configuration, it sends to ff03::101,
and so all nodes receive the time.
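For reference, the working configuration reduces to a single realm-local broadcast line (the comment about hop count is my own inference from the TTL discussion, not something ntpd documents):

```
# /etc/ntp.conf fragment: broadcast time to the whole mesh
# (realm-local scope).  Note the ttl value may need to cover the
# number of Thread Router hops, since each forwarding router
# decrements the Hop Limit.
broadcast ff03::101 ttl 1
```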

It's a bit difficult to test this fully: because all the nodes can hear
each other, they like to skip the middleman and go direct.

I'm still not certain what the impact of the TTL field is, whether hops
between Thread routers are considered or whether this field only
decrements when it passes through a border router.

Ordinarily on a standard TCP/IP network, it would decrement as it passes
through each L3 router between subnets. Here, though, the entire mesh
uses one subnet, so while traffic passes through "routers", they aren't
behaving in the usual L3 sense; they're more like L2 switches.

Would a TTL of 1 still traverse multiple Thread routers on the mesh or
would it behave like the link-local case before, only going to immediate
neighbours?

Jonathan Hui

May 25, 2018, 2:01:38 PM
to Stuart Longland, openthread-users
For realm-local multicast (i.e. ff03::), the IPv6 Hop Limit field is decremented by each Thread Router that forwards the message.
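From the application side, that hop budget can be set per-socket via the standard IPV6_MULTICAST_HOPS option; a minimal Python sketch, where the group, port, and hop count are illustrative rather than anything from this thread:

```python
import socket

def make_realm_local_sender(hops=8):
    """Create a UDP socket whose multicast datagrams carry the given
    IPv6 Hop Limit.

    For a realm-local group (ff03::), each Thread Router that forwards
    the message decrements this value, so it needs to cover the depth
    of the mesh."""
    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_MULTICAST_HOPS, hops)
    return s

sock = make_realm_local_sender(8)
# Example use (addresses illustrative):
# sock.sendto(b"test", ("ff03::1", 12345))
```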

For scopes larger than realm-local, the multicast message is actually tunneled across the Thread network using IPv6-in-IPv6 encapsulation.  The inner IPv6 header is the application-generated IPv6 header and is preserved as it traverses the Thread network.

Hope that helps.

--
Jonathan Hui

--
You received this message because you are subscribed to the Google Groups "openthread-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to openthread-use...@googlegroups.com.
To post to this group, send email to openthre...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/openthread-users/9d2c9d96-7f65-e8c8-b1df-894bb8588106%40vrt.com.au.
For more options, visit https://groups.google.com/d/optout.

Stuart Longland

May 28, 2018, 10:49:10 PM
to openthre...@googlegroups.com
On 26/05/18 04:01, Jonathan Hui wrote:
> For realm-local multicast (i.e. ff03::), the IPv6 Hop Limit field is
> decremented by each Thread Router that forwards the message.

Ahh okay, so really this means I need to know how many hops to expect
between the node I'm sending from and a node that may have syslogd
listening.

On conventional networks I'd be reaching for `traceroute6`, which I
believe works by limiting the hop count and watching for "time exceeded"
ICMPv6 messages. Does OpenThread have a traceroute-like function, or
would I need to implement that myself?
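If it came to rolling my own, the probing approach can be sketched with plain IPv6 sockets; this is a generic sketch, not an OpenThread facility, the destination address below is a placeholder, and the raw ICMPv6 socket needs root (CAP_NET_RAW):

```python
import socket

def udp_with_hop_limit(hop_limit):
    """UDP socket whose unicast datagrams carry a fixed IPv6 Hop Limit."""
    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_UNICAST_HOPS, hop_limit)
    return s

def probe_once(dst, hop_limit, port=33434, timeout=2.0):
    """Send one probe toward dst and wait for an ICMPv6 Time Exceeded
    (type 3) from the router where the Hop Limit ran out.  Returns that
    router's address, or None on timeout."""
    udp = udp_with_hop_limit(hop_limit)
    icmp = socket.socket(socket.AF_INET6, socket.SOCK_RAW,
                         socket.IPPROTO_ICMPV6)
    icmp.settimeout(timeout)
    try:
        udp.sendto(b"", (dst, port))
        data, addr = icmp.recvfrom(1024)
        return addr[0] if data and data[0] == 3 else None
    except socket.timeout:
        return None
    finally:
        udp.close()
        icmp.close()

if __name__ == "__main__":
    # Placeholder mesh-local address -- substitute a real node's.
    for hl in range(1, 9):
        print(hl, probe_once("fd00::1", hl))
```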

> For scopes larger than realm-local, the multicast message is actually
> tunneled across the Thread network using IPv6-in-IPv6 encapsulation. 
> The inner IPv6 header is the application-generated IPv6 header and is
> preserved as it traverses the Thread network.

Presumably there's some logic there to prevent the encapsulating
datagrams from going around in circles, which is the problem I hit
when I tried sending to ff03::2 with a TTL of 255.

If I tell the NCP to add ff05::1 to its list of multicast addresses via
wpantund, it looks as if it will remember it across a reset, but the
setting does not survive a reboot of the border router.

Is there a way to get wpantund to remember this setting across reboots?

Jonathan Hui

May 31, 2018, 4:31:24 PM
to Stuart Longland, openthread-users
OpenThread does not currently implement a traceroute-like utility.

Do you really need to adjust the Hop Limit value?  Or would setting the Hop Limit to some appropriately large value (e.g. 64) be sufficient?

Thread multicast forwarding has duplicate detection in order to avoid forwarding loops.  Realm-local multicast messages include the MPL Option header in an IPv6 Hop-by-Hop Option header.  The MPL Option header includes a sequence value generated by the originating device and receivers maintain a cache of recently received values.  If you are seeing forwarding loops with realm-local multicast, we should take a closer look.

wpantund does not currently implement a persistence mechanism for multicast address subscriptions.  We currently rely on additional Linux services to help ensure that the Thread interface is properly configured.
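One way to arrange that is a small oneshot service that re-adds the subscription after wpantund starts; a hypothetical systemd sketch, where the unit name and the exact wpanctl invocation are assumptions (check `wpanctl help` on your build for the real command and property names):

```ini
# /etc/systemd/system/wpan-multicast.service  (hypothetical unit)
[Unit]
Description=Re-subscribe wpan0 to site-local multicast groups
After=wpantund.service
Requires=wpantund.service

[Service]
Type=oneshot
# Assumed invocation -- verify against your wpanctl's command set.
ExecStart=/usr/bin/wpanctl insert IPv6:MulticastAddresses ff05::1

[Install]
WantedBy=multi-user.target
```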

--
Jonathan Hui
