IPv6 ping not working but IPv4 is OK - BeagleBone - any ideas


Michał Poterek

Oct 31, 2021, 10:08:51 PM
to openthread-users
I cannot get a response to IPv6 pings on my BeagleBone and end devices. IPv4 is working fine.

PING 64:ff9b::808:808(64:ff9b::808:808) 56 data bytes
From fdde:ad11:11de:0:9da6:e2f:2468:751e: icmp_seq=1 Destination unreachable: Address unreachable
From fdde:ad11:11de:0:9da6:e2f:2468:751e: icmp_seq=2 Destination unreachable: Address unreachable

My routing table on beagle bone router

Kernel IPv6 routing table
Destination                    Next Hop                   Flag Met Ref Use If
localhost/128                  [::]                       U    256 2     0 lo
64:ff9b::/96                   [::]                       U    1   2     0 eth0
64:ff9b::/96                   [::]                       U    256 2     0 wpan0
64:ff9b::/96                   [::]                       U    1024 1     0 nat64
fdaa:bb:1::2/128               [::]                       U    256 2     0 nat64
fdde:ad11:11de::/64            [::]                       U    256 1     0 wpan0
fe80::/64                      [::]                       U    256 2     0 eth0
fe80::/64                      [::]                       U    256 1     0 nat64
fe80::/64                      [::]                       U    256 1     0 wpan0
[::]/0                         [::]                       !n   -1  1     0 lo
localhost/128                  [::]                       Un   0   3     0 lo
fdaa:bb:1::2/128               [::]                       Un   0   3     0 nat64
fdde:ad11:11de::/128           [::]                       Un   0   3     0 wpan0
fdde:ad11:11de:0:9da6:e2f:2468:751e/128 [::]                       Un   0   4     0 wpan0
fe80::/128                     [::]                       Un   0   4     0 eth0
fe80::/128                     [::]                       Un   0   3     0 nat64
fe80::/128                     [::]                       Un   0   3     0 wpan0
fe80::2824:f95:92ee:fc1/128    [::]                       Un   0   3     0 wpan0
fe80::288f:7c7f:21aa:d46a/128  [::]                       Un   0   2     0 wpan0
fe80::422e:71ff:fed9:1a04/128  [::]                       Un   0   4     0 eth0
fe80::ded3:5c9f:a743:40bb/128  [::]                       Un   0   2     0 nat64
ff00::/8                       [::]                       U    256 4     0 eth0
ff00::/8                       [::]                       U    256 1     0 nat64
ff00::/8                       [::]                       U    256 1     0 wpan0
[::]/0                         [::]                       !n   -1  1     0 lo

Stuart Longland

Nov 1, 2021, 8:38:50 PM
to Michał Poterek, openthread-users
On Sun, 31 Oct 2021 19:08:51 -0700 (PDT)
Michał Poterek <superv...@gmail.com> wrote:

> I cannot get ping to ipv6 response on my beaglebone and end devices. IPV4
> is working fine
>
> PING 64:ff9b::808:808(64:ff9b::808:808) 56 data bytes
> From fdde:ad11:11de:0:9da6:e2f:2468:751e: icmp_seq=1 Destination
> unreachable: Address unreachable
> From fdde:ad11:11de:0:9da6:e2f:2468:751e: icmp_seq=2 Destination
> unreachable: Address unreachable
>
> My routing table on beagle bone router
>
> Kernel IPv6 routing table
> Destination Next Hop Flag Met Ref Use
> If
> localhost/128 [::] U 256 2 0
> lo
> 64:ff9b::/96 [::] U 1 2 0
> eth0
> 64:ff9b::/96 [::] U 256 2 0
> wpan0
> 64:ff9b::/96 [::] U 1024 1 0
> nat64

I'd be having a look at whether `tayga` is running properly, and/or
check your `iptables` firewall rules. Basically the NAT64 stuff is in
two parts:

`tayga` does "stateless" IPv6-to-IPv4 NAT: when a request for the
64:ff9b::/96 subnet is received by the kernel, it gets routed to a
`tun` device managed by `tayga`, and `tayga` basically "maps" the IPv6
source address to an unused IPv4 address in some configured address
space.

From there, the now-IPv4 packet gets passed back to the kernel, where
IP masquerade (statefully) NATs the outgoing request so that the IPv4
reply can be routed back to the BeagleBone's egress interface, back
through `tayga`, and eventually back to the node on the mesh.

`iptables-save` might give you some clues, but I'd expect there to be
some rule that picks up the traffic leaving the egress interface from
`tayga`'s IPv4 NAT64 subnet so that it gets SNAT-ed or MASQUERADE-d.

e.g. on my RevolutionPi, I have in `/etc/tayga.conf`:
```
#
# Dynamic pool prefix. IPv6 hosts which send traffic through TAYGA (and do
# not correspond to a static map or an IPv4-translatable address in the NAT64
# prefix) will be assigned an IPv4 address from the dynamic pool. Dynamic
# maps are valid for 124 minutes after the last matching packet is seen.
#
# If no unassigned addresses remain in the dynamic pool (or no dynamic pool is
# configured), packets from unknown IPv6 hosts will be rejected with an ICMP
# unreachable error.
#
# Optional.
#
dynamic-pool 192.168.255.0/24
```

then in my firewall, I see this rule:
```
# Generated by xtables-save v1.8.2 on Tue Nov 2 01:37:55 2021
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 192.168.255.0/24 -j MASQUERADE # ← this one!
-A POSTROUTING -o eth0 -j MASQUERADE
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER -i docker0 -j RETURN
COMMIT
```
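A quick way to check for the equivalent rule on a BeagleBone is to grep the saved ruleset for a source-NAT rule covering tayga's dynamic pool (192.168.255.0/24 as in the `tayga.conf` excerpt above; substitute your own pool). The `has_pool_masq` helper is just my shorthand, not an iptables feature:

```shell
# has_pool_masq: read an iptables-save dump on stdin and report whether
# the given pool (a regex, argument 1) is covered by a MASQUERADE/SNAT
# rule on the POSTROUTING chain.
has_pool_masq() {
  grep -qE -- "-A POSTROUTING -s $1 .*-j (MASQUERADE|SNAT)"
}

# On the live system (needs root):
#   iptables-save -t nat | has_pool_masq '192\.168\.255\.0/24' && echo covered

# Against the example ruleset above, the rule is found:
printf '%s\n' '-A POSTROUTING -s 192.168.255.0/24 -j MASQUERADE' |
  has_pool_masq '192\.168\.255\.0/24' && echo "pool is masqueraded"
```

If the live check prints nothing, the rule is missing and can be added with `iptables -t nat -A POSTROUTING -s 192.168.255.0/24 -j MASQUERADE`.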
--
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
...it's backed up on a tape somewhere.

Vijay Baskar

Nov 11, 2021, 6:22:16 PM
to openthread-users
Hi Stuart,

I'm experiencing a similar issue. I have two CoAP servers: one for my end-use application (let's say on port 5683) and the other for DFU (let's say on port 5686).
Since Tayga doesn't allow access to private IPv4 addresses (which is where my backbone interface sits) using the well-known prefix (64:ff9b::/96), I had to use the other prefix (2001:db8:1:ffff::/96). With this prefix I'm able to reach my application on 5683, and my application is able to reach back the end node on Thread via CoAP.
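For reference, the mapping under either prefix is mechanical: the IPv4 destination sits in the last 32 bits of the /96 NAT64 prefix. A small sketch for computing the address an end node should use (the `ipv4_to_nat64` helper is my own illustration, not a tayga tool):

```shell
# ipv4_to_nat64: embed a dotted-quad IPv4 address in the low 32 bits of
# a /96 NAT64 prefix, the way tayga interprets 64:ff9b::/96 or
# 2001:db8:1:ffff::/96. POSIX sh: split the quad via IFS, then print the
# two 16-bit groups in hex.
ipv4_to_nat64() {
  prefix=$1
  oldIFS=$IFS; IFS=.
  set -- $2                     # $1..$4 are now the four octets
  IFS=$oldIFS
  printf '%s%x:%x\n' "$prefix" $(( $1 * 256 + $2 )) $(( $3 * 256 + $4 ))
}

ipv4_to_nat64 '64:ff9b::' 8.8.8.8             # prints 64:ff9b::808:808
ipv4_to_nat64 '2001:db8:1:ffff::' 192.0.2.1   # prints 2001:db8:1:ffff::c000:201
```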
But when the end node initiates CoAP requests to my second CoAP server running at 5686 (which listens for IPv4 traffic only), the request is received by the CoAP server, but the response comes from the NAT64 interface and doesn't get detected (?) by my end node.

Below is a tcpdump capture showing the request from the end node reaching my CoAP server (and the reply that follows):
14:23:15.961656 IP6 (hlim 64, next-header UDP (17) payload length: 38) fd11:22::caxx:c3xx:49xx:d2xx.5683 > 2001:db8:1:ffff::c0xx:1cxx.5686: [udp sum ok] UDP, length 30

14:23:15.963083 IP6 (hlim 62, next-header UDP (17) payload length: 19) fdaa:bb:1::1.5686 > fd11:22::caxx:c3xx:49xx:d2xx.5683: [udp sum ok] UDP, length 11

Please note that I've already added the prefix fd11:22::/64 to the default route of my RCP attached to the border router.

Any help is appreciated!

Kind regards,
Vijay

Stuart Longland

Nov 11, 2021, 10:55:35 PM
to Vijay Baskar, openthread-users
On Thu, 11 Nov 2021 15:22:16 -0800 (PST)
Vijay Baskar <vijaykart...@gmail.com> wrote:

> But, when the end node initiates CoAP requests to my second CoAP server
> running at 5686 (which listens to IPv4 traffic only), the request is
> received by the CoAP server, but the response comes from the NAT64
> interface and doesn't get detected (?) by my end node.
>
> Below is tcpdump showing request from end node reaching my CoAP server
> 14:23:15.961656 IP6 (hlim 64, next-header UDP (17) payload length: 38)
> fd11:22::caxx:c3xx:49xx:d2xx.5683 > 2001:db8:1:ffff::c0xx:1cxx.5686: [udp
> sum ok] UDP, length 30
>
> 14:23:15.963083 IP6 (hlim 62, next-header UDP (17) payload length: 19)
> fdaa:bb:1::1.5686 > fd11:22::caxx:c3xx:49xx:d2xx.5683 : [udp sum ok] UDP,
> length 11
>
> Please note that I've already added the prefix fd11:22::/64 to the default
> route of my rcp attached to the border router.

Strange, so `tayga` has correctly forwarded the request out, received
the reply, but then made a whoopsie on the source IP address in the
response.

Either that, or something is masquerading the IPv6 traffic as
`fdaa:bb:1::1` erroneously. To rule out the latter, check the IPv6
firewall settings: `ip6tables-save` should not report any `MASQUERADE`
or `SNAT` rules on the `POSTROUTING` chain.
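As a concrete sketch of that check (the `find_v6_snat` helper is just my shorthand; the chain and target names are standard ip6tables ones):

```shell
# find_v6_snat: read an ip6tables-save dump on stdin and print any rules
# that could rewrite the IPv6 source address. On a NAT64 border router
# the expectation is that it finds nothing.
find_v6_snat() {
  grep -E -- '-A POSTROUTING .*-j (MASQUERADE|SNAT)' ||
    echo "no IPv6 source NAT rules"
}

# Live check (needs root):
#   ip6tables-save -t nat | find_v6_snat

# Example: a clean ruleset produces no matches.
printf '%s\n' ':POSTROUTING ACCEPT [0:0]' | find_v6_snat
```

If that does print a MASQUERADE or SNAT rule, it would explain replies arriving from `fdaa:bb:1::1` instead of the translated address.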

Vijay Baskar

Nov 12, 2021, 3:53:28 PM
to openthread-users
Hi Stuart,

Thank you for your input. Let me check the iptables rules and get back if I still see this issue.

Kind regards,
Vijay