ipv6 for internal network in 4.x?


pixel fairy

May 27, 2017, 12:07:43 AM
to qubes-devel
Since Qubes needs to adopt IPv6 eventually anyway, can we make the internal network v6?

V6 NAT works the same way as v4, but you would have to alert Qubes when there is no external v6 route. The same is true when there is no external v4 route, so it's a problem that needs to be solved anyway.
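To illustrate the "same as v4" point, a minimal NAT66 masquerade in nftables might look like this (purely a sketch; the interface name is hypothetical, not Qubes' actual setup):

```
# hypothetical NAT66 sketch; eth0 is an assumed external interface
table ip6 nat {
    chain postrouting {
        type nat hook postrouting priority 100; policy accept;
        # masquerade internal v6 traffic leaving via eth0,
        # just as one would for v4
        oifname "eth0" masquerade
    }
}
```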

One of the motivations is how easily the internal addresses can conflict with an existing v4 10.x network. And, of course, I'd like to finally have IPv6!

Peter Todd

May 27, 2017, 12:32:05 PM
to pixel fairy, qubes-devel
Are you suggesting that VMs no longer have internal IPv4 addresses? You mean
via the IPv4-in-IPv6 address range or something else?

--
https://petertodd.org 'peter'[:-1]@petertodd.org

pixel fairy

May 28, 2017, 8:46:23 AM
to qubes-devel, pixel...@gmail.com


Are you suggesting that VMs no longer have internal IPv4 addresses? You mean
via the IPv4-in-IPv6 address range or something else?

I was thinking dual stack, with NAT for both 4 and 6. My first thought was using v6 addresses to address the VMs internally, but that seems to be mostly done through vchan. Proxy, firewall, and network VMs would need to support both anyway.

The only other way I've tried was NAT64, and I remember hitting a problem with TLS verification, though my setup could have been wrong. I tried googling "nat64 ssl" and "nat64 tls" and can't find anything on it.

Peter Todd

May 29, 2017, 10:46:01 AM
to pixel fairy, qubes-devel
Right, with NAT64 you're requiring the VMs and the software in them to use
IPv6 addresses, which get translated to IPv4. That's inevitably going to have
compatibility issues, as NAT64 just isn't very common, and there's plenty of
software around that can only talk IPv4. I think a dual-stack arrangement is
much preferable, even if both IPv4 and IPv6 end up having to use NAT.

It's notable how the relative rarity of IPv6 NAT may be a problem: the IPv6
infrastructure wasn't designed with clients running multiple VMs at a time in
mind.

Patrik Hagara

May 29, 2017, 12:57:08 PM
to qubes-devel, pixel fairy, Peter Todd
I'd like to mention the relative complexity of the IPv6 specification
(and by extension, its implementations) as a reason against this
proposed change. For example, take a look at this list of CVEs
related to IPv6 [0]. Please also note that writing firewall rules
for IPv6 can be quite challenging at times.

Second, IPv6 was, in fact, designed with clients running multiple VMs
at a time in mind -- you're just supposed to delegate v6 addresses
from a /64 (or bigger) IPv6 prefix and not use a NAT mechanism.
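As a sketch of that no-NAT approach: a router advertisement daemon such as radvd can announce a delegated prefix to downstream clients, which then autoconfigure addresses via SLAAC. The interface name and prefix below are illustrative only (2001:db8::/32 is the documentation range):

```
# /etc/radvd.conf -- illustrative sketch, not a Qubes config
interface eth1
{
    AdvSendAdvert on;
    prefix 2001:db8:1::/64
    {
        AdvOnLink on;
        AdvAutonomous on;   # clients autoconfigure via SLAAC
    };
};
```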

While I do accept the fact that IPv6 support is necessary, I don't
think the existing v6 network stack implementations are quite as
mature as the v4 ones (which have undergone extensive testing "in
production" over the last few decades) -- especially not mature
enough for use in a security-oriented OS.

Should you find yourself in an environment with only v6 connectivity,
having an IPv6 stack available **only** in the untrusted net VM will
definitely come in handy, but IMO all the VMs downstream should be
using v4 (either via 4in6 [1] or similar transition mechanism).
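For reference, Linux can carry v4 traffic over a v6-only path with an IPv4-in-IPv6 (ipip6) tunnel, roughly like this. All addresses here are placeholders, not actual Qubes addressing:

```
# sketch: 4in6 tunnel between two v6 endpoints (addresses are placeholders)
ip -6 tunnel add tun4in6 mode ipip6 \
    local 2001:db8::1 remote 2001:db8::2
ip link set tun4in6 up
ip addr add 10.137.0.1/24 dev tun4in6   # v4 carried inside v6
```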


Cheers,
Patrik



[0] https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=ipv6
[1] https://en.wikipedia.org/wiki/4in6

pixel fairy

May 29, 2017, 6:39:33 PM
to qubes-devel, pixel...@gmail.com, pe...@petertodd.org

On Monday, May 29, 2017 at 9:57:08 AM UTC-7, Patrik Hagara wrote:

I'd like to mention the relative complexity of the IPv6 specification
(and by extension, its implementations) as a reason against this
proposed change. For example, take a look at this list of CVEs
related to IPv6 [0]. Please also note that writing firewall rules
for IPv6 can be quite challenging at times.

Against which proposed change? One is standard dual stack with NAT66, the other NAT64, which, as I already mentioned, wouldn't work for us as it breaks some protocols. iptables is being replaced with nftables, which can apply the same rules to both families, so I don't think there would be much added challenge there, though there are more pitfalls.
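To illustrate the nftables point: the inet family lets a single ruleset cover both v4 and v6. The chain and rules below are a hypothetical sketch, not Qubes' actual firewall:

```
# one table, both address families; sketch only
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept
        # this single rule matches both IPv4 and IPv6 TCP traffic
        tcp dport { 80, 443 } accept
    }
}
```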

Second, IPv6 was, in fact, designed with clients running multiple VMs
at a time in mind -- you're just supposed to delegate v6 addresses
from a /64 (or bigger) IPv6 prefix and not use a NAT mechanism.

Wasn't RFC 4389 (neighbor discovery proxying) supposed to address that? In our
case, we want to hide what's going on behind the NetVM, but having this, or
just a binat, would be good for a VM that we want to appear completely outside it.
 
While I do accept the fact that IPv6 support is necessary, I don't
think the existing v6 network stack implementations are quite as
mature as the v4 ones (which have undergone extensive testing "in
production" over the last few decades) -- especially not mature
enough for use in a security-oriented OS.

Should you find yourself in an environment with only v6 connectivity,
having IPv6 stack available **only** in the untrusted net VM will
definitely come in handy, but IMO all the VMs downstream should be
using v4 (either via 4in6 [1] or similar transition mechanism).

I like this idea.

We could also enable and NAT v6 only in the VMs that need it, but this would add attack surface to the FirewallVM.
Something like 4in6 from an AppVM to the NetVM would mean added firewall rules in the NetVM, increasing its complexity.
A separate v6-enabled FirewallVM would be additional overhead, but maybe not enough to matter.
 