Safer networking?

Igor Bukanov

Oct 11, 2012, 5:19:22 AM10/11/12
to qubes...@googlegroups.com
One potential problem with an AppVM facing the Internet is that a
kernel bug in TCP handling may allow an attacker to compromise all the
VMs. To avoid that for a banking VM I did the following.

In the firewall VM I run tinyproxy and sshd, both configured to listen
only on localhost. For the banking VM I disabled all networking, so the
only network device in it is the loopback interface. Using qrexec
(https://wiki.qubes-os.org/trac/wiki/Qrexec) and the proxy support in
the OpenSSH client, I allowed ssh in the banking VM to connect to sshd
in the firewall VM using a setup similar to [1]. Then I used the ssh
client to set up TCP port forwarding from the banking VM to tinyproxy
in the firewall VM. After that I just pointed the browser in the
banking VM at the forwarded proxy port and could browse without any
problems.
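A rough sketch of the moving parts (a hypothetical illustration; the
service name, ports, and paths here are made up, not the exact
configuration used):

```shell
# firewall VM: tinyproxy and sshd bound to localhost only
# (e.g. "Listen 127.0.0.1" in tinyproxy.conf and
#  "ListenAddress 127.0.0.1" in sshd_config)

# banking VM: carry ssh over qrexec instead of TCP, and forward a
# local port to tinyproxy in the firewall VM; "qubes.Ssh" is an
# illustrative qrexec service name, set up along the lines of [1]
ssh -o ProxyCommand='qrexec-client-vm firewallvm qubes.Ssh' \
    -N -L 8118:127.0.0.1:8888 user@firewallvm

# then configure the browser to use http://127.0.0.1:8118/ as its proxy
```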

AFAICS such a setup should stop a kernel bug in TCP handling from
propagating. That is, to subvert the banking VM from a firewall VM
compromised via such a bug, one would need to find a bug in the ssh
client. Without such a bug in ssh, the only thing the subverted VM can
do to influence the kernel networking code in the banking VM is vary
the frequency and size of the data payloads traveling over the open
loopback socket connection. I presume the exposure here is orders of
magnitude less than that of the kernel code dealing with generic TCP
client connections. Is that a correct assumption?

And if so, perhaps Qubes should have an option for TCP port forwarding
between VMs on its own, bypassing the kernel networking stack with the
simplest possible code, which would be easier to audit than the whole
ssh client source?

[1] https://groups.google.com/forum/#!topic/qubes-devel/RFH1Vx99jsg/discussion
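The "simplest possible code" suggested above could be sketched roughly
as follows (a hypothetical illustration, not existing Qubes code): a
qrexec service that shuttles bytes between its stdin/stdout (standing
in for the inter-VM channel) and a single TCP connection, one channel
per connection.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of qrexec-based TCP forwarding: relay bytes
between stdin/stdout (the inter-VM channel) and one TCP connection."""
import socket
import sys
import threading


def forward(host, port):
    """Relay stdin -> (host, port) and the socket's replies -> stdout."""
    sock = socket.create_connection((host, port))

    def upstream():
        # Copy the qrexec channel (stdin) into the TCP connection.
        while True:
            data = sys.stdin.buffer.read1(65536)
            if not data:
                break
            sock.sendall(data)
        sock.shutdown(socket.SHUT_WR)  # signal EOF to the peer

    threading.Thread(target=upstream, daemon=True).start()

    # Copy the TCP connection back into the qrexec channel (stdout).
    while True:
        data = sock.recv(65536)
        if not data:
            break
        sys.stdout.buffer.write(data)
        sys.stdout.buffer.flush()
    sock.close()


if __name__ == "__main__" and len(sys.argv) == 3:
    forward(sys.argv[1], int(sys.argv[2]))
```

The point of the sketch is how little code sits on the trust boundary:
no TCP/IP parsing at all on the receiving side, just a byte copy loop.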

Joanna Rutkowska

Oct 11, 2012, 7:03:04 AM10/11/12
to qubes...@googlegroups.com, Igor Bukanov
On 10/11/12 11:19, Igor Bukanov wrote:
> One of potential problems with AppVM facing the Internet is that a
> kernel bug in TCP may allow to compromise all the VMs. To avoid that
> for a banking VM I did the following.
>

This statement is only true for "all the VMs" that share the same
FirewallVM (or NetVM) from which we start the attack. This doesn't need
to be true in general (we can have e.g. two different NetVMs -- one
with the WiFi card assigned and the other with the 3G modem assigned --
or we can have completely network-disconnected VMs).

> In the firewall VM I run tinyproxy and sshd configured to listen only
> to localhost. For banking VM I disabled all the networking so the only
> networking device in it is the loopback interface. Using the
> https://wiki.qubes-os.org/trac/wiki/Qrexec and proxy support in the
> openssh client I allowed the ssh from the banking VM to connect to
> sshd in the firewall VM using a setup similar to [1]. Then I used the
> ssh client to setup TCP port forwarding from the banking VM to
> tinyproxy in the firewall VM. After that I just pointed the browser in
> the banking VM to the the forwarded proxy port and could browse
> without any problems.
>
> AFAICS such setup should stop a kernel bug in TCP from propagating.
> That is, to subvert the banking VM from the firewall VM that was
> compromised using that bug one would need to find a bug in ssh client.
> Without such bug in ssh the only thing that the subverted VM can do
> that influences kernel networking code in the banking VM is frequency
> and size of data payload that traveled over opened loopback socket
> connection in the banking VM. I presume the exposure here should be
> magnitudes less that that of code in the kernel dealing with generic
> TCP client connections. Is it a correct assumption?
>

Yeah, I think so.

> And if so perhaps Qubus should have an option of TCP port-forwarding
> between VMs on its own that bypasses the kernel networking stack using
> the simplest possible code that is easier to check than the whole ssh
> client source?
>
> [1] https://groups.google.com/forum/#!topic/qubes-devel/RFH1Vx99jsg/discussion
>

This is an interesting setup :) While one could argue that the attack
surface of the ssh client + qrexec might be comparable to that of the
Linux TCP/IP stack, I think it's still a win, because in the worst case
the attacker needs two different exploits to conduct the
AppVM1 -> FirewallVM -> AppVM2 attack.

But of course it's not very user friendly ;) I've been thinking for a
long time that it might make a lot of sense to base the service VMs on
some non-Linux kernel, such as e.g. FreeBSD. In this case we would
achieve a similar advantage: two different exploits would be needed
(one against the FreeBSD TCP/IP stack and the other against the Linux
TCP/IP stack). The difference is that it would just work seamlessly,
without any need for extra configuration... Of course the "only" catch
is that somebody would need to create a *BSD-based template (which
might be cool anyway) :)

Nevertheless, I think we would be open to accepting a patch for such
qrexec-based TCP forwarding from you. I think it should be implemented
in a similar way to the tor service we discussed with Abel recently.

joanna.


Igor Bukanov

Oct 11, 2012, 9:11:39 AM10/11/12
to Joanna Rutkowska, qubes...@googlegroups.com
On 11 October 2012 13:03, Joanna Rutkowska
<joa...@invisiblethingslab.com> wrote:
> Nevertheless, I think we would be open to accept a patch for such
> qrexec-based-tcp-forwarding from you. I think this should be implemented
> in a similar way as the tor service we discussed with Abel recently.

Is Qubes RPC resource-heavy? I.e., is it OK to open hundreds of
parallel channels between VMs? If so, I can avoid the whole issue of
multiplexing multiple TCP streams into a single channel and just open a
new channel per connection.

abb

Oct 11, 2012, 10:19:14 AM10/11/12
to qubes...@googlegroups.com
OpenSSH has a couple of features which might make your life a bit
easier: 1) the client has a built-in SOCKS server (the -D option), and
2) the server can run in inetd mode. So it might be possible to use the
SSH client and server processes themselves as the RPC client and server
for qrexec. You will need a wrapper script to execute the client and
(if you want to run sshd as non-root) a custom config/keys for sshd. My
5 cents, J.
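In concrete terms, the two features could combine like this (a
hypothetical sketch; the flags are standard OpenSSH, but the qrexec
service name and config path are made up):

```shell
# firewall VM side, started by qrexec: sshd in inetd mode (-i) speaks
# the protocol on stdin/stdout, so no listening TCP socket is needed
/usr/sbin/sshd -i -f /home/user/.ssh/sshd_config_qrexec

# banking VM side: the client's built-in SOCKS server on
# localhost:1080, transported over qrexec instead of TCP
ssh -o ProxyCommand='qrexec-client-vm firewallvm qubes.Ssh' \
    -N -D 1080 user@firewallvm
```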

Igor Bukanov

Oct 11, 2012, 10:52:55 AM10/11/12
to qubes...@googlegroups.com
On 11 October 2012 16:19, abb <a...@gremwell.com> wrote:
> OpenSSH has couple features which might make your life easier a bit: 1) the
> client has built-in SOCKS server (-D option) and 2) the server can run in
> inetd mode.So it might be possible to use SSH client and server processes
> themselves as RPC client and server for qrexec. You will need a wrapper
> script to execute the client and (if you want to run sshd as non-root)
> custom config/keys for sshd. My 5 cents J.

I tried that. However, for the banking VM I need a proxy where I can
whitelist the banks' hostnames, to protect myself from the convenience
of wandering off to some random site after I am done paying the bills.
Unfortunately, OpenSSH has no support for such filtering, so I ended up
running tinyproxy.
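For reference, tinyproxy can be put into whitelist mode with its filter
file (an illustrative excerpt; the paths and the bank domain are
placeholders, not the actual configuration):

```
# /etc/tinyproxy/tinyproxy.conf (excerpt)
Listen 127.0.0.1
Port 8888
Filter "/etc/tinyproxy/filter"
FilterDefaultDeny Yes      # deny everything the filter does not match

# /etc/tinyproxy/filter -- one regular expression per line
^www\.mybank\.example$
```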

abb

Oct 11, 2012, 10:58:58 AM10/11/12
to qubes...@googlegroups.com
Cool. Do you happen to have some scripts/configs left over from this
bare-ssh setup?

Igor Bukanov

Oct 12, 2012, 1:48:39 PM10/12/12
to qubes...@googlegroups.com
On 11 October 2012 16:58, abb <a...@gremwell.com> wrote:
> Cool. Do you happened to have some scripts/configs left for this bare-ssh
> setup?

See https://groups.google.com/forum/#!topic/qubes-devel/RFH1Vx99jsg/discussion
. I initially used that setup for my work VM, to connect via ssh to the
company's server and browse the intranet using the SOCKS option in ssh.
But then I realized that I can just connect to the sshd running on
localhost in the firewall VM.

Note that I have not tried running sshd in inetd mode, as it may cause
a big delay during the initial connection setup.

David Shafirov

Oct 12, 2012, 5:18:43 PM10/12/12
to qubes...@googlegroups.com

On Thursday, October 11, 2012 11:03:12 AM UTC, joanna wrote:

> I've been wondering for a long time that it would perhaps make a
> lot of sense to have the service VMs based on some non-Linux
> kernel, such as e.g. FreeBSD. In this case we would achieve a
> similar advantage: two different exploits would be needed (one
> against FreeBSD TCP/IP stack and the other against the Linux
> TCP/IP stack). The difference is that it would just work seamlessly
> without any need for extra configuration... Of course the "only"
> catch is that somebody would need to create a *BSD-based
> template (which might be cool anyways) :)

I've been using pfSense VMs as VPN clients with Linux VMs, for
just that reason. What would be involved in creating a Qubes
template based on pfSense?

Hakisho Nukama

Oct 12, 2012, 7:11:17 PM10/12/12
to qubes...@googlegroups.com

I think that all the programs required by Qubes OS to run on the
VM side should be ported to FreeBSD and then included in a
pfSense build.

Which programs would that be? They could be wrapped inside
the FreeBSD ports system with help from the mailing list. [1]

FreeBSD AMD64 supports only HVM domU for now, while
i386 supports both HVM and PV domU. [2]
Questions about Xen can be asked on the mailing list. [3]

Another problem could be the COW overlay.
Maybe it is doable with UnionFS. [4]
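On the FreeBSD side, the COW overlay might be approximated with
unionfs along these lines (an untested sketch; the directory layout is
made up):

```
# layer a per-VM writable directory above the read-only template root
# (mount_unionfs attaches the first directory above the second, see [4])
mount -t unionfs /rw/overlay /mnt/template-root
```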

Best Regards,
Hakisho Nukama

[1] http://lists.freebsd.org/mailman/listinfo/freebsd-ports
[2] https://wiki.freebsd.org/FreeBSD/Xen
[3] http://lists.freebsd.org/mailman/listinfo/freebsd-xen
[4] http://man.freebsd.org/mount_unionfs

abb

Oct 12, 2012, 10:25:36 PM10/12/12
to qubes...@googlegroups.com


On Friday, October 12, 2012 11:18:52 PM UTC+2, David Shafirov wrote:

> I've been using pfSense VMs as VPN clients with Linux VMs, for
> just that reason. What would be involved in creating a Qubes
> template based on pfSense?

Nice idea.
 

Outback Dingo

Oct 12, 2012, 11:20:19 PM10/12/12
to qubes...@googlegroups.com

You would be better served using openvswitch or the open-source, free
Vyatta as the firewall/router VM. While being a BSD advocate and a
previous pfSense developer/user, I think that for the purposes of Qubes
pfSense would be bloated and not very easily integrated, as BSD is
somewhat lacking in the Xen space, though it's making progress. I do
actually run FreeBSD VMs on Xen/XCP and have for 2+ years. That being
said, there is quite a lot of overhead in pfSense for a VM, plus there
are the "added" applications and the GUI. It could be possible to strip
out a lot of the unneeded things, but that's work in and of itself.

Marek Marczykowski

Oct 17, 2012, 7:14:55 PM10/17/12
to qubes...@googlegroups.com, Igor Bukanov, Joanna Rutkowska
Qubes RPC already multiplexes streams into a single per-VM "pipe".
There is some limit on the maximum number of concurrent channels; on a
Linux VM it is 256. Each connection needs at least three processes (one
in the source VM, one in dom0, and one in the destination VM).

--
Best Regards / Pozdrawiam,
Marek Marczykowski
Invisible Things Lab
