nested Qubes (Qubes within Qubes) can work - proof of concept

Eric Shelton

Aug 22, 2015, 1:33:05 AM
to qubes-devel
Basically, if libvirt can be updated to support nested HVM, the steps below will probably work with nothing more than a custom config file.

Inspired by the recent success with a graphics card passthrough that employed directly calling 'xl create', I thought I would try out a few things.  It turns out that by enabling nested HVM, it is possible to successfully run Qubes R3 rc2 within itself, including with networking.

Step 1: Create a domain using Qubes VM Manager.  In this example, I called it 'qtest'.

Step 2: Tweak install-qubesr3.cfg to match your particular paths, including to the Qubes ISO.
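
For reference, here is a rough sketch of the kind of xl config file this step refers to.  The actual install-qubesr3.cfg attachment is not reproduced in this post, so every name, path, and size below is a placeholder rather than the real contents:

  # sketch only - adjust the name, paths, and sizes to your setup
  builder = "hvm"
  name = "qtest"
  memory = 4096
  vcpus = 2
  hap = 1
  nestedhvm = 1      # the option this whole exercise depends on
  viridian = 0
  disk = [ 'format=raw, vdev=hda, access=rw, target=/var/lib/qubes/appvms/qtest/root.img',
           'format=raw, vdev=hdc, access=ro, devtype=cdrom, target=/home/user/Qubes-R3.0-rc2-x86_64-DVD.iso' ]
  vif = [ '' ]
  boot = "dc"
  sdl = 1
  device_model_version = "qemu-xen"   # upstream qemu, as noted in step 3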

Step 3: Keep running 'sudo xl mem-set dom0 2200' and 'sudo xl -vvv create ./install-qubesr3.cfg' until qemu starts up successfully (it doesn't seem to play nicely with memory ballooning).  This will bring up an SDL window - you will be running qemu upstream.
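
If you want to script that retry loop, something like this (an untested sketch) captures the idea:

  while true; do
    sudo xl mem-set dom0 2200
    sudo xl -vvv create ./install-qubesr3.cfg && break
    sleep 2   # give memory a moment to settle, then try again
  done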

Step 4: Run the Qubes installer.  Automatic disk configuration will not work, you will have to manually create /boot and / partitions.  Standard partitions worked for me.

Step 5: After the first shutdown after the install, do step 3 again.  Have it create service and default domains.  At the end, you will get an error message.  Just close the window, click the Finish button, and log in.  Only dom0 will come up, sys-net still needs a little reconfiguration to work.

Step 6: Fixing things up:
  qvm-prefs -s sys-net pcidevs "['00:04.0']"   (might already be set to this)
  qvm-prefs -s sys-net pci_strictreset False     (this is what caused the error message)
  edit /boot/grub2/grub.cfg - set the Linux kernel command line to include 'modprobe=xen-pciback.passthrough=1 xen-pciback.hide=(00:04.0)'

At this point, you could reboot and run step 3 again.  sys-net will even start up now, and you can set the network adapter for manual configuration.  However, networking will not work, because qemu upstream runs in dom0 and can't get through to the real sys-firewall.  So, now it's time to use qemu traditional in a stub domain.

Step 7: Tweak run-qubesr3.cfg to match your particular paths.

Step 8: Keep running 'sudo xl mem-set dom0 2200' and 'sudo xl -vvv create ./run-qubesr3.cfg' until qemu starts up successfully.

Problem: Qubes has replaced the display pipeline for its secure display setup, but the domain was started outside of the Qubes framework.
Solution: _right_ after step 8, run 'xl list' to get the domain IDs for the HVM domain (n) and its stub domain (n+1).  Then run these:
  sudo /usr/sbin/qubesdb-daemon <HVM domain ID> qtest
  sudo /usr/bin/qubes-guid -d <stub domain ID> -t <HVM domain ID> -N qtest -c 0x73d216 -i /usr/share/icons/hicolor/128x128/devices/appvm-green.png -l <HVM domain ID> -q
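
If you would rather not read the IDs off 'xl list' by hand, something like this should also work (a sketch; it assumes the stub domain is named 'qtest-dm', following the usual '<name>-dm' convention, and you then substitute the variables into the two commands above):
  HVM_ID=$(sudo xl domid qtest)
  STUB_ID=$(sudo xl domid qtest-dm)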

New problem: this clobbers the window manager.  However, you can see Qubes start up in its window and interact with it.  Now networking works just fine (manually configure networking if you did not before).  A rough proof of concept for running nested Qubes.

As I mentioned at the beginning, if libvirt can be updated to support the nestedhvm feature used in the attached xl config files, all of the nasty mucking about with 'xl create' can go away.  Steps 5 and 6 will still be necessary (although possibly only the strict reset part of Step 6 is required) to deal with the reset issue.

Once nested HVM support is in place, hopefully the Qubes devs will find some benefit in a Qubes-within-Qubes setup, and this will become something more than a stupid VM trick.  Qubes starts up pretty fast in a VM, particularly the second time around.

Best,
Eric
install-qubesr3.cfg
run-qubesr3.cfg

Jeremias E.

Aug 22, 2015, 6:05:26 AM
to qubes-devel
Hello,

nice work :-)

Best regards
  J. Eppler

Marek Marczykowski-Górecki

Aug 25, 2015, 10:58:05 PM
to Eric Shelton, qubes-devel

On Fri, Aug 21, 2015 at 10:33:05PM -0700, Eric Shelton wrote:
> Basically, if libvirt can be updated to support nested HVM, the below will
> probably work by merely using a custom config file.
>
> Inspired by the recent success with a graphics card passthrough that
> employed directly calling 'xl create', I thought I would try out a few
> things. It turns out that by enabling nested HVM, it is possible to
> successfully run Qubes R3 rc2 within itself, including with networking.

Nice work! :)

> Step 1: Create a domain using Qubes VM Manager. In this example, I called
> it 'qtest'.
>
> Step 2: Tweak install-qubesr3.cfg to match your particular paths, including
> to the Qubes ISO.
>
> Step 3: Keep running 'sudo xl mem-set dom0 2200' and 'sudo xl -vvv create
> ./install-qubesr3.cfg' until qemu starts up successfully (it doesn't seem
> to know how to play nicely with memory ballooning).

You can touch /var/run/qubes/do-not-membalance just for starting the VM
- while this file is present qmemman will not redistribute free memory.
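
For example (an untested sketch based on the description above):
  sudo touch /var/run/qubes/do-not-membalance
  sudo xl mem-set dom0 2200
  sudo xl -vvv create ./install-qubesr3.cfg
  sudo rm /var/run/qubes/do-not-membalance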

> This will bring up an
> SDL window - you will be running qubes upstream.

Why can't the target qemu ("traditional", in stubdomain) be used for
installation? Some bugs? Missing features?

> Step 4: Run the Qubes installer. Automatic disk configuration will not
> work, you will have to manually create /boot and / partitions. Standard
> partitions worked for me.
>
> Step 5: After the first shutdown after the install, do step 3 again. Have
> it create service and default domains. At the end, you will get an error
> message. Just close the window, click the Finish button, and log in. Only
> dom0 will come up, sys-net still needs a little reconfiguration to work.
>
> Step 6: Fixing things up:
> qvm-prefs -s sys-net pcidevs "['00:04.0']" (might already be set to
> this)
> qvm-prefs -s sys-net pci_strictreset False (this is what caused the
> error message)
> edit /boot/grub2/grub.cfg - set the Linux kernel command line to include 'modprobe=xen-pciback.passthrough=1
> xen-pciback.hide=(00:04.0)'

I don't think xen-pciback.hide is needed - it should be automatically
generated by initramfs scripts (00:04.0 is network adapter, right?).

> At this point, you could reboot and run step 3 again. sys-net will even
> start up now, and you can set the network adapter for manual configuration.
> However, networking will not work, because qemu upstream runs in dom0, and
> can't get through to the real sys-firewall. So, not it's time to use qemu
> traditional in a stub domain.
>
> Step 7:Tweak run-qubesr3.cfg to match your particular paths.
>
> Step 8: Keep running 'sudo xl mem-set dom0 2200' and 'sudo xl -vvv create
> ./run-qubesr3.cfg' until qemu starts up successfully.
>
> Problem: Qubes has replaced the display pipeline for its secure display
> setup, but the domain was started outside of the Qubes framework.
> Solution: _right_ after step 8, run 'xl list' to get the domain IDs for the
> HVM domain (n) and its stub domain (n+1). Then run these:
> sudo /usr/sbin/qubesdb-daemon <HVM domain ID> qtest
> sudo /usr/bin/qubes-guid -d <stub domain ID> -t <HVM domain ID> -N qtest
> -c 0x73d216 -i /usr/share/icons/hicolor/128x128/devices/appvm-green.png -l
> <HVM domain ID> -q

I think sudo is not needed here.

> New problem: this clobbers the window manager. However, you can see Qubes
> start up in its window and interact with it. Now networking works just
> fine (manually configure networking if you did not before). A rough proof
> of concept for running nested Qubes.
>
> As I mentioned at the beginning, if libvirt can be updated to support the
> nestedhvm feature used in the attached xl config files, all of the nasty
> mucking about with 'xl create' can go away. Steps 5 and 6 will still be
> necessary (although possibly only the strict reset part of Setp 6 is
> required) to deal with the reset issue.
>
> Once nested HVM support is in place, hopefully Qubes devs will find some
> benefit of a Qubes within Qubes setup, and this becomes something more than
> a stupid VM trick. Qubes starts up pretty fast in a VM, particularly the
> second time around.
>
> Best,
> Eric
>




--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

Eric Shelton

Aug 25, 2015, 11:30:22 PM
to qubes-devel, knock...@gmail.com
On Tuesday, August 25, 2015 at 10:58:05 PM UTC-4, Marek Marczykowski-Górecki wrote:
> > This will bring up an
> > SDL window - you will be running qubes upstream.
>
> Why can't the target qemu ("traditional", in stubdomain) be used for
> installation? Some bugs? Missing features?

qemu upstream was the only environment where I could have both nested HVM and a workable GUI.  I suppose nested HVM might not be needed for the initial install, but my guess is that it would have been a problem in Step 5 when Qubes actually started up.

Also, this came out of experimenting with some other ideas.  Along the way, I realized that nested Qubes would be possible.  However, this was more of a proof of concept - demonstrating that the underlying pieces were there - and ended up being pretty hacky, without any real effort at polishing things up.
 
> > Step 5: After the first shutdown after the install, do step 3 again.  Have
> > it create service and default domains.  At the end, you will get an error
> > message.  Just close the window, click the Finish button, and log in.  Only
> > dom0 will come up, sys-net still needs a little reconfiguration to work.
> >
> > Step 6: Fixing things up:
> >   qvm-prefs -s sys-net pcidevs "['00:04.0']"   (might already be set to
> > this)
> >   qvm-prefs -s sys-net pci_strictreset False     (this is what caused the
> > error message)
> >   edit /boot/grub2/grub.cfg - set the Linux kernel command line to include 'modprobe=xen-pciback.passthrough=1
> > xen-pciback.hide=(00:04.0)'
>
> I don't think xen-pciback.hide is needed - it should be automatically
> generated by initramfs scripts (00:04.0 is network adapter, right?).

I will try that out.

Also, I think I have sorted out how to add nested HVM support to libvirt (and along the way, determined that libvirt's libxl driver does not implement the viridian config option).  If so, this all will become greatly simplified.  I will post shortly with any patches and my results.

Eric

Eric Shelton

Aug 26, 2015, 9:58:20 AM
to qubes-devel, knock...@gmail.com
On Tuesday, August 25, 2015 at 11:30:22 PM UTC-4, Eric Shelton wrote:
> > > Step 6: Fixing things up:
> > >   qvm-prefs -s sys-net pcidevs "['00:04.0']"   (might already be set to
> > > this)
> > >   qvm-prefs -s sys-net pci_strictreset False     (this is what caused the
> > > error message)
> > >   edit /boot/grub2/grub.cfg - set the Linux kernel command line to include 'modprobe=xen-pciback.passthrough=1
> > > xen-pciback.hide=(00:04.0)'
> >
> > I don't think xen-pciback.hide is needed - it should be automatically
> > generated by initramfs scripts (00:04.0 is network adapter, right?).
>
> I will try that out.

As you said, xen-pciback.hide is not needed.

 
> Also, I think I have sorted out how to add nested HVM support to libvirt (and along the way, determined that libvirt's libxl driver does not implement the viridian config option).  If so, this all will become greatly simplified.  I will post shortly with any patches and my results.

 I posted the patch for libvirt to its own thread at https://groups.google.com/forum/#!topic/qubes-devel/UzO0BsIfIow

So, now the process looks something like this:

Step 1: Rebuild libvirt with the patch.  It's pretty easy to do in its own AppVM - make sure it has plenty of disk space - maybe 40-60 GB.  Then run 'make qubes-sources', copy the patch into the qubes-src/core-libvirt/patches.qubes directory, and add the filename to the series.conf or such file in core-libvirt.  Then run 'make vmm-xen core-libvirt'.  When the core-libvirt part is done, you will have some RPM files.  You only need 11 of them; run 'sudo yum search libvirt' in dom0 to see what is needed.  Copy those to dom0, and install them with 'sudo yum reinstall *' or such.  Reboot.
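
As a very rough sketch of what that might look like - the patch file name here is hypothetical, and the exact builder targets and yum invocation may differ:
  # in the build VM, inside a qubes-builder checkout
  make qubes-sources
  cp ~/nested-hvm-libvirt.patch qubes-src/core-libvirt/patches.qubes/
  # add 'nested-hvm-libvirt.patch' to the patch series file in core-libvirt
  make vmm-xen core-libvirt
  # then, in dom0, after copying over the ~11 libvirt RPMs:
  sudo yum search libvirt          # check which libvirt packages are installed
  sudo yum reinstall ./libvirt*.rpm
  sudo reboot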

Sorry, I don't have time to set out every detail of step 1 - you'll have to work it out.  Look for the page that tells how to build a Qubes R3 ISO.

Step 2: Create a new appvm for Qubes.  Make sure you give it enough memory (2 GB, at a bare minimum, I imagine).

Step 3: Run 'qvm-start <qubes-vm-name> --cdrom=/home/xxx/qubes.iso' (insert your Qubes ISO path).  Immediately kill the VM after it starts - we just want to make sure the correct config file was generated.  Go to /var/lib/qubes/appvms/<qubes-vm-name> and make a copy of <qubes-vm-name>.conf.
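
Concretely, that might look like this (a sketch - 'qubes-nested' and the ISO path are placeholders):
  qvm-start qubes-nested --cdrom=/home/user/qubes.iso
  qvm-kill qubes-nested        # we only wanted the generated config file
  cd /var/lib/qubes/appvms/qubes-nested
  cp qubes-nested.conf edited.conf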

Step 4: Edit your copied .conf file, and add the following two lines to the <features> section:

    <hap/>
    <nestedhvm/>
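
For context, after the edit the <features> element might end up looking roughly like this (the sibling elements shown are only illustrative; the two lines above are the actual addition):

    <features>
      <acpi/>
      <apic/>
      <pae/>
      <hap/>
      <nestedhvm/>
    </features>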

Step 5: Run 'qvm-start <qubes-vm-name> --cdrom=/home/xxx/qubes.iso --custom-config=edited.conf'.  Now install Qubes.  You will have to partition the drive manually (the Qubes installer does not like something about the virtual HD); for example, 500 MB for /boot and the remaining space for /, both as standard (not LVM) partitions, worked for me.

Step 6: Edit your copied .conf file.  Remove the section for the last virtual drive (the Qubes ISO).  Run 'qvm-start <qubes-vm-name> --custom-config=edited.conf'.  Run the second stage of the install.  Create the default and system VMs.  You will get an error message - this is related to libvirt not being able to do a PCI reset of the emulated PCI network adapter.  Just close the error window and log in.  Then bring up a dom0 console and run:

qvm-prefs -s sys-net pcidevs "['00:04.0']"   (could be unnecessary - probably already set to this)
qvm-prefs -s sys-net pci_strictreset False     (this is what caused the error message)

Step 7: Reboot.  Should now all be working!  Nested Qubes.

Known issues:
- Start menu does not have any shortcuts for any of the domains, nor does the GUI let you add any; probably because of the sys-net PCI reset error.  Workaround: run 'qvm-run <vm-name> gnome-terminal' to get a command line for a VM and run apps from there.

Best of luck,
Eric

Eric Shelton

Aug 27, 2015, 9:08:20 AM
to qubes-devel, knock...@gmail.com

KVM appears to work in a nested HVM AppVM.  VirtualBox, at least with a Linux host, also appears to work.  These might enable running some things under Qubes that you otherwise could not.

A warning: in Xen 4.4 (used by Qubes R3), nested HVM is a "tech preview" feature.  Also, http://wiki.xenproject.org/wiki/Nested_Virtualization_in_Xen reports:

"Using populate-on-demand (memory!=maxmem) or guest paging in an L1 hypervisor for an L2 guest may deadlock the L0 hypervisor.  This means an L1 admin can DOS the L0 hypervisor. This is a potential security issue; for this reason, we do not recommend running nested virtualization in production yet."

So, use at your own risk.
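
One possible way to stay clear of the populate-on-demand case described above is to pin the nested Qubes VM's memory to its maxmem, so that memory == maxmem.  This is only a guess on my part, not something I have verified; the sketch assumes the R3 qvm-prefs properties behave as expected:
  qvm-prefs -s <qubes-vm-name> maxmem 4096
  qvm-prefs -s <qubes-vm-name> memory 4096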

Eric

Joanna Rutkowska

Aug 31, 2015, 8:47:55 AM
to Eric Shelton, qubes-devel, Marek Marczykowski

On Fri, Aug 21, 2015 at 10:33:05PM -0700, Eric Shelton wrote:
Hello,

While admittedly a nice feature, we don't want to enable nested virtualization
support in the hypervisor, because IMHO it enlarges the attack surface on the
hypervisor due to the extra complexity associated with processing each VMEXIT (see
some of our early work on nested virtualization from a few years back).

Perhaps we could make it a boot option with a big warning? Although I'm
slightly against even that...

Cheers,
joanna.

Eric Shelton

Aug 31, 2015, 8:54:53 AM
to qubes-devel, knock...@gmail.com, marm...@invisiblethingslab.com, joa...@invisiblethingslab.com
I agree with the security concerns.  However, adding support to libvirt to pass the option through to xl does not mean it is enabled by default for all HVM domains.  Instead, you have to use a custom config file to enable the feature.  Implemented this way, it would only be available as an expert, "use at your own risk," type of feature - you would have to explicitly go out of your way to make use of it.

Eric

Joanna Rutkowska

Aug 31, 2015, 8:57:46 AM
to Eric Shelton, qubes-devel, marm...@invisiblethingslab.com

What we don't want is the (default) Xen *hypervisor* (aka xen.gz) to be compiled
with nested virtualization support. libvirt/xl can have the support to ask Xen
for this, no problem.

joanna.

Marek Marczykowski-Górecki

Aug 31, 2015, 9:02:26 AM
to Joanna Rutkowska, Eric Shelton, qubes-devel
I don't think it is an option. The feature is simply there. It can be
enabled on a per-domain basis, and it is disabled by default.

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

Eric Shelton

Aug 31, 2015, 9:05:20 AM
to qubes-devel, knock...@gmail.com, marm...@invisiblethingslab.com, joa...@invisiblethingslab.com
I'm not sure I understand - nested HVM support is already in Xen 4.4 (as a tech preview feature), and is in the hypervisor shipped with Qubes R3 rc2. I already posted a patch for libvirt that sets the appropriate flag when creating an HVM domain, and the nested HVM feature gets enabled.

Are you suggesting the hypervisor should be patched to remove the availability of this feature altogether?

Eric

Joanna Rutkowska

Aug 31, 2015, 9:07:00 AM
to Marek Marczykowski-Górecki, Eric Shelton, qubes-devel

Oh, really? I remember once (perhaps Xen 4.1 or earlier?) it was
conditionally compiled... Xen is really going in the wrong direction then :/

Somebody should really fork it and stop adding all these new features, which is
so stupid from the security point of view :/

joanna.

Eric Shelton

Aug 31, 2015, 9:11:24 AM
to qubes-devel, marm...@invisiblethingslab.com, knock...@gmail.com, joa...@invisiblethingslab.com
FWIW, I think Ian Campbell agrees with you on that point:

http://lists.xen.org/archives/html/xen-devel/2013-09/msg01826.html
"Really, who needs nested virtualization, or XSM -- these are of pure academic interest and only make the hypervisor unnecessary bloated, IMO."

Eric

Marek Marczykowski-Górecki

Aug 31, 2015, 9:27:38 AM
to Eric Shelton, qubes-devel, joa...@invisiblethingslab.com
No, it simply wasn't implemented there. Since it was added, it is compiled in
by default, the same as most other features. Only some of them can be
enabled/disabled by a command line switch, and really few can be disabled
at compile time. I think XSM is one of them (the only one?).

> > Somebody should really fork it and stop adding all these new features,
> > which is
> > so stupid from the security point of view :/

I think the only project interested in such an approach is Qubes OS. So the
only candidate for "somebody" is "the Qubes team". No one will do that for
us without being interested in the outcome.
The majority of Xen users (all those hosting providers) don't care. They want
support for bigger and bigger machines, and other new fancy hardware
supported.

> FWIW, I think Ian Campbell agrees with you on that point:
>
> http://lists.xen.org/archives/html/xen-devel/2013-09/msg01826.html
> "Really, who needs nested virtualization, or XSM -- these are of pure
> academic interest and only make the hypervisor unnecessary bloated, IMO."

Actually this one was said by Joanna ;)

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

rob.k...@gmail.com

Sep 1, 2015, 4:58:10 PM
to qubes-devel
I'm replying here instead of the libvirt thread because this thread is discussing whether VM-within-VM is desirable.

I'm an independent software consultant. Qubes is awesome (beyond all the great security features) because it gives me the security-by-separation between the work I do for various clients without me having to worry if I did it right every time.

As a developer, most of my work is done (or should be done) within Vagrant VMs that mimic production as closely as possible. This way, all the devs are synced, whether or not they use Qubes. The inability to use Vagrant within Qubes is a major issue. I do *NOT* want to use Vagrant in dom0 because I need to use Vagrant from within the git checkout of my client's codebase, and that needs to live in that client's AppVM. I can use Vagrant+cloud (AWS, DO, Rack, etc.), but that means I have to do all my development while networked.

I think allowing VM-within-VM should be a domain-specific flag, absolutely. Maybe it could even require a confirmation every time a process starts doing VM actions (similar to sudo). But it should be allowable. Saying "nope" means you force modern developers to either dismiss Qubes or dual-boot Qubes+Linux (which leads to dismissing Qubes).

I would love to help test things and even write up a primer on how to do it.

Thanks,
Rob

wanfu...@gmail.com

Feb 15, 2016, 3:17:30 PM
to qubes-devel
Hi! I noticed the patch for nested VM support no longer compiles against Qubes OS 3.0 or 3.1 (I haven't tried the alpha yet). Is there any chance that someone might be able to update this patch to work with these versions? I have more of a developer's desktop idea for Qubes OS than a security-minded use for it, and it requires this patch. I am not a programmer per se, so I have had some difficulty deciphering where it is failing.

Any help in this regard is greatly appreciated.

steven

Steven Anderson

Feb 15, 2016, 3:26:39 PM
to qubes-devel
I would be willing to throw a little bit of money at this, even though I am not rich. Is there anyone interested in making a few bucks for an easy side job for a good programmer?

Let me know what it would cost to do this. Project rate/hourly rate/hours estimate.

Eric Shelton

Feb 15, 2016, 8:58:26 PM
to qubes-devel
Just to make sure you know what you are getting into:


Best of luck,
Eric 