Storing AppVMs on Secondary Drives


Axon

Aug 12, 2014, 6:50:11 AM8/12/14
to qubes...@googlegroups.com
This is a topic I have been following for a long time. However, I
haven't seen a clear, definitive thread on it, so I'm starting a new one.

Imagine the following commonplace scenario:

1. I have a PC with two internal drives. One is a small (~100 GB) SSD;
the other is a large (~1 TB) HDD.

2. I have many AppVMs. Thirty of them are small (~1 GB), but three of
them (labeled red, green, and black, respectively) are large (~30 GB).

It follows that:

3. My SSD is too small to hold all of my AppVMs.

4. My physical PC has ample total storage (~1.1 TB) relative to the
total size of all of my AppVMs (~120 GB).

5. I can't securely combine the three large AppVMs into a single AppVM
because they occupy different security levels (red, green, and black).
(Let's assume that security considerations also rule out serially
sharing the large HDD among the three large AppVMs.)

Therefore, I'm left with the following two questions (which may or may
not amount to the same question):

6. What are my options in this scenario? In other words, how can I have
and use all of my AppVMs (without replacing either of my internal drives)?

7. Is there any way to store and use an AppVM on a different drive than
the one on which Qubes was originally installed?


cprise

Aug 12, 2014, 5:16:59 PM8/12/14
to Axon, qubes...@googlegroups.com
You could simply edit the qubes.xml file to point to the other drive.

Or merge the SSD and HD using LVM (it seems odd doing this with both an
SSD and an HD, but I guess it would work).

Or merge them with something like 'bcache'...
http://www.linux.com/learn/tutorials/754674-using-bcache-to-soup-up-your-sata-drive


Axon

Aug 12, 2014, 7:20:07 PM8/12/14
to cprise, qubes...@googlegroups.com
cprise:
>
> On 08/12/14 06:49, Axon wrote:
>> [...original scenario snipped...]
>
> You could simply edit the qubes.xml file to point to the other drive.
>

So... something like this would be the procedure?

1. # mv /var/lib/qubes/appvms/my-new-appvm
/path/to/secondary/drive/my-new-appvm

2. # vim /var/lib/qubes/qubes.xml

3. Find entry for "my-new-appvm"

4. Replace

dir_path="/var/lib/qubes/appvms/my-new-appvm"

with

dir_path="/path/to/secondary/drive/my-new-appvm"


(And this won't break anything?)

Marek Marczykowski-Górecki

Aug 13, 2014, 1:13:53 AM8/13/14
to Axon, cprise, qubes...@googlegroups.com
Theoretically it might work. But IMO the safer option would be to create
a symlink from /var/lib/qubes/appvms/my-new-appvm to
/path/to/secondary/drive/my-new-appvm.

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?


Axon

Aug 13, 2014, 4:52:54 AM8/13/14
to Marek Marczykowski-Górecki, cprise, qubes...@googlegroups.com
Marek Marczykowski-Górecki:
So, just to confirm, the symlink procedure would simply be this?

1. # mv /var/lib/qubes/appvms/my-new-appvm
/path/to/secondary/drive/my-new-appvm

2. # ln -s /path/to/secondary/drive/my-new-appvm /var/lib/qubes/appvms/

And "my-new-appvm" will behave just as if it were a "normal" AppVM
(except for perhaps being slower, since it is on the slow HDD)?


Marek Marczykowski-Górecki

Aug 13, 2014, 9:57:57 AM8/13/14
to Axon, cprise, qubes...@googlegroups.com
Yes, exactly. Of course, I haven't tested this ;)

Axon

Aug 13, 2014, 6:51:36 PM8/13/14
to Marek Marczykowski-Górecki, cprise, qubes...@googlegroups.com
Marek Marczykowski-Górecki:
Understood. I'll report any problems I encounter once I get a chance to
test this. Thank you, Marek.


Zrubecz Laszlo

Aug 14, 2014, 6:16:29 PM8/14/14
to qubes...@googlegroups.com
On 13 August 2014 15:57, Marek Marczykowski-Górecki
I've tested it before.
It works, but it's slow compared to the SSD.


My current solution is like this (a rough sketch follows below):

- mount your (encrypted) big & slow drive somewhere in dom0
- create image files (touch ..., truncate -s ...) to be attached to AppVMs
- create normal AppVMs on the SSD
- attach (qvm-block -A) one (or more) big images to any AppVM
- mount the new drives inside the AppVM

The good:
- your apps remain fast :)
- you can choose what to save to the big slow drive and what to keep
on the fast SSD
- you can use as many images as you want, and you can attach more than
one to a single AppVM.

The bad:
- you have to open (cryptsetup) and mount your big drive manually (or
add new entries to crypttab and fstab)
- you have to attach the images manually after every startup (or write
custom startup scripts :)
- you have to mount the images manually inside the AppVM (or create
custom udev rules)
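
For example (untested exactly as written; device names, sizes, mount
points, and the xvdi letter inside the AppVM are only placeholders):

## dom0, once per boot:
sudo cryptsetup luksOpen /dev/sdb1 bigdrive      # unlock the big slow drive
sudo mount /dev/mapper/bigdrive /mnt/bigdrive

## dom0, once, to create a sparse image file:
sudo truncate -s 100G /mnt/bigdrive/work-data.img

## dom0, after every start of the AppVM 'work':
LOOP=$(sudo losetup -f --show /mnt/bigdrive/work-data.img)
qvm-block -A work dom0:${LOOP#/dev/}             # e.g. dom0:loop5

## inside the AppVM (mkfs.ext4 only the very first time):
sudo mkfs.ext4 /dev/xvdi
sudo mount /dev/xvdi /home/user/bigdata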




--
Zrubi

cprise

Aug 15, 2014, 5:25:08 AM8/15/14
to Marek Marczykowski-Górecki, Axon, qubes...@googlegroups.com
While we're talking about untested stuff...

Can anyone think of a reason why the root volume couldn't be configured
with bcache? This uses the SSD to cache the most frequently accessed
parts of the HD.


Zrubecz Laszlo

Aug 15, 2014, 5:49:16 AM8/15/14
to cprise, Marek Marczykowski-Górecki, Axon, qubes...@googlegroups.com
On 15 August 2014 11:25, cprise <cpr...@gmail.com> wrote:
> While we're talking about untested stuff...

You think that bcache is more tested than a symlink? ;)

> Can think of why the root volume couldn't be configured with bcache? This
> uses the SSD to cache the most-frequently accessed parts of the HD.

While this solution sounds good, it needs to be set up initially by
the installer. I guess the Fedora installer isn't prepared for this.

And what about dm-crypt + bcache? All the feature discussions and tests
seem to care only about speed, not about reliability, security, or
cryptography.


--
Zrubi

cprise

Aug 15, 2014, 6:27:30 AM8/15/14
to Zrubecz Laszlo, Marek Marczykowski-Górecki, Axon, qubes...@googlegroups.com

On 08/15/14 05:49, Zrubecz Laszlo wrote:
> On 15 August 2014 11:25, cprise <cpr...@gmail.com> wrote:
>> While we're talking about untested stuff...
> You think that bcache is more tested than a symlink? ;)

Certainly not. But it could result in better performance, so it's up to
the user to decide WRT their priorities.

And adding a block layer is not a big deal, anyway. Windows has this as
a standard feature, so it's not rocket science.


>> Can anyone think of a reason why the root volume couldn't be configured
>> with bcache? This uses the SSD to cache the most frequently accessed
>> parts of the HD.
> While this solution sounds good, it needs to be set up initially by
> the installer. I guess the Fedora installer isn't prepared for this.
>
> And what about dm-crypt + bcache? All the feature discussions and tests
> seem to care only about speed, not about reliability, security, or
> cryptography.

The default cache setting is write-through, which is reliability-minded,
so I'd have no qualms about that. As for crypto impacts, bcache doesn't
really leave an indication as to what is and isn't allocated on the
volume, AFAIK. So whereas TRIM is of mild concern (IMHO) on encrypted
volumes, I think bcache would be less so. It also seems plausible that
the SSD cache drive could itself be encrypted.
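
For the record, a rough sketch of that layering (untested; device names
are only examples, and both partitions get wiped):

# Create the backing device on the HD and the cache device on the SSD.
sudo make-bcache -B /dev/sdb2    # big HD partition -> appears as /dev/bcache0
sudo make-bcache -C /dev/sda3    # SSD partition used as the cache
# (if /dev/bcache0 does not appear, register the devices via
#  /sys/fs/bcache/register)

# Attach the cache set to the backing device
# (UUID as reported by 'bcache-super-show /dev/sda3').
echo <cache-set-uuid> | sudo tee /sys/block/bcache0/bcache/attach

# Keep the reliability-minded default, or set it explicitly.
echo writethrough | sudo tee /sys/block/bcache0/bcache/cache_mode

# Then put LUKS on top of the cached device, as usual.
sudo cryptsetup luksFormat /dev/bcache0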

Axon

Aug 16, 2014, 11:48:04 AM8/16/14
to Marek Marczykowski-Górecki, cprise, qubes...@googlegroups.com
Axon:
OK, Marek, I've had a chance to test it. However, I'm not sure if the
behavior I'm observing is intended (or safe).

_Steps/results:_
1. Create my-new-appvm via Qubes VM Manager.
2. Create a test.txt file in my-new-appvm.
3. # mv /var/lib/qubes/appvms/my-new-appvm
/path/to/secondary/drive/my-new-appvm
4. # ln -s /path/to/secondary/drive/my-new-appvm /var/lib/qubes/appvms
5. Start my-new-appvm by clicking Nautilus shortcut in KDE menu.
6. Receive (unexpected!) dom0 tray notification:
> Qubes VM Manager
> Attached new device to dom0: loop24
> Attached new device to dom0: loop23
7. Nautilus opens in my-new-appvm. The test.txt file is there.
8. Inspect Qubes VM Manager and see the "stick attached" icon next to
my-new-appvm. Right-click on my-new-appvm and hover over "Attach/detach
block devices," showing the following two "detach" options:
> Detach dom0:loop24 11 GiB /path/to/secondary/drive/my-new-appvm/volatile.img
> Detach dom0:loop23 2 GiB /path/to/secondary/drive/my-new-appvm/private.img
9. Do "qvm-block -l" in dom0. Output:
> (...normal, expected block devices here...)
> dom0:loop 24 /path/to/secondary/drive/my-new-appvm/volatile.img 11 GiB (attached to 'my-new-appvm' as 'xvdc')
> dom0:loop 23 /path/to/secondary/drive/my-new-appvm/private.img 2 GiB (attached to 'my-new-appvm' as 'xvdb')

Is it unsafe that the AppVM's .img files are being mounted in dom0?


Axon

Aug 16, 2014, 1:28:35 PM8/16/14
to Marek Marczykowski-Górecki, cprise, qubes...@googlegroups.com
Axon:
My question may rest on a faulty assumption, since I don't actually know
whether the AppVM's .img files are literally being *mounted* in dom0
rather than "attached" in some other (hopefully safe) way.


Andrew B

Aug 16, 2014, 9:37:03 PM8/16/14
to qubes...@googlegroups.com
You should make sure with `mount` and `sudo udevadm monitor` in Dom0, but
these devices shouldn't be /mounted/. These notifications are only about
Dom0 loopback devices for these files. There are supposedly udev rules to
prevent Dom0 from auto-mounting or parsing them in any way. You should
make sure they work. I'm not sure quite what to look for--Dom0's
/etc/udev/rules.d/99-qubes-block.rules isn't so obvious, I'm not clear
how the environment variables it sets prevent Dom0-based parsing, and I'm
not quite sure what role xenstore plays (if any?). Maybe Marek can shed
some light on this? Or point to somewhere on the wiki that I haven't
found?

But it /looks/ like everything works OK. I'd be very interested to hear if this is not the case.
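
A quick sanity check in dom0 would be something like this (the VM name is
just an example):

# Nothing from the relocated VM should appear as a mounted filesystem:
mount | grep my-new-appvm

# The loop devices themselves should show up here, backed by the new path:
sudo losetup -a | grep my-new-appvm

# Watch udev events while the VM starts; loop devices should appear,
# but no mount should follow:
sudo udevadm monitor --udev --property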

Andrew

Axon

Aug 17, 2014, 8:06:18 AM8/17/14
to Andrew B, qubes...@googlegroups.com
Andrew B:
It looks to me like they're not actually *mounted* in dom0. Nonetheless,
we routinely warn people against even *connecting* untrusted block
devices to dom0, let alone mounting them, so I'm still not sure if this
is secure...


Hakisho Nukama

Aug 21, 2014, 6:04:44 AM8/21/14
to qubes...@googlegroups.com
Only a symbolic link to the private.img (moved to an encrypted file
system [1]) needs to be created...

cd /var/lib/qubes/appvms/someappvm/
mv private.img /mnt/appvms/someappvm/
ln -s /mnt/appvms/someappvm/private.img .

Qubes-Manager will show this "attached" device. But do *not* detach it.

[1] https://wiki.qubes-os.org/wiki/ZFS#TipsandHints

Axon

Aug 21, 2014, 1:43:02 PM8/21/14
to Hakisho Nukama, qubes...@googlegroups.com
Hakisho Nukama:
Thanks, but there's still the open question (from earlier in this
thread) of whether attaching an untrusted domU's private.img to dom0 is
safe (for dom0).


cprise

Aug 21, 2014, 4:46:27 PM8/21/14
to Axon, Hakisho Nukama, qubes...@googlegroups.com
Wouldn't it, by definition, be safe?

"Attach" means creating a loopback/block device for it, not mounting the
filesystem (the mount is done inside the VM). The difference is that
Qubes Manager decides to display it in the attached list because the
volume doesn't reside in the Qubes home volume.

Other than the way it is listed, how is this different than leaving the
img in its original spot? I don't see the problem.

Axon

Aug 21, 2014, 6:23:40 PM8/21/14
to cprise, Hakisho Nukama, qubes...@googlegroups.com
cprise:
If that is, indeed, what's happening, then I'm inclined to agree with
you. The thing is, I (as a non-programmer user) don't *really* know
what's going on behind the scenes, so I have to go by what the OS
explicitly tells me, namely that new devices are being "attached" to
dom0, which we've been told is (almost?) always a security risk.

(And if QVMM really is showing an AppVM's own private storage as
"attached" to itself, then perhaps this should be classified as a bug.)

(By the way, is there a risk even if a virtual disk is connected without
the filesystem being mounted? I'm thinking about attacks like BadUSB,
and how modified firmware (e.g., in an HDD) can infect a computer,
regardless of whether the filesystem is ever mounted. Obviously virtual
disks don't have firmware in the same way that physical disks do, but do
they have something equivalent to "virtual firmware"? In other words,
would it be possible for a compromised VM to compromise a virtual disk
such that it can then compromise dom0 once connected to it, regardless
of mounting?)

> Other than the way it is listed, how is this different than leaving the
> img in its original spot? I don't see the problem.
>

The only problem is that I, as a user, can't really distinguish between
the following two possible scenarios in this situation:

(a) The operation is perfectly benign, and the messages in dom0 are
there only because the symlinked directory is outside of
/var/lib/qubes/appvms, which is not itself a security problem.

or

(b) The messages in dom0 are indicating that this operation is resulting
in some kind of weird, unexpected bug which causes dom0 to be exposed to
the untrusted contents of an AppVM.

I think (a) is much more likely, but we also know that weird, unexpected
bugs can and do occur in very complex software, so IMHO it's not
unreasonable to ask for confirmation.

Besides, if I found the messages confusing (and ostensibly inconsistent
with previous advice about what not to do in dom0), it's not
inconceivable that future users will, as well. And given how common it
is to want to store stuff on secondary drives (and that this is the
safest way to do so, if (a) is true), it seems likely that other users
will also want to do this.


cprise

Aug 22, 2014, 12:23:29 AM8/22/14
to Axon, Hakisho Nukama, qubes...@googlegroups.com
"Attached device" ...not... "mounted filesystem".

Putting it in the list is just a foible (not really a bug) of how the
GUI works.

> (And if QVMM really is showing an AppVM's own private storage as
> "attached" to itself, then perhaps this should be classified as a bug.)
>
> (By the way, is there a risk even if a virtual disk is connected without
> the filesystem being mounted?

AFAIK, it's just presenting a block of storage space to Xen.

Do an 'xl block-list' on both a regular domain and a relocated one. You
should see that they are essentially the same--the block devices are
both attached to dom0.

> I'm thinking about attacks like BadUSB,
> and how modified firmware (e.g., in an HDD) can infect a computer,
> regardless of whether the filesystem is ever mounted. Obviously virtual
> disks don't have firmware in the same way that physical disks do, but do
> they have something equivalent to "virtual firmware"? In other words,
> would it be possible for a compromised VM to compromise a virtual disk
> such that it can then compromise dom0 once connected to it, regardless
> of mounting?)

ITL knows best (I would expect them to chime in and say it's safe), but I
do not think there is any virt 'firmware' or exploit path to dom0 in this
case. The code on the dom0 side should be merely tossing blocks back and
forth.

>> Other than the way it is listed, how is this different than leaving the
>> img in its original spot? I don't see the problem.
>>
> The only problem is that I, as a user, can't really distinguish between
> the following two possible scenarios in this situation:
> [...]

If you want absolute assurance you'll have to look in the code. If I
were you and wanted to verify the situation completely, I'd start with
the Qubes Manager source because that IMO is probably the code that has
a special condition depending on where the img is located. I would
expect to see a conditional that only relates to the GUI.

Axon

Aug 22, 2014, 6:12:34 AM8/22/14
to cprise, Hakisho Nukama, qubes...@googlegroups.com
cprise:
To my (untrained) eye, it appears that you are correct:

[user@dom0 ~]$ xl block-list secondary-vm
(...) BE-path
(...) /local/domain/0/backend/vbd/(...)
(...) /local/domain/0/backend/vbd/(...)
(...) /local/domain/0/backend/vbd/(...)
(...) /local/domain/0/backend/vbd/(...)
[user@dom0 ~]$ xl block-list normal-vm
(...) BE-path
(...) /local/domain/0/backend/vbd/(...)
(...) /local/domain/0/backend/vbd/(...)
(...) /local/domain/0/backend/vbd/(...)
(...) /local/domain/0/backend/vbd/(...)
It appears (again, to my untrained eye) that there is not a special
condition depending on where the img is located.

First, as a reminder, this is the tray notification I get when I start
the secondary-appvm:

> Qubes VM Manager
> Attached new device to dom0: loopX
> Attached new device to dom0: loopX

And this, of course, is the one I get when I shut down that AppVM:

> Qubes VM Manager
> Detached device from dom0: loopX
> Detached device from dom0: loopX

These notifications appear to be due to this bit of code from
qubes-r2/qubes-manager/qubesmanager/block.py:

> (...)
>             else: #new device
>                 self.current_blk[b] = blk[b]
>                 self.current_attached[b] = att
>                 self.msg.append("Attached new device to <b>{}</b>: {}".format(
>                     blk[b]['vm'], blk[b]['device']))
>
>         to_delete = []
>         for b in self.current_blk: #remove devices that are not there anymore
>             if b not in blk:
>                 to_delete.append(b)
>                 self.msg.append("Detached device from <b>{}</b>: {}".format(
>                     self.current_blk[b]['vm'],
>                     self.current_blk[b]['device']))
> (...)

So, if I'm understanding this correctly (admittedly a big "if"!), QVMM
is just treating dom0 the same as any other VM. But, of course, all
AppVMs' img files are usually in dom0. Yet normal AppVMs don't cause
these tray notifications when started and shut down. So there must still
be some difference between my secondary-appvm and my normal AppVMs which is
causing the tray notifications in the case of the former but not the
latter. So, I suppose I should take a look at the code which (IIUC)
defines the functions for attaching and detaching devices:

> (...)
>     def attach_device(self, vm, dev):
>         backend_vm_name = self.free_devs[dev]['backend_name']
>         dev_id = self.free_devs[dev]['dev']
>         backend_vm = self.qvm_collection.get_vm_by_name(backend_vm_name)
>         if self.tray_message_func:
>             self.tray_message_func("{0} - attaching {1}"
>                 .format(vm.name, dev), msecs=3000)
>         qubesutils.block_attach(vm, backend_vm, dev_id)
>
>     def detach_device(self, vm, dev_name):
>         dev_id = self.attached_devs[dev_name]['attached_to']['devid']
>         vm_xid = self.attached_devs[dev_name]['attached_to']['xid']
>         if self.tray_message_func:
>             self.tray_message_func("{0} - detaching {1}".format(vm.name,
>                 dev_name), msecs=3000)
>         qubesutils.block_detach(None, dev_id, vm_xid)
> (...)

At this point, I have the sneaking suspicion that, in my ignorance, I've
left out some portion of the code which is relevant to this inquiry.
Nonetheless, it looks to me that there's no special handling based on
where the device is located (or maybe it's just in the portion of the
code I left out).

Enlightenment welcome. :)


Marek Marczykowski-Górecki

Sep 6, 2014, 7:38:46 AM9/6/14
to Axon, Andrew B, qubes...@googlegroups.com
The devices listed by qvm-block (and for which notifications are displayed)
are filtered based on directory name. Every VM virtual disk (root.img,
private.img, etc.) is set up using a loop device (because xen-blkback needs
it), but it is *not mounted in dom0*. So if you store VM disks in a
directory other than /var/lib/qubes, the notifications will be displayed
and the disks will be listed by qvm-block, but that's all. Also, do not
attempt to detach such disks...

A workaround could be to bind-mount the directory instead of using a
symlink - then the backing file will still have a path under
/var/lib/qubes and no notifications will be displayed. Something like:

mount --bind /path/to/secondary/drive/my-new-appvm \
    /var/lib/qubes/appvms/my-new-appvm

The corresponding /etc/fstab line (all on one line) would be:

/path/to/secondary/drive/my-new-appvm /var/lib/qubes/appvms/my-new-appvm none
bind 0 0
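
So for an already-created VM the full sequence would be something like
(untested):

# dom0, with my-new-appvm shut down:
sudo mv /var/lib/qubes/appvms/my-new-appvm /path/to/secondary/drive/
sudo mkdir /var/lib/qubes/appvms/my-new-appvm
sudo mount --bind /path/to/secondary/drive/my-new-appvm \
    /var/lib/qubes/appvms/my-new-appvm

The secondary drive (and the bind mount) must of course be in place
before the VM is started.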

Axon

Sep 6, 2014, 2:42:29 PM9/6/14
to Marek Marczykowski-Górecki, Andrew B, qubes...@googlegroups.com
Marek Marczykowski-Górecki:
Ah, I see. Thank you for confirming/clarifying this, Marek.



Axon

Apr 1, 2015, 1:46:52 AM4/1/15
to Marek Marczykowski-Górecki, Andrew B, qubes...@googlegroups.com
This solution has been working well, but lately I've noticed that ~50%
of the time, after I cleanly shut down these symlinked AppVMs, their
loop devices remain attached to dom0, which causes the host shutdown
process to hang (forcing me to do a hard reboot of the host machine).

What should I include in my reboot/shutdown script to ensure that all
of these loop devices get detached before I attempt to reboot or shut down
the host?

7v5w7go9ub0o

Apr 1, 2015, 1:03:03 PM4/1/15
to qubes...@googlegroups.com
An alternative solution is to use DispVMs, created on the fly, instead
of a bunch of resident AppVMs.

Less space; better security.




cprise

Apr 1, 2015, 2:05:34 PM4/1/15
to Axon, Marek Marczykowski-Górecki, Andrew B, qubes...@googlegroups.com
On 04/01/15 01:46, Axon wrote:
Are you sure it hangs only when there are residual loop devices?

I used to be able to shut down the host with just 'halt' but lately that
will hang. I have to use 'halt -p' to get it to shut down completely and
turn off.

If you're trying to manage loop devs try 'losetup -l' to list and
'losetup -D' to detach all used devices.
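
A rough sketch of a pre-shutdown helper along those lines (untested;
assumes the images live under /mnt/secondary and that dom0's util-linux
is new enough for 'losetup -l'):

#!/bin/sh
# Detach any leftover dom0 loop devices whose backing files live on the
# secondary drive, then unmount it and power off.
for dev in $(sudo losetup -l -n -O NAME,BACK-FILE |
             awk '$2 ~ "^/mnt/secondary/" {print $1}'); do
    sudo losetup -d "$dev"
done
sudo umount /mnt/secondary
sudo poweroff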

Axon

Apr 1, 2015, 6:45:51 PM4/1/15
to 7v5w7go9ub0o, qubes...@googlegroups.com

7v5w7go9ub0o wrote:
> An alternative solution is to use DispVMs, created on the fly,
> instead of a bunch of resident AppVms.
>
> Less space; better security.
>

I'm afraid I don't see the "less space; better security" claim holding
true, but perhaps I'm wrong.

Based on your other posts on this topic, I take it that your DispVM
approach is roughly as follows:

1. All user data is stored in one or more OfflineVMs. No programs are
run in these VMs. They are used solely for data storage.

2. When you need to work with some of your data, you create one or
more DispVMs, transfer the data from the OfflineVM(s) to the
DispVM(s), and do your work.

3. When finished, you copy the altered data from the DispVM(s) back to
the OfflineVM(s).

Here are some thoughts about this approach:

4. I don't see this approach using less space, because I still have to
store all the data somewhere (namely, in the OfflineVMs).

5. I don't see this approach being much more secure than regular
AppVMs. Both DispVMs and AppVMs can be compromised if their parent
TemplateVM(s) are compromised. In addition, some malicious code stored
along with the user data in the non-Template persistent storage space
of an AppVM could re-infect the AppVM every time it (re)starts and
executes that code. However, this exact same thing will happen if we
copy over the same data from an OfflineVM to a DispVM and work with it
in the DispVM in the same way as we would have worked with it in the
AppVM. In other words, every fresh DispVM into which we import this
malicious code will be compromised, just as our AppVM would have been
(re)compromised every time, assuming we perform the same tasks in
both. (Maybe there are some cases where only the AppVM would be
vulnerable if the malicious code relies on a startup process or
something, but this doesn't seem like a strong enough reason to say
that DispVMs are safer per se.) The DispVM approach is just shifting
the risk from one place to another without reducing it.

6. I see this approach as being highly inconvenient and potentially
inefficient. DispVMs seem to use more RAM and CPU than regular AppVMs.
Even if we don't care about that, there's the fact that it takes time
and additional effort to create and set up DispVMs for each task,
especially if we have to switch netvms and set up firewall rules. (It
takes at least several minutes when switching a DispVM from netvm
"none" to netvm "firewallvm" for any network connectivity to be
available.) For tasks which I want to perform every day (or more
frequently), this could literally add up to many extra hours of work
and waiting over the course of a month. (Scripts can mitigate this to
some extent, but in my experience trying to incorporate DispVM
operations into scripts is very unreliable.) In addition, there's
always the risk that I'll accidentally close the last window of some
DispVM and lose all of my work.


Maybe I've misunderstood the DispVM approach you have in mind, and
maybe I'm wrong about one or more of the points above. As always, I
welcome being corrected, and I'm open to having my mind changed about
this.

Axon

Apr 1, 2015, 6:55:38 PM4/1/15
to cprise, Marek Marczykowski-Górecki, qubes...@googlegroups.com

cprise wrote:
> Are you sure it hangs only when there are residual loop devices?
>

No, I'm not. It may well hang other times, as well.

> I used to be able to shut down the host with just 'halt' but lately
> that will hang. I have to use 'halt -p' to get it to shut down
> completely and turn off.
>

I think "shutdown" and "poweroff" will achieve the same thing as "halt
- -p", since "-p" just means "power-off the machine."

I've always just used "shutdown now".

> If you're trying to manage loop devs try 'losetup -l' to list and
> 'losetup -D' to detach all used devices.
>

Ok, thanks.

The last time the loop devices stayed attached to dom0 after the AppVM
was shut down (as shown by qvm-block), I did "sudo umount -l
/mnt/secondary" (the AppVM dirs reside on /mnt/secondary) before
shutting down the host, and it didn't hang that time. (But that was
only a single test.)