wipe released disk space of a disposable VM


josefh...@hushmail.com

Dec 12, 2019, 10:58:48 AM
to qubes...@googlegroups.com
Hello list,

I heard that a Qubes user was forced to hand over their Qubes password, and that a forensic examiner was then able to restore artifacts of a deleted disposable VM from the hard disk...

Is this story plausible? And what's the best approach to wipe the disk space previously used by a disposable VM after that VM is closed?


Thank you!

Regards,

Joe

Mike Keehan

Dec 12, 2019, 12:23:59 PM
to qubes...@googlegroups.com, josefh...@hushmail.com
Qubes won't help in this situation - see
https://www.qubes-os.org/doc/disposablevm/#disposablevms-and-local-forensics

They recommend using Tails for this type of situation.

Mike.

tetra...@danwin1210.me

Dec 12, 2019, 9:34:35 PM
to Mike Keehan, qubes...@googlegroups.com, josefh...@hushmail.com
On Thu, Dec 12, 2019 at 05:23:47PM +0000, Mike Keehan wrote:
>Qubes won't help in this situation - see
>https://www.qubes-os.org/doc/disposablevm/#disposablevms-and-local-forensics
>
>They recommend using Tails for this type of situation.
>
>Mike.

I am getting very many duplicate copies of Mike's emails, but only of
emails from Mike. Is this happening to anyone else?

David Hobach

Dec 13, 2019, 2:59:21 AM
to tetra...@danwin1210.me, Mike Keehan, qubes...@googlegroups.com, josefh...@hushmail.com
Probably because he clicked "reply all" on one of your questions like I
just did.

tetra...@danwin1210.me

Dec 13, 2019, 6:09:43 PM
to David Hobach, Mike Keehan, qubes...@googlegroups.com, josefh...@hushmail.com
On Fri, Dec 13, 2019 at 08:59:16AM +0100, David Hobach wrote:
>>I am getting very many duplicate copies of Mike's emails, but only of
>>emails from Mike. Is this happening to anyone else?
>
>Probably because he clicked "reply all" on one of your questions like
>I just did.

No, when that happens (as it does with everyone who replies-all to my
emails) I only get 2 messages. However I currently have 15 copies of
Mike's "Qubes won't help in that situation" email...!

Jackie

Dec 13, 2019, 9:12:59 PM
to qubes...@googlegroups.com
tetrahedra via qubes-users:
> I am getting very many duplicate copies of Mike's emails, but only of
> emails from Mike. Is this happening to anyone else?
I see this too. I've seen it on qubes-devel also. I guess the problem is
on his email provider's end?

Claudia

Dec 15, 2019, 2:28:56 PM
to josefh...@hushmail.com, qubes...@googlegroups.com
josefh.maier via qubes-users:
> Is this story plausible? And what's the best approach to wipe the disk
> space previously used by a disposable VM after that VM is closed?
Isn't there an option in VM settings called "Keep DispVMs in memory" or
something like that? I'm assuming its purpose is so that dispVM contents
are never written to disk in the first place. It consumes more RAM, so
it might not be exactly what you want, but it's worth trying.

Depending on your situation, you might be able to achieve what you want
by regularly wiping free space. There are tools for ext4 that do this,
and maybe there are similar tools for Btrfs or LVM volume groups. It's
probably not a great solution, because depending on the filesystem some
data may be released and reallocated, but never overwritten, in between
wipes. I'm not aware of any tools that transparently overwrite free
space at the moment a file is deleted.
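
For example, the classic zero-fill approach (a rough sketch; the mount
point is a placeholder, and sfill from the secure-delete package is a
more thorough alternative):

    # Fill all free space with zeros, then release it again. Note this
    # only overwrites blocks that are free *right now*.
    dd if=/dev/zero of=/mnt/data/zerofill bs=1M status=progress || true
    sync                      # make sure the zeros actually hit the disk
    rm -f /mnt/data/zerofill  # free the blocks again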

Maybe you could write your own storage pool configuration that stores
DispVM images in a special place (partition, loop device, LV, whatever).
Then you can wipe the entire thing (when no DispVMs are running), which
is much more reliable than wiping free space. Or you can encrypt it with
a random key, generated at boot, that is never written anywhere.
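
Something roughly like this, I imagine (untested sketch using the R4.0
qvm-pool syntax; the pool and VM names are made up, and it assumes the
dedicated thin pool already exists, e.g. on top of a randomly keyed
dm-crypt device):

    qvm-pool --add disp-pool lvm_thin \
        -o volume_group=qubes_dom0,thin_pool=dispvms
    # place a disposable VM template's volumes in the new pool:
    qvm-create -P disp-pool --class AppVM --label red \
        --template fedora-30 my-dvm
    qvm-prefs my-dvm template_for_dispvms True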

If you're up to the task, it would definitely be possible to do this
properly by writing a storage pool *driver*, but that's much more work.

https://www.qubes-os.org/doc/storage-pools/

https://github.com/QubesOS/qubes-issues/issues/904


brenda...@gmail.com

Dec 15, 2019, 9:55:27 PM
to qubes-users
Disposable VMs were not developed with anti-forensics in mind (e.g. no protection in jurisdictions where you can be forced to hand over your drive password).

That being said...

In 4.0 (updated), Qubes now calls blkdiscard on volumes being removed, before invoking lvremove. If you happen to use a self-encrypting (SED) SSD and have manually enabled discards through all layers down to the physical drive, then, depending on the manufacturer's implementation, the data from a shut-down disposable VM might not be recoverable even with your disk password.

No guarantees, but I recommend enabling discards all the way down.
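
For reference, on a default R4.0 install that roughly means the
following (from memory; double-check against the official Qubes TRIM
documentation before touching dom0):

    # 1. Pass discards through the LUKS layer: add rd.luks.options=discard
    #    to GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the
    #    grub config.
    # 2. Let LVM forward discards when volumes are removed: set
    #    issue_discards = 1 in /etc/lvm/lvm.conf.
    # 3. Rebuild the initramfs and verify trim reaches the drive:
    sudo dracut -f
    sudo fstrim -av   # should report bytes trimmed if the chain is open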

After some forensics experiments, I put together a rough-at-the-edges bash script that does a rather good job of ensuring the volumes are not recoverable.

It creates an over-provisioned LVM volume in the current pool, overlays a new randomly keyed LUKS volume on top, makes that into an LVM PV, then layers an LVM VG and finally an LVM thin pool on top.

It adds that new thin pool via qvm-pool, copies templates and VMs there, temporarily modifies the global default pool setting (needed to be sure *all* volumes related to the VMs land in the ephemeral pool), and starts up some sessions.

I need to make it a bit more flexible but it served my need.

Once all started VMs are shut down, the script unwinds all the storage layers.

Since the LUKS layer was randomly keyed, that data is gone once it's unmounted.
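
In outline, the layering looks something like this (a compressed sketch,
not the actual script: names and sizes are placeholders, error handling
is omitted, and I've shown plain dm-crypt with a throwaway /dev/urandom
key where the script uses LUKS; the effect is the same once the key is
gone):

    # build: thin LV in current pool -> random-key crypto -> PV/VG -> thin pool
    lvcreate -V 200G -T qubes_dom0/pool00 -n eph-base
    cryptsetup open --type plain -d /dev/urandom \
        /dev/qubes_dom0/eph-base eph-crypt
    pvcreate /dev/mapper/eph-crypt
    vgcreate eph-vg /dev/mapper/eph-crypt
    lvcreate -l 90%FREE -T eph-vg/eph-pool
    qvm-pool --add ephemeral lvm_thin \
        -o volume_group=eph-vg,thin_pool=eph-pool
    # ...qvm-clone templates/VMs into "ephemeral", flip the default pool,
    # run the session, shut all VMs down...

    # teardown: closing the dm-crypt mapping discards the key
    qvm-pool --remove ephemeral
    vgremove -f eph-vg
    cryptsetup close eph-crypt
    lvremove -f qubes_dom0/eph-base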

Been meaning to clean it up and share at some point.

Brendan


brenda...@gmail.com

Dec 15, 2019, 10:04:33 PM
to qubes-users
As to the first question: with Qubes 4.0 it is a bit difficult to effectively wipe free space in the default thin pool.

One can create a thin volume and write to it until the thin pool reaches some saturation level (say 99.5%), then hit that volume with blkdiscard before invoking lvremove. Because you should not go all the way to 100%, you may still be rolling the dice.

LVM doesn't like hitting 100%, and you can permanently corrupt the system if you fill the pool completely.
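
To make that concrete (illustrative only, and genuinely dangerous if you
misjudge it; default R4.0 names, and you must stop the fill well short
of 100%):

    # oversized thin volume to soak up the pool's free space
    lvcreate -V 500G -T qubes_dom0/pool00 -n scrub
    # fill it while watching saturation in another terminal with:
    #   watch 'lvs -o data_percent,metadata_percent qubes_dom0/pool00'
    dd if=/dev/zero of=/dev/qubes_dom0/scrub bs=1M   # Ctrl-C near ~99.5%
    blkdiscard /dev/qubes_dom0/scrub
    lvremove -f qubes_dom0/scrub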

It's possible the LVM toolchain in 4.1 may have more capabilities once dom0 is on a much more recent Fedora version.

It’d also be nice to have dom0 in a different pool than the templates/VMs...to reduce catastrophic failures.

B

Claudia

Dec 16, 2019, 5:33:52 PM
to qubes...@googlegroups.com, Brendan Hoar
brenda...@gmail.com:
> Disposable VMs were not developed with anti-forensics in mind (e.g. no protection in jurisdictions where you can be forced to hand over your drive password).
Never thought about it, but that makes sense. I can see how it would be
easy to confuse the "non-persistence of malware" aspect with the
"non-persistence (non-remanence) of data" aspect, though.

But then... What does the checkbox mean, "Keep dispVM in memory", under
global settings (R3.2, at least)? Screenshot attached.

> That being said...
>
> In 4.0 (updated), Qubes now calls blkdiscard on volumes being removed, before invoking lvremove. If you happen to use a self-encrypting (SED) SSD and have manually enabled discards through all layers down to the physical drive, then, depending on the manufacturer's implementation, the data from a shut-down disposable VM might not be recoverable even with your disk password.
>
> No guarantees, but I recommend enabling discards all the way down.
>
> After some forensics experiments, I put together a rough-at-the-edges bash script that does a rather good job of ensuring the volumes are not recoverable.
>
> It creates an over-provisioned LVM volume in the current pool, overlays a new randomly keyed LUKS volume on top, makes that into an LVM PV, then layers an LVM VG and finally an LVM thin pool on top.
>
> It adds that new thin pool via qvm-pool, copies templates and VMs there, temporarily modifies the global default pool setting (needed to be sure *all* volumes related to the VMs land in the ephemeral pool), and starts up some sessions.
>
> I need to make it a bit more flexible but it served my need.
>
> Once all started VMs are shut down, the script unwinds all the storage layers.
>
> Since the LUKS layer was randomly keyed, that data is gone once it's unmounted.
>
> Been meaning to clean it up and share at some point.
>
> Brendan

I sort of like the idea mentioned in bug #904, about doing the crypto
inside the dispVM itself, so that 1) the key is scrubbed by Xen when the
dispVM is shut down, and 2) the data becomes non-recoverable instantly,
so you don't have to wait until all dispVMs have been shut down, for
example. Incidentally, this approach also offers a lot of improvement in
scenarios where the machine is seized while it's on and unlocked, but
that's another topic.

That being said, do you think it's currently possible to set up a dispVM
template so dispVMs run off encrypted storage? I'm thinking maybe a
startup script that overwrites /dev/xvdc (volatile.img) with a dm-crypt
container? Or perhaps it could be done at the filesystem level instead,
with ecryptfs or encfs using random keys, and dm-crypt for swap? Or even
just disable swap and mount a tmpfs over /home, if you have enough RAM
or don't write much data in dispVMs.
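
Something like this is the kind of thing I'm imagining, run early inside
the dispVM (completely untested; the /dev/xvdc partition layout and the
rc.local placement are from memory):

    # e.g. from /rw/config/rc.local in the dispVM template
    swapoff -a                        # stop using the plaintext swap
    cryptsetup open --type plain -d /dev/urandom /dev/xvdc1 eswap
    mkswap /dev/mapper/eswap
    swapon /dev/mapper/eswap          # swap now has a throwaway key
    mount -t tmpfs -o size=1g tmpfs /home/user   # keep user data in RAM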

Just bouncing around some ideas. Seems like it might be possible to do
something like that, and perhaps simpler than the ephemeral pool
approach, depending on your situation. Thoughts?
[Attachment: Screenshot_2019-12-16_16-43-28.png]

brenda...@gmail.com

Dec 17, 2019, 7:38:37 PM
to qubes-users
On Monday, December 16, 2019 at 5:33:52 PM UTC-5, Claudia wrote:
> brend...@gmail.com:
>> Disposable VMs were not developed with anti-forensics in mind (e.g. no protection in jurisdictions where you can be forced to hand over your drive password).
> Never thought about it, but that makes sense. I can see how it would be
> easy to confuse the "non-persistence of malware" aspect with the
> "non-persistence (non-remanence) of data" aspect, though.
>
> But then... What does the checkbox mean, "Keep dispVM in memory", under
> global settings (R3.2, at least)? Screenshot attached.


It was meant to be a dispVM speed-up option, not an anti-forensics option.
 
> I sort of like the idea mentioned in bug #904, about doing the crypto
> inside the dispVM itself, so that 1) the key is scrubbed by Xen when the
> dispVM is shut down, and 2) the data becomes non-recoverable instantly,
> so you don't have to wait until all dispVMs have been shut down, for
> example. Incidentally, this approach also offers a lot of improvement in
> scenarios where the machine is seized while it's on and unlocked, but
> that's another topic.

That could work, but it depends upon the threat model; e.g. if the dispVM hosts untrusted content, then depending upon the VM to prevent leakage may have issues.
 
> Just bouncing around some ideas. Seems like it might be possible to do
> something like that, and perhaps simpler than the ephemeral pool
> approach, depending on your situation. Thoughts?

I dunno...the ephemeral approach is simpler to me...in that it's just a bash script in dom0.

It's less simple in usage... in that it takes a while to run to get to a usable state. :) But it did help uncover some inefficiencies in the qvm-clone implementation that have since been patched by the devs.

In any case: the proof is in testing data recovery during/after use of the technique.

E.g. with R4, I found that even after copying the disposable VM template, and the template it is based on, to a new pool, at least one volatile volume per dispVM is still created in the default pool on startup.

I'm pretty sure that's a defect, and it's definitely a forensics gotcha. Hence the script currently needs to change the default pool before dispVM startup and then revert it afterwards.

Brendan

Steve Coleman

Dec 18, 2019, 10:04:40 AM
to qubes-users
I have a suggestion but don't know exactly how to implement it since I
am not that familiar with how the underlying storage pools work.

My suggestion, rather than the time-consuming wiping of bits after the
fact, would be to create an encrypted volume/partition/pool when
launching a DispVM, and upon shutting it down simply throw away the key
to that temporary volume. Without the key, any data on that encrypted
volume would be unrecoverable, so all you really need to wipe is the
memory that held the runtime key. If the key is generated before the
volume is created, then the key would be available only to dom0, where
its working memory can be managed properly, and Qubes would be able to
support any number of guest OSes as DispVMs.

If the volume were intentionally stored on an Opal 2.0 SSD device, you
could then use the drive's built-in 'encrypted locking range' capability
(up to four ranges are possible, if I remember correctly) for the
temporary workspace; when you destroy/reset the MEK (media encryption
key), the data in that disk region instantly becomes completely
unrecoverable. You then just assign another key to that locking range.
Yes, I realize many people don't trust the Opal standard to be on your
side, but the exact same capability could be emulated in software using
a Qubes-generated random one-time-use symmetric key.

Does anyone know if a storage pool can be created quickly on top of an
encrypted disk volume? Or can you efficiently create a software-encrypted
volume on top of a storage pool? Discarding a key might be the fastest
way to 'virtually wipe' that temp storage space.

brenda...@gmail.com

Dec 18, 2019, 5:50:00 PM
to qubes-users
On Wednesday, December 18, 2019 at 10:04:40 AM UTC-5, steve.coleman wrote:
> On 2019-12-15 22:04, brend...@gmail.com wrote:
> My suggestion, rather than the time-consuming wiping of bits after the
> fact, would be to create an encrypted volume/partition/pool when
> launching a DispVM, and upon shutting it down simply throw away the key
> to that temporary volume. Without the key, any data on that encrypted
> volume would be unrecoverable, so all you really need to wipe is the
> memory that held the runtime key. If the key is generated before the
> volume is created, then the key would be available only to dom0, where
> its working memory can be managed properly, and Qubes would be able to
> support any number of guest OSes as DispVMs.

That is what I do via my bash script: an LVM VG/pool on top of LUKS (as
the PV) on top of the default LVM pool, where the LUKS layer uses a
/dev/urandom-sourced key.

However, due to some of the constraints of LVM (which, given the
features it does provide, I *can* live with), I also need to move the
templates to the temporary pool for the approach to work, so there's a
time-consuming setup (~4 minutes) before the VMs are running.

It's just a script: execute it, go perform some other task for a bit,
and wait for the windows to appear. When done with the sensitive task
(or when prepping for anti-forensics testing), shut down the VMs and
tell the script you're done; the script then ensures all VMs are closed
and removes the above storage layers.
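
As to the "quickly" part of your question: the crypto layer itself is
nearly instant to set up and tear down; it's copying the templates that
costs time. A minimal sketch of just that layer (placeholder names,
default R4.0 pool):

    lvcreate -V 20G -T qubes_dom0/pool00 -n scratch
    cryptsetup open --type plain -d /dev/urandom \
        /dev/qubes_dom0/scratch scratch-crypt
    mkfs.ext4 -q /dev/mapper/scratch-crypt  # temp workspace, throwaway key
    # ...use it...
    cryptsetup close scratch-crypt     # key gone; ciphertext unreadable
    blkdiscard /dev/qubes_dom0/scratch # opportunistic, per the above
    lvremove -f qubes_dom0/scratch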

--

There were some discussions in the qubes-issues ticket(s) about adding additional driver layers in the mix that might make utilizing a separate encrypted pool unnecessary.  Other discussions involved performing the encryption inside the VMs, but as I mentioned earlier, if the content in the VM that is being manipulated is untrustworthy...then is the VM's internal encryption really trustworthy?
 
> If the volume were intentionally stored on an Opal 2.0 SSD device, you
> could then use the drive's built-in 'encrypted locking range' capability
> (up to four ranges are possible, if I remember correctly) for the
> temporary workspace; when you destroy/reset the MEK (media encryption
> key), the data in that disk region instantly becomes completely
> unrecoverable.

OPAL ranges could be useful, but as they are also basically hardware-managed partitions, I believe they would be difficult to utilize effectively, esp. if you want n (rather than at most 4 or 8) different secure areas (some permanent, some ephemeral). That being said, I do believe the opportunistic anti-forensics of trim/discard on a SED with DZAT (deterministic zeroes after trim) might be useful (hence my suggestion of enabling trim through all layers, and my proposal to the Qubes devs to blkdiscard before lvremove, which Qubes now does).
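
Whether a given drive claims DZAT can be checked from dom0, e.g. (device
name illustrative):

    sudo hdparm -I /dev/sda | grep -i trim
    # hoped-for output includes lines like:
    #    * Data Set Management TRIM supported (limit 8 blocks)
    #    * Deterministic read ZEROs after TRIM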

Also: some business use cases for permanent encrypted VMs have been given, e.g. keeping client A resources locked down while client B work is being performed or demo'd, etc. I would think LVM, esp. thin LVM, gives quite a bit of flexibility in sizing, adding/removing, etc. and would be applicable, perhaps with the additional encrypting driver layer discussed in the related qubes-issues items in github.
 
Brendan

brenda...@gmail.com

Dec 19, 2019, 12:09:26 PM
to qubes-users
This script shows the approach I take for an ephemerally keyed LVM pool:


Assuming you want a Windows standalone work VM and one or more Whonix disposable VMs, you just need to change the two variables in the script and launch it in dom0.

Be sure you know what you are doing. Review the script first.

It's hacked together, but it's been working well.

Brendan

brenda...@gmail.com

Dec 19, 2019, 1:14:40 PM
to qubes-users
On Thursday, December 19, 2019 at 12:09:26 PM UTC-5, Brendan Hoar wrote:
> This script shows the approach I take for an ephemerally keyed LVM pool:



And of course, since I was in a hurry, I see typos and better possible edits in the explanatory text it displays after it runs.

In any case: while you can use the disk space widget to see primary pool data usage, I recommend you use lvs to determine your primary pool data usage % *and* metadata usage %. Qubes R4.1 will show both.
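
E.g. in dom0 (default R4.0 volume group and pool names; adjust for your
system):

    sudo lvs -o lv_name,data_percent,metadata_percent qubes_dom0/pool00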

Also, review the VMs you intend to put in the new pool. Little to no space is used until the VMs and associated templates are copied over to it. So in addition to the copies of the VMs/templates, any additional work you do with data in the new pool will consume storage in the primary pool, until you remove the ephemeral pool (either by exiting the script correctly, or by manually removing it via lvremove if you did not exit the script correctly).

A full pool is a dead pool.

Brendan

brenda...@gmail.com

Dec 19, 2019, 3:48:51 PM
to qubes-users
Use this one instead; the previous one had a missing newline:


Brendan

Claudia

Dec 23, 2019, 8:52:51 AM
to qubes...@googlegroups.com
brenda...@gmail.com:
> Other discussions involved performing the
> encryption inside the VMs, but as I mentioned earlier, if the content in
> the VM that is being manipulated is untrustworthy...then is the VM's
> internal encryption really trustworthy?


This is a good point which I hadn't thought of. Forgive me, I still
haven't read the whole discussion; I was just coming up with some ideas
for the OP. For networked VMs it's a moot point, because malware could
just as easily send the data to attacker.gov instead of breaking the
encryption. But for non-networked VMs, it's a very good point.