Backup to AppVM


Axon

Dec 18, 2013, 2:16:50 AM
to qubes...@googlegroups.com
Has anyone been able to use the new backup utility? Every time I try to
use it, I get the following (rather uninformative) error message:

Backup error!

ERROR: Failed to perform backup: error with VM

Of course, I have no idea what kind of error there is with the VM. I
also don't know whether the message is referring to the VM(s) being
backed up or the target VM.

I tried creating a fresh "backupvm" as a destination VM just to test it,
but that didn't help anything.

I also noticed that dom0 appears to be included in the backup regardless
of whether dom0 is actually selected by the user to be part of the
backup (i.e., there is no way to *not* back up dom0).


Axon

Dec 22, 2013, 10:58:17 PM
to qubes...@googlegroups.com
Axon:
> Has anyone been able to use the new backup utility? Every time I try to
> use it, I get the following (rather uninformative) error message:
>
> Backup error!
>
> ERROR: Failed to perform backup: error with VM
>
> Of course, I have no idea what kind of error there is with the VM. I
> also don't know whether the message is referring to the VM(s) being
> backed up or the target VM.

The problem was that I was not specifying the directory inside the AppVM
correctly.

> I tried creating a fresh "backupvm" as a destination VM just to test it,
> but that didn't help anything.
>
> I also noticed that dom0 appears to be included in the backup regardless
> of whether dom0 is actually selected by the user to be part of the
> backup (i.e., there is no way to *not* back up dom0).


I now see a new error upon attempting to restore:

[Dom0] Houston, we have a problem...

Whoops. A critical error has occured. This is most likely a bug in Qubes
Manager.

AttributeError: 'NoneType' object has no attribute 'read'
at line 213 of file /usr/lib64/python2.7/site-packages/qubesmanager/restore.py.

----
line: backup_source,vmproc.stderr.read()))
func: restore_vm_dirs
line no.: 882
file: /usr/lib64/python2.7/site-packages/qubes/backup.py
----
line:appvm=appvm)
func: backup_restore_header
line no.: 962
file: /usr/lib64/python2.7/site-packages/qubes/backup.py
----
line: appvm=self.target_appvm)
func: __fill_vms_list__
line no.: 137
file: /usr/lib64/python2.7/site-packages/qubesmanager/restore.py
----
line: self.__fill_vms_list__()
func: current_page_changed
line no.: 213
file: /usr/lib64/python2.7/site-packages/qubesmanager/restore.py


Axon

Dec 23, 2013, 5:03:01 AM
to qubes...@googlegroups.com
Axon:
Just FYI: I also tried qvm-backup-restore (with the appropriate options)
from the dom0 command line, and the same error occurs, with the same
line numbers mentioned in the error message. Perhaps there's a problem
with the code on those lines?


Marek Marczykowski-Górecki

Dec 23, 2013, 9:41:31 AM
to Axon, qubes...@googlegroups.com
Yes, it looks like it. Can you say which options you used to reproduce this
error?

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?


Axon

Dec 23, 2013, 9:35:04 PM
to Marek Marczykowski-Górecki, qubes...@googlegroups.com
Marek Marczykowski-Górecki:
I didn't use any options. Actually, though, I think there's a different
(and more serious) problem which occurs when attempting to restore total
data greater than a certain size. Lengthy explanation follows:

Today, I decided to do a clean install of R2B3 (having already upgraded
in-place from R2B2 to R2B3). First, I did a full system backup using
qvm-backup. Since I am still accustomed to using the old backup system,
I have a LUKS/dm-crypt encrypted local hard drive where I store my
backups. So I did not encrypt my backup using the backup tool (i.e., I
did not issue the "-e" flag when running qvm-backup). Since the tool
still asked for a password, I just left it blank. (IMHO, the code should
probably be changed so that no password is requested if the backup is
not encrypted, since there is no point in doing so, and doing so will
give some users a false sense of security, causing them to mistakenly
believe that their unencrypted backup is encrypted. A scary warning
message should also be printed if the backup is not being encrypted by
the backup tool, recommending that they either enable encryption or
ensure that they encrypt the backup by some other means.)

The backup is successful, so I reboot and install R2B3 from the ISO.
Once I get to the new desktop, I attempt to restore from the backup I
just made, first using the GUI (which is when I got an error message
like the one above), then using the command line tool
(qvm-backup-restore). Now, here's the interesting part:

I tried to restore using qvm-backup-restore around 10-20 times, and
every time, the restoration failed with a slightly different error
message. Usually, the error was that an hmac or private.img file did not
exist in /var/tmp/restore_xxxxx. However, every time I checked to see
whether the allegedly missing files were there, I found them. So,
somehow, qvm-backup-restore is getting confused, and it's thinking that
a file is missing when it's not. At this point, I was afraid I had lost
all of my data (in the sense that I wouldn't be able to restore it). So,
I tried restoring from an older backup which was created using the old
backup tool (i.e., directory structure rather than single file backup).
This one restored perfectly the first time! This came as a relief to me,
but I still wanted my newer data from my most recent backup. (Perhaps it
would be a good idea to allow the user the choice to use the old backup
system, at least until the new one is fixed and thoroughly tested. After
upgrading to R2B3, I didn't have the choice to create a backup using the
old tool, which I knew worked for me. This made it scary to test the new
tool, since I felt like I was working without a net.)

But the strange thing is that, several days ago, I had run a successful
test (using only one AppVM) of backing up and restoring using the new
tool. Very odd indeed. Thinking about this, I had the idea of trying to
restore from my most recent backup *one AppVM at a time*. It turns out
that this did the trick (but with a caveat).

Since all of my AppVMs had already been restored (from an older backup),
I had a list full of AppVMs with the same names as the list of AppVMs in
my most recent backup. As you know, you can't restore an AppVM if there
already exists an AppVM with the same name on the system. This turned
out to be rather convenient, since I wanted to restore only one AppVM at
a time. So, I was able to delete a single AppVM, then restore that
single AppVM from my most recent backup by issuing "qvm-backup-restore
--skip-conflicting". (The only other way to do it would have been to type
out a very long list of exclusion options, e.g., "-x AppVM1 -x AppVM2 -x
AppVM3".) (By the way, would it be possible to add an option for
*inclusion* in addition to *exclusion*? That would come in handy in
cases like this, as in many others.)

So, I went through, one AppVM at a time, first deleting (or renaming)
the one currently on the system, and then restoring that single AppVM
from the most recent backup. This worked perfectly... until I reached an
AppVM which is around 15 GB in size. Then I got the error again. I was
able to restore one that's around 6 GB without any problems, so I think
the problem must arise when the AppVM is somewhere between 6 GB and 15
GB. (It would be very helpful if others could also test this.)
Fortunately, I don't think I've made any changes to that particular
AppVM since the old backup, so I don't actually need to restore from my
most recent backup. *But if there are other Qubes users who have AppVMs
which are that large (or larger), I fear that they are currently at risk
of not being able to restore from their backups of that AppVM.*

Related threads:
https://groups.google.com/d/topic/qubes-users/OFBijm__ZRs/discussion
https://groups.google.com/d/topic/qubes-users/fpgPFAxEC84/discussion




Marek Marczykowski-Górecki

Dec 23, 2013, 10:14:49 PM
to Axon, qubes...@googlegroups.com
The password is still used to verify backup integrity. Perhaps this should be
clearly marked in the message.
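As an illustration of that integrity check (a sketch only: the chunk name, passphrase, and exact openssl invocation below are assumptions for the demo, not the backup tool's actual code), a passphrase-keyed HMAC can be computed per chunk and recomputed on verify:

```shell
# Compute a passphrase-keyed HMAC for a backup chunk, then verify it by
# recomputing and comparing. "chunk.000" and the passphrase are made up.
printf 'chunk data' > chunk.000
openssl dgst -sha256 -hmac 'backup passphrase' -out chunk.000.hmac chunk.000
# Verification: recompute and compare; identical output means intact.
openssl dgst -sha256 -hmac 'backup passphrase' chunk.000 | cmp - chunk.000.hmac
```

If the chunk is modified after the fact, the recomputed HMAC no longer matches and the comparison fails.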

> The backup is successful, so I reboot and install R2B3 from the ISO.
> Once I get to the new desktop, I attempt to restore from the backup I
> just made, first using the GUI (which is when I got an error message
> like the one above), then using the command line tool
> (qvm-backup-restore). Now, here's the interesting part:
>
> I tried to restore using qvm-backup-restore around 10-20 times, and
> every time, the restoration failed with a slightly different error
> message. Usually, the error was that an hmac or private.img file did not
> exist in /var/tmp/restore_xxxxx. However, every time I checked to see
> whether the allegedly missing files were there, I found them. So,
> somehow, qvm-backup-restore is getting confusing, and it's thinking that
> a file is missing when it's not.

Are you sure you didn't run out of disk space at some point? Wasn't the file
zero-sized?
This is doable in the GUI, but indeed it would also be handy in the command line tool.

> So, I went through, one AppVM at a time, first deleting (or renaming)
> the one currently on the system, and then restoring that single AppVM
> from the most recent backup. This worked perfectly... until I reached an
> AppVM which is around 15 GB in size. Then I got the error again. I was
> able to restore one that's around 6 GB without any problems, so I think
> the problem must arise when the AppVM is somewhere between 6 GB and 15
> GB. (It would be very helpful if others could also test this.)
> Fortunately, I don't think I've made any changes to that particular
> AppVM since the old backup, so I don't actually need to restore from my
> most recent backup. *But if there are other Qubes users who have AppVMs
> which are that large (or larger), I fear that they are currently at risk
> of not being able to restore from their backups of that AppVM.*

AFAIR I've tested it on some bigger data set, but I'm not sure if that was a
single AppVM. Perhaps the magic size is 10GB? The data is divided into 100MB
chunks, so 10GB is 100 of them (3 decimal digits). Are you sure the filename
of the hmac in the error message was correct (no additional/missing zeros, etc.)?
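Marek's zero-padding hypothesis is easy to picture. The chunk file names below are hypothetical, purely to illustrate the sorting hazard around the 100th chunk:

```shell
# With unpadded numeric suffixes, lexicographic order breaks once the
# chunk count reaches three digits: "10" sorts before "9".
printf 'private.img.%d\n' 9 10 100 | sort
# → private.img.10
# → private.img.100
# → private.img.9
# Zero-padding to three digits keeps glob/sort order equal to chunk order:
printf 'private.img.%03d\n' 9 10 100 | sort
# → private.img.009
# → private.img.010
# → private.img.100
```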

Anyway, thanks for the very detailed description; I hope it will help greatly
in fixing the problem.

Axon

Dec 23, 2013, 10:52:46 PM
to Marek Marczykowski-Górecki, qubes...@googlegroups.com
Marek Marczykowski-Górecki:
Oh, I forgot about checking integrity. That makes sense. In that case,
is using a blank password a problem?

>> The backup is successful, so I reboot and install R2B3 from the ISO.
>> Once I get to the new desktop, I attempt to restore from the backup I
>> just made, first using the GUI (which is when I got an error message
>> like the one above), then using the command line tool
>> (qvm-backup-restore). Now, here's the interesting part:
>>
>> I tried to restore using qvm-backup-restore around 10-20 times, and
>> every time, the restoration failed with a slightly different error
>> message. Usually, the error was that an hmac or private.img file did not
>> exist in /var/tmp/restore_xxxxx. However, every time I checked to see
>> whether the allegedly missing files were there, I found them. So,
>> somehow, qvm-backup-restore is getting confusing, and it's thinking that
>> a file is missing when it's not.
>
> Are you sure you didn't run out of disk space at some point? Wasn't the file
> zero-sized?

The error occurred even when I had plenty of disk space to spare.
(However, I did quickly run out of disk space after a few failed restore
attempts.)

When I read your email reply a few minutes ago, I immediately went back
to try to reproduce the problem, so that I could check to see whether
the file was zero-sized. (I also wanted to send you the exact error
message, for the ticket.) So, I again attempted to restore that 15 GB
AppVM..... and it worked. No error this time. (Well, except for this (I
think) minor one, which was there every time: "kbuildsycoca4(26610)
parseLayoutNode: The menu spec file contains a Layout or DefaultLayout
tag without the mandatory Merge tag inside. Please fix your file.")

It just worked. I'm really scratching my head now, because the problem
seems to be totally random. It seems to happen *more often* when the
restore is many GBs in size, but apparently not *always*.
Yes, I noticed that it's divided into 100 MB chunks when I inspected the
contents of the temp restore directory. When I wasn't able to restore, I
was thinking that I would have to figure out how to manually
mount/extract my data from each of those 100 MB private.img files. But
in the large VMs, there are so many! It seems like it would take a long
time...
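For what it's worth, reassembling those chunks should reduce to concatenation in chunk order before mounting the result. A toy sketch, assuming zero-padded chunk names (the naming is an assumption; on a real restore the parts would live under /var/tmp/restore_xxxxx):

```shell
# Toy demonstration: three tiny "chunks" stand in for the 100 MB parts.
# Zero-padded suffixes mean the shell glob already yields chunk order.
mkdir -p demo
printf 'AAA' > demo/private.img.000
printf 'BBB' > demo/private.img.001
printf 'CCC' > demo/private.img.002
cat demo/private.img.* > demo/private.img
# On a real image the final step would be a read-only loop mount, e.g.:
#   sudo mount -o loop,ro private.img /mnt/restore
cat demo/private.img        # → AAABBBCCC
```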

Axon

Dec 24, 2013, 6:07:12 AM
to Marek Marczykowski-Górecki, qubes...@googlegroups.com
I've noticed some other backup-related oddities:

* The GUI backup tool seems to think that there exists a StandaloneVM
called "dom0". Of course, when it tries to back it up, there is a
problem, because no such StandaloneVM exists. Meanwhile, the real dom0
(which is included as a separate entry, "Dom0" (note the
capitalization), in the ASCII table backup list on the screen *after*
VMs have been selected) is always included in the backup, whether the
user wants to back up dom0 or not. You will notice that if you choose to
include "dom0" in the list of AppVMs to be backed up, you will then have
both "dom0" and "Dom0" in the resulting list, with the error that "dom0"
is still on and running. (At least, this is what happened to me.)

* I tried creating a "backupvm", and I increased the "private storage
max. size" of this AppVM to around 30 GB in order to accommodate my 20
GB backup. But when I try to back up to this AppVM using the tool, I
receive an error because the AppVM runs out of space after the default
~2 GB gets used up. For some reason, the AppVM is not growing to use the
space it is authorized to use, so the backup just fails. I remember
people asking how to grow the size manually, so I'll search for those
emails now...


Axon

Dec 24, 2013, 6:37:49 AM
to Marek Marczykowski-Górecki, qubes...@googlegroups.com
Axon:
Ah, here's the thread I was thinking of:
https://groups.google.com/d/topic/qubes-users/mEOUiuWs5wU/discussion

Marek, your suggestion worked. Thank you. I ran "resize2fs /dev/xvdb" in
the VM, and it increased to the correct size. In the other thread, you
said, "It should be called automatically, but perhaps for some reason it
isn't." Indeed, it appears that it is not being called automatically
when the size is increased via the QVMM options interface (not sure if
CLI is any different, though).


Axon

Dec 24, 2013, 7:42:06 AM
to qubes...@googlegroups.com
A minor issue:

The date format of the output file is incorrect. Currently, the format is:

qubes-backup-YYYY-DD-DD-hhmmss

Note that the day appears twice, and the month is omitted.

In addition, according to ISO 8601, the correct way to represent both
date and time (i.e., a single point in time) is by concatenating a
complete date expression, the letter "T" as a delimiter, and a valid
time expression, as follows:

qubes-backup-YYYY-MM-DDThhmmss (extended date format, basic time format)

or

qubes-backup-YYYYMMDDThhmmss (basic date format, basic time format)

For example:

qubes-backup-2013-12-25T123456 (extended date format, basic time format)

or

qubes-backup-20131225T123456 (basic date format, basic time format)

(The extended time format (hh:mm:ss) uses colons, which of course should
not be used in computer file names.)
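Either format is a one-liner with date(1); the strftime patterns below are just the mapping of the suggestions above:

```shell
# Extended date format, basic time format (e.g. qubes-backup-2013-12-25T123456):
date +"qubes-backup-%Y-%m-%dT%H%M%S"
# Basic date format, basic time format (e.g. qubes-backup-20131225T123456):
date +"qubes-backup-%Y%m%dT%H%M%S"
```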


Alex Dubois

Dec 25, 2013, 7:11:56 PM
to qubes...@googlegroups.com
When I had the problem, I had used the command line tool qvm-grow-private.
I have had a lot of problems related to inconsistent calls to rc.local. I am investigating and hope to shed some light in a future post.

I have updated the wiki to include Marek's tip.

Alex

Axon

Jan 3, 2014, 8:06:05 AM
to qubes...@googlegroups.com
There's an option to compress when using qvm-backup, but there doesn't
appear to be such an option in the GUI. Is the backup compressed by
default when created via the GUI? Either way, perhaps the option should
be available there, as well.


Marek Marczykowski-Górecki

Jan 3, 2014, 8:33:09 AM
to Axon, qubes...@googlegroups.com
On 03.01.2014 14:06, Axon wrote:
> There's an option to compress when using qvm-backup, but there doesn't
> appear to be such an option in the GUI. Is the backup compressed by
> default when created via the GUI?

No.

> Either way, perhaps the option should
> be available there, as well.

Yes, indeed.

Axon

Jan 3, 2014, 11:38:52 AM
to Marek Marczykowski-Górecki, qubes...@googlegroups.com
Marek Marczykowski-Górecki:
> On 03.01.2014 14:06, Axon wrote:
>> There's an option to compress when using qvm-backup, but there doesn't
>> appear to be such an option in the GUI. Is the backup compressed by
>> default when created via the GUI?
>
> No.
>
>> Either way, perhaps the option should
>> be available there, as well.
>
> Yes, indeed.
>

I just tried creating a backup with qvm-backup while passing the -z
flag, but the backup is still full size. It doesn't appear to have been
compressed at all. Is it possible that compression isn't working with
qvm-backup?


Axon

Jan 3, 2014, 12:58:23 PM
to Marek Marczykowski-Górecki, qubes...@googlegroups.com
Axon:
Assuming that everything is correct in backup.py and qvm-backup.py, here
are two ideas/possibilities (I apologize in advance if these are totally
off-base):

1. Openssl is applying compression *after* encryption.[1][2] Of course,
encrypted data cannot be compressed. (In fact, trying to compress
encrypted data often just increases its size.) The links are to a patch
for this problem which was sent in April 2012. I don't know how to check
whether the change ever made it into the latest version of openssl
currently used in dom0 (1.0.1e-fips 11 Feb 2013).

or

2. Openssl in dom0 was not compiled with zlib, which is required for the
compression option to be available.[3] I don't know how to check whether
this is the case.
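Point 1 is easy to sanity-check locally without touching the backup tool at all: gzip achieves essentially nothing on high-entropy (encrypted-looking) data, while repetitive plaintext collapses (the file names here are arbitrary):

```shell
# 1 MB of random bytes stands in for ciphertext; 1 MB of repetitive
# text stands in for a typical uncompressed backup stream.
head -c 1000000 /dev/urandom > random.bin
yes 'qubes backup compression test' | head -c 1000000 > text.bin
gzip -c random.bin > random.bin.gz
gzip -c text.bin > text.bin.gz
# random.bin.gz ends up slightly LARGER than 1 MB; text.bin.gz is tiny.
stat -c '%n %s' random.bin.gz text.bin.gz
```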



[1]https://rt.openssl.org/Ticket/Display.html?id=2787&user=guest&pass=guest
[2]http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=659251
[3]man enc


Axon

Jan 3, 2014, 1:11:28 PM
to Marek Marczykowski-Górecki, qubes...@googlegroups.com
Axon:
> Axon:
>> Marek Marczykowski-Górecki:
>>> On 03.01.2014 14:06, Axon wrote:
>>>> There's an option to compress when using qvm-backup, but there doesn't
>>>> appear to be such an option in the GUI. Is the backup compressed by
>>>> default when created via the GUI?
>>>
>>> No.
>>>
>>>> Either way, perhaps the option should
>>>> be available there, as well.
>>>
>>> Yes, indeed.
>>>
>>
>> I just tried creating a backup with qvm-backup while passing the -z
>> flag, but the backup is still full size. It doesn't appear to have been
>> compressed at all. Is it possible that compression isn't working with
>> qvm-backup?
>>
>
> Assuming that everything is correct in backup.py and qvm-backup.py, here
> are two ideas/possibilities (I apologize in advance if these are totally
> off-base):
>
> 1. Openssl is applying compression *after* encryption.[1][2] Of course,
> encrypted data cannot be compressed. (In fact, trying to compress
> encrypted data often just increases its size.) The links are to a patch
> for this problem which was sent in April 2012. I don't know how to check
> whether the change ever made it into the latest version of openssl
> currently used in dom0 (1.0.1e-fips 11 Feb 2013).
>

Well, actually, if the ticket metadata at link [1] is accurate, then I
suppose I *do* know how to check (by simply reading the page). If so,
then it seems that this ticket and patch were simply ignored and
forgotten, which is rather depressing. I suppose this means we'll have
to do our own compression before passing the data to openssl.

Axon

Jan 7, 2014, 5:38:27 AM
to Marek Marczykowski-Górecki, qubes...@googlegroups.com
Axon:
It appears that this is, indeed, the culprit. Here's an interesting
discussion on the Talk page of the Enc wiki entry
(http://wiki.openssl.org/index.php/Talk:Enc):

> Zlib compression
>
> "Use this flag to enable zlib-compression. After a file is encrypted (and maybe base64 encoded) it will be compressed via zlib. Vice versa while decrypting, zlib will be applied first."
>
> I think this happens the other way around. Doesn't compression occur before encryption, or after decryption? It doesn't make sense to do it the other way around - after encryption the data will appear indistinguishable from random data, and therefore should not have any appreciable ability to be compressed.
>
> --Matt 12:17, 5 July 2013 (UTC)
>
> Yes, i had the same question. But i tried it and it works actually like this: encryption -> base64 -> zlib
>
> encrypt + unzip:
>
> $ openssl enc -des -in message.plain -a -out message.enc -nosalt -z
>
> $ printf "\x1f\x8b\x08\x00\x00\x00\x00\x00" |cat - message.enc | gzip -dc
>
> gives the same as:
>
> $ openssl enc -des -in message.plain -a -out message.enc -nosalt
>
> --Frukto 14:05, 5 July 2013 (UTC)
>
> Interesting. I checked the source code and you appear to be right. The manual page however says this about -z: "Compress or decompress clear text using zlib before encryption or after decryption. This option exists only if OpenSSL with compiled with zlib or zlib-dynamic option. "
>
> --Matt 11:41, 6 July 2013 (UTC)
>
> Maybe, I should file a bug? I mean as you pointed out, the point of encryption is, in some sense, to maximize entropy. It does not make sense to compress afterwards. I don't know what was intended, but as you cite (actually i didn't check) the man page, it is a bug? But obviously nobody is using it, it would have been noticed immediately.
>
> --Frukto 12:07, 6 July 2013 (UTC)
>
> Might be worth raising it on the mailing list to see what the reaction is. I suspect it won't be fixed as it might break scripts etc for people that do use it. A possible feature enhancement could be to add a new option to do it properly. If it won't get fixed then we can edit the wiki version of the manual here, so at least the documentation is right.
>
> --Matt 13:09, 6 July 2013 (UTC)




Marek Marczykowski-Górecki

Jan 14, 2014, 10:31:26 PM
to Axon, qubes...@googlegroups.com
On 24.12.2013 12:37, Axon wrote:
> Axon:
>> * I tried creating a "backupvm", and I increased the "private storage
>> max. size" of this AppVM to around 30 GB in order to accommodate my 20
>> GB backup. But when I try to back up to this AppVM using the tool, I
>> receive an error because the AppVM runs out of space after the default
>> ~2 GB gets used up. For some reason, the AppVM is not growing to use the
>> space it is authorized to use, so the backup just fails. I remember
>> people asking how to grow the size manually, so I'll search for those
>> emails now...
>>
>
> Ah, here's the thread I was thinking of:
> https://groups.google.com/d/topic/qubes-users/mEOUiuWs5wU/discussion
>
> Marek, your suggestion worked. Thank you. I ran "resize2fs /dev/xvdb" in
> the VM, and it increased to the correct size. In the other thread, you
> said, "It should be called automatically, but perhaps for some reason it
> isn't." Indeed, it appears that it is not being called automatically
> when the size is increased via the QVMM options interface (not sure if
> CLI is any different, though).

Was it on an R2B2 or B3 template? I've just checked, and the B3 template already has a
resize2fs call in its startup scripts. Did it happen again later?

Axon

Jan 15, 2014, 8:43:56 PM
to Marek Marczykowski-Górecki, qubes...@googlegroups.com
Marek Marczykowski-Górecki:
> On 24.12.2013 12:37, Axon wrote:
>> Axon:
>>> * I tried creating a "backupvm", and I increased the "private storage
>>> max. size" of this AppVM to around 30 GB in order to accommodate my 20
>>> GB backup. But when I try to back up to this AppVM using the tool, I
>>> receive an error because the AppVM runs out of space after the default
>>> ~2 GB gets used up. For some reason, the AppVM is not growing to use the
>>> space it is authorized to use, so the backup just fails. I remember
>>> people asking how to grow the size manually, so I'll search for those
>>> emails now...
>>>
>>
>> Ah, here's the thread I was thinking of:
>> https://groups.google.com/d/topic/qubes-users/mEOUiuWs5wU/discussion
>>
>> Marek, your suggestion worked. Thank you. I ran "resize2fs /dev/xvdb" in
>> the VM, and it increased to the correct size. In the other thread, you
>> said, "It should be called automatically, but perhaps for some reason it
>> isn't." Indeed, it appears that it is not being called automatically
>> when the size is increased via the QVMM options interface (not sure if
>> CLI is any different, though).
>
> Was it on R2B2 or B3 template? I've just checked and B3 template already have
> resize2fs call in startup scripts. Did it happened again later?
>

Given the date I sent that message, it would have been an R2B3 template.
It only happened once to me, but I only tried the above operation once.


Alex Dubois

Jan 16, 2014, 4:01:43 AM
to Axon, Marek Marczykowski-Górecki, qubes...@googlegroups.com


Alex
I would think that compression should not be done in dom0. The attack surface is much smaller with encryption only (mixing the data stream with a pseudo-random stream), since the data stream has few opportunities to change the control flow. A DispVM should do the compression, with a new one for each VM being backed up, so that a compromised VM, by being compressed, cannot leak into and spread through other VMs' backups.

Manuel Amador (Rudd-O)

Jan 16, 2014, 3:42:32 PM
to Alex Dubois, Axon, Marek Marczykowski-Górecki, qubes...@googlegroups.com
Needs to be done in dom0 because encryption AND THEN compression simply doesn't compress anything.
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Axon

Jan 16, 2014, 8:17:39 PM
to Manuel Amador (Rudd-O), Alex Dubois, Marek Marczykowski-Górecki, qubes...@googlegroups.com
Manuel Amador (Rudd-O):
> Needs to be done in dom0 because encryption AND THEN compression simply doesn't compress anything.
>

No, I think he means that the compression would be done before the
encryption but that each AppVM would be compressed in a separate DispVM.
After an AppVM is compressed in a DispVM, the compressed tarball would
be piped back to dom0, where it would then be encrypted. IIUC, the idea
is that the compression operation is sufficiently complex that we should
be worried about dom0 handling it, but we don't have to worry about dom0
handling the compressed tarball, since all it does is encrypt it (which
is a comparatively less risky operation).

On restore, I suppose the same thing could be done in reverse (i.e.,
decryption in dom0, decompression in a DispVM) so that dom0 never has to
handle the de-/compression.

Sounds like a good idea to my non-expert ears, but I'm interested to
hear what Joanna and Marek have to say about it.

PS - Please don't top-post!

Marek Marczykowski-Górecki

Jan 16, 2014, 8:55:22 PM
to Axon, Manuel Amador (Rudd-O), Alex Dubois, qubes...@googlegroups.com
On 17.01.2014 02:17, Axon wrote:
> Manuel Amador (Rudd-O):
>> Needs to be done in dom0 because encryption AND THEN compression simply doesn't compress anything.
>>
>
> No, I think he means that the compression would be done before the
> encryption but that each AppVM would be compressed in a separate DispVM.
> After an AppVM is compressed in a DispVM, the compressed tarball would
> be piped back to dom0, where it would then be encrypted. IIUC, the idea
> is that the compression operation is sufficiently complex that we should
> be worried about dom0 handling it, but we don't have to worry about dom0
> handling the compressed tarball, since all it does is encrypt it (which
> is a comparatively less risky operation).

I don't think compression is that much more complex than encryption
(especially when using software as long-used and well-tested as gzip), so IMO
it isn't worth using a DispVM for this.

But I was thinking about another idea: the VM extracting its own data itself
and then handing it to dom0 for encryption and storage (wherever that is).
Something like running "tar cz /rw" in the VM. This idea has one major
advantage: you can quite safely back up running VM(s).
Potential drawbacks:
- you need to start every VM for the backup (not necessarily all at once),
- the restore process is more complicated (create a "standard" VM, start it,
pass it the data),
- not sure how to do it for a StandaloneVM/TemplateVM - perhaps stay with the
current solution (requiring shutting down the VM) - otherwise restore would be
very hard (keep in mind we don't want to parse any VM data in dom0, only pass
it along as a data stream).
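A rough local mock-up of that flow (assumption: in a real setup the tar would run inside the VM and be streamed out with something like `qvm-run --pass-io <vm> 'tar cz /rw'`; here a local directory plays the role of /rw so the pipeline can be exercised anywhere, and the passphrase is a placeholder):

```shell
# A directory standing in for the VM's /rw.
mkdir -p rw-demo
echo 'user data' > rw-demo/file.txt
# "VM side": stream a compressed archive; "dom0 side": encrypt the
# opaque stream without parsing it.
tar cz rw-demo | openssl enc -aes-256-cbc -salt -k testpass -out vm-backup.tgz.enc
# Restore path: dom0 decrypts, the target VM unpacks. Listing here as a check:
openssl enc -d -aes-256-cbc -k testpass -in vm-backup.tgz.enc | tar tz
```

The point of the design is visible in the pipeline: dom0 only ever sees tar's output as an opaque byte stream to encrypt, never parses VM filesystem contents.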

Anyway, we don't have this idea on our roadmap, but if anyone is willing to
contribute in this area - feel free to do so.

> On restore, I suppose the same thing could be done in reverse (i.e.,
> decryption in dom0, decompression in a DispVM) so that dom0 never has to
> handle the de-/compression.
>
> Sounds like a good idea to my non-expert ears, but I'm interested to
> hear what Joanna and Marek have to say about it.
>
> PS - Please don't top-post!



Alex Dubois

Jan 17, 2014, 1:12:57 AM
to Axon, Manuel Amador (Rudd-O), Marek Marczykowski-Górecki, qubes...@googlegroups.com


Alex

> On 17 Jan 2014, at 01:17, Axon <ax...@openmailbox.org> wrote:
>
> Manuel Amador (Rudd-O):
>> Needs to be done in dom0 because encryption AND THEN compression simply doesn't compress anything.
>
> No, I think he means that the compression would be done before the
> encryption but that each AppVM would be compressed in a separate DispVM.
> After an AppVM is compressed in a DispVM, the compressed tarball would
> be piped back to dom0, where it would then be encrypted. IIUC, the idea
> is that the compression operation is sufficiently complex that we should
> be worried about dom0 handling it, but we don't have to worry about dom0
> handling the compressed tarball, since all it does is encrypt it (which
> is a comparatively less risky operation).
>
> On restore, I suppose the same thing could be done in reverse (i.e.,
> decryption in dom0, decompression in a DispVM) so that dom0 never has to
> handle the de-/compression.

That was the idea

Alex Dubois

Jan 17, 2014, 1:36:25 AM
to Marek Marczykowski-Górecki, Axon, Manuel Amador (Rudd-O), qubes...@googlegroups.com


Alex

> On 17 Jan 2014, at 01:55, Marek Marczykowski-Górecki <marm...@invisiblethingslab.com> wrote:
>
>> On 17.01.2014 02:17, Axon wrote:
>> Manuel Amador (Rudd-O):
>>> Needs to be done in dom0 because encryption AND THEN compression simply doesn't compress anything.
>>
>> No, I think he means that the compression would be done before the
>> encryption but that each AppVM would be compressed in a separate DispVM.
>> After an AppVM is compressed in a DispVM, the compressed tarball would
>> be piped back to dom0, where it would then be encrypted. IIUC, the idea
>> is that the compression operation is sufficiently complex that we should
>> be worried about dom0 handling it, but we don't have to worry about dom0
>> handling the compressed tarball, since all it does is encrypt it (which
>> is a comparatively less risky operation).
>
> I don't think compression is that much more complex than encryption
> (especially when using such long used and tested software as gzip). So IMO
> doesn't worth using DispVM for this.
Please leave an option to not compress. I prefer to pay for the storage.
>
> But I was thinking about another idea - the VM extracting its own data itself
> and then giving them to dom0 for encryption and storage (wherever it is).
> Something like running "tar cz /rw" in the VM. This idea have one major
> advantage - you can quite safely backup a running VM(s).
> Potential drawbacks:
> - you need to start every VM for the backup (not necessarily at once),
> - restore process more complicated (create "standard" VM, start it, pass it
> the data),
> - not sure how to do it for StandaloneVM/TemplateVM - perhaps stay with
> current solution (requiring shutdown the VM) - otherwise restore would be very
> hard (keep in mind we don't want to parse any VM data in dom0, only pass it as
> some data stream).
I had the same idea when posting, but thought it would not be possible.

But I am worried that, as we add features, time pressure makes us take shortcuts.

I am going to think about how to abstract qrexec to give the caller an option like --onDispVM.

The idea being that as you code your qrexec client and server, it is completely transparent (you write it as if between two domains, but under the hood it would create the DispVM). You would, however, need to specify which code is executed at which layer.

Another area I may explore would be to have all qrexec scripts/packages stored in dom0. When executed, the calls in the client VM, instead of being read from the local filesystem, would trigger a qrexec to pull the code from dom0 and execute it. With such an architecture it would be easier to just chain/pipe client or server scripts, the benefit being that if you do not specify the VM to execute on, it would instantiate a DispVM to do the processing of stdout and push to the next one in the chain.
The dom0 storage would have to have an associated policy to allow a storage read request...

Marek Marczykowski-Górecki

Jan 17, 2014, 7:02:19 AM
to Alex Dubois, Axon, Manuel Amador (Rudd-O), qubes...@googlegroups.com
On 17.01.2014 07:36, Alex Dubois wrote:
>
>
> Alex
>
>> On 17 Jan 2014, at 01:55, Marek Marczykowski-Górecki <marm...@invisiblethingslab.com> wrote:
>>
>>> On 17.01.2014 02:17, Axon wrote:
>>> Manuel Amador (Rudd-O):
>>>> Needs to be done in dom0 because encryption AND THEN compression simply doesn't compress anything.
>>>
>>> No, I think he means that the compression would be done before the
>>> encryption but that each AppVM would be compressed in a separate DispVM.
>>> After an AppVM is compressed in a DispVM, the compressed tarball would
>>> be piped back to dom0, where it would then be encrypted. IIUC, the idea
>>> is that the compression operation is sufficiently complex that we should
>>> be worried about dom0 handling it, but we don't have to worry about dom0
>>> handling the compressed tarball, since all it does is encrypt it (which
>>> is a comparatively less risky operation).
>>
>> I don't think compression is that much more complex than encryption
>> (especially when using such long used and tested software as gzip). So IMO
>> doesn't worth using DispVM for this.
> Please leave option to not compress. I prefer to pay for the storage

Actually compression is disabled by default (at least for now).