Moving Qubes+VMs to Larger SSD - How to Handle Storage Pools on Other Disks?


Heinrich Ulbricht

Aug 31, 2019, 4:49:58 PM
to qubes-users
I need to move Qubes OS and all VMs to a larger SSD since I'm running low on disk space.

According to the documentation this should be easy:
  1. Backup everything
  2. Install Qubes OS to new SSD
  3. Restore everything
  4. Done.
But my current system has two additional (slower) disks attached for storage of larger App VMs (thin storage pools). How do I handle those? I'm worried about two scenarios that aren't explicitly mentioned in the documentation:

Scenario A - this is for getting the small SSD replaced by a larger one (top priority)
  • migration of Qubes OS from SSD OLD (the small one) to SSD NEW (larger one)
  • within the same computer
  • leaving the other two disks attached, making no changes to them, leaving the App VMs in place
Will this "just work"?

Scenario B - a future scenario (lower priority)
  • restore everything to a new computer with new empty disks
  • while putting the App VMs back to their respective separate disks in the new computer
I simply haven't tested this scenario yet, and the restore UI might provide options for it (?). But I did not find documentation about this scenario. Any links/hints are appreciated.



How much do I need to worry? What do I need to prepare in addition to the steps outlined in the documentation?

(I'm on Qubes OS v4 latest)

awokd

Aug 31, 2019, 5:02:38 PM
to qubes...@googlegroups.com
'Heinrich Ulbricht' via qubes-users:
> I need to move Qubes OS and all VMs to a larger SSD since I'm running low
> on disk space.

Believe the following would work:

1. Backup everything
2. Disconnect ALL drives, connect new SSD
3. Install Qubes and restore the SSD-only qubes
4. Connect hard drives
5. Edit /etc/fstab (and crypttab if needed) & reboot to mount them
6. Remind Qubes about them with: qvm-pool --add <pool_name> lvm_thin -o volume_group=<vg_name>,thin_pool=<thin_pool_name>,revisions_to_keep=2
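
For example, steps 5 and 6 could look roughly like this - a sketch only; the pool, volume group and UUID names are placeholders for whatever your HDDs actually use:

  # /etc/crypttab entry, only needed if the HDDs are LUKS-encrypted (UUID is a placeholder):
  #   luks-hdd1  UUID=<uuid-of-hdd1-partition>  none

  # after the reboot, re-register the existing thin pool so Qubes sees it again:
  qvm-pool --add poolhd1 lvm_thin -o volume_group=qubes_hdd1,thin_pool=poolhd1,revisions_to_keep=2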

awokd

Aug 31, 2019, 7:06:15 PM
to qubes...@googlegroups.com
'awokd' via qubes-users:
On second thought, your HDD qube definitions won't come over this way,
only the data. You could either recreate the qube definitions manually
and shuffle LVM partitions around, or temporarily change your default
pool in qubes-prefs to one of the HDD pools before restoring VMs to it.

Andrew David Wong

Aug 31, 2019, 9:19:49 PM
to Heinrich Ulbricht, qubes-users


On 31/08/2019 3.49 PM, 'Heinrich Ulbricht' via qubes-users wrote:
> I need to move Qubes OS and all VMs to a larger SSD since I'm running low
> on disk space.
>
> According to the documentation
> <https://www.qubes-os.org/doc/backup-restore/> this should be easy:
>
> 1. Backup everything
> 2. Install Qubes OS to new SSD
> 3. Restore everything
> 4. Done.
>

Personally, I would just stick with this. In other words, I would treat
the new Qubes installation as completely new and use qvm-backup-restore
as the only mechanism for migrating my old data to the new installation.
This is the only way I would be confident that I weren't screwing
anything up.

Using the sorts of shortcuts described below might save time, but I
don't know enough about them to be confident that they wouldn't result
in mysterious problems that only manifest later, requiring
troubleshooting that wastes more time than was initially saved.

If I were feeling experimental, I might try out some of the unknown
shortcuts first (since I have everything safely backed up), then
reevaluate based on the results of my experiments.

> But my current system has two additional (slower) disks attached for
> storage of larger App VMs (thin storage pools). How do I handle those? I'm
> worried about two scenarios that aren't explicitly mentioned in the
> documentation:
>
> *Scenario A - this is for getting the small SSD replaced by a larger one
> (top priority)*
>
> - migration of Qubes OS from SSD OLD (the small one) to SSD NEW (larger
> one)
> - within the same computer
> - leaving the other two disks attached, making no changes to them,
> leaving the App VMs in place
>
> Will this "just work"?
>
> *Scenario B - a future scenario (lower priority)*
>
> - restore everything to a new computer with new empty disks
> - while putting the App VMs back to their respective separate disks in
> the new computer
>
> This scenario I simply did not test yet and the restore UI might provide
> options for this (?). But I did not find documentation about this scenario.
> Any links/hints are appreciated.
>
>
>
> How much do I need to worry? What do I need to prepare *in addition* to the
> steps outlined in the documentation
> <https://www.qubes-os.org/doc/backup-restore/>?
>
> (I'm on Qubes OS v4 latest)
>

--
Andrew David Wong (Axon)
Community Manager, Qubes OS
https://www.qubes-os.org


Heinrich Ulbricht

Sep 1, 2019, 4:46:47 AM
to qubes-users

Personally, I would just stick with this. In other words, I would treat
the new Qubes installation as completely new and use qvm-backup-restore
as the only mechanism for migrating my old data to the new installation.
This is the only way I would be confident that I weren't screwing
anything up.


Thank you very much for helping me out on this, awokd and Andrew. Currently I'm leaning toward taking the safe path. If I understand correctly that means:
  1. Backup everything that's on the SSD and the external storage pool HDDs - this will take a lot of time and space but that's the price I have to pay for the safety I get
  2. Connect the new SSD, wipe the external drives
  3. Install Qubes OS on the new SSD
  4. Create external storage pools on the additional HDDs
  5. Make the SSD the default pool; restore VMs for SSD
  6. Make external disk 1 the default pool; restore VMs for this pool
  7. Make external disk 2 the default pool; restore VMs for this pool
  8. Switch default pool back to SSD
  9. Done
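
In commands, I picture steps 4-8 roughly like this (untested sketch; the pool, volume group and VM names are only placeholders, and 'lvm' is what I think the stock SSD pool is called - to be checked with qvm-pool):

  # step 4 - register thin pools on the two HDDs:
  qvm-pool --add poolhd1 lvm_thin -o volume_group=qubes_hdd1,thin_pool=poolhd1,revisions_to_keep=2
  qvm-pool --add poolhd2 lvm_thin -o volume_group=qubes_hdd2,thin_pool=poolhd2,revisions_to_keep=2

  # steps 5-8 - switch the default pool, restore the matching group of VMs, repeat:
  qubes-prefs default_pool lvm
  qvm-backup-restore /path/to/backup <ssd-vms...>
  qubes-prefs default_pool poolhd1
  qvm-backup-restore /path/to/backup <hdd1-vms...>
  qubes-prefs default_pool poolhd2
  qvm-backup-restore /path/to/backup <hdd2-vms...>
  qubes-prefs default_pool lvm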
How does this sound?

donoban

Sep 1, 2019, 6:00:20 AM
to qubes...@googlegroups.com
On 9/1/19 10:46 AM, 'Heinrich Ulbricht' via qubes-users wrote:
> Thank you very much for helping me out on this, awokd and Andrew.
> Currently I'm leaning toward taking the safe path. If I understand
> correctly that means:
>
> 1. Backup everything that's on the SSD /and/ the external storage pool
> HDDs - this will take a lot of time and space but that's the price I
> have to pay for the safety I get
> 2. Connect the new SSD, wipe the external drives
> 3. Install Qubes OS on the new SSD
> 4. Create external storage pools on the additional HDDs
> 5. Make the SSD the default pool; restore VMs for SSD
> 6. Make external disk 1 the default pool; restore VMs for this pool
> 7. Make external disk 2 the default pool; restore VMs for this pool
> 8. Switch default pool back to SSD
> 9. Done
>
> How does this sound?
>

Hi,

I recently did a hard disk upgrade and reinstall, so I followed these same
steps.

Generally it should work fine, but in my experience there is a little
issue[1] that can cause additional delay in the process. In steps 6/7,
if your destination hard disk is slower than your main hard disk
(where dom0 is installed), your backup will be fully extracted on dom0,
so you can run out of space if you don't take this into account.

If your dom0 is smaller than the total amount to extract, you should
restore your domains in reasonably sized groups.

Another way is to change the temporary directory for the restore process,
but it cannot be changed via command-line arguments. You need to modify
'restore.py', mount /var/tmp on another device, or use a symbolic link.

[1] https://github.com/QubesOS/qubes-issues/issues/3230
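
For example, mounting a bigger volume over /var/tmp for the duration of the restore could look roughly like this (just an illustration; the device path is a placeholder):

  # in dom0, before starting the restore:
  sudo mount /dev/mapper/<some-big-volume> /var/tmp
  sudo chmod 1777 /var/tmp      # restore the usual tmp-dir permissions on the new filesystem
  # ... run the restore ...
  sudo umount /var/tmp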

donoban

Sep 1, 2019, 6:16:45 AM
to qubes...@googlegroups.com
Ouch, marked as spam. Trying with a PGP signature...

brenda...@gmail.com

Sep 1, 2019, 7:30:10 AM
to qubes-users
I would advise against wiping any disks until you are sure the full set of restores is complete and tested.

I’ve learned the hard way to never put myself
into a situation where I cannot revert to my original configuration.

Brendan

awokd

Sep 1, 2019, 7:39:07 AM
to qubes...@googlegroups.com
'Heinrich Ulbricht' via qubes-users:

> 1. Backup everything that's on the SSD *and* the external storage pool
> HDDs - this will take a lot of time and space but that's the price I have
> to pay for the safety I get
> 2. Connect the new SSD, wipe the external drives
> 3. Install Qubes OS on the new SSD
> 4. Create external storage pools on the additional HDDs
> 5. Make the SSD the default pool; restore VMs for SSD
> 6. Make external disk 1 the default pool; restore VMs for this pool
> 7. Make external disk 2 the default pool; restore VMs for this pool
> 8. Switch default pool back to SSD
> 9. Done
>
> How does this sound?
>
This looks good, with the caveats others noted in this thread.

Chris Laprise

Sep 1, 2019, 12:52:26 PM
to awokd, qubes...@googlegroups.com
For posterity, a couple of true 'shortcut' methods are possible
(although using backup+restore is always safer).

One method involves dd'ing the entire old drive contents onto the new
drive. At the end you'll have to expand the partition holding
qubes_dom0, as well as the volume group itself and pool00 in order to
have access to the additional space on the new drive; but that should be
easy.
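
A rough sketch of that route, if anyone wants it (run from a rescue/live environment with both disks attached and nothing mounted; /dev/sdX, /dev/sdY, the partition number and the LUKS mapping name are all placeholders for your own layout):

  # 1. raw-copy the old SSD onto the new one:
  dd if=/dev/sdX of=/dev/sdY bs=16M status=progress
  # 2. grow the partition that holds the LUKS/LVM container:
  parted /dev/sdY resizepart 3 100%
  # 3. after unlocking LUKS, grow the container, the PV and the thin pool:
  cryptsetup resize <luks-mapping-name>
  pvresize /dev/mapper/<luks-mapping-name>
  lvextend -l +100%FREE qubes_dom0/pool00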

A second method uses LVM management commands to mirror the volume group
onto the new drive, but there would be extra steps you'd need to take
for a Qubes boot partition:

https://casesup.com/knowledgebase/how-to-migrate-lvm-to-new-storage/


--

Chris Laprise, tas...@posteo.net
https://github.com/tasket
https://twitter.com/ttaskett
PGP: BEE2 20C5 356E 764A 73EB 4AB3 1DC4 D106 F07F 1886

Heinrich Ulbricht

Sep 1, 2019, 3:42:30 PM
to qubes-users
Thank you very much for pointing this out. Sounds like a lot of saved troubleshooting time for me as I might run into this.

I think I'm set for the safe path (although the shortcut proposed by Chris Laprise sounds tempting). I'll report back how it went.

Andrew David Wong

Sep 2, 2019, 12:11:58 AM
to Heinrich Ulbricht, qubes-users

On 01/09/2019 3.46 AM, 'Heinrich Ulbricht' via qubes-users wrote:
>> Personally, I would just stick with this. In other words, I would treat
>> the new Qubes installation as completely new and use qvm-backup-restore
>> as the only mechanism for migrating my old data to the new installation.
>> This is the only way I would be confident that I weren't screwing
>> anything up.
>>
>>
> Thank you very much for helping me out on this, awokd and Andrew. Currently
> I'm leaning toward taking the safe path. If I understand correctly that
> means:
>
> 1. Backup everything that's on the SSD *and* the external storage pool
> HDDs - this will take a lot of time and space but that's the price I have
> to pay for the safety I get
> 2. Connect the new SSD, wipe the external drives
> 3. Install Qubes OS on the new SSD
> 4. Create external storage pools on the additional HDDs
> 5. Make the SSD the default pool; restore VMs for SSD
> 6. Make external disk 1 the default pool; restore VMs for this pool
> 7. Make external disk 2 the default pool; restore VMs for this pool
> 8. Switch default pool back to SSD
> 9. Done
>
> How does this sound?
>

I haven't personally used external storage pools, so I can't comment
there. (Thankfully, though, others have already weighed in on that part.)

Related to Brendan's point, in step 1, it's important to *verify* the
backups you've just created.

GUI: Qube Manager -> Restore qubes from backup -> [x] Verify backup
integrity, do not restore the data

CLI: qvm-backup-restore --verify-only [...]

As the saying goes: Backups always succeed. It's restores that fail.

(If you have the extra disks, it would technically be even safer to
use new HDDs and refrain from wiping the existing ones until after
you've verified that everything is correct in the new system, but this
might be overly cautious. I personally don't feel the need to do this
anymore, because I know the data is already there in a verified Qubes
backup, and I've tested my ability to manually recover it
independently of Qubes as a last resort.)

Aside from these caveats, your plan sounds like what I would do.

--
Andrew David Wong (Axon)
Community Manager, Qubes OS
https://www.qubes-os.org


Heinrich Ulbricht

Sep 7, 2019, 4:28:58 PM
to qubes-users
Here is an update on how my migration from SSD_small to SSD_big is going so far.

Just as a reminder, this is the challenge I face:
* dom0 SSD has 100 GB capacity, ~10% of this is free (that's why I want to migrate to a new SSD)
* external storage pool 1 has 1 TB storage, AppVM 1 with < 500 GB private storage in use
* external storage pool 2 has 1 TB storage, AppVM 2 with > 500 GB private storage in use
* I want to migrate everything via backup+restore to new disks/pools

Here is what worked
* backing up App VMs from all 3 pools using built-in backup mechanisms (UI) - cool

Here is what did not work
* verifying the huge (400-700 GB) backups did not work since this filled up my dom0 pretty fast and then failed -> this is the reason why I resorted to what Andrew wrote: having the original still in place while restoring to different disks, not overwriting anything, just in case restoring fails
* restoring the huge (400-700 GB) backups did not work since this filled up my dom0 pretty fast and then failed -> this is exactly like donoban wrote; I managed to work around this for AppVM 1, NOT for AppVM 2 (yet)

To restore AppVM 1 (< 500 GB) I modified restore.py to restore to another location than /var/tmp. The easiest for me was to create a new (temporary) AppVM in my new 1 TB external storage pool 1, to increase its private storage to 500 GB, to mount its private volume in dom0 and to use this path as the temporary location in restore.py. So I was using my 1 TB disk both as restore target and as temporary location for backup extraction. I was lucky - the pool filled up to 99.8% and the restore succeeded. So currently it seems you need double the amount of storage your to-be-restored AppVM consumes to restore the AppVM.
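
Roughly what that looked like, from memory (the VM name, volume group and size here are just examples):

  # in dom0 ('restoretmp' is the throwaway AppVM I created in pool 1):
  qvm-volume extend restoretmp:private 500GiB
  # start and shut the VM down once so the filesystem inside gets grown to the new size
  qvm-start restoretmp && qvm-shutdown --wait restoretmp
  sudo mkdir -p /mnt/restoretmp
  sudo mount /dev/qubes_hdd1/vm-restoretmp-private /mnt/restoretmp   # VG name is a placeholder
  # ...then point the tmpdir in restore.py at /mnt/restoretmp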

Now there is one challenge left. I have to restore AppVM 2, which is about 700 GB. To my current knowledge I would now need twice this amount of space to restore - which currently I don't have. This is why I'd like to somehow slow down the extraction. donoban mentioned this is possible. I had a look at restore.py but honestly have no idea where to start. I also currently don't know how the different extraction processes interact and how the backup is structured.

Can anybody suggest a modification (or hack, however dirty - it's meant to be temporary) to restore.py so it won't need 700 GB of additional temporary storage when I try to restore my 700 GB AppVM?

Thanks for all your input so far. Knowing that dom0 could fill up certainly saved me some hours of questioning life.

donoban

Sep 7, 2019, 4:53:09 PM
to qubes...@googlegroups.com
'Heinrich Ulbricht' via qubes-users wrote:
> Here is an update on how my migration from SSD_small to SSD_big is going
> so far.
>
> Just as a reminder, this is the challenge I face:
> * dom0 SSD has 100 GB capacity, ~10% of this is free (that's why I want
> to migrate to a new SSD)
> * external storage pool 1 has 1 TB storage, AppVM *1* with < 500 GB
> private storage in use
> * external storage pool 2 has 1 TB storage, AppVM *2* with > 500 GB
> private storage in use
> * I want to migrate everything via backup+restore to new disks/pools
>
> _Here is what worked_
> * backing up App VMs from all 3 pools using built-in backup mechanisms
> (UI) - cool
>
> _Here is what did not work_
> * *verifying* the huge (400-700 GB) backups *did not work* since this
> filled up my dom0 pretty fast and then failed -> this is the reason why
> I resorted to what Andrew wrote: having the original still in place
> while restoring to different disks, not overwriting anything, just in
> case restoring fails
> * *restoring* the huge (400-700 GB) backups *did not work* since this
> filled up my dom0 pretty fast and then failed -> this is exactly like
> donoban wrote; I managed to work around this for AppVM *1*, NOT for
> AppVM *2* (yet)
>
> To restore AppVM *1* (< 500 GB) I modified /restore.py
> <https://github.com/QubesOS/qubes-core-admin-client/blob/9158412a24da300e4c54346ccb54fce1e748500f/qubesadmin/backup/restore.py#L858>/
> to restore to another location than //var/tmp/. The easiest for me was
> to create a new (temporary) AppVM in my new 1 TB external storage pool
> *1*, to increase its private storage to 500 GB, to mount its private
> volume to dom0 and to use this path as temporary location in
> /restore.py/. So I was using my 1 TB disk both as restore target and
> temporary location for backup extraction. I was lucky - the pool filled
> up to 99.8% and the restore succeeded. So currently it seems you need
> double the amount of storage your to-be-restored AppVM consumes to
> restore the AppVM.
>
> Now there is one challenge left. I have to restore AppVM *2* which is
> about 700 GB. To my current knowledge I would now need to have twice
> this amount to restore - which currently I don't have. This is why I'd
> like to somehow slow down the extraction. donoban mentioned this is
> possible. I had a look at restore.py
> <https://github.com/QubesOS/qubes-core-admin-client/blob/master/qubesadmin/backup/restore.py>
> but honestly have no idea where to start. I also currently don't know
> how the different extraction processes interact and how the backup is
> structured.
>
> Can anybody suggest a modification (or hack, however dirty - it's meant
> to be temporary) to restore.py so it won't need 700 GB of additional
> temporary storage when I try to restore my 700 GB AppVM?
>
> Thanks for all your input so far. Knowing that dom0 could fill up
> certainly saved me some hours of questioning life.
>
Sorry to hear that. There are two options in 'tar' (--checkpoint and
--checkpoint-action) that can be used to execute a command during the
extraction process, so they could potentially be used to force a 'sleep'
and slow it down. Take a look at this issue comment[1], and try to hack
restore.py with it. I personally didn't try it since my AppVMs were
small enough to just restore them in a few groups.

Hopefully Marek can help with this; he will probably answer if you
comment on the GitHub issue.

[1]
https://github.com/QubesOS/qubes-issues/issues/3230#issuecomment-340253679

Heinrich Ulbricht

Sep 7, 2019, 4:54:34 PM
to qubes-users

Can anybody suggest a modification (or hack, however dirty - it's meant to be temporary) to restore.py so it won't need 700 GB of additional temporary storage when I try to restore my 700 GB AppVM?


The best bet I currently have is applying the "sleep trick" (see here) to line 598ff in restore.py.

So this:
elif inner_name in self.handlers:
    tar2_cmdline = ['tar',
                    '-%svvO' % ("t" if self.verify_only else "x"),
                    inner_name]
    redirect_stdout = subprocess.PIPE

Would become something like this:
elif inner_name in self.handlers:
    tar2_cmdline = ['tar',
                    '-%svvO' % ("t" if self.verify_only else "x"),
                    '--checkpoint=20000',
                    '--checkpoint-action=exec=\'sleep "$(stat -f --format="(((%b-%a)/%b)^5)*30" /var/tmp | bc -l)"\'',
                    inner_name]
    redirect_stdout = subprocess.PIPE


Too naive?

donoban

Sep 7, 2019, 5:01:07 PM
to qubes...@googlegroups.com
On 9/7/19 10:54 PM, 'Heinrich Ulbricht' via qubes-users wrote:
> The best bet I currently have it applying the "sleep trick" (see here
> <https://github.com/QubesOS/qubes-issues/issues/3230#issuecomment-340253679>)
> to line 598ff
> <https://github.com/QubesOS/qubes-core-admin-client/blob/master/qubesadmin/backup/restore.py#L598>
> in /restore.py/.
>
> So this:
> elif inner_name in self.handlers:
>     tar2_cmdline = ['tar',
>                     '-%svvO' % ("t" if self.verify_only else "x"),
>                     inner_name]
>     redirect_stdout = subprocess.PIPE
>
> Would become something like this:
> elif inner_name in self.handlers:
>     tar2_cmdline = ['tar',
>                     '-%svvO' % ("t" if self.verify_only else "x"),
>                     '--checkpoint=20000',
>                     '--checkpoint-action=exec=\'sleep "$(stat -f --format="(((%b-%a)/%b)^5)*30" /var/tmp | bc -l)"\'',
>                     inner_name]
>     redirect_stdout = subprocess.PIPE
>
>
> Too naive?
>

It could work, take in account that backup file should be exposed
directly to dom0 or it will use 'qfile-unpacker':
https://github.com/QubesOS/qubes-issues/issues/3230#issuecomment-340277836

Good luck :)

Heinrich Ulbricht

Sep 7, 2019, 5:10:14 PM
to qubes-users


It could work; take into account that the backup file should be exposed
directly to dom0 or it will use 'qfile-unpacker':
https://github.com/QubesOS/qubes-issues/issues/3230#issuecomment-340277836

Good luck :)

OK, currently I'm attaching a USB drive to a temporary AppVM and restoring from there via the UI. So I should rather mount something containing the backup file in dom0 and restore from there using the command line(?).

The overall experience so far leaves room for improvement ;) Thanks for the tip.

(I also commented on the mentioned GitHub issue).

donoban

Sep 7, 2019, 5:26:07 PM
to qubes...@googlegroups.com
On 9/7/19 11:10 PM, 'Heinrich Ulbricht' via qubes-users wrote:
> Ok currently I'm attaching a USB drive to a temporary AppVM to restore
> from there via UI. So I should rather mount something containing the
> backup file to dom0 to restore from there using the command line(?).
>
> The overall experience so far leaves room for improvement ;) Thanks for
> the tip.
>

Obviously it is a security risk, but you can attach the drive directly to dom0
using the command line: qvm-device block attach dom0 sys-usb:sdx . And then
mount it. Then you can use the backup restore UI and select dom0 as the qube.
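
For example (sdx, the partition and the mount point are placeholders; check with 'lsblk' which xvd* device the drive gets in dom0):

  # in dom0:
  qvm-device block attach dom0 sys-usb:sdx
  sudo mkdir -p /mnt/backupdrive
  sudo mount /dev/xvdi1 /mnt/backupdrive     # device name depends on your system
  # ... run the restore, pointing it at /mnt/backupdrive/<backup file> ...
  sudo umount /mnt/backupdrive
  qvm-device block detach dom0 sys-usb:sdx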

Chris Laprise

Sep 7, 2019, 5:57:22 PM
to Heinrich Ulbricht, qubes-users
It's a little tough for me to read all this, since my own
Qubes-compatible backup tool with no storage overhead is nearing
release. OTOH, if I had been in your shoes (not wanting to use someone
else's pre-release tool) and faced with backups that large, I probably
would have opted for a direct 'dd' copy to the new drive, or used 'dd' +
hash for the backups.

Since you're at the point of trying to modify restore.py, you may still
want to consider the direct 'dd' option (if so, then you would have two
"backups", the one from qvm-backup plus the original work drive).

Even so, using tar directly with a small script as donoban suggested is
workable and something I've done back in the R2.0-3.0 days. So I'll
include my old verification script for you here... it could be easily
adapted to append the segments to a whole tar file (or to save space,
pipe them to tar directly).


===
#!/bin/bash

# ** Quick and dirty script to verify integrity of Qubes backup files **
# ** Change the backup file name and 'passphrase' variable to match yours **
# ** Should be invoked from a 'tar' command (not directly) like so... **
#
# tar -i -xf put-your-backup-file-here \
#     --to-command='./verify-qbackup $TAR_FILENAME'

passphrase="put-your-backup-passphrase-here"
fname=$1

if [ "${fname##*.}" = "hmac" ] ; then
    # This file has .hmac extension; compare with prior file's just-calculated hmac
    read hmac
    echo -n "Received... " $hmac
    read myhmac <verifyqb.tmp
    if [ "$hmac" = "$myhmac" ] ; then
        echo " OK"
    else
        echo " *****MISMATCH*****"
        kill -SIGINT $PPID   # Stop tar/parent
    fi
else
    # This file is data; create an hmac from stdin for comparison with next file
    echo
    echo "FOUND " $fname
    echo -n "Calculating..."
    openssl dgst -hmac "$passphrase" >verifyqb.tmp
    read myhmac <verifyqb.tmp
    echo $myhmac
fi
===

Andrew David Wong

Sep 8, 2019, 8:16:50 PM
to Heinrich Ulbricht, Marek Marczykowski-Górecki, qubes-users

On 07/09/2019 3.28 PM, 'Heinrich Ulbricht' via qubes-users wrote:
> Here is an update on how my migration from SSD_small to SSD_big is going so
> far.
>
> Just as a reminder, this is the challenge I face:
> * dom0 SSD has 100 GB capacity, ~10% of this is free (that's why I want to
> migrate to a new SSD)
> * external storage pool 1 has 1 TB storage, AppVM *1* with < 500 GB private
> storage in use
> * external storage pool 2 has 1 TB storage, AppVM *2* with > 500 GB private
> storage in use
> * I want to migrate everything via backup+restore to new disks/pools
>
> *Here is what worked*
> * backing up App VMs from all 3 pools using built-in backup mechanisms (UI)
> - cool
>
> *Here is what did not work*
> * *verifying* the huge (400-700 GB) backups *did not work* since this
> filled up my dom0 pretty fast and then failed -> this is the reason why I
> resorted to what Andrew wrote: having the original still in place while
> restoring to different disks, not overwriting anything, just in case
> restoring fails
> * *restoring* the huge (400-700 GB) backups *did not work* since this
> filled up my dom0 pretty fast and then failed -> this is exactly like
> donoban wrote; I managed to work around this for AppVM *1*, NOT for AppVM
> *2* (yet)
>
> To restore AppVM *1* (< 500 GB) I modified *restore.py
> <https://github.com/QubesOS/qubes-core-admin-client/blob/9158412a24da300e4c54346ccb54fce1e748500f/qubesadmin/backup/restore.py#L858>*
> to restore to another location than */var/tmp*. The easiest for me was to
> create a new (temporary) AppVM in my new 1 TB external storage pool *1*, to
> increase its private storage to 500 GB, to mount its private volume to dom0
> and to use this path as temporary location in *restore.py*. So I was using
> my 1 TB disk both as restore target and temporary location for backup
> extraction. I was lucky - the pool filled up to 99.8% and the restore
> succeeded. So currently it seems you need double the amount of storage your
> to-be-restored AppVM consumes to restore the AppVM.
>
> Now there is one challenge left. I have to restore AppVM *2* which is about
> 700 GB. To my current knowledge I would now need to have twice this amount
> to restore - which currently I don't have. This is why I'd like to somehow
> slow down the extraction. donoban mentioned this is possible. I had a look
> at restore.py
> <https://github.com/QubesOS/qubes-core-admin-client/blob/master/qubesadmin/backup/restore.py>
> but honestly have no idea where to start. I also currently don't know how
> the different extraction processes interact and how the backup is
> structured.
>
> Can anybody suggest a modification (or hack, however dirty - it's meant to
> be temporary) to restore.py so it won't need 700 GB of additional temporary
> storage when I try to restore my 700 GB AppVM?
>
> Thanks for all your input so far. Knowing that dom0 could fill up certainly
> saved me some hours of questioning life.
>

Sorry to hear about the problems. I'm surprised about dom0 filling up. I
thought we had solved this problem a long time ago. I remember running
into the same problem years ago, and I thought we had subsequently
moved to restoring in smaller chunks so that only a small amount of
temporary storage in dom0 is required when restoring.

Is this not the case, Marek?

--
Andrew David Wong (Axon)
Community Manager, Qubes OS
https://www.qubes-os.org


Marek Marczykowski-Górecki

Sep 8, 2019, 9:10:54 PM
to Andrew David Wong, Heinrich Ulbricht, qubes-users
It's this issue:
https://github.com/QubesOS/qubes-issues/issues/4791

In fact, I do have part of the fix already implemented. Hopefully will
have the other part finished this week.

In the meantime, you can try some naive methods of slowing down the
extraction process, for example by attaching strace to it (`strace -p
$(pidof qfile-dom0-unpacker)`), or pausing it from time to time by
sending SIGSTOP signal (and then SIGCONT to unpause). You can do it in a
loop like this:

pid=$(pidof qfile-dom0-unpacker)
while kill -STOP $pid; do sleep 30; kill -CONT $pid; done

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

Heinrich Ulbricht

Sep 9, 2019, 2:20:42 AM
to qubes-users
Quick update on how it is going: I've got the restore operation in equilibrium now.

THIS is the place to insert the checkpoint operation for the un-tar (line 913ff in restore.py):

tar1_command = ['tar',
                '-ixv',
                '--occurrence=1',
                '--checkpoint=20000',
                '--checkpoint-action=exec=\'sleep "$(stat -f --format="(((%b-%a)/%b)^5)*30" /var/tmp | bc -l)"\'',
                '-C', self.tmpdir] + filelist

This seems to be a non-interruptible tar operation that is pushing thousands of chunk files into the temp location, with the other processes having no chance to process them in time. I'll follow up with details.

Heinrich Ulbricht

Sep 9, 2019, 7:31:29 AM
to qubes-users


THIS is the place to insert the checkpoint operation for the un-tar (line 913ff in restore.py):

tar1_command = ['tar',
                '-ixv',
                '--occurrence=1',
                '--checkpoint=20000',
                '--checkpoint-action=exec=\'sleep "$(stat -f --format="(((%b-%a)/%b))*120" /var/tmp | bc -l)"\'',
                '-C', self.tmpdir] + filelist

This seems to be a non-interruptible tar operation that is pushing thousands of chunk files into the temp location, with the other processes having no chance to process them in time. I'll follow up with details.


The above change alone to restore.py solved the problem for me. Note that I tweaked the parameters a bit to give it more time. With this change the initial sleep duration was about 2 seconds. The temp directory slowly filled up and the sleep duration increased to about 11 seconds and stayed there, keeping everything in balance. Looking at the task manager I saw the checkpoint was hit about every 5 seconds. That's an 11-second sleep every 5 seconds.
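
For reference, this is how I read the throttling formula (you can check what it evaluates to in dom0; the percentages below are just examples):

  # fraction of the /var/tmp filesystem in use, times 120 = seconds to sleep
  stat -f --format="(((%b-%a)/%b))*120" /var/tmp | bc -l
  # e.g. ~2% full -> sleep ~2 s; ~10% full -> sleep ~12 s

So the fuller the temp filesystem gets, the longer tar pauses at each checkpoint, which is what keeps the extraction and the unpacker in balance.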

To summarize this is what I did to restore:
* copy the 700 GB backup file to a new temporary AppVM in storage pool 1 (I had to free some space there...) [HDD]
* mount the private storage of this new AppVM to dom0
* modify restore.py as described above (added checkpoint, left /var/tmp unchanged as the temporary location, which is on an SSD)
* set the default storage pool to storage pool 2 [HDD]
* used the "Restore Backup" UI to select the backup file
* start the restore operation and wait until it successfully finished

There were other proposed solutions I did not try:
* dd the old HDD content to a new HDD, make the new Qubes installation recognize it
* LVM mirroring
* use Chris' pre-release script
* slow down qfile-dom0-unpacker (<- this solution came in as slowing down the tar process was already working)
* attach USB drive to dom0 and restore from there (<- I chose not to attach a USB drive)

I would've tried those next had the restore operation with the modified restore.py failed. In my view the out-of-the-box solution has the lowest security and maintenance risks.

Thank you to all who helped me get this done! And good to hear there is currently work being done to make this smoother in the future (by Chris & Marek - should you synchronize?).

Note: While fiddling around I discovered that the restore operation cannot be canceled: https://github.com/QubesOS/qubes-issues/issues/5304

donoban

Sep 9, 2019, 10:22:04 AM
to qubes...@googlegroups.com
On 9/9/19 1:31 PM, 'Heinrich Ulbricht' via qubes-users wrote:
> Above sole change to restore.py did solve the problem for me. Note that
> I tweaked the parameters a bit to give it more time. With this change
> the initial sleep duration was about 2 seconds. The temp directory
> slowly filled up and the sleep duration increased to about 11 seconds
> and stayed there, keeping everything in balance. Looking at the task
> manager I saw the checkpoint was hit about every 5 seconds. That's an 11
> second sleep every 5 seconds.
> ...
Great, nice to know.