bottlenecks during qvm-backup


lik...@gmx.de

Apr 23, 2019, 4:58:25 PM
to qubes...@googlegroups.com
Hi,

is there a way to find the bottlenecks during a qvm-backup? A backup of ~100 GB with compression takes several hours. During that time the four CPU cores (per xentop) are not used well, and the hard disk (per iotop) is also idling a lot. I'm creating the backup on a USB-attached, ext3-formatted hard drive.

How can I find out which component is responsible for the slow process?
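
One quick way to narrow this down (a generic sketch, not Qubes-specific tooling): watch per-process CPU in dom0 while the backup runs. qvm-backup pipes the data through a compressor (gzip by default), which is single-threaded, so if one process sits pinned near 100% of a single core while the disk idles, compression is the likely bottleneck.

```shell
# Snapshot the busiest dom0 processes while the backup is running.
# A single compressor process (e.g. gzip) near 100% CPU while the
# disk is mostly idle points at single-threaded compression.
ps aux --sort=-%cpu | head -n 10
```

Repeating this during the backup (or using `pidstat 1` from sysstat) shows whether CPU, rather than the USB disk, is the limiting stage.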

Best, Pete

Mike Keehan

Apr 23, 2019, 5:53:06 PM
to qubes...@googlegroups.com
Just try copying 100 GB to the USB-connected drive to see how long that
takes. USB drive speeds can be quite slow.
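
Mike's suggestion can be approximated with `dd` (the path below is a placeholder; point `TARGET` at the USB drive's mount point). `conv=fsync` forces the data to actually reach the disk before `dd` reports a rate:

```shell
# Measure sustained write throughput to the backup target.
# TARGET is a placeholder -- set it to your USB drive's mount point.
TARGET=${TARGET:-/tmp}
dd if=/dev/zero of="$TARGET/speedtest.bin" bs=1M count=256 conv=fsync
rm -f "$TARGET/speedtest.bin"
```

If `dd` reports, say, 30 MB/s, then 100 GB takes close to an hour on raw write speed alone, before any compression overhead.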

Mike.

Chris Laprise

Apr 23, 2019, 6:52:23 PM
to lik...@gmx.de, qubes...@googlegroups.com
Hi,

You may want to try my new incremental backup tool that works with Qubes:

https://github.com/tasket/sparsebak

Using LVM snapshot metadata, it finds the volume changes instantly,
without having to read and hash the whole volume first. It's able to
update large (100 GB+) volumes in under 10 seconds (i.e., the time is
proportional to the amount of data changed since the last backup), and
testers say it's pretty stable even in pre-beta.

The 'new5' branch also has a new de-duplication feature that we're
testing to save even more bandwidth and disk space.

The only thing it's lacking at this point is built-in encryption, so
you'll have to supply that yourself with an encrypted filesystem if you
desire it.
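
For example (a sketch of one option, not part of sparsebak): the destination could be a LUKS-encrypted filesystem, or, more crudely, a finished archive can be run through a symmetric cipher. The OpenSSL invocation below shows the shape of the latter; the file name and environment-variable passphrase are placeholders for illustration — in practice you would read the passphrase from a prompt or key file.

```shell
# Encrypt a finished backup archive with a symmetric cipher (AES-256,
# key derived via PBKDF2). FILE and BACKUP_PASS are placeholders.
FILE=${FILE:-/tmp/backup.tar}
printf 'demo data' > "$FILE"          # stand-in for a real archive
BACKUP_PASS=changeme \
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass env:BACKUP_PASS -in "$FILE" -out "$FILE.enc"
# Decrypt later with:
#   openssl enc -d -aes-256-cbc -pbkdf2 -pass env:BACKUP_PASS \
#       -in "$FILE.enc" -out "$FILE"
```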

--

Chris Laprise, tas...@posteo.net
https://github.com/tasket
https://twitter.com/ttaskett
PGP: BEE2 20C5 356E 764A 73EB 4AB3 1DC4 D106 F07F 1886