ANN: Fast incremental backups project


Chris Laprise

Dec 9, 2018, 10:38:07 AM
to qubes...@googlegroups.com
'Sparsebak'

Fast Time Machine-like disk image backups for Qubes OS and Linux LVM.


Introduction
------------

Sparsebak is aimed at incremental backup management for logical
volumes. Its focus on condensing logical volume metadata is combined
with a sparsebundle-style storage format (similar to OS X Time Machine)
that enables flexible and quick archive operations.

The upshot of this combination is that Sparsebak has nearly
instantaneous access to volume changes (no repeated scanning of volume
data!), remains space-efficient regardless of file sizes within a
volume, avoids snapshot space consumption pitfalls, and can make an
indefinite* number of frequent backups to an archive with relatively
little CPU / disk IO overhead.

Sparsebak is optimized to process data as streams whenever possible,
which avoids writing temporary caches of data to disk. It also doesn't
require the source admin system to ever mount processed volumes, meaning
untrusted data and guest filesystems can be handled safely on Qubes OS
and other security-oriented hypervisors.

(* See `prune` command for freeing space in the destination archive.)


Status - Alpha version
----------------------

Can do full or incremental backups of Linux thin-provisioned LVM to
local dom0 or VM filesystems or via ssh, as well as simple volume
retrieval for restore and verify.

Fast pruning of past backup sessions is now possible.

Data verification currently relies on SHA-256 manifests being safely
stored on the source/admin system to maintain integrity. Encryption and
key-based verification are not yet implemented.

The codebase at this stage is lean, at just over 1,050 lines of Python.


A partial 'Todo' list:

* Basic functions: Volume selection options, Delete

* Inclusion of system-specific metadata in backups (Qubes VM configs, etc.)

* Additional functions: Untar, receive-archive, verify-archive

* Show configured vs present volumes in list output

* Encryption integration

* File name and sequence obfuscation

* Pool-based Deduplication

* Additional sanity checks

* Btrfs, XFS, ZFS support


Comments.....

So far my experience with Sparsebak since I started using it this past
July has been very positive, with fast and accurate backups - and at
times it's been fun to code, too. I hope the Qubes community finds it
interesting and useful as well.

One more item under 'Todo' I should also mention is the name: I don't
really like the current working title and would appreciate your
suggestions and PRs on this and many other issues!

--

Chris Laprise, tas...@posteo.net
https://github.com/tasket
https://twitter.com/ttaskett
PGP: BEE2 20C5 356E 764A 73EB 4AB3 1DC4 D106 F07F 1886

Chris Laprise

Dec 9, 2018, 10:39:21 AM
to qubes...@googlegroups.com
On 12/09/2018 10:38 AM, Chris Laprise wrote:
> 'Sparsebak'
>
> Fast Time Machine-like disk image backups for Qubes OS and Linux LVM.
>

And of course, a link to the project :) ....

https://github.com/tasket/sparsebak

Ivan Mitev

Dec 10, 2018, 5:23:28 AM
to qubes...@googlegroups.com
That's really great work.

On 12/9/18 5:38 PM, Chris Laprise wrote:

> Status - Alpha version
> ------
>
> Can do full or incremental backups of Linux thin-provisioned LVM to
> local dom0 or VM filesystems or via ssh, as well as simple volume
> retrieval for restore and verify.

I'm not familiar with the way OS X Time Machine works - apologies if
that's a dumb question: would the tool be able to back up to a LUKS
volume mounted in a VM, with the underlying volume being a file on an
NFS share on a NAS? (That's a bit convoluted, but that's the way I
back up my stuff now.)

> [...]
>
> One more item under 'Todo' I should also mention is the name: I don't
> really like the current working title and would appreciate your
> suggestions and PRs on this and many other issues!

Given how backups take forever to complete and how they trash my
laptop's CPU, I do hope that the tool will get included "upstream" if no
major issues are found. The name could be qubes-incremental-backup; or
the project, "Qubes incremental backups". Anyway, I'm not good at
finding good names :)

Chris Laprise

Dec 10, 2018, 12:57:28 PM
to Ivan Mitev, qubes...@googlegroups.com
On 12/10/2018 05:23 AM, Ivan Mitev wrote:
> That's really great work.
>
> On 12/9/18 5:38 PM, Chris Laprise wrote:
>
>> Status - Alpha version
>> ------
>>
>> Can do full or incremental backups of Linux thin-provisioned LVM to
>> local dom0 or VM filesystems or via ssh, as well as simple volume
>> retrieval for restore and verify.
>
> I'm not familiar with the way OS X Time Machine works - apologies if
> that's a dumb question: would the tool be able to backup to a LUKS
> volume mounted in a VM, with the underlying volume being a file on an
> NFS share on a NAS ? (that's a bit convoluted but that's the way I
> backup my stuff now).

It sounds like your setup would work readily with the current alpha
version: A remote disk image file accessed via NFS by a Qubes VM, and
the VM does a 'losetup' on the disk image file and then 'cryptsetup
luksOpen' on the loop device... If so, that's very similar to my own
setup where I use sshfs instead of NFS. For this kind of setup, specify
'qubes://vmname' as the destvm in sparsebak.ini.
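
For illustration, a bare-bones sparsebak.ini for that scenario could
look something like the following. This is a simplified example - the
field names shown are indicative only, so treat them as assumptions and
check the README for the authoritative format:

    [var]
    vgname = qubes_dom0
    poolname = pool00
    destvm = qubes://backup
    destmountpoint = /mnt/backupdrive
    destdir = baks

    [volumes]
    vm-work-private = enable
    vm-untrusted-private = enable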

Any VM-mounted disk filesystem that's reasonably modern (like Ext4 or
even NTFS) should work regardless of whether the back end is remote or
local.

OTOH, someone wishing to directly use NFS, sshfs or samba sharing
(without an intermediate disk image file) would have some experimenting
to do at this stage as I haven't yet explored it. But it might work
without any special steps. The main thing is that currently sparsebak
will expect to see a mountpoint.

Finally, there is an option to store each backup session as a single
.tar file, which doesn't require hard links, but this naturally limits
the user's maintenance options and makes restoring a volume much slower.


>
> > [...]
> >
> > One more item under 'Todo' I should also mention is the name: I don't
> > really like the current working title and would appreciate your
> > suggestions and PRs on this and many other issues!
>
> Given how backups take forever to complete and how they trash my
> laptop's CPU I do hope that the tool will get included "upstream" if no
> major issues are found. The name could be qubes-incremental-backup; or
> the project, "Qubes incremental backups". Anyway, I'm not good at
> finding good names :)
>

You should find sparsebak to be extremely CPU & IO efficient for
incremental backups. For full/initial backups the advantage isn't as
dramatic, but should still be noticeable.

I've already pointed Marek to the underlying fast delta-harvesting tech
that sparsebak uses in LVM and he seemed interested in that aspect.
Qubes is certainly also welcome to borrow or adopt code from my project
or have qvm-backup act as a wrapper for sparsebak.

For now, I'd like to make it convenient for Qubes users to configure
volumes in sparsebak and run backup sessions. The next step is including
VM settings along with the volumes in the backups, and integrating
encryption so the user doesn't feel compelled to set up a destination
LUKS volume.

Andrew David Wong

Dec 10, 2018, 8:28:03 PM
to Chris Laprise, qubes...@googlegroups.com
Looks great, Chris! Would you be willing to submit this as a package
contribution?

https://www.qubes-os.org/doc/package-contributions/

--
Andrew David Wong (Axon)
Community Manager, Qubes OS
https://www.qubes-os.org


Outback Dingo

Dec 10, 2018, 9:43:28 PM
to tas...@posteo.net, iv...@maa.bz, qubes...@googlegroups.com
restic also seems to be a very nice package with a lot of features



Ivan Mitev

Dec 11, 2018, 1:40:27 AM
to qubes...@googlegroups.com


On 12/10/18 7:57 PM, Chris Laprise wrote:
> On 12/10/2018 05:23 AM, Ivan Mitev wrote:
>> That's really great work.
>>
>> On 12/9/18 5:38 PM, Chris Laprise wrote:
>>
>>> Status - Alpha version
>>> ------
>>>
>>> Can do full or incremental backups of Linux thin-provisioned LVM to
>>> local dom0 or VM filesystems or via ssh, as well as simple volume
>>> retrieval for restore and verify.
>>
>> I'm not familiar with the way OS X Time Machine works - apologies if
>> that's a dumb question: would the tool be able to backup to a LUKS
>> volume mounted in a VM, with the underlying volume being a file on an
>> NFS share on a NAS ? (that's a bit convoluted but that's the way I
>> backup my stuff now).
>
> It sounds like your setup would work readily with the current alpha
> version: A remote disk image file accessed via NFS by a Qubes VM, and
> the VM does a 'losetup' on the disk image file and then 'cryptsetup
> luksOpen' on the loop device... If so, that's very similar to my own
> setup where I use sshfs instead of NFS. For this kind of setup, specify
> 'qubes://vmname' as the destvm in sparsebak.ini.
>
> Any VM-mounted disk filesystem that's reasonably modern (like Ext4 or
> even NTFS) should work regardless of whether the back end is remote or
> local.

That was the bit I was missing, thanks for the explanation.

>
> OTOH, someone wishing to directly use NFS, sshfs or samba sharing
> (without an intermediate disk image file) would have some experimenting
> to do at this stage as I haven't yet explored it. But it might work
> without any special steps. The main thing is that currently sparsebak
> will expect to see a mountpoint.

FWIW I had quite a few problems with sparse files and initial file
allocation on NFS mounts some time ago - which is actually the reason I
asked you about the NFS "backend".


> Finally, there is an option to store each backup session as a single
> .tar file which doesn't require hard links but this naturally limits the
> user's maintenance options and makes restoring a volume much slower.
>
>
>>
>>  > [...]
>>  >
>>  > One more item under 'Todo' I should also mention is the name: I don't
>>  > really like the current working title and would appreciate your
>>  > suggestions and PRs on this and many other issues!
>>
>> Given how backups take forever to complete and how they trash my
>> laptop's CPU I do hope that the tool will get included "upstream" if
>> no major issues are found. The name could be qubes-incremental-backup;
>> or the project, "Qubes incremental backups". Anyway, I'm not good at
>> finding good names :)
>>
>
> You should find sparsebak to be extremely CPU & IO efficient for
> incremental backups. For full/initial backups the advantage isn't as
> dramatic, but should still be noticeable.
>
> I've already pointed Marek to the underlying fast delta-harvesting tech
> that sparsebak uses in LVM and he seemed interested in that aspect.
> Qubes is certainly also welcome to borrow or adopt code from my project
> or have qvm-backup act as a wrapper for sparsebak.
>
> For now, I'd like to make it convenient for Qubes users to configure
> volumes in sparsebak and run backup sessions. The next step is including
> VM settings along with the volumes in the backups, and integrating
> encryption so the user doesn't feel compelled to setup a destination
> LUKS volume.

Thanks for taking the time to reply, and for your work!


Chris Laprise

Dec 11, 2018, 7:01:41 AM
to Outback Dingo, iv...@maa.bz, qubes...@googlegroups.com
I'd probably like to examine how they implemented encrypted storage. As
for using it on Qubes, I vaguely remember reviewing it among a couple
dozen other programs and found it was file-oriented and employs a CPU-
and IO-intensive process to scan all the contents for each backup. Some
examples of use even show things like "dd if=/dev/sda1 | restic". Restic
also caches file data (not just metadata) locally... IMO laptop users
'have no time for this', which is why I wanted to avoid this kind of
burden on local source disks.

Sparsebak's approach assumes that most computers now use some kind of
snapshot-capable COW storage by default (in this case thin LVM, which is
the Qubes default) and so there is no need to read through all the
source data to find where the changed bits are. It has this information
available immediately and can complete a backup of multi-gigabyte disk
volumes in a fraction of the time that other backup tools require.
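
To make that concrete: the delta information can be harvested from the
thin pool's metadata alone. Here is a rough Python sketch using the
thin_delta utility from thin-provisioning-tools - an illustration of
the technique, not sparsebak's actual code, and the exact flag names
vary between thin-provisioning-tools versions, so treat the invocation
as an assumption:

    import subprocess
    import xml.etree.ElementTree as ET

    def changed_ranges(tmeta_dev, thin_id_old, thin_id_new, block_sectors):
        # Compare the *metadata* of two thin snapshots; no volume data is
        # read. A live pool may also require a metadata snapshot (-m).
        xml_out = subprocess.check_output(
            ["thin_delta", "--snap1", str(thin_id_old),
             "--snap2", str(thin_id_new), tmeta_dev])
        for elem in ET.fromstring(xml_out).iter():
            if elem.tag in ("different", "left_only", "right_only"):
                begin = int(elem.get("begin"))    # units: pool data blocks
                length = int(elem.get("length"))
                yield (begin * block_sectors * 512,    # byte offset
                       length * block_sectors * 512)   # byte length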

--

Chris Laprise

Dec 11, 2018, 7:21:39 AM
to Andrew David Wong, qubes...@googlegroups.com
Yes, although I'll probably wait until it gets to beta stage before
submitting it.

meskio

Dec 11, 2018, 8:40:48 AM
to Chris Laprise, iv...@maa.bz, qubes...@googlegroups.com
Quoting Chris Laprise (2018-12-11 13:01:33)
> On 12/10/2018 09:42 PM, Outback Dingo wrote:
> > restic also seems to be a very nice package with a lot of features
>
> I'd probably like to examine how they implemented encrypted storage. As
> for using it on Qubes, I vaguely remember reviewing it among a couple
> dozen other programs and found it was file-oriented and employs a CPU
> and IO-intensive process to scan all the contents for each backup. Some
> examples of use even show things like "dd if=/dev/sda1 | restic". Restic
> also caches file data (not just metadata) locally... IMO laptop users
> 'have no time for this' which is why I wanted to avoid this kind of
> burden on local source disks.

I guess it's already on your list of backup tools to watch. But it might
be easier to copy code/ideas from borg backup, as it's also in Python. I
have good experiences with the deduplication and compression features of
borg.

Thanks for the project; backups in Qubes are one of the major pains for
me right now. I'm happy to see some efforts in the direction of
improving them.

--
meskio | http://meskio.net/
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
My contact info: http://meskio.net/crypto.txt
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Nos vamos a Croatan.

Steve Coleman

Dec 11, 2018, 10:20:57 AM
to qubes...@googlegroups.com
On 12/9/18 10:38 AM, Chris Laprise wrote:
> 'Sparsebak'
>
> Fast Time Machine-like disk image backups for Qubes OS and Linux LVM.

I tested the new version this morning and worked around a few issues,
such as first trying to run from /usr/local/sparsebak, then going back
to square one and using /sparsebak as originally intended.

$ sudo /sparsebak/sparsebak.py send --save-to sys-usb:/run/media/user/USBdrive/sparsebak

I was attempting to "send" all my VMs' private sections to a drive
mounted on sys-usb, and I seem to have run into a problem with one
particular AppVM. When it got to my vm-vault-private it printed:

"Delta map not finalized for vm-vault-private ...recovering."

and a few seconds later this exception:

"
Processing Volume : vm-vault-private
Updating block change map: Traceback (most recent call last):
File "/sparsebak/sparsebak.py", line 1008, in <module>
monitor_send(options.volumes, monitor_only=False)
File "/sparsebak/sparsebak.py", line 543, in monitor_send
= update_delta_digest()
File "/sparsebak/sparsebak.py", line 307, in update_delta_digest
assert bkchunksize >= (bs*dblocksize) and bkchunksize % (bs*dblocksize)
== 0
AssertionError
"

All the other AppVMs appeared to be processed/snapshotted OK, but since
there was this one fatal exception, the backup obviously did not
complete. I'm not sure what the complaint is with the block size,
because the AppVM runs fine with no errors that I can see.

Steve.

Ivan Mitev

Dec 11, 2018, 11:19:47 AM
to qubes...@googlegroups.com


On 12/11/18 5:20 PM, Steve Coleman wrote:
> On 12/9/18 10:38 AM, Chris Laprise wrote:
>> 'Sparsebak'
>>
>> Fast Time Machine-like disk image backups for Qubes OS and Linux LVM.
>
> I tested the new version this morning and had worked around a few issues
> such as first trying to run from /usr/local/sparsebak, and then went
> back to square one and used /sparsebak as it was originally intended.
>
> $ sudo /sparsebak/sparsebak.py send --save-to
> sys-usb:/run/media/user/USBdrive/sparsebak
>
> I was attempting to "send" all my VM's private sections to a drive
> mounted on sys-usb, and I seem to have run into a problem with one
> particular AppVM. When it got to my vm-vault-private it printed:
>
> "Delta map not finalized for vm-vault-private ...recovering."
>
> and a few seconds later this exception:
>
> "
> Processing Volume : vm-vault-private
> Updating block change map: Traceback (most recent call last):
>    File "/sparsebak/sparsebak.py", line 1008, in <module>
>       monitor_send(options.volumes, monitor_only=False)
>    File "/sparsebak/sparsebak.py", line 543, in monitor_send
>       = update_delta_digest()
>    File "/sparsebak/sparsebak.py", line 307, in update_delta_digest
>     assert bkchunksize >= (bs*dblocksize) and bkchunksize %
> (bs*dblocksize) == 0
> AssertionError


FWIW I get the same error when "sending" a test VM for the 2nd time (the
first backup seemed to be OK) or when using "monitor".

I'd suggest creating issues on Chris' GitHub project page...



Steve Coleman

Dec 11, 2018, 12:11:52 PM
to qubes...@googlegroups.com
Done: Assert bkchunksize #3

Chris Laprise

Dec 11, 2018, 1:46:17 PM
to Ivan Mitev, qubes...@googlegroups.com, Steve Coleman
I tried creating a new archive with some test VMs and it's working OK.
Do you know specific steps that reproduce the error?

I also posted an update in the 'new' branch that will print out the
relevant values if/when the error occurs:

https://github.com/tasket/sparsebak/tree/new

Thanks!

Ivan Mitev

Dec 11, 2018, 2:20:15 PM
to Chris Laprise, qubes...@googlegroups.com, Steve Coleman
issue #5 ...

(might be a dupe of Steve's issue though).

donoban

Dec 11, 2018, 5:26:46 PM
to qubes...@googlegroups.com

On 12/9/18 4:38 PM, Chris Laprise wrote:
> 'Sparsebak'
>
> Fast Time Machine-like disk image backups for Qubes OS and Linux
> LVM.
>
>
> Introduction ------------
>
> Sparsebak is aimed at incremental backup management for logical
> volumes. The focus on condensing logical volume metadata is
> combined with a sparsebundle-style (similar to OS X Time Machine)
> storage format that enables flexible and quick archive operations.
>

This sounds great, I will try it :)

Thanks Chris.

Jean-Philippe Ouellet

Dec 11, 2018, 6:01:59 PM
to Chris Laprise, qubes-devel
I don't have time to audit it right now, but conceptually this seems
fantastic! Thank you!! :D

Cheers,
Jean-Philippe

Andrew David Wong

Dec 11, 2018, 8:55:47 PM
to Chris Laprise, qubes...@googlegroups.com

Sounds good! Looking forward to seeing this in Qubes someday. :)

--
Andrew David Wong (Axon)
Community Manager, Qubes OS
https://www.qubes-os.org


Chris Laprise

Dec 12, 2018, 8:13:25 AM
to qubes...@googlegroups.com, Steve Coleman
A fix has been pushed to master (alpha2).

Steve Coleman

Dec 12, 2018, 9:11:17 AM
to qubes...@googlegroups.com
I ran the new version, and the first time it gave another error. The
second time, the same error; the third time, while trying to capture a
logfile, it ran but was incomplete.

Oddly enough, the only backup performed was my vault VM, which has two
folder entries on the destination drive. No other VMs appear to have
been backed up because of this exception.

Steve


[Attachment: sparsebak.log]

Chris Laprise

Dec 12, 2018, 2:41:41 PM
to Steve Coleman, qubes...@googlegroups.com
The exception should now be "dblocksize error" and line 308 should read:

> if dblocksize % lvm_block_factor != 0:

If this is the source of the error and dblocksize is still 4096, then
I'm a bit stymied; so I need to double-check this before proceeding.

Thanks!

Steve Coleman

Dec 12, 2018, 5:12:59 PM
to Chris Laprise, qubes...@googlegroups.com
On 12/12/18 2:41 PM, Chris Laprise wrote:
> On 12/12/2018 09:11 AM, Steve Coleman wrote:
>> On 12/12/18 8:13 AM, Chris Laprise wrote:
>>> A fix has been pushed to master (alpha2).
>>
>> I ran this new version and the first time it gave another error.
>> Second time the same error, third time trying to capture a logfile it
>> ran but was incomplete.
>>
>> Oddly enough the only backup performed was my vault VM, which has two
>> folder entries on the destination drive. No other VM's appear to have
>> been backed up because of this exception.
>
> The exception should now be "dblocksize error" and line 308 should read:
>
> > if dblocksize % lvm_block_factor != 0:

That is correct, on line 308.

I copied the print statements that followed line 308 and placed them in
front of that if, then reran the backup. Here is what I got:

bkchunksize = 131072
dblocksize = 4096
bs = 512

lvm_block_factor = 128

4096 mod 128 = 0

The value from the expression "dblocksize % lvm_block_factor" (4096 %
128) evaluates to 0, so the print statements following that if did not
get called.
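
For reference, if these same values also held during the earlier alpha
run, it would be the first half of the old line-307 assertion that
failed rather than the modulo:

    bkchunksize = 131072              # 128kB chunk size, as printed above
    dblocksize = 4096
    bs = 512

    bs * dblocksize                   # = 2097152
    bkchunksize >= (bs * dblocksize)  # False --> AssertionError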

Chris Laprise

Dec 13, 2018, 2:33:17 PM
to Steve Coleman, qubes...@googlegroups.com
I didn't see your response earlier... sorry about that!

The problem with this is that in alpha2 the 'raise dblocksize error' is
located after the print statements. So in this case I suspect that a
different error (not produced by a 'raise' statement) is occurring;
otherwise you wouldn't have had to move the print statements.

Chris Laprise

Dec 13, 2018, 3:51:06 PM
to Steve Coleman, qubes...@googlegroups.com
Posted an update to the new2 branch.

If it's OK with you, let's continue on the GitHub issue page.

Marek Marczykowski-Górecki

Dec 20, 2018, 9:40:39 PM
to Chris Laprise, qubes...@googlegroups.com
> https://github.com/tasket/sparsebak

Thanks for doing this!

I haven't really looked at the code, but I have a more generic comment:

The idea of small, frequent snapshots to collect modified-block bitmaps
is neat. But I'd really, really like to avoid inventing yet another
backup archive format. The current Qubes backup format has its own
limitations, and while I have some ideas[1] for how to plug incremental
backups in there, I don't think there is a future in that. On the other
hand, there are already solutions using a very similar approach for
handling incremental backup (basically, do not differentiate between
"full", "incremental" and "differential" backups, but split the data set
into chunks and send only those not already present in the backup
archive). And those already have established formats, including
encryption and integrity protection. Specifically, I'm looking into two
of them:
- duplicity
- BorgBackup

duplicity has a nice property in that it's easy to integrate with
"untrusted backup storage" - like an AppVM, or even sending to some
cloud service via an AppVM. Its model explicitly assumes this, and it
has an API for it. Rudd-O has even written such a plugin already: [2].

One issue with duplicity is its usage of gpg, which should be done
carefully, because gpg is rather keen on passing untrusted data through
a lot of code paths unless explicitly told what to do. Even in the PoC
linked above, Rudd-O suggested some non-default gpg options...

As for BorgBackup, AFAIR the encryption scheme is done better there
(this needs verification), but on the other hand it doesn't have such a
flexible API for plugging in an alternative store for the backup. It can
back up either to a local directory, or to a remote server speaking a
BorgBackup-specific protocol (in practice, running the borg tool over
ssh). But the threat model does assume the possibility of that server
being malicious, and I believe the client is written with that
assumption.

It would be a good idea to talk with the BorgBackup developers (of which
at least one does use Qubes OS ;) ) about possible integration here. I
think this could include these areas:
- more abstract and simplified handling of remote repositories (like
duplicity)
- integrating the above "Sparsebak" approach for detecting changed
blocks, instead of reading all the data
- then, finally, using it as a backend for qvm-backup (in short:
qvm-backup would feed a list of files to back up into borg)

[1] https://github.com/QubesOS/qubes-issues/issues/858#issuecomment-258388616
[2] https://github.com/QubesOS/qubes-issues/issues/858#issuecomment-227024434

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

Chris Laprise

Dec 21, 2018, 8:39:54 AM
to Marek Marczykowski-Górecki, qubes...@googlegroups.com
On 12/20/2018 09:40 PM, Marek Marczykowski-Górecki wrote:
> Thanks for doing this!
>
> I haven't really looked at the code, but I have a more generic comment:
>
> The idea of small, frequent snapshots to collect modified-block bitmaps
> is neat. But I'd really, really like to avoid inventing yet another
> backup archive format. The current Qubes backup format has its own
> limitations, and while I have some ideas[1] for how to plug incremental
> backups in there, I don't think there is a future in that. On the other
> hand, there are already solutions using a very similar approach for
> handling incremental backup (basically, do not differentiate between
> "full", "incremental" and "differential" backups, but split the data set
> into chunks and send only those not already present in the backup
> archive). And those already have established formats, including
> encryption and integrity protection. Specifically, I'm looking into two
> of them:
> - duplicity
> - BorgBackup
>

I think it's about time someone in open source created an analog to the
Time Machine sparsebundle format, just because it's so effective and
_simple_: fixed-size chunks of the volume stored as files with filenames
representing addresses, and manifests with sha256 hashes. There's
scarcely anything more to it than that, and it's simple enough to be
processed by shell commands like find, cp and zcat (see 'spbk-assemble'
for a functional example).
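
As a rough illustration of how approachable such a layout is, a few
lines of Python can reassemble and verify a volume image from a
directory of chunks. The naming and manifest details below are invented
for the example; sparsebak's real layout differs in specifics:

    import gzip, hashlib
    from pathlib import Path

    def assemble(session_dir, out_path):
        session = Path(session_dir)
        # Assumed manifest format: one "<sha256>  <chunkname>" line per
        # chunk, where the filename is the chunk's zero-padded hex byte
        # address within the volume.
        manifest = {}
        for line in (session / "manifest").read_text().splitlines():
            digest, name = line.split()
            manifest[name] = digest
        with open(out_path, "wb") as out:
            for name in sorted(manifest):            # name == address
                data = gzip.decompress((session / name).read_bytes())
                assert hashlib.sha256(data).hexdigest() == manifest[name]
                out.seek(int(name, 16))    # sparse write at chunk address
                out.write(data)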

This already works on millions of Mac systems where people expect it to
provide hourly backups without noticeably affecting system resources.
This class of format I don't mind creating; I think Apple chose well.

As for borg, I'm not sure a heavy emphasis on deduplication is
appropriate for many PC applications. It's a resource drain that leads
to complex archive formats on the back end. And my initial testing
suggests the dedup efficacy is oversold: Sparsebak can sometimes produce
smaller multi-generation archives even without dedup.

(We should also consider that among users of de-duplicating filesystems
like ZFS and Btrfs, the feature is rarely used in an always-on fashion
due to resources.)

But mainly I have doubts about adopting a program like borg as a back
end while intending to replace its... back end ...in service of the
original goal, which is to replace its... front end. This situation
speaks to the fact that borg was not designed for this type of use case:
it wants files as input, and the docs only briefly mention whole-volume
data sets when they suggest using image files. That's why using it here
is still largely academic.

> duplicity has a nice property in that it's easy to integrate with
> "untrusted backup storage" - like an AppVM, or even sending to some
> cloud service via an AppVM. Its model explicitly assumes this, and it
> has an API for it. Rudd-O has even written such a plugin already: [2].

Duplicity is far from having that low-maintenance Time Machine quality
that quickly prunes old backups instead of filling up the destination
volume and requiring user administration in the form of deleting entire
archives and establishing new ones. You might as well adapt qvm-backup
to accommodate increments; either way, you get tar files.

>
> One issue with duplicity is its usage of gpg, which should be done
> carefully, because gpg is rather keen on passing untrusted data through
> a lot of code paths unless explicitly told what to do. Even in the PoC
> linked above, Rudd-O suggested some non-default gpg options...
>
> As for BorgBackup, AFAIR the encryption scheme is done better there
> (this needs verification), but on the other hand it doesn't have such a
> flexible API for plugging in an alternative store for the backup. It
> can back up either to a local directory, or to a remote server speaking
> a BorgBackup-specific protocol (in practice, running the borg tool over
> ssh). But the threat model does assume the possibility of that server
> being malicious, and I believe the client is written with that
> assumption.
>
> It would be a good idea to talk with the BorgBackup developers (of
> which at least one does use Qubes OS ;) ) about possible integration
> here. I think this could include these areas:
> - more abstract and simplified handling of remote repositories (like
> duplicity)


Actually this is one of Sparsebak's strong points... very low
interactivity during remote operations.

Marek Marczykowski-Górecki

Dec 22, 2018, 8:49:02 PM
to Chris Laprise, qubes...@googlegroups.com

Can you point at the documentation of the encryption scheme used by
Time Machine backups?

Also note that we'd like to have at least some level of hiding metadata
- like VM names (leaked through file names).

> As for borg, I'm not sure a heavy emphasis on deduplication is appropriate
> for many PC applications. It's a resource drain that leads to complex archive
> formats on the back end. And my initial testing suggests the dedup efficacy
> is oversold: Sparsebak can sometimes produce smaller multi-generation
> archives even without dedup.

Not arguing with this. I think borg could be good enough in our case
with fixed-size chunks, using your way of detecting what has changed.
Deduplication here would be mostly about re-using old chunks (already in
the backup archive) for a new backup - so, the "incremental" part.
I just want to avoid re-inventing a compressed and encrypted archive
format (a mistake we've made before). Borg already has an established
format for that.
But as far as I understand, to get the most out of it, you need
hardlink-compatible storage, which for example excludes most cloud
services...

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

Holger Levsen

Dec 23, 2018, 7:45:42 AM
to qubes...@googlegroups.com
On Sun, Dec 23, 2018 at 02:48:55AM +0100, Marek Marczykowski-Górecki wrote:
> Also note that we'd like to have at least some level of hiding metadata
> - like VM names (leaked through file names).

I think it would be nice if this were an optional feature (on by
default), as I find it super inconvenient to find a certain backed-up
qube... (and I don't care that much about hiding this metadata most of
the time.)


--
cheers,
Holger

-------------------------------------------------------------------------------
holger@(debian|reproducible-builds|layer-acht).org
PGP fingerprint: B8BF 5413 7B09 D35C F026 FE9D 091A B856 069A AA1C

Achim Patzner

Dec 23, 2018, 9:39:09 AM
to qubes...@googlegroups.com
On 20181223 at 02:48 +0100 Marek Marczykowski-Górecki wrote:
> Can you point at the documentation of encryption scheme used by
> Time Machine backups?

You use an encrypted volume. It would probably be nicer if you could
use APFS with its multiply encrypted files, but we're not there yet:
https://support.apple.com/en-gb/guide/mac-help/disks-you-can-use-with-time-machine-mh15139/mac

> But as far as I understand, to get the most out of it, you need
> hardlink-compatible storage, which for example excludes most cloud
> services...

That's where Apple's container file formats come in handy. They had the
same problem and didn't find any other solution.


Achim

Chris Laprise

Dec 29, 2018, 2:30:12 PM
to Marek Marczykowski-Górecki, qubes...@googlegroups.com, Achim Patzner
This is the weakest part of my effort so far: scant planning for
encryption (and IANAC). It's the one struggle I see coming with this
project.

I recognize the problem as one involving block-based ciphers/modes and
the level of resistance they offer against any spy who can view
successive chunk updates. My understanding of the Time Machine method is
that it's similar if not identical to encryption of a normal disk volume
(or a 'normal' loop dev that happens to be in chunks). If so, I may try
to implement something close to it using Python AES, but will still seek
input from a cryptographer, which should be done in any case.
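
To sketch what that could look like - emphatically not a vetted design,
just one conventional shape for per-chunk authenticated encryption using
the Python 'cryptography' package - the chunk address could be bound in
as associated data so chunks can't be silently transposed:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_chunk(key, chunk_addr, plaintext):
        nonce = os.urandom(12)        # must never repeat for a given key
        ciphertext = AESGCM(key).encrypt(nonce, plaintext,
                                         chunk_addr.encode())
        return nonce + ciphertext     # store nonce alongside the data

    key = AESGCM.generate_key(bit_length=256)
    blob = encrypt_chunk(key, "x000000100000", b"...128kB of chunk data...")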

FWIW, I've considered trying a modified version of the scrypt method
used in qvm-backup. Sparsebak can be used in a tarfile mode, for
instance, which makes this practical but has the side-effect of removing
pruning ability.

>
> Also note that we'd like to have at least some level of hiding metadata
> - like VM names (leaked through file names).

I have an idea for a relatively simple obfuscation layer that could even
re-order the transmission of chunks in addition to concealing filenames.
It would use an additional index with randomized names and the order
shuffled. Implementing this, I surmise, could improve robustness of the
encryption.
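
A bare-bones sketch of that index idea (hypothetical - nothing like
this exists in sparsebak yet):

    import os, random

    def obfuscation_index(chunk_names):
        # Give each chunk an opaque random name and shuffle the send
        # order; the mapping stays on the trusted source side so restores
        # can undo it. The destination then sees neither real names nor
        # the address ordering.
        mapping = {name: os.urandom(16).hex() for name in chunk_names}
        send_order = list(mapping)
        random.SystemRandom().shuffle(send_order)
        return mapping, send_order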

>
>> As for borg, I'm not sure a heavy emphasis on deduplication is appropriate
>> for many PC applications. It's a resource drain that leads to complex archive
>> formats on the back end. And my initial testing suggests the dedup efficacy
>> is oversold: Sparsebak can sometimes produce smaller multi-generation
>> archives even without dedup.
>
> Not arguing with this. I think borg could be good enough in our case
> with fixed-size chunks, using your way of detecting what has changed.
> Deduplication here would be mostly about re-using old chunks (already in
> the backup archive) for a new backup - so, the "incremental" part.
> I just want to avoid re-inventing a compressed and encrypted archive
> format (a mistake we've made before). Borg already has an established
> format for that.

Yes, keeping in mind the chunk size I'm using currently is 128kB with
fixed boundaries. I've experimented with simple retroactive dedup based
on sorting the manifest hashes, and that can save a little space with
almost no time/power cost. This could be done at send time to save
bandwidth, but that savings may not be worth it. OTOH, if we expect some
users to back up related cloned VMs (common with templates), the
potential savings becomes very significant even with this simple method.
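
The retroactive approach is simple enough to show in a few lines:
roughly speaking (a sketch, not the actual implementation), group the
manifest hashes, then hard-link duplicate chunks to a single copy:

    import os
    from collections import defaultdict

    def dedup_chunks(manifest_entries):
        # manifest_entries: iterable of (sha256_hex, chunk_path) pairs
        # gathered from the archive's session manifests.
        by_hash = defaultdict(list)
        for digest, path in manifest_entries:
            by_hash[digest].append(path)
        reclaimed = 0
        for paths in by_hash.values():
            keep, dupes = paths[0], paths[1:]
            for dup in dupes:
                reclaimed += os.path.getsize(dup)
                os.remove(dup)
                os.link(keep, dup)   # now shares the kept chunk's inode
        return reclaimed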

To be sure, borg gets better dedup with arbitrary data input, but even
so that looks to be around 2-4%. Would I work days to add that
efficiency to sparsebak's COW awareness? Probably. Months? Probably not.
If borg were to be integrated at all, it would need modification to
accept named objects (sparsebak chunks) streamed into it and have some
way of indicating an incremental backup so there is a namespace
integration between successive backup sessions.

I've also done another test that should be a better indicator of
relative speed. It uses a new 'qubes-ssh' protocol option I added so
that loopdev + fuse layers aren't a factor. With this, sparsebak is
consistently faster than borg over local 802.11n wifi for both initial
and incremental backups. An assumption here is that adding encryption
will not have a large impact -- but also keeping in mind sparsebak has
no multiprocessing or optimizations as of yet.


>> Actually this is one of Sparsebak's strong points... very low interactivity
>> during remote operations.
>
> But as far as I understand, to get the most out of it, you need
> hardlink-compatible storage, which for example excludes most cloud
> services...

Hardlinks are currently used for housekeeping operations (i.e. merge
during pruning), but I didn't follow Apple's example to the extent that
each incremental session must look like a whole volume (where they
really lean on hardlinks). Instead, manifests are quickly assembled on
an as-needed basis to create a meta-index for a complete volume view. So
the question is: can session merging use a method without hardlinks? I
think it could use 'move' -- and possibly gain encfs as an encryption
option in the process.

As for targeting cloud storage, that will take some time on my part, as
it's not something I normally use. Although it's becoming less
necessary, my original concept for the storage end was a Unix
environment (GNU + Python currently required) with contemporary
mainstream filesystems that can handle and rapidly process large numbers
of files. This is accessed via a qube or a protocol like ssh. Maybe
cloud storage APIs could be targeted, but they might not be practical
for volumes over a certain size.

Brendan Hoar

Dec 31, 2018, 8:49:02 AM
to qubes-devel
On Saturday, December 29, 2018 at 2:30:12 PM UTC-5, Chris Laprise wrote:
> > Also note that we'd like to have at least some level of hiding metadata
> > - - like VM names (leaked through file names).
>
> I have an idea for a relatively simple obfuscation layer that could even
> re-order the transmission of chunks in addition to concealing filenames.
> It would use an additional index with randomized names and the order
> shuffled. Implementing this, I surmise, could improve robustness of the
> encryption.
...

> Yes, keeping in mind the chunk size I'm using currently is 128kB with
> fixed boundaries. I've experimented with simple retroactive dedup based
> on sorting the manifest hashes and that can save a little space with
> almost no time/power cost. This could be done at send time to save
> bandwidth, but that savings may not be worth it. OTOH, if we expect some
> users to backup related cloned VMs (common with templates) the potential
> savings then becomes very significant even with this simple method.

I tend to keep one or two clones of each template some number of weeks of updates behind, just in case an update (especially a *-testing update) goes awry. I think this approach is useful for most folks who are trying to balance "more secure by updating regularly" and "able to manually recover when a template stops working". So: a backup regime that can dedupe on some level would be very welcome.

Q: Speaking of hashes (this is regardless of the encryption question): are the hashes in sparsebak salted per Qubes system (or backup set?)... or would the same hash on two different (non-cloned) Qubes systems match for the same content?

And Chris: thanks for all your contributions to Qubes usability, I really appreciate it.

Brendan

Chris Laprise

Jan 1, 2019, 6:46:18 PM
to Brendan Hoar, qubes-devel
They're not salted, so there's likely to be at least some matches
between systems. However, the precise version of the template you
started with on each system (preferably the same version) will play a
role in how much can be matched. To increase the chance of matching, the
chunk size used for backups could be reduced as well.

Note that templates are usually very compressible, so it's possible that
compression -- and the incremental delta technique -- will make a bigger
difference than dedup overall. That's especially true since re-doing
initial backups isn't needed, so the vast majority of backups are
increments, which are probably small.

Also, this won't save bandwidth in the near future. Dedup will begin as
an optional process that retroactively reclaims space on the destination
after send operations. I do have an idea for efficient matching before
transmission, but it will take several months to work out.

-

On a different performance note, I'd like to mention something about
the ballyhooed borg. In my tests with 8GB of volume data and 3GB of
incremental updates over wifi, clearing the obsolete 3GB in pruning
operations took borg 65 sec, while pruning out the same 3GB with
sparsebak took only 11 sec.

That's a factor of six and is almost a minute difference.

This raises questions about the apparent need for borg to re-write some
data blocks (or at least large amounts of metadata) in what is probably
a highly interactive process. Assuming a user might want backup
frequency like they're used to on a Mac with Time Machine, this means
eventually pruning will start to kick in many times each day; it's a
significant factor in performance and may also have ramifications for
security.

-

>
> And Chris: thanks for all your contributions to Qubes usability, I really appreciate it.
>
> Brendan

Thanks for the feedback! And if you have suggestions for the release
name ("Sparsebak" it will not be!) I'd like to hear them.

qtpie

May 21, 2019, 9:03:44 PM
to qubes...@googlegroups.com
Chris Laprise:
> On 12/09/2018 10:38 AM, Chris Laprise wrote:
>> 'Sparsebak'
>>
>> Fast Time Machine-like disk image backups for Qubes OS and Linux LVM.
>>
>
> And of course, a link to the project :) ....
>
> https://github.com/tasket/sparsebak
>
>
Have people been using Sparsebak? What are your experiences? I haven't
been able to test it myself yet. And will it be submitted as a package
to Qubes?

Chris Laprise

May 22, 2019, 12:00:55 AM
to qtpie, qubes...@googlegroups.com
I've been getting reports from some users who are testing sparsebak as
alpha software. They report that it's well behaved, and so far there has
been no reported incident of data loss due to bugs. I myself have also
been using it to back up several times a day, and have relied on it to
restore on more than one occasion.

It probably will be submitted for the community/contrib repository when
it hits beta, which I hope will be fairly soon (my shortlist for beta
has stopped growing and is getting shorter :) ).

General observations:

* It's fast and reliable.

* The archive format is still very simple.

* De-duplication is now working as an experimental feature.

* Still no direct handling of VM configs (copying qubes.xml to a trusted
VM, or just backing up dom0 root can take care of that).

* Setting up encryption for the archive is the only hassle; I made a
small shell script to mount and unmount a LUKS container in dom0 when
necessary (a rough equivalent is sketched below).
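
My script isn't published, but a rough dom0 equivalent would be (paths
and names here are examples only):

    import subprocess

    IMG, NAME, MNT = "/home/user/baks.img", "bakvol", "/mnt/backup"

    def mount_archive():
        # Attach the container file to a loop device, unlock, mount.
        loopdev = subprocess.check_output(
            ["losetup", "--find", "--show", IMG], text=True).strip()
        subprocess.check_call(["cryptsetup", "luksOpen", loopdev, NAME])
        subprocess.check_call(["mount", "/dev/mapper/" + NAME, MNT])
        return loopdev

    def umount_archive(loopdev):
        subprocess.check_call(["umount", MNT])
        subprocess.check_call(["cryptsetup", "luksClose", NAME])
        subprocess.check_call(["losetup", "--detach", loopdev])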

-

If you start using sparsebak now, I suggest taking the version from the
latest 'new5' branch.

Ivan Mitev

May 27, 2019, 3:45:16 PM
to qubes-devel

I've been using sparsebak since the announcement - here's my experience with it so far:

Before using sparsebak I used custom rsync scripts for frequent incremental backup of the data in my VMs' private volumes. It was quick and efficient, and despite the many obvious drawbacks of that approach it was a thousand times better than using qvm-backup/-restore (those programs are so resource-hungry that I only ran them once or twice a year - the suggestion of leaving my laptop on at night with a scheduled unattended backup task is a no-go for me).

Sparsebak was a huge improvement over both qvm-backup/-restore and my custom script approach. Despite the alpha (beta?) status I haven't run into a single problem, and Chris has been very diligent in replying to some usage questions I had. Since the announcement he has improved the tool without breaking anything, and he also implemented deduplication, which definitely saves disk space when backing up cloned VMs (interestingly, this also works pretty well for cloned VMs that have been heavily customized/updated compared to the original VM).

I currently back up 24 volumes (a mix of -root and -private volumes from Linux and Windows VMs) with a total size of ~100GB, and I use my laptop without any problem while sparsebak is running in the background (incremental backups are fast anyway, so even if it used a bit more resources it would be for a short time).

Now, as Chris pointed out, the only hassle is that you have to set up an encrypted volume to encrypt your backups, but it's a pretty easy thing to do.

So, I recommend you give the tool a try, maybe in addition to using qvm-backup every now and then until you're confident it's working properly for your setup.

Mike Keehan

May 28, 2019, 4:32:48 PM
to qubes...@googlegroups.com
Hi Ivan,

It seems to be working well for you. Can I ask if you have tried
restoring VMs? Is it possible to restore single files, or is it
whole VMs only? (I haven't tried it myself - still just using
Qubes' built-in backup and restore.)

Mike.

Chris Laprise

May 28, 2019, 6:07:26 PM
to Mike Keehan, qubes...@googlegroups.com
Hi Mike,

I can't speak for Ivan's restore experiences, but wanted to chime in
regarding restore possibilities:

Receive/restore is whole-volume (or VM) only at this point. However,
there is an open issue for a fuse filesystem to access the data (and the
format isn't too different from others that already have a fuse
implementation).

FWIW, you're not limited to restoring to the original volume path if all
you want to do is check/view an older file version: The --save-to option
lets you restore a volume to any lv or file path.

There is also the near-term possibility of a differential restore that
would load only the (older versions of) blocks that changed since a
specified date-time, to quickly see an older version of the volume. The
process could be automated to open a file browser in a disposable VM,
allowing the user to easily access their 'back-in-time' files.

I should note: With the latest commit the program is very close to beta
release. At that point (within ~3 days) the name and file paths will
change. You may want to try the alpha now, or wait until I announce the
beta.