On Sat, Jan 04, 2020 at 05:05:01PM +0530, Anil Eklavya wrote:
(please don't TOFU)
> I wasn’t aware of these options. Thanks for pointing out. I will
> certainly try them out.
this is all "some assembly required" stuff, but i will try to describe
a working borg setup with some variations and explain some of
the thinking behind it.
there are some example scripts here:
https://github.com/xaki23/rzqubes/tree/master/borg
most of these are not commented or user-friendly, and they lack a
proper separation of "code" and "config", but otoh we are talking
about something you set up just once for each system.
the complex parts there are bsnap.sh (which is the hourly
cronjob that does the actual borg-snapping) and bsync.sh
(which is optionally called at the end of bsnap and does
the syncing of backup to external target(s), if desired).
the *wrap scripts are just thin wrappers, a crude way to use
remote-capable tooling (here: borg and rsync) over qubes-rpc.
bsnap.sh has a bit of config at the beginning (lines 3-5).
storing a password like that is certainly not ideal, but otoh it
doesn't matter (to me), since the script runs inside dom0, which
already has access to all my data; if dom0 is compromised,
it's pretty much game over anyway.
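for illustration, a config header in that style might look like this
(the variable names and paths here are invented, not the ones from the
actual script; BORG_PASSPHRASE is the standard env var borg reads the
repo password from):

```shell
# hypothetical bsnap.sh-style config header -- names/paths are made up
export BORG_PASSPHRASE='plaintext-pw-in-dom0'   # borg picks the repo password up from the environment
BACKUP_REPO=/var/backups/borg                   # where the local borg repo lives
VGNAME=qubes_dom0                               # lvm volume group holding the vm volumes
```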
lines 7-13 are leftover from qubes3 days (or for people using
qubes pools of type "file" with q4).
lines 15+16 are a sample of how to use the remote-wrapped variant.
basically that means your dom0 still does all the reading, chunking
and encryption, but the actual storage backend process runs
on a remote host (or in a qubes appvm). this can be very useful
if you are backing up a stationary desktop to a bulk storage
host on the same lan.
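one way to get that effect (a sketch, not the actual *wrap scripts from
the repo): borg runs whatever $BORG_RSH points at to reach the remote
side, so a wrapper can shovel the "borg serve" stream over qubes-rpc
instead of ssh. "backup-vm" is a made-up vm name here:

```shell
#!/bin/sh
# hypothetical BORG_RSH replacement for dom0:
# borg calls this roughly as "<rsh> <host> borg serve ...", so drop
# the fake hostname and run the serve command in the storage vm
# over qubes-rpc instead of ssh.
shift
exec qvm-run --pass-io --no-gui backup-vm "$*"
```

used roughly like: BORG_RSH=/usr/local/bin/borgwrap borg list ssh://backup-vm/home/user/repo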
lines 20-30 are three "backup groups": private volumes at rest,
unsynchronized private volumes of running vms, and dom0 as "files".
the FLP/FLS parts (lines 20+24) select which VMs are backed up
in each group; you can play around with the ls+grep on the commandline
until it matches whatever you want to back up. the examples there
are of the "everything, except what the grep throws away" kind.
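that selection is just text filtering, so you can prototype it against
a plain list of volume names before wiring it in (the volume names and
the grep pattern below are invented examples, the pattern is the part
you tune):

```shell
# toy version of an FLP-style filter: keep all -private volumes,
# throw away what the grep matches. vm names are made-up examples.
select_private_vols() {
    grep -- '-private$' | grep -v -E '(untrusted|disp[0-9]+)'
}

printf '%s\n' vm-work-private vm-untrusted-private vm-work-root \
    | select_private_vols
# -> vm-work-private
```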
lines 21+25+29 delete old backup snapshots that fall outside the
specified keep-range. 30/30/30/30 is _a_ _lot_ ...
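for reference, such a keep-range maps onto borg prune's --keep-*
options; 30/30/30/30 would be roughly (the repo path is a placeholder):

```shell
# keep 30 hourly, 30 daily, 30 weekly and 30 monthly snapshots
borg prune --keep-hourly 30 --keep-daily 30 --keep-weekly 30 --keep-monthly 30 \
    /path/to/backup-repo
```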
lines 34-43 are the call to sync out the backup to external
storage, with crude locking. the locking (even when less crude)
is mainly a policy question. if you don't use locking for the sync,
and your sync takes longer than your backup frequency, you might
end up with the sync always doing just half a sync, never completing.
that can be very bad.
otoh, if you do use locking, and the (locked) sync stalls out and you
don't have stale-lock detection or a timed hard limit, that stalled
sync job will block all newer sync attempts forever.
that's also very bad.
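one middle ground between "no locking" and "blocked forever" is a
non-blocking flock plus a hard runtime cap, something like this
(a sketch, not the locking from the actual scripts):

```shell
#!/bin/sh
# skip this round if a previous sync still holds the lock, instead of
# queueing up behind it. the kernel releases a flock when its holder
# dies, so a crashed sync cannot wedge all future ones.
exec 9>/tmp/bsync.lock
if ! flock -n 9; then
    echo "sync already running, skipping this round" >&2
    exit 0
fi
# hard cap against a sync that stalls while still alive;
# replace the echo with the actual sync call.
timeout 6h echo "run-the-actual-sync-here"
```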
the called-under-lock bsync.sh tries to level the field (lines 4-8)
by killing/removing anything that might be left over from older
syncs; it then creates an lvm snapshot of the local lvm backup volume,
attaches it to a sync-vm, and runs a target-specific script inside
that vm.
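the flow inside bsync.sh is roughly this (a sketch with invented names,
not the real script; the device id for qvm-block is whatever
"qvm-block" lists for the snapshot, often dm-&lt;n&gt;):

```shell
# snapshot the backup lv so the sync sees a consistent state
lvcreate -s -n borgbackup-snap -L 1G qubes_dom0/borgbackup
# hand the snapshot to the sync vm (look the id up with "qvm-block")
qvm-block attach sync-vm dom0:dm-XX
# run the target-specific sync script inside that vm
qvm-run --pass-io sync-vm /usr/local/bin/sync-target.sh
# clean up
qvm-block detach sync-vm dom0:dm-XX
lvremove -f qubes_dom0/borgbackup-snap
```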
this is just as setup-specific as it is modular.
it doesn't matter to the backup at all whether the sync is done
over webdav, nfs, cifs or rsync-over-ssh.
the modularity isn't limited to the "sync" phase either.
want it to respect the per-vm include_in_backup pref?
just add a filter stage based on qvm-ls/qvm-prefs early in bsnap.
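such a filter stage can be tiny; a sketch (qvm-ls --raw-list and the
include_in_backup pref are standard qubes tooling, the function name
is made up):

```shell
# print only the vms whose include_in_backup pref is set.
# reads vm names on stdin, e.g. from "qvm-ls --raw-list".
filter_backup_vms() {
    while read -r vm; do
        if [ "$(qvm-prefs "$vm" include_in_backup)" = "True" ]; then
            echo "$vm"
        fi
    done
}
```

in bsnap.sh you would plug it in early, something like:
qvm-ls --raw-list | filter_backup_vms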
want more paranoid handling of the borg password?
use a detached borg repo header, with a different password in dom0
than in your secret-stash masterkey backup repo, and no repo
header whatsoever in the remote bulk copy.
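one of the building blocks for that already ships with borg:
keyfile-mode repos keep the key material outside the repo, and
"borg key export"/"borg key import" move it around (the paths here
are placeholders):

```shell
# keyfile mode: the repo directory itself contains no usable key
borg init --encryption=keyfile /path/to/repo
# put a copy of the key next to your masterkey backups
borg key export /path/to/repo /path/to/secret-stash/repo.key
```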
when restoring locally i tend to use "borg list" to see which snapshots
are available, then "borg list ::snap-12345" to see what's inside,
then "borg extract --sparse ::snap-12345 dev/whatevs".
these steps are done in dom0 when restoring from a local repo,
or in the sync-vm when restoring from a remote repo.
the restored file is then copied to the right blockdevice in dom0 with
"dd if=restored/blah of=/dev/mapper/something conv=sparse".
if the vm/volume used as a restore target isn't fresh/clean/unused,
make really, really sure to blkdiscard the target volume first.
(filesystems can get really upset if blocks they expect to be zeroed
are not.)
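put together, a local restore of one volume looks like this (the
snapshot and volume names are just examples):

```shell
borg list /path/to/repo                          # which snapshots exist
borg list /path/to/repo::snap-12345              # what is inside one of them
borg extract --sparse /path/to/repo::snap-12345 dev/whatevs
blkdiscard /dev/mapper/target-volume             # only if the target is not fresh!
dd if=dev/whatevs of=/dev/mapper/target-volume conv=sparse
```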
there is a lot of flexibility, and there are a lot of options, in all
this. many of the options depend on your personal threat model and
preferences.
this also means it is not suitable for users who don't know how
to delete a file without a mouse.
but those will need help with setting up qubes anyway.
feedback, questions, suggestions?
go ahead, either here, or on #qubes on freenode, or pullreqs ...