
write only storage.


Tim Woodall

Sep 21, 2021, 12:10:04 PM
I would like to have some WORM memory for my backups. At the moment
they're copied to an archive machine using a chrooted unprivileged user
and then moved via a cron job so that that user cannot delete them
(other than during a short window).

My thought was to use a raspberry-pi4 to provide a USB mass storage
device that is modified to not permit deleting. If the pi4 is not
accessible via the network then other than bugs in the mass storage API
it should be impossible to delete things without physical access to the
pi.

Before I start reinventing the wheel, does anyone know of anything
similar to this already in existence?

Things like chattr don't achieve what I want as root can still override
that. I'm looking for something that requires physical access to delete.

Toni Mas Soler

Sep 21, 2021, 12:30:03 PM
I back up my iPhone's photo library using an sftp connection (all into the same directory on my PC). Thus, I can chattr +i the one directory needed and nobody can remove it.

I don't understand why chattr doesn't work for you.

Toni Mas
GPG 3F42A21D84D7E950

Sent with ProtonMail Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐

On Tuesday, 21 September 2021 at 17:53, Tim Woodall <debia...@woodall.me.uk> wrote:

Andrew M.A. Cater

Sep 21, 2021, 1:10:04 PM
On Tue, Sep 21, 2021 at 12:50:18PM -0400, Michael Stone wrote:
> Well, chattr -i turns that off
Write only storage - DVD-R or equivalent Blu-Ray - but make sure to end the
session. Deletion - feed through a paper shredder.

Or something with a physical write tab that can't be overwritten a la 3.5"
floppy disk.

All the very best,

Andy Cater

Michael Stone

Sep 21, 2021, 1:10:04 PM
Well, chattr -i turns that off

On Tue, Sep 21, 2021 at 04:29:07PM +0000, Toni Mas Soler wrote:

Marco Möller

Sep 21, 2021, 1:30:03 PM
The backup tool borg, or borgbackup (the latter is also the package
name in the Debian repository), has an option to create backup archives
to which data can only be added, not deleted. If you can arrange it so
that only borgbackup has access through the network to the backup
system, and no other user can access the backup system from the
network, then this might be what you want.
Borgbackup appears to be quite professionally designed. I have never had
a bad experience in my usage scenario, backing up several home and data
directories with it and restoring data from the archives - luckily only
restoring data to test the archives, never because I actually needed
data from a backup. My impression is that this tool is also in use by
the big professionals, those who have to keep a really big business up
and running. Well, maybe one of those borgbackup users with big-business
pressure and experience should comment on this, not me. At least for me,
with laboratory measurement data distributed over fewer than 10
computers and comprising less than 10 TB in total, it is the perfect
tool. It sounds like it could fit your needs too.
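For completeness: the append-only behaviour can be pinned down on the
backup host itself by restricting the client's SSH key to "borg serve
--append-only", so that even a fully compromised client cannot delete
anything over the wire. A sketch, assuming borg >= 1.1 (the repository
path is made up):

```sh
# On the backup host, in ~backup/.ssh/authorized_keys (all on one line):
# the client's key may only run borg serve, append-only, on one repository
command="borg serve --append-only --restrict-to-repository /srv/borg/einstein",restrict ssh-ed25519 AAAA... backup@einstein
```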

Best wishes,
Marco

Steve McIntyre

Sep 21, 2021, 1:30:04 PM
In article <alpine.DEB.2.21.2...@einstein.home.woodall.me.uk> you write:
>I would like to have some WORM memory for my backups. At the moment
>they're copied to an archive machine using a chrooted unprivileged user
>and then moved via a cron job so that that user cannot delete them
>(other than during a short window).

Sorry to butt in, but I used to be a filesystem developer in a
previous life, working on archive storage for things like medical and
financial data. Pet peeve:

WORM is Write *Once* , not Write *Only*

"Write only" storage is easy and fast - just throw things at /dev/null
and they can never be altered (or read back).

--
Steve McIntyre, Cambridge, UK. st...@einval.com
"We're the technical experts. We were hired so that management could
ignore our recommendations and tell us how to do our jobs." -- Mike Andrews

Tim Woodall

Sep 21, 2021, 1:40:04 PM
On Tue, 21 Sep 2021, Andrew M.A. Cater wrote:

>
> Write only storage - DVD-R or equivalent Blu-Ray - but make sure to end the
> session. Deletion - feed through a paper shredder.
>
I already do that but currently that means I have roughly one month of
backups on network accessible storage before I write to disc.

A ransomware attack that exploits a zero day ssh vulnerability for
example wouldn't be a complete disaster - this is only home usage - but
it seems fairly trivial to create a 'worm' usb device using a pi. I
haven't tested yet, but with a blu-ray burner attached too, the pi could
write to disc once there's 25G written and then delete it.

I'm slightly surprised someone hasn't done something like this already.
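For what it's worth, the Pi 4's USB-C port supports device mode, so the
kernel's mass-storage gadget can already expose a backing file as a
disk. Out of the box it only offers an all-or-nothing read-only switch,
not selective delete-blocking, so a true WORM device would need a
modified gadget driver - but the basic plumbing looks roughly like this
(a sketch; the backing path and size are made up):

```sh
# On the pi: create a ~25G backing image (one BD-R worth) and expose it
# over USB. ro=1 makes it read-only, which is all the stock gadget
# offers - per-file WORM semantics would need driver modifications.
dd if=/dev/zero of=/srv/worm/backing.img bs=1M count=25000
modprobe g_mass_storage file=/srv/worm/backing.img ro=1 removable=1
```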

James H. H. Lampert

Sep 21, 2021, 1:40:04 PM
On 9/21/21 10:21 AM, Steve McIntyre wrote:
. . .
> WORM is Write *Once* , not Write *Only*
>
> "Write only" storage is easy and fast - just throw things at /dev/null
> and they can never be altered (or read back).

Quite.

Or to paraphrase something I said, that actually got published in some
magazine dealing with IBM Midrange systems, "A data Roach-Motel: data
goes in, but it doesn't come out."

--
JHHL

Tim Woodall

Sep 21, 2021, 1:50:03 PM
On Tue, 21 Sep 2021, Marco Möller wrote:

> On 21.09.21 17:53, Tim Woodall wrote:
>> I would like to have some WORM memory for my backups. At the moment
>> they're copied to an archive machine using a chrooted unprivileged user
>> and then moved via a cron job so that that user cannot delete them
>> (other than during a short window).
>>
>> My thought was to use a raspberry-pi4 to provide a USB mass storage
>> device that is modified to not permit deleting. If the pi4 is not
>> accessible via the network then other than bugs in the mass storage API
>> it should be impossible to delete things without physical access to the
>> pi.
>>
>> Before I start reinventing the wheel, does anyone know of anything
>> similar to this already in existence?
>>
>> Things like chattr don't achieve what I want as root can still override
>> that. I'm looking for something that requires physical access to delete.
>>
>>
>
> The backup tool borg, or borgbackup (this latter is also the package name in
> the Debian repository), has an option to create backup archives to which only
> data can be added but not deleted. If you can get it managed, that only
> borgbackup has access through the network to the backup system but no other
> user can access the backup system from the network, then this might be what
> you want.

I'll take a look but this isn't far from what I have already. My
'online archive' machine is a VM though so can be erased from the host
too.

At the moment I explicitly allow rm in the chroot (easily removed) and
files can be truncated (can be fixed with chattr) but it didn't seem any
easier than going the whole hog and having a fully isolated pi.

Thomas Schmitt

Sep 21, 2021, 2:50:04 PM
Hi,

Andrew M.A. Cater wrote:
> > Write only storage - DVD-R or equivalent Blu-Ray

Tim Woodall wrote:
> I already do that but currently that means I have roughly one month of
> backups on network accessible storage before I write to disc.

I do a daily incremental backup on BD-R (plus three on BD-RE and one
on DVD+RW).
All my burners can write at least 128 sessions to BD-R. My ASUS BW-16D1HT
does more than 230.
I hope to reach a new record in a few weeks with my current 11 o'clock
BD-R:

Drive type : vendor 'ASUS' product 'BW-16D1HT' revision '1.01'
...
Media current: BD-R sequential recording
Media product: CMCMAG/BA5/0 , CMC Magnetics Corporation
Media status : is written , is appendable
Media blocks : 8843072 readable , 3376320 writable , 12219392 overall
ISO offers : Rock_Ridge
ISO loaded : Rock_Ridge
TOC layout : Idx , sbsector , Size , Volume Id
ISO session : 1 , 0 , 1992263s , HOME_2021_03_02_110936
ISO session : 2 , 1992416 , 33546s , HOME_2021_03_03_110514
ISO session : 3 , 2026112 , 34060s , HOME_2021_03_04_111021
...
ISO session : 203 , 8802368 , 19349s , HOME_2021_09_20_121951
ISO session : 204 , 8821888 , 21020s , HOME_2021_09_21_123308
Media summary: 204 sessions, 8843072 data blocks, 16.9g data, 6594m free

The backup is done essentially according to man xorriso example
"Incremental backup of a few directory trees"
with more -update_r commands and some -not_paths commands.
Linux mounts by default the youngest state. But by help of mount(8)
option -o sbsector= and the numbers in the "sbsector" column it is
possible to mount older states. (With -o loop you may even mount more
than one and compare them.)

DVD+R can take 153 sessions.
DVD-R can theoretically take 99 sessions, but there is substantial wasted
space between them, so many sessions means little payload.
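To illustrate the -o sbsector= trick with the TOC above: take the
"sbsector" number of the session you want from the table and hand it to
mount. (The mount points here are just examples.)

```sh
# youngest state (the default)
mount -o ro /dev/sr0 /mnt/latest
# state as of 2021-03-03: session 2, sbsector 1992416 in the TOC above
# (-o loop allows mounting a second state of the same disc in parallel)
mount -o ro,loop,sbsector=1992416 /dev/sr0 /mnt/2021_03_03
```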


Have a nice day :)

Thomas

Thomas Schmitt

Sep 21, 2021, 2:50:04 PM
Hi,

Andrew M.A. Cater wrote:
> Write only storage - DVD-R or equivalent Blu-Ray -
> but make sure to end the session.

Do you have BD-R or DVD-R with unclosed sessions ?
(... and how come ? Burn programs normally close their sessions.)

If so, then I would be interested in the SCSI log of a medium assessment:

xorriso -scsi_log on -outdev /dev/sr0 -toc 2>&1 \
| tee -i /tmp/xorriso_toc.log

Send xorriso_toc.log in private, as it might become lengthy and contains
individual information like the serial number of your drive.

Depending on what the SCSI dialog between libburn and the drive shows,
it might be possible to repair the last session by a run of

xorriso -outdev /dev/sr0 -close_damaged force

Or, if you want to close the medium:

xorriso -outdev /dev/sr0 -close on -close_damaged force

Linux-Fan

Sep 21, 2021, 3:50:05 PM
Marco Möller writes:

> On 21.09.21 17:53, Tim Woodall wrote:
>> I would like to have some WORM memory for my backups. At the moment
>> they're copied to an archive machine using a chrooted unprivileged user
>> and then moved via a cron job so that that user cannot delete them
>> (other than during a short window).
>>
>> My thought was to use a raspberry-pi4 to provide a USB mass storage
>> device that is modified to not permit deleting. If the pi4 is not
>> accessible via the network then other than bugs in the mass storage API
>> it should be impossible to delete things without physical access to the
>> pi.

What about the overall storage size: Assume an adversary might corrupt your
local data and then invoke the backup procedure in an endless loop in an
attempt to reach the limit of the "isolated" pi's underlying storage. You
might need a way to ensure that the influx of data is somehow rate-limited.

>> Before I start reinventing the wheel, does anyone know of anything
>> similar to this already in existence?

I know of three schemes trying to deal with the situation:

(a) Have a pull-based or append-only scheme implemented in software.
Borg's append-only mode and your current method fall into that category.
I am using a variant of that approach, too: Have a backup server pull
the data off my local machine at irregular intervals.

(b) Use physically write-once media like CD-R/DVD-R/BD-R. I *very rarely*
back up the most important data to DVDs (no BD writer here, and a single
one would not provide enough redundancy to rely on it in case of
need...).

(c) Use a media-rotation scheme with enough media to cover the interval you
need to notice the adversary's doings. E.g. you could use seven hard
drives, all with redundant copies of your data, and each day choose
the next drive to update with the "current data" by a fixed schedule,
i.e. the "Monday" drive on Mondays, the "Tuesday" drive on Tuesdays, etc.
If an adversary tampers with your data you would need to notice within
one week for the last drive to still contain unmodified
data.
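Scheme (c) needs nothing more than a deterministic mapping from the
date to a drive. A minimal sketch (the /media/backup-* mount points are
hypothetical):

```shell
#!/bin/sh
# Pick today's target drive from a fixed seven-drive weekly rotation.
rotation_target() {
    # LC_ALL=C keeps the weekday name stable regardless of locale
    printf '/media/backup-%s\n' "$(LC_ALL=C date +%A)"
}

rotation_target   # e.g. /media/backup-Monday on a Monday
```

The cron job (or udev hook) then refuses to run if the expected drive
for today is not mounted, which is exactly what enforces the rotation.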

>> Things like chattr don't achieve what I want as root can still override
>> that. I'm looking for something that requires physical access to delete.

My solution is to use a separate, dedicated, not-always-on machine that
pulls backups when it's turned on and then shuts itself off, to reduce the
time frame in which an adversary might try to break into it via SSH. In
theory, one could leave out the SSH server on the backup server altogether,
but this would complicate the rare occasions where maintenance is needed.

> The backup tool borg, or borgbackup (this latter is also the package name in
> the Debian repository), has an option to create backup archives to which
> only data can be added but not deleted. If you can get it managed, that only
> borgbackup has access through the network to the backup system but no other
> user can access the backup system from the network, then this might be what
> you want.
> Borgbackup appears to be quite professionally designed. I have never had bad
> experience for my usage scenario backing up several home and data
> directories with it and restoring data from the archives - luckily restoring
> data just for testing the archives but not for indeed having needed data
> from a backup. My impression is, that this tool is also in use by the big
> professionals, those who have to keep up and running a real big business.
> Well, maybe someone of those borgbackup users with the big business pressure
> and experience should comment on this and not me. At least for me and my
> laboratory measurement data distributed on still less than 10 computers and
> all together comprising still less than 10 TB data volume, it is the perfect
> tool. Your question sounds like it could also fit your needs.

It's one tool that could be used for the purpose, yes.

Borg runs quite slowly if you have a lot of data (say > 1 TiB). If you can
accept that/deal with it, it is a tool worth considering. Some modern/faster
alternatives exist (e.g. Bupstash) but they are too new to be widely
deployed yet.

AFAIK in "business" contexts, tape libraries and rsync-style mirrors are
quite widespread.

HTH
Linux-Fan


Michael Stone

Sep 21, 2021, 4:10:03 PM
On Tue, Sep 21, 2021 at 06:37:41PM +0100, Tim Woodall wrote:
>A ransomware attack that exploits a zero day ssh vulnerability for
>example wouldn't be a complete disaster - this is only home usage - but
>it seems fairly trivial to create a 'worm' usb device using a pi. I
>haven't tested yet but with a blu-ray burner attached too the pi could
>write to disc once there's 25G written and then delete it.
>
>I'm slightly surprised someone hasn't done something like this already.

Because it's not actually easy to use such a thing. What would the pi
present itself as? A block device? Filesystems generally need to rewrite
specific blocks in order to work. You need to be able to access specific
objects. So maybe you expose the pi via CIFS or NFS or somesuch. Ok, but
files are often not written as one atomic operation, especially on
network filesystems. So you can't make the files completely immutable,
you need to be able to append to them while they're being written. So
what's your trigger condition to change from "appendable" to
"immutable"?

There are solutions for this, mostly in the compliance space, but
they're generally pretty niche.

Joe Pfeiffer

Sep 21, 2021, 7:30:03 PM
Steve McIntyre <st...@einval.com> writes:

> In article <alpine.DEB.2.21.2...@einstein.home.woodall.me.uk> you write:
>>I would like to have some WORM memory for my backups. At the moment
>>they're copied to an archive machine using a chrooted unprivileged user
>>and then moved via a cron job so that that user cannot delete them
>>(other than during a short window).
>
> Sorry to butt in, but I used to be a filesystem developer in a
> previous life, working on archive storage for things like medical and
> financial data. Pet peeve:
>
> WORM is Write *Once* , not Write *Only*
>
> "Write only" storage is easy and fast - just throw things at /dev/null
> and they can never be altered (or read back).

Ah, yes...
http://www.ganssle.com/misc/wom1.jpg
http://www.ganssle.com/misc/wom2.jpg

David Christensen

Sep 22, 2021, 12:20:03 AM
Have you considered snapshots -- e.g. btrfs, LVM, or ZFS?


David

Tim Woodall

Sep 22, 2021, 4:10:03 AM
I don't see how they help me - I am already using snapshots to create
the backup. But if I can create the snapshot, I can delete it again?

I didn't put all this detail in the original as I didn't think it was
important (and it can all be changed) but, taking the example of
einstein, which is the machine that this email went through.

A cron job runs as root that takes an LVM snapshot, uses dump to dump
the filesystem and uses ssh to write that dump to backup@backup17. It
then runs restore to verify the backup. It then deletes the snapshot. I
also save the output of df, fdisk -l and mount along with a separate
copy of dumpdates (these are purely to make it as easy as possible to
recover after a total hard drive failure - which has only ever happened
to me once. As I'm writing this I realise I should also save the output
of vgdisplay and lvdisplay)
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=940473 was a bug I
found via this process and I should have reported years earlier. I took
a look and couldn't fix it in a few hours so I just sat on it. :-( The
maintainer fixed it in a weekend :-)

On backup17, a cron job moves the files from where backup@backup17 put
them (which was in a chroot) to a different directory where they cannot
be accessed via backup@backup17. (Again, while writing this I realize
that they're still owned/writable by backup - I will change this so that
even if you managed to escape from the chroot you cannot
read/delete/modify them)

This is a pseudo "write only" filesystem. Within a couple of hours of
writing the file it cannot be read again (by the user that wrote it). I
cannot see a way of making it truly write only and preserve the
verification step (and that particular attack surface - someone copying
the backup while it's being written - isn't one that I'm particularly
concerned about)

Manually (but I ought to automate it too) I run a script that then takes
the backups and adds them to a udf image on a usb stick sized to fit on
a blu-ray disk. (I have both an encrypted and a plain image here. I use
the encrypted one for off-site backups (which, for example, I
occasionally post to a friend) and the plain one for my local backups
which I sometimes do use). I have also securely stored the key
off-site.

Finally, once the udf image is full I write the image to disc, verify
the hash, and then delete all the intermediate parts to free up space to
continue.
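The hash verification can be done without trusting the disc's
filesystem at all: hash the image before burning, then read exactly the
same number of bytes back off the drive and compare. (The device and
file name below are examples.)

```sh
img=backup.udf
sha256sum "$img"                          # hash of the image as written
blocks=$(( $(stat -c%s "$img") / 2048 ))  # UDF/ISO use 2048-byte blocks
# after burning: read the same byte count back from the disc and compare
dd if=/dev/sr0 bs=2048 count="$blocks" | sha256sum
```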

I've been doing this for nigh on 25 years now, from cd to dvd to blu-ray
with various tweaks along the way and I've never lost anything
important.

But I'm conscious that to an extent I've been lucky. I do my best to
keep secure but this is a hobby, not a full time job, and it's getting
harder and harder to do a belt-and-braces approach to security. The
ubiquitous use of javascript nowadays, https everywhere, ESNI,
everything now needing internet access to work, it's getting harder and
harder to ensure things are kept quarantined. Hopefully I'm too boring
for anyone to specifically target but I'd like to close the last few
gaps in the "just got unlucky" stakes. In particular, if anyone got root
access to the xen host then everything not yet written to blu-ray is
vulnerable. As of today, that would mean that einstein, for example,
could be restored to 20210904 but anything after that date would be
lost.

The suggestion by Thomas Schmitt to write multiple sessions is a good
one. I hadn't thought of it, partly because my blu-ray writer is an
external device that I don't leave permanently connected. But I could
resolve that. If I wrote one session per day that would be about 30 sessions
per disc. I will need to do some experimenting as I don't have any
experience of writing multi-session disks. I'd also need to find a drive
where I can verify what has been written without it ejecting the disk
first (or at least be able to reload the disk automatically).

Thomas Schmitt

Sep 22, 2021, 6:30:03 AM
Hi,

Tim Woodall wrote:
> If I wrote one session per day that would be about 30 sessions per disc.

This should be doable on any BD-R capable drive.


> I'd also need to find a drive
> where I can verify what has been written without it ejecting the disk
> first (or at least be able to reload the disk automatically).

That's not a matter of the drive but of the Linux kernel, which shows
little love towards DVD and BD media. (I have a patch for a new ioctl()
which would simulate newly loading a disc without the need to move
the drive tray. But the kernel shows little love for userland
programmers, too.)

But xorriso uses libburn which performs SCSI transactions directly with
the drive (via ioctl SG_IO). So it assesses the medium status on its own
and is able to read freshly written data without drive tray dance.


> I will need to do some experimenting as I don't have any
> experience of writing multi-session disks.

You may practice with a plain data file instead of optical media, by
replacing in the following example the command

-dev /dev/sr0

by e.g.

-dev "$HOME"/xorriso_test.iso


Detailed explanation of the example from man xorriso
"Incremental backup of a few directory trees" :

$ xorriso \

The first command tells xorriso to abort only if there is no hope for any
success. (Default is to abort if a substantial part of the job cannot be
performed.)

-abort_on FATAL \

The following enables recording of MD5 for data files and the whole session,
recording of ACL (getfacl(1)) and of extended file attributes (getfattr(1)).

-for_backup \

The next command asserts to xorriso that device numbers of disk
filesystems and the inode numbers of files in those filesystems remain stable
between xorriso runs, unless files get replaced. This assertion speeds up
the comparison of disk filesystem and ISO 9660 filesystem. (Slower would
be comparing the disk files with MD5s of ISO 9660 files, or comparing the
files directly by reading both contents.)

-disk_dev_ino on \

The next command makes sure that only blank media or written media with
a matching ISO 9660 volume id (lsblk FSLABEL) are accepted. This shall
keep you from inserting the wrong medium and thus causing an unduly
big new session. If the volume id of a written medium does not match,
then it emits a FATAL event which will abort the xorriso run:

-assert_volid 'PROJECTS_MAIL_*' FATAL \

Next choose the drive. If it already contains an ISO 9660 session it gets
loaded as base of the file tree comparison and as base of the upcoming
new session:

-dev /dev/sr0 \

Choose a volume id which matches above -assert_volid:

-volid PROJECTS_MAIL_"$(date '+%Y_%m_%d_%H%M%S')" \

Exclude files which end by .o or .swp:

-not_leaf '*.o' -not_leaf '*.swp' \

Check for differences between file tree /home/thomas/projects on hard disk
and ISO 9660 file tree /projects on the backup medium. Any file which
changed, appeared or disappeared since the last backup session will be
properly represented in the overall directory tree of the upcoming new
session.
Unchanged files will appear in the new overall tree too. Their data will
not get newly written but rather be represented by data which was already
recorded by a previous session:

-update_r /home/thomas/projects /projects \

Same for another pair of disk and ISO 9660 file trees. You may add as many
such commands as you need to describe your backup. Each of them gets a
pair of paths as parameter:

-update_r /home/thomas/personal_mail /personal_mail \

The next command causes the new session to be written to medium:

-commit \

After writing is complete, print the new medium state including the list of
sessions:

-toc \

Now comes the check-reading by help of the recorded MD5s for superblock,
directory tree, and file data area of the newly written session:

-check_md5 FAILURE -- \

And finally, if the program did not find reason to abort and if the
drive has a tray motor, eject the tray:

-eject all


Recorded ACL and xattr are not represented by the Linux kernel when the
ISO gets mounted. But xorriso can extract files including those features
by help of its own commands. The man page has an example
"Restore directory trees from a particular ISO session to disk"

$ xorriso -for_backup \
-load volid 'PROJECTS_MAIL_2008_06_19*' \
-indev /dev/sr0 \
-osirrox on:auto_chmod_on \
-chmod u+rwx / -- \
-extract /projects /home/thomas/restored/projects \
-extract /personal_mail /home/thomas/restored/personal_mail \
-rollback_end

Tim Woodall

Sep 22, 2021, 10:40:04 AM
Thank you very much! It might be a little while before I get time to
experiment properly with this but I've saved this email for a free
weekend.

Tim.

Marco Möller

Sep 22, 2021, 12:20:03 PM
I forgot to mention that the documentation contains a guide on how to
set up the system so that a backup runs automatically when a certain
external device is connected. You could thus set up a scheme with more
than one external HDD, keeping some of them offline so that they cannot
be harmed by a ransomware attack (a concern you mentioned in another of
your posts in this thread), and placing the current backup on whichever
HDD is temporarily connected for that run. Afterwards, offline again,
you might want to check from a non-networked, supposedly always clean
system whether the backup archives are still healthy, and maybe even how
they compare to old backups, to detect unusual and unexpected changes.
At least for the normal archive health check there are already built-in
features for this.
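The trigger-on-connect part can also be done outside borg, e.g. with a
udev rule keyed to the backup disk's filesystem UUID (the UUID and the
script path below are placeholders). Since udev kills long-running
RUN+= children, the script should only kick off the real backup job,
e.g. by starting a systemd unit:

```sh
# /etc/udev/rules.d/99-backup-disk.rules
# When the known backup disk appears, fire a short-lived trigger script.
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="1234-ABCD", \
    RUN+="/usr/local/sbin/trigger-backup.sh"
```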

If you are not satisfied with the built-in archive check features, or in
general don't want to include borgbackup in your tool chain, you could
still adapt this concept to your preferred tools:
- configure a script to be started when a certain hardware device is
connected; run your backup and log the hashes of the files going into
the backup, and of course also the hash(es) of the backup file(s)
- offline, check from a secure system that newly calculated hashes of
the files in the backup (and of the backup file(s) themselves) still
coincide with the hashes in your log
- compare the hashes in the current log with hashes from old logs to
detect unusual changes
- rotate to the next external storage device for the next backup and
repeat; this way you can always keep some older and some not-so-old
backups, plus a register of logs, offline at a safe place, and notice
if a current backup indicates that something unexpected might have
happened to your current data and thus to your current backup
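The log-and-recheck steps above boil down to a sha256sum manifest. A
minimal sketch (the paths and file names are examples):

```shell
#!/bin/sh
set -e
# at backup time: record a hash manifest alongside the dump
mkdir -p /tmp/backup-demo && cd /tmp/backup-demo
echo "pretend this is a dump file" > einstein-20210922.dump
sha256sum einstein-20210922.dump > manifest.sha256

# later, from the clean offline system: verify nothing has changed
sha256sum -c manifest.sha256
```

Keeping the manifests themselves offline is what makes the comparison
against old logs trustworthy.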

Best wishes,
Marco.

David Christensen

Sep 22, 2021, 7:00:04 PM
On 9/22/21 1:04 AM, Tim Woodall wrote:
> On Tue, 21 Sep 2021, David Christensen wrote:
>
>> On 9/21/21 8:53 AM, Tim Woodall wrote:
>>> I would like to have some WORM memory for my backups.

>> Have you considered snapshots -- e.g. btrfs, LVM, or ZFS?

> I don't see how they help me - I am already using [LVM] snapshots to create
> the backup. But if I can create the snapshot, I can delete it again?


I have not used LVM in years, and never tried LVM snapshots. As for
deleting an LVM snapshot, erasing a HDD is the easy part. The real
question is whether or not there is metadata somewhere in the source LVM
that records the creation, existence, and/or deletion of snapshots. If
so, destroying a snapshot means adjusting the relevant metadata.


> uses dump to dump
> the filesystem and uses ssh to write that dump to backup@backup17.


I prefer a "pull" architecture from a hardened backup host.


Yet, it is important that users have the ability to easily restore files
without assistance from the backup sysadmin. (When the backup sysadmin
is the only user, it is easy to forget this feature.)


> It then runs restore to verify the backup. ...

> I've been doing this for nigh on 25 years now, from cd to dvd to blu-ray
> with various tweaks along the way and I've never lost anything
> important.


That is an impressive system.


I rebuilt my backup processes when I converted to ZFS a few years ago.
Most of the hard work was done for me -- ZFS, zfs-auto-snapshot,
replication, etc. But, I had to redesign my processes to fit the new
reality, and to write scripts to automate the various use-cases. Along
the way, I re-visited my archive and imaging processes.


Homebrew backup solutions are tough. They save money, but cost time;
lots of time. Crawling through the details forces understanding. There
is a definite sense of satisfaction when things work. There is a
definite sense of terror when they do not. There is always a
subconscious fear that you have missed something.


Once it is working, you dare not touch anything. The chores become
chores. Do the grind, watch the outputs, burn the disks, visit the
off-site, buy more hardware/ media, stockpile old hardware/ media, etc..


Over time, the memories fade of how it all works; but it's your monster
and no one can help you. Making changes involves significant risk.
Even the smallest change demands re-learning and re-validation. Yet,
the machines and data being backed up change, so the backup system must
change. And, you are always wanting another improvement or feature.


I have often considered switching to one of the many available FOSS
backup solutions. Using a mature tool with a helpful user base is very
appealing, and probably less work in the long run.


> But I'm conscious that to an extent I've been lucky. I do my best to
> keep secure ...


Different data sources have different security, backup, archive, and
retention/ destruction needs. As the differences grow, so does the
complexity of the backup system.


> The suggestion by Thomas Schmitt to write multiple sessions is a good
> one. ...


If you are burning partial discs frequently, multiple sessions might work
for you.


David

Stefan Monnier

Oct 1, 2021, 11:10:06 AM
>> Write only storage - DVD-R or equivalent Blu-Ray - but make sure to end the
>> session. Deletion - feed through a paper shredder.
> I already do that but currently that means I have roughly one month of
> backups on network accessible storage before I write to disc.

Rather than WORM you can just take normal disks and once you don't want
to write to them any more, you unplug them ;-)

If you still want to have read access to the data, then you make it
accessible via another server, ideally in another administrative domain
(and another physical location, since fires and other events can be just
as likely as ransomware).


Stefan