Force diff/inc backups on the same vchanger device as full

spadaj...@gmail.com

Nov 14, 2022, 11:33:51 AM
to bareos-users
I'm sure I'm not the first to think about this, but I don't even have a good idea how to search for earlier questions on it.
I have an SD server which uses vchanger. In my case it's configured along with the automounter, so I can simply swap a disk and get a fresh batch of file devices (I don't create a device per job; I have pre-created, static-sized device files). Nothing fancy.
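For context, the storage side is more or less the standard vchanger arrangement from the HOWTO. This is a simplified sketch, not my exact config; names and paths are illustrative, and the Changer Command arguments may differ between vchanger versions:

  Autochanger {
    Name = "vchanger-1"
    Device = vchanger-1-drive-0
    Changer Device = "/etc/vchanger/vchanger-1.conf"
    # %c=changer conf, %o=command, %S=slot, %a=archive device, %d=drive index
    Changer Command = "/usr/bin/vchanger %c %o %S %a %d"
  }

  Device {
    Name = vchanger-1-drive-0
    Drive Index = 0
    Device Type = File
    Media Type = vchanger-1
    # vchanger keeps this path pointing at the currently mounted magazine's volume
    Archive Device = /var/spool/vchanger/vchanger-1/0
    Autochanger = yes
    Removable Media = no
    Random Access = yes
  }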
But since I swap the entire disk at once, when a disk fails I lose all the jobs stored on it. That means if I did a full backup on disk A, then swapped in disk B and ran differentials or incrementals there, and disk A crashed, I'd lose the ability to restore the machine.
The question is how to make sure, in my setup, that I don't end up with incrementals or differentials requiring a full from another storage unit. In other words, I need to keep a full and its incrementals/differentials together on the same disk.
Should I fiddle with creating separate pools and somehow try to rotate them, roughly as in the sketch below? (E.g., create a separate job set and media pool for each week, and repeat them every X weeks, where X is the number of disks in rotation, assuming I do a full every week.)
Or any better ideas?
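Roughly what I have in mind, as an untested sketch (pool and schedule names are invented; the idea is to use per-run Pool overrides to pin each level to a pool):

  Pool {
    Name = DiskA
    Pool Type = Backup
    Storage = vchanger-1
  }

  Pool {
    Name = DiskB
    Pool Type = Backup
    Storage = vchanger-1
  }

  # Each week's full *and* its incrementals go to the same pool, so the
  # whole restore chain for that week lives on one disk: the Monday
  # incremental chains back only as far as that Sunday's full.
  Schedule {
    Name = "AlternateDisks"
    Run = Level=Full        Pool=DiskA 1st sun at 21:00
    Run = Level=Incremental Pool=DiskA 1st mon-sat at 21:00
    Run = Level=Full        Pool=DiskB 2nd sun at 21:00
    Run = Level=Incremental Pool=DiskB 2nd mon-sat at 21:00
    Run = Level=Full        Pool=DiskA 3rd sun at 21:00
    Run = Level=Incremental Pool=DiskA 3rd mon-sat at 21:00
    Run = Level=Full        Pool=DiskB 4th sun at 21:00
    Run = Level=Incremental Pool=DiskB 4th mon-sat at 21:00
  }

The rough edges I can already see: week-of-month keywords don't line up cleanly at month boundaries (a "1st mon" can fall before the "1st sun"), and fifth weeks are left unscheduled here. So this would need tuning.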

aeron...@gmail.com

Nov 14, 2022, 6:08:44 PM
to bareos...@googlegroups.com

I don't know for sure, but maybe the Always Incremental scheme might do the job.
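Something along the lines of the Always Incremental chapter in the Bareos docs. Untested, and the retention numbers and names are just placeholders:

  Job {
    Name = "ai-backup"
    JobDefs = "DefaultJob"
    # Always Incremental requires Accurate mode
    Accurate = yes
    Always Incremental = yes
    Always Incremental Job Retention = 7 days
    Always Incremental Keep Number = 7
    Always Incremental Max Full Age = 30 days
    Pool = AI-Incremental
    Full Backup Pool = AI-Consolidated
  }

  # A consolidate job periodically merges the oldest incrementals
  Job {
    Name = "Consolidate"
    Type = Consolidate
    Accurate = yes
    JobDefs = "DefaultJob"
  }

Mind that the consolidate job has to read the old incrementals while it writes the merged one, so it typically wants a second device/pool to work with.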

Spadajspadaj

Nov 15, 2022, 3:19:01 AM
to bareos...@googlegroups.com

I thought about the Always Incremental scheme, but that would require additional storage, which I don't have at the moment.
