Is there no way that Bareos itself decides that it needs an additional volume in a certain pool and labels the tape itself?
I guess your workaround will work, but I would have to manually select the pool the tapes belong to.
Our plan is to run a full backup on Mondays and incremental backups from Thursday to Sunday.
After one week we will change the media set, cycling through up to ten tape sets.
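If I understand the schedule resource correctly, that plan would look roughly like this (the name and times are just placeholders I made up):

```
# Rough sketch of our planned schedule; name and times are placeholders
Schedule {
  Name = "WeeklyCycle"
  Run = Level=Full mon at 21:00
  Run = Level=Incremental thu-sun at 21:00
}
```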
Your workaround would mean running this every Monday.
So any kind of automation would be highly appreciated.
Otherwise, I guess I will need to change our future plans.
Anyway, thanks for your answer.
Br Julian
Our tapes all have barcodes on the front, and our autochanger is able to read them, so I will not run into any problems identifying tapes.
> Also how does bareos know which pool the tape should belong to? That
> kind of management is something that usually requires a human in the loop.
I expected Bareos to label a tape based on the job running behind it.
For example, if I am running a full backup job to tape, Bareos should be able to identify an unused tape (or an appendable one) to store the backup on. The next day, when the incremental backup runs, Bareos should again be able to identify an unused tape and assign it to the incremental pool.
I would be able to tell which tapes were used by Bareos based on the volume names created in Bareos, since the names would match the barcodes.
After reading your answer, I am guessing I need to add a new pool, maybe something as simple as "tape", to which I can assign all tapes.
Afterwards Bareos will be able to write all its job data onto the tapes and just keep appending, instead of splitting the tapes into separate pools.
Just thinking about it makes me believe this will be the best solution.
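If I go that route, I imagine the pool resource would look something like this (all names are my own guesses):

```
# Single shared pool for all tapes; names are placeholders
Pool {
  Name = "tape"
  Pool Type = Backup
  Storage = TapeChanger
  AutoPrune = yes
  Recycle = yes
}
```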
What do you think?
> I would expect you'd just need to label all tapes once and be done.
> That's how my setup works. I have a 16 tape changer with a cleaning tape
> in slot 16, so I have 15 data tapes. I loaded all of them, then ran
> "label barcodes" once. I then assigned the tapes to 2 pools; one for
> local backups and one for offsite backups. The first couple of times
> that I pulled my offsite tapes to rotate out I called "label barcodes"
> again with the new empty tapes. Then as the tapes that I've sent offsite
> get sent back to me I just put them in the changer, run "update slots"
> and bareos uses whatever tapes are in the changer.
Okay, so basically I need to run "label barcodes" ten times in my setup and I am done, got it! Thanks for the clarification.
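So for each new tape set, the bconsole steps would presumably be something like this (the storage and pool names are assumptions on my side):

```
# In bconsole, after loading a new tape set into the changer
*update slots storage=TapeChanger
*label barcodes slots=1-15 storage=TapeChanger pool=tape
```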
> Because I sometimes forget to run "update slots" after swapping tapes I
> have a cron job that runs daily to run "update slots". This ensures that
> bareos always knows what is in the changer.
> Unless you're buying a new batch of tapes each week, you should not need
> to run "label barcodes" much after the initial setup.
Very good idea! I will create a cron job as well.
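Something like this is what I have in mind for the cron job (path, schedule and storage name are untested placeholders, just a sketch):

```
# /etc/cron.d/bareos-update-slots -- refresh the changer inventory daily
30 6 * * * root echo "update slots storage=TapeChanger" | bconsole >/dev/null 2>&1
```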
Thank you a lot already!
Thanks!
However, do you know how Bareos handles incremental backups?
For example, I want to keep 14 restore points on disk, and one complete week of all backups per tape set.
This (imo) will require two jobs per client, where one of them writes to disk and the other to tape. But does Bareos know that the incremental backup stored on disk should be ignored when creating the next incremental backup for tape?
Since this will probably be done by two separate jobs, Bareos should be able to tell the data apart based on the job history/job state, right?
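To make my question more concrete, I am thinking of two separate job resources per client, roughly like this (all names invented); since each job keeps its own history, I would hope the tape job's incrementals are computed only against previous runs of that same job:

```
# Two independent jobs per client; pool/storage names are placeholders
Job {
  Name = "client1-disk"
  Client = "client1-fd"
  JobDefs = "DefaultJob"
  Pool = File
  Storage = FileStorage
}

Job {
  Name = "client1-tape"
  Client = "client1-fd"
  JobDefs = "DefaultJob"
  Pool = "tape"
  Storage = TapeChanger
}
```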
Furthermore, I guess you do not have any experience with multiple storage devices on a single director, do you?
I would be relieved if somebody could confirm my guesses regarding the bug and the fix.
Thanks again, so much!
Also, do you have bareos-director version 16.2.4 installed?
Thanks!
I think I understand your setup. But just to make sure, let me try to wrap it up in my own words.
Your jobs basically all run with "incremental" job defs, using the "always-incr" pool for incrementals and "consolidated" for the full backups.
Then every day you consolidate a new full into the "consolidated" pool.
On Saturdays, you run a virtual full from the "consolidated" and "always-incr" pools and store it in the "offsite" pool, which means it is written to tape.
So, if I am not mistaken in my explanation above, I just need to configure a job def for my tape job so that it does a virtual full using the "Full" and "Incremental" pools (since I stuck to the defaults) and stores the result in my tape pool, which my tapes will belong to.
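If that is right, my attempt would look roughly like this; as far as I understand, the "Next Pool" directive on the source pool decides where the virtual full ends up (names are placeholders, please correct me if I got the mechanism wrong):

```
# Virtual full to tape, sketched from my understanding of "Next Pool"
Pool {
  Name = Full
  Pool Type = Backup
  Next Pool = "tape"        # virtual fulls should get written here
}

Job {
  Name = "client1-virtualfull"
  JobDefs = "DefaultJob"
  Level = VirtualFull
  Schedule = "SaturdayVirtualFull"
}
```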
However, I am wondering why you have 15 storage devices configured but at the same time only allow 10 concurrent jobs.
Is it just performance tuning, or is this actually recommended?
My goal is to have one large full backup file/volume every Monday, and one big incremental for each remaining day of the week.
I will always keep two full chains, and as soon as the third is complete, the oldest will be pruned automatically, based on "Maximum Volumes" and "Volume Retention".
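So the pruning part of my tape pool would presumably look something like this (the numbers are only my first guess):

```
# Keep roughly two full chains; prune the oldest automatically
Pool {
  Name = "tape"
  Pool Type = Backup
  Maximum Volumes = 10
  Volume Retention = 21 days
  AutoPrune = yes
  Recycle = yes
}
```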
Any thoughts (for tweaking) and warnings about my plans are much appreciated. :)
Were you able to run multiple storage devices in earlier versions, too?
Maybe using 16.2.4 is the issue on my side.
Thanks again, Doug, for sharing your experience with such a newbie :)
I have also attached all the (imo) important configuration files, just in case you want to have a look as well.