Automatically label tapes


Julian Poß

Aug 30, 2017, 9:16:22 AM
to bareos-users
Hi to all of you,

I just got a basic Bareos installation running.

However, I am stuck on two more or less minor issues:
- Bareos jobs stop succeeding as soon as you add a second storage device to the director (which will happen as soon as I get the autochanger configuration running as expected). If I am not mistaken, this is fixed in 16.2.6.
- My tape jobs/autochanger won't do anything until I manually label a tape for Bareos. As soon as I label a tape myself, Bareos asks the autochanger to load the tape and the job succeeds. I was only able to find a single setting regarding this: "Label Media = yes" in device.conf. Maybe also "Check Labels = yes"? But somehow Bareos keeps refusing to label the tapes automatically.

I hope somebody can give me a hint as to why it won't work as expected.
You can find my configuration files below.
Thank you all a lot!

Best regards,
Julian

Autochanger.conf:
Autochanger {
    Name = MSL-G3-Series
    Device = Ultrium-5-SCSI
    Changer Device = /dev/tape/by-id/scsi-35001438016024fa2
    Changer Command = "/usr/lib/bareos/scripts/mtx-changer %c %o %S %a %d"
}

Device.conf:
Device {
    Name = Ultrium-5-SCSI
    Media Type = LTO-5
    DeviceType = tape
    Archive Device = /dev/tape/by-id/scsi-35001438016024fa9-nst
    Drive Index = 1
    Autochanger = yes
    Check Labels = yes
    Label Media = yes
    Automatic Mount = yes
    RemovableMedia = no
}

Storage.conf:
Storage {
    Name = Tape
    Address = 10.0.x.x
    Password = "reallystrongpassword-donottrytobruteforce"
    Auto Changer = yes
    Device = MSL-G3-Series
    Tape Device = Ultrium-5-SCSI
    Changer Device = MSL-G3-Series
    MediaType = LTO-5
    Allow Compression = yes
    Maximum Concurrent Jobs = 30
}

Jon SCHEWE

Aug 30, 2017, 9:24:30 AM
to Julian Poß, bareos-users
If your changer has a barcode reader, then just print out barcodes for
all of your tapes and put them on the front. Then run "label barcodes"
from the bareos console. Then run "update slots". See chapter 19 of the
manual for other details on using an autochanger.
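For reference, that label-by-barcode workflow can be sketched as a short bconsole session (a sketch only; the storage name "Tape" is taken from Julian's config, and slot handling depends on the library):

```
*label barcodes storage=Tape
# Bareos reads the barcodes and creates one volume per tape,
# prompting once for the pool the new volumes should join.
*update slots storage=Tape
# refreshes the catalog's view of which tape sits in which slot
```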
--
Research Scientist
Raytheon BBN Technologies
5775 Wayzata Blvd, Ste 630
Saint Louis Park, MN, 55416
Office: 952-545-5720

Julian Poß

Aug 30, 2017, 9:51:20 AM
to bareos-users, julia...@gmx.de
On Wednesday, August 30, 2017 at 15:24:30 UTC+2, Jon SCHEWE wrote:
> If your changer has a barcode reader, then just print out barcodes for
> all of your tapes and put them on the front. Then run "label barcodes"
> from the bareos console. Then run "update slots". See chapter 19 of the
> manual for other details on using an autochanger.

Is there no way for Bareos itself to decide that it needs an additional volume in a certain pool and to label the tape itself?
I guess your workaround will work, but I would have to manually select the pool the tapes belong to.
Our plan is to run a full backup on Mondays and incremental backups from Thursday to Sunday.
After one week we will change the media set, up to ten tape sets.

Your workaround would mean running this every Monday.

So any kind of automation would be highly appreciated.
Otherwise, I guess I need to change our future plans.
Anyway, thanks for your answer.

Br Julian

Jon SCHEWE

Aug 30, 2017, 10:01:34 AM
to Julian Poß, bareos-users
I suppose that Bareos could scan the changer for tapes that don't have
labels and then assume that it's supposed to use them and label them.
However, this turns into a management problem when changing out tapes:
if you don't have labels on the front of your tapes and you want to pull
one out for offsite storage, how do you know which one is which?

Also, how does Bareos know which pool the tape should belong to? That
kind of management usually requires a human in the loop.

I would expect you'd just need to label all tapes once and be done.
That's how my setup works: I have a 16-tape changer with a cleaning tape
in slot 16, so I have 15 data tapes. I loaded all of them, then ran
"label barcodes" once. I then assigned the tapes to two pools: one for
local backups and one for offsite backups. The first couple of times
that I pulled my offsite tapes to rotate them out, I ran "label barcodes"
again with the new empty tapes. Then, as the tapes that I've sent offsite
come back to me, I just put them in the changer, run "update slots",
and Bareos uses whatever tapes are in the changer.

Because I sometimes forget to run "update slots" after swapping tapes I
have a cron job that runs daily to run "update slots". This ensures that
bareos always knows what is in the changer.
Unless you're buying a new batch of tapes each week, you should not need
to run "label barcodes" much after the initial setup.

Julian Poß

Aug 30, 2017, 10:27:54 AM
to bareos-users, julia...@gmx.de

> I suppose that bareos could scan the changer for tapes that don't have
> labels and then assume that it's supposed to use them and label them.
> However this turns into a configuration problem when changing out tapes.
> If you don't have labels on the front of your tapes and you want to pull
> one out for offsite storage how do you know which one is which?

Our tapes all have barcodes on the front, and our autochanger is able to read them, so I will not run into any problem identifying tapes.

> Also how does bareos know which pool the tape should belong to? That
> kind of management is something that usually requires a human in the loop.

I expected Bareos to label a tape based on the job running behind it.
For example, if I am running a full backup job to tape, Bareos should be able to identify an unused tape (or an appendable one) to store the backup on. The next day the incremental backup runs, so Bareos should again be able to identify an unused tape and assign it to the incremental pool.
I would be able to tell which tapes were used by Bareos based on the volume names created in Bareos, since the names would match the barcodes.

After reading your answer, I am guessing I need to add a new pool, maybe something as simple as "tape", to which I can assign all tapes.
Afterwards Bareos will be able to write all its job data onto the tapes and just keep appending, instead of splitting the tapes into separate pools.

Just thinking about it makes me believe this will be the best solution.
What do you think?

> I would expect you'd just need to label all tapes once and be done.
> That's how my setup works. I have a 16 tape changer with a cleaning tape
> in slot 16, so I have 15 data tapes. I loaded all of them, then ran
> "label barcodes" once. I then assigned the tapes to 2 pools; one for
> local backups and one for offsite backups. The first couple of times
> that I pulled my offsite tapes to rotate out I called "label barcodes"
> again with the new empty tapes. Then as the tapes that I've sent offsite
> get sent back to me I just put them in the changer, run "update slots"
> and bareos uses whatever tapes are in the changer.

Okay, so basically I need to run "label barcodes" ten times in my setup and I am done, got it! Thanks for the clarification.



> Because I sometimes forget to run "update slots" after swapping tapes I
> have a cron job that runs daily to run "update slots". This ensures that
> bareos always knows what is in the changer.
> Unless you're buying a new batch of tapes each week, you should not need
> to run "label barcodes" much after the initial setup.

Very good idea! I will create a cron job as well.

Thanks a lot already!

Jon SCHEWE

Aug 30, 2017, 10:52:33 AM
to Julian Poß, bareos-users
On 8/30/17 9:27 AM, Julian Poß wrote:
>> I suppose that bareos could scan the changer for tapes that don't have
>> labels and then assume that it's supposed to use them and label them.
>> However this turns into a configuration problem when changing out tapes.
>> If you don't have labels on the front of your tapes and you want to pull
>> one out for offsite storage how do you know which one is which?
> Our tapes all have barcodes on the front, and our autochanger is able to read them, so I will not run into any problem identifying tapes.
>
>> Also how does bareos know which pool the tape should belong to? That
>> kind of management is something that usually requires a human in the loop.
> I expected bareos to label a tape, based on the job running behind it.
> For example, if i am running a full backup job to tape, bareos should be able to identify an unused tape (or an appendable tape) to store the backup onto. The next day, the incremental backup will run, so now bareos should be able to identify an unused tape again and assign it to the incremental pool.
> I would be able to tell what tapes were used by bareos, based on the volumenames created in bareos, since the names would match the barcode.
>
> After reading your answer, I am guessing I need to add a new pool, maybe something as simple as "tape", to which I can assign all tapes.
> Afterwards Bareos will be able to write all its job data onto the tapes and just keep appending, instead of splitting the tapes into separate pools.
>
> Just thinking about it makes me believe, this will be the best solution.
> What do you think?
Yes, you should have one pool that contains all of your tapes. When Bareos
starts a backup job, it looks in the pool for a tape that is appendable
or empty. You would only need a second pool if you wanted to treat some
of the tapes differently, such as my offsite pool.
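Such a single catch-all pool might be sketched like this (illustrative only; the retention and recycle values are assumptions, not taken from this thread):

```
Pool {
    Name = Tape                 # one pool holding every data tape
    Pool Type = Backup
    Recycle = yes               # reuse volumes once retention expires
    AutoPrune = yes             # prune expired volumes automatically
    Volume Retention = 14 days  # illustrative value; tune to your rotation
}
```

Tapes labeled via "label barcodes" can then be assigned to this pool when the console prompts for one.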
>> I would expect you'd just need to label all tapes once and be done.
>> That's how my setup works. I have a 16 tape changer with a cleaning tape
>> in slot 16, so I have 15 data tapes. I loaded all of them, then ran
>> "label barcodes" once. I then assigned the tapes to 2 pools; one for
>> local backups and one for offsite backups. The first couple of times
>> that I pulled my offsite tapes to rotate out I called "label barcodes"
>> again with the new empty tapes. Then as the tapes that I've sent offsite
>> get sent back to me I just put them in the changer, run "update slots"
>> and bareos uses whatever tapes are in the changer.
> Okay, so basicially i need to run "label barcodes" ten times in my setup and i am done, got it! Thanks for clarification.
>
"label barcodes" will label all tapes in the changer at once, so you
probably don't need to run it ten times, unless you have enough tapes to
fill your changer ten times.
>> Because I sometimes forget to run "update slots" after swapping tapes I
>> have a cron job that runs daily to run "update slots". This ensures that
>> bareos always knows what is in the changer.
>> Unless you're buying a new batch of tapes each week, you should not need
>> to run "label barcodes" much after the initial setup.
> Very good idea! I will create a cronjob aswell.
This is the command I use:
echo "update slots" | bconsole > /dev/null
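As a crontab entry, that could look like the following (the 06:00 run time and the bconsole path are assumptions; adjust them to your system):

```
# refresh Bareos' view of the changer every morning at 06:00
0 6 * * * echo "update slots" | /usr/sbin/bconsole > /dev/null 2>&1
```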

Julian Poß

Aug 31, 2017, 2:32:03 AM
to bareos-users, julia...@gmx.de
> This is the command I use:
> echo "update slots" | bconsole > /dev/null

Thanks!

However, do you know how Bareos handles incremental backups?
For example, I want to keep 14 restore points on disk and one complete week of all backups per tape set.

This (IMO) will require two jobs per client, where one of them writes to disk and the other to tape. But does Bareos know that the incremental backup stored on disk should be ignored when creating the next incremental backup for tape?
Since this will probably be done by two separate jobs, Bareos should be able to tell the data apart based on the job history/job state, right?


Furthermore, I guess you do not have any experience with multiple storage devices on a single director, do you?
I would be relieved if somebody could confirm my guesses regarding the bug and the fix.
Thanks again so much!

Douglas K. Rand

Aug 31, 2017, 9:13:48 AM
to bareos...@googlegroups.com
So, I have a configuration that is fairly on point for you. I do
always-incremental backups to disk with 30 restore points on disk.

And each week I do off-site backups to tape using virtual full backups.
This is quite neat in that the virtual full backups do not go back to
the client; instead they build a synthetic full backup based upon all of
the backups available on disk on the server.

The trick that I found for having Bareos then ignore the off-site
backups for other operations is this tiny script that gets run after
each virtual full backup via:

run after job = "/usr/meridian/share/libexec/bareos/turn_virtual-full-into-archive %i"

The script is:

#!/bin/zsh

if [ $# -eq 1 ]; then
    jobid="$1"
    shift
else
    echo "Usage: $0 job-id"
    exit 255
fi

# Mark the job as an Archive ('A') so Bareos ignores it for restores
# and further incrementals.
/usr/local/bin/psql --dbname=bareos --echo-errors --quiet \
    --command="UPDATE job SET type='A' WHERE jobid='${jobid}'"

exit 0

And I have two storage devices configured on my director, "disk" and
"lto6-1".

Jon SCHEWE

Aug 31, 2017, 9:49:26 AM
to bareos...@googlegroups.com
In the manual there's a way to do this without a shell command. See
section 23.6.2:

Run Script {
    Console = "update jobid=%i jobtype=A"
    Runs When = After
    Runs On Client = No
    Runs On Failure = No
}
Douglas K. Rand

Aug 31, 2017, 10:08:31 AM
to bareos...@googlegroups.com
Ohh, another nugget. Thanks, I'll try that.

Julian Poß

Sep 1, 2017, 5:38:44 AM
to bareos-users
May I ask how you configured this?
I would also prefer creating virtual full backups, instead of encumbering the clients again.

Also, do you have bareos-director version 16.2.4 installed?
Thanks!

Douglas K. Rand

Sep 1, 2017, 10:24:22 AM
to bareos...@googlegroups.com
On 09/01/17 04:38, Julian Poß wrote:
> May i ask you how you configured this? I would also prefer creating
> virutal full backups, instead of encumbering the clients again.

OK, I figured that was next. :)

We use Puppet to manage our Bareos configs, so you'll notice repeated
things (like the identical comment spread over many different pieces of
a config). What I did was take the running configs from our director and
storage daemon and then wash them of confidential stuff and stuff you
simply won't care about, which could mean that the configs are no longer
a drop-in working setup.

My two cents would be to start with really small configuration setups,
and then as you get each bit working slowly add more. Get your director,
one file daemon (a.k.a. client), and one storage daemon working. Then
add the second storage device. Then add the next bit, and so on. Use the
documentation and these configs as patterns, ideas, and suggestions, but
don't start from a copy-n-paste from either.

If I have a complaint about the documentation, it is that config snippets
aren't free-standing; they rely on JobDefs and other settings from other
pieces of the config, which means it is difficult to copy a snippet from
the docs and get it working. But it is one of the most fully documented
projects that we use.

I'm sure you'll also notice that the configs were built around Bareos
15.2 and there are several features in 16.x that we aren't (yet?) taking
advantage of. (I noticed an explicit comment about wildcards.) We also
haven't yet split our config down into the bareos-dir.d/ pattern.

Many of our clients are back on Bareos 15, which works fine for Always
Incremental as all of the extra bits are only in the Director. (Which is
cool!)

Configs are attached. Hope they help.

> Also, do you have bareos-director version 16.2.4 installed?

I'm running Bareos 16.2.5 on FreeBSD 11.1 out of ports.

Good Luck.
bareos-dir.conf
bareos-sd.conf
clients.conf
jobs.conf

Julian Poß

Sep 4, 2017, 5:02:53 AM
to bareos-users
Thank you a lot! :)

I think I understand your setup. But just to make sure, let me try to wrap it up in my own words.

Your jobs are basically all running with the "incremental" job defs, using the "always-incr" pool for incrementals and "consolidated" for the full backups.

Then you consolidate a new full into the "consolidated" pool every day.
On Saturdays, you run a virtual full from the "consolidated" and "always-incr" pools and store it in the "offsite" pool, which means it is written to tape.

So, if I am not mistaken in my explanation above, I just need to configure a jobdef for my tape job, so it will do a virtual full using the "full" and "incremental" pools (since I stuck to the defaults) and store it in my tape pool, where my tapes will belong.


However, I am wondering why you have 15 storage devices configured, but at the same time only allow 10 concurrent jobs.
Is it just performance tweaking, or is this actually recommended?

My goal is to have one large full backup file/volume every Monday, and one big incremental for each day the rest of the week.
I will always keep two full chains, and as soon as the third is complete, the oldest will be pruned automatically, based on "Maximum Volumes" and "Volume Retention".

Any thoughts (for tweaking) and warnings about my plans are much appreciated. :)

Were you able to run multiple storage devices in earlier versions, too?
Maybe using 16.2.4 is the issue for me.

Thanks again, Doug, for sharing your experience with such a newbie :)

I also attached all (IMO) important configuration files, just in case you want to have a look as well.

Fileset_Windows.conf
JobDef.conf
Storage.conf
Fileset_Linux.conf
Pool_Tape.conf
Pool_Incremental.conf
Pool_Full.conf

Douglas K. Rand

Sep 4, 2017, 9:22:14 AM
to bareos...@googlegroups.com
On 9/4/17 4:02 AM, Julian Poß wrote:
> Thank you a lot! :)

You bet.

> I think i understand your setup. But just to make sure, let me try to
> wrap it up in my own words.
>
> Your jobs are basically all running with "incremental" job defs,
> using the "always-incr" pool for incrementals and "consolidated" for
> the full backups.
>
> Then you consolidate every day a new full to the "consolidated"
> pool. Saturdays, you will run a virtual full, from "consolidated" and
> "always-incr" pool, and store it in the "offsite" pool, which does
> mean it will be written to tapes.

Yup, that is close enough. We started out with the normal full,
differential, and incremental approach, and with the migration to always
incremental the names wandered from the doc samples.

The consolidation doesn't always end up rebuilding a full; there are
knobs to limit how often fulls are consolidated, to reduce IO. Check out
section 23 of the Bareos docs.

> So, if i am not mistaken in my explanation above, i just need to
> configure a jobdef for my tapejob, so it will be doing a virtual
> full, using the "full" and "incremental" pool (since i sticked to the
> defaults), and store it in my tape pool, where my tapes will belong
> to.

Yup, that approach works.
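As a sketch, such a virtual full tape job could look roughly like this (every resource name here is illustrative and assumed, not taken from the attached configs; "Next Pool" tells Bareos where the virtual full gets written):

```
Job {
    Name = "offsite-to-tape"    # assumed name
    JobDefs = "DefaultJob"      # assumed job defs
    Level = VirtualFull
    Pool = Incremental          # read side: where the disk backups live
    Next Pool = Tape            # write side: the tape pool
    Schedule = "WeeklyTape"     # assumed schedule resource
}
```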

> However, i am wondering why you have 15 storage devices configured,
> but at the same time you only allow 10 concurrent jobs. Is it just
> performance tweaking, or is this actually recommended?

I seemed to run into contention with 10 on-disk storage devices and 10
concurrent jobs, although that was probably back on Bareos 15.x.

> My goal is to have one large full backup file/volume every monday,
> and one big incremental for each day the rest of the week. I will
> always keep two full chains and as soon as the third is complete, i
> will automatically prune the oldest, based on "maximum volumes" and
> "volume retention".
>
> Any thoughts (for tweaking) and warning for my plans are much
> appreciated. :)

That will work. We've been quite happy with always-incremental backups
of the clients, since for some of our clients the full backup was a
noticeable load. You could consider not doing full backups at all (other
than the very first backup of a new client) and using always incremental
and consolidation. With your virtual full approach for off-site you are
kind of doing that already.

> Were you able to run multiple storage devices in earlier versions,
> too? Maybe using 16.2.4 is the issue here for me.

Yes, I encountered no problems with multiple storage devices going back
to 15.2.

> Thanks again Doug, for sharing your experience with such a newbie :)
>
> I also attached all (imo) important configuration files, just in case
> you want to have a look aswell.

You are welcome.
