Backing up to multiple storage daemons


Florian Panzer - PLUSTECH

Mar 30, 2021, 9:14:27 AM
to bareos-users
Hi,

I have one director (running bareos-dir + postgres) and multiple storage
servers (each running 1 bareos-sd instance).

The storage servers are basically boxes with a lot of local disk space
available (/backup/). The plan is to use 50 GB virtual (file) tapes.

So far, pretty straightforward.


The challenge I'm facing: I would like my bareos-filedaemons to write
backups to any available tape, on any of the available storages.


So I went "well, I'll just create a single pool with multiple storages",
as shown below.

The problem is: only the first defined storage (bareos-sd01) is ever
used, although the reference states that a list of storages can be defined.

Any ideas on how to fix this?

Can I use multiple pools with one storage each?

Regards,



++++++++++
on the director:

Pool {
  Name = bareos-dir-pool
  Pool Type = Backup
  Storage = bareos-sd01
  Storage = bareos-sd02
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 17 days
  Maximum Volume Bytes = 50G
  Label Format = "tape-"
  Volume Use Duration = 23h
  Action On Purge = Truncate
  Maximum Volumes = 50
}

Storage {
  Name = bareos-sd01
  Address = 10.0.100.133
  Device = bareos-sd01-device
  Media Type = file01
  Password = blah
  Maximum Concurrent Jobs = 4
}

Storage {
  Name = bareos-sd02
  Address = 10.0.100.132
  Device = bareos-sd02-device
  Media Type = file02
  Password = blah
  Maximum Concurrent Jobs = 4
}

++++++++++
on storage01:

Device {
  Name = bareos-sd01-device
  Archive Device = /backup/bareos-sd01-device
  Media Type = file01
  Device Type = File
  Random Access = Yes
  LabelMedia = yes
  AutomaticMount = yes
  Requires Mount = no
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 4
}

on storage02:

Device {
  Name = bareos-sd02-device
  Archive Device = /backup/bareos-sd02-device
  Media Type = file02
  Device Type = File
  Random Access = Yes
  LabelMedia = yes
  AutomaticMount = yes
  Requires Mount = no
  RemovableMedia = no
  AlwaysOpen = no
  Maximum Concurrent Jobs = 4
}
++++++++++

--
Florian Panzer

-----------------------------------
PLUSTECH GmbH
Jäckstraße 35
96052 Bamberg
Phone: +49 951 299 09 716
https://plustech.de/
Managing Director: Florian Panzer
District Court Bamberg - HRB 9680
-----------------------------------
AWS Certified Solutions Architect - Associate
AWS APN Consulting Partner
Member of the Alliance for Cyber Security
of the German Federal Office for Information Security (BSI)
Member of the IT-Cluster Oberfranken

Zu We

Apr 26, 2021, 4:13:43 AM
to bareos-users
These two chapters from the docs are helpful for understanding concurrent jobs:

https://docs.bareos.org/TasksAndConcepts/VolumeManagement.html#concurrentdiskjobs
https://docs.bareos.org/Configuration/Director.html#config-Dir_Pool_Storage

In the documentation details you can see that only the first storage of the list is ever used:

 > Be aware that you theoretically can give a list of storages here but only the first item from the list is actually used for backup and restore jobs.

You will have to define in each Job which storage it should use, in order to spread the jobs across the multiple storages.
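For illustration, pinning jobs to storages could look like the following sketch. The Storage names come from the configuration posted earlier in the thread; the client, JobDefs, and pool names are assumptions, not tested config:

```
# Hypothetical sketch: spread clients across storage daemons by
# pinning each Job to one Storage. "client01-fd", "client02-fd",
# "DefaultJob" and the per-storage pool names are placeholders.
Job {
  Name = "backup-client01"
  Client = client01-fd
  JobDefs = DefaultJob
  Storage = bareos-sd01
  Pool = bareos-sd01-pool
}

Job {
  Name = "backup-client02"
  Client = client02-fd
  JobDefs = DefaultJob
  Storage = bareos-sd02
  Pool = bareos-sd02-pool
}
```

A `Storage` directive in the Job resource overrides whatever the pool would otherwise select, so the distribution across storages is decided per job rather than per pool.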

Florian Panzer - PLUSTECH GmbH

Apr 28, 2021, 5:33:59 PM
to bareos-users

On 26.04.21 at 10:13, Zu We wrote:
> https://docs.bareos.org/TasksAndConcepts/VolumeManagement.html#concurrentdiskjobs
>
> Your problem is that the storage list in the pool is possible but not
> really usable.
> In doc details you can see that only the first storage of the list is
> ever used:
>
> https://docs.bareos.org/Configuration/Director.html#config-Dir_Pool_Storage

Yes, that is what I understood from the docs.
This looks like a not-yet-implemented feature, or maybe even a bug
that hasn't been fixed.

IMHO Bareos should
* either refuse to load such a configuration, instead of happily accepting a
list of storages and silently only ever using the first,
* or actually use any of the storages (e.g. fall back if the first storage is full).


Regarding the original question, it looks like there have been plans to
implement such functionality, but it's just broken for now.

Andreas Rogge

Apr 30, 2021, 12:52:50 PM
to bareos...@googlegroups.com
On 28.04.21 at 23:33, Florian Panzer - PLUSTECH GmbH wrote:
> Yes, that is what I understood from the docs.
> This looks like a "not-yet-implemented" function, or maybe even a bug
> that hasn't been fixed.
The documentation states:
"""Be aware that you theoretically can give a list of storages here but
only the first item from the list is actually used for backup and
restore jobs."""

While I agree that there's room for improvement, there are lots of
issues to address when backing up to multiple storages - I really don't
want to go into more detail here. In the end you'll be better off with
separate pools, one per storage.
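A sketch of what separate pools could look like, reusing the settings from the Pool resource posted at the top of the thread. Only the resource names and label formats are new; treat this as an illustration, not tested configuration:

```
# One pool per storage daemon; each pool references exactly one Storage.
Pool {
  Name = bareos-sd01-pool        # name is an assumption
  Pool Type = Backup
  Storage = bareos-sd01          # only one storage per pool
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 17 days
  Maximum Volume Bytes = 50G
  Label Format = "tape-sd01-"    # distinct label prefix per pool
  Volume Use Duration = 23h
  Action On Purge = Truncate
  Maximum Volumes = 50
}

# ...plus an analogous bareos-sd02-pool with Storage = bareos-sd02
# and Label Format = "tape-sd02-".
```

Each Job then selects one of these pools (and thereby one storage), which sidesteps the single-storage limitation of the pool's storage list.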

> IMHO bareos should
> * either not load such configuration, instead of happily taking a list
> of storages and silently only ever using the first
> * or use any of the storage (e.g. fallback if the first storage is full).
A storage is never "full" from Bareos' point of view, so this is
currently more or less impossible to implement.
However, if you can come up with a working solution, feel free to open a
pull request.

Best Regards,
Andreas

--
Andreas Rogge andrea...@bareos.com
Bareos GmbH & Co. KG Phone: +49 221-630693-86
http://www.bareos.com

Registered office: Cologne | District Court Cologne: HRA 29646
General partner: Bareos Verwaltungs-GmbH
Managing Directors: S. Dühr, M. Außendorf, J. Steffens, Philipp Storz

Matheus Inacio

Apr 30, 2021, 10:17:11 PM
to bareos-users
Hi Florian,

Did you configure an autochanger?

I use two storage daemons, but they are selected in the Job definitions.

Zu We

Jul 20, 2021, 5:14:16 AM
to bareos-users
> IMHO bareos should
> * either not load such configuration, instead of happily taking a list
> of storages and silently only ever using the first

This is a valid point and should be changed in Bareos.

rivim...@gmail.com

Aug 5, 2021, 11:46:29 AM
to bareos-users
I wanted to do something similar a while back, and took the time to trace the director's code. The problem is that the director allocates a storage instance (a drive, basically) very early in the job's lifecycle, and there is no mechanism to change it afterwards. So selecting "the next available drive" is not currently possible. I believe/hope it is something the Bareos devs will change, as doing so would open up a lot more possibilities.

For anyone with some available cash, sponsoring feature development on this would be one way to make faster progress. I believe bareos have set up a scheme whereby more than one person can contribute to a feature.