Always Incremental with One Tape Drive


Russell Harmon

Dec 17, 2023, 7:38:29 PM
to bareos-users
Hi there,

I see the note in https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html#storages-and-pools about "at least two storages are needed" for Always Incremental, but is there a way to make this work with a *temporary* secondary storage?

I have just one tape drive, and while I can temporarily use local disk as a secondary storage, I don't have enough for a full backup on disk... only enough for one tape's worth of data.

Is there any way to spool a consolidated job to disk, then let me swap in a tape for the consolidated pool (thereby removing the incremental pool's tape), despool it, and then resume consolidating?

Thanks,
Russ Harmon

Brock Palen

Dec 23, 2023, 1:49:27 PM
to Russell Harmon, bareos-users
I do this and it works, but it has some bugs that have been reported and haven't seen progress in a few years.

My setup is:

Disk Pools:
AI-Incremental
AI-Consolidated

Tape Pools:
LTO
Offsite

I run the AI setup on the disk pools but I use a Migration Job to migrate AI-Consolidated jobs to Offsite.
This is where one bug shows up: sometimes Bareos will correctly understand that the job it needs is on tape, but if you run parallel jobs it won't correctly wait for the tape drive to free up if it's already busy. It will fail instead. It's easy to work around: just run the job again when that happens.

Sometimes it also gets hung up with multiple disk "devices", where it won't swap in the needed disk volume even though it's not being used. You can avoid all of these by forcing serial jobs, and all of the "bugs" are more inconvenience than show-stopper.


I also use the Offsite pool (which for me is a second tape drive, not part of an autoloader) to write monthly offsite copies using VirtualFulls:
https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html#long-term-storage-of-always-incremental-jobs

This is not AI related, but regular Copy jobs are not recommended because the consolidation pulls in those jobs. So this is more of an "emergency, get it back no more than a month old" offsite copy.


The migration job runs a script that checks and truncates all pruned volumes to free disk space rather than waiting for them to expire by age. Again, I find this setup requires a few TB of disk (you need to run full backups to disk) and requires some watching.


#!/bin/bash
POOL="$1"

# Prune every volume in the given pool.
for x in $(echo "list volumes pool=${POOL}" | bconsole | grep -v "list volumes" | grep "$POOL" | awk -F'|' '{print $3}')
do
    echo "prune volume=$x yes"
done | bconsole

# actually free up disk space
echo "truncate volstatus=Purged pool=${POOL} yes" | bconsole
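A variant of the same helper, sketched with quoted variables and wrapped in a function. The `BCONSOLE` override and the fallback to `cat` when bconsole is not on PATH are illustrative additions (handy for a dry run), not part of the script I actually run:

```shell
#!/bin/bash
# Sketch: same prune/truncate logic as above, wrapped in a function.
# If $BCONSOLE is unset it defaults to bconsole; if that binary is not
# on PATH, the commands are printed instead of executed (dry run).

prune_pool() {
    local pool="$1"
    local bconsole="${BCONSOLE:-bconsole}"
    command -v "$bconsole" >/dev/null 2>&1 || bconsole=cat  # dry-run fallback

    {
        # Emit one "prune" command per volume found in the pool ...
        echo "list volumes pool=${pool}" \
            | "$bconsole" \
            | awk -F'|' -v p="$pool" \
                '!/list volumes/ && $0 ~ p { gsub(/ /, "", $3); print "prune volume=" $3 " yes" }'
        # ... then actually free the disk space held by purged volumes.
        echo "truncate volstatus=Purged pool=${pool} yes"
    } | "$bconsole"
}

prune_pool "${1:-AI-Consolidated}"
```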


Pool {
  Name = AI-Incremental
  Pool Type = Backup
  Recycle = yes                # Bareos can automatically recycle Volumes
  Auto Prune = yes             # Prune expired volumes
  Volume Retention = 6 months  # How long should jobs be kept?
  Maximum Volume Bytes = 10G   # Limit Volume size to something reasonable
  Label Format = "AI-Incremental-"
  Volume Use Duration = 7d
  Storage = File
  Next Pool = AI-Consolidated  # consolidated jobs go to this pool
  Action On Purge = Truncate
  Migration High Bytes = 500G
  Migration Low Bytes = 300G
}

Pool {
  Name = AI-Consolidated
  Pool Type = Backup
  Recycle = yes                # Bareos can automatically recycle Volumes
  Auto Prune = yes             # Prune expired volumes
  Volume Retention = 6 months  # How long should jobs be kept?
  Maximum Volume Bytes = 50G   # Limit Volume size to something reasonable
  Label Format = "AI-Consolidated-"
  Volume Use Duration = 2 days
  Storage = File
  Next Pool = Longterm         # copy jobs write to this pool
  Action On Purge = Truncate
  Migration Time = 7 days
  Migration High Bytes = 600G
  Migration Low Bytes = 300G
}


Job {
  Name = "Migrate-To-Offsite-AI-Consolidated-size"
  Client = myth-fd
  Type = Migrate
  Purge Migration Job = yes
  Pool = AI-Consolidated
  Level = Full
  Next Pool = LTO
  Schedule = WeeklyCycleAfterBackup
  Allow Duplicate Jobs = no
  Priority = 4                 # before catalog dump
  Messages = Standard
  Selection Type = PoolOccupancy
  Spool Data = No
  Selection Pattern = "."
  RunAfterJob = "sudo /usr/local/bin/prune.sh AI-Consolidated"
  Enabled = no
}
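For context, the pools and the migration job above reference two director-side Storage resources along these lines (the names, address, password, and device names here are illustrative placeholders, not copied from my config):

```
Storage {
  Name = File
  Address = bareos-sd.example.com   # assumed SD address
  Password = "secret"               # placeholder
  Device = FileStorage
  Media Type = File
}

Storage {
  Name = LTO
  Address = bareos-sd.example.com
  Password = "secret"
  Device = LTO-Drive                # the single tape drive
  Media Type = LTO
}
```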

Brock Palen
bro...@mlds-networks.com
www.mlds-networks.com
Websites, Linux, Hosting, Joomla, Consulting

Russell Harmon

Dec 23, 2023, 3:43:06 PM
to Brock Palen, bareos-users
On Sat, Dec 23, 2023 at 10:49 Brock Palen <bro...@mlds-networks.com> wrote:
> The migration job runs a script that checks and truncates all pruned volumes to free disk space rather than waiting for them to expire by age. Again this setup I find requires few TB of disk (need to run full backups to disk) and requires some watching.

To make sure I understand, this requires enough disk space for a full backup? I can't write out each tape's worth of data to tape while the full backup is running?

Brock Palen

Dec 23, 2023, 8:21:29 PM
to Russell Harmon, bareos-users
Correct. When you run your consolidation with a full, it has to read the old full, likely from your tape drive, so it has to write the result to disk.


Brock Palen

Russell Harmon

Dec 24, 2023, 12:55:10 AM
to Brock Palen, bareos-users
On Sat, Dec 23, 2023 at 17:21 Brock Palen <bro...@mlds-networks.com> wrote:
> Correct. Because when you run your consolidate with a full it has to read the old full likely from your tape drive. So it has to write to disk.

What if I flip things around: use disk for incrementals and tape for full? Would I then just need to make sure I run a consolidate job before I run out of disk?

Brock Palen

Dec 26, 2023, 8:05:43 AM
to Russell Harmon, bareos-users
That doesn't work.

During the consolidation of a full, you have to read back from the full pool and write to the full pool. So you always need two working devices: one to read, and one to write to the AI-Consolidated pool.

That's why in my setup tape is really a pool to "migrate to make space", so I can read back from it (Bareos often correctly switches to read from that pool). But the AI-Consolidated pool is on disk, and that's where all the shuffling happens.

There is no way to do AI without two devices, and enough disk to at least create one full backup for whatever you are backing up.


Brock Palen



Thomas Kempf

Jul 19, 2024, 8:50:17 AM
to Brock Palen, Russell Harmon, bareos-users
Hello,
I'm reviving this old thread because I think I have a setup like the one Brock describes, but I still have some error in my setup, or have found a bug in Bareos. I'd be really glad if you could help me...

There is one LTO-8 drive with an associated pool "Fulltape" (media type "lto-8"), and disk storage with pools "AI-Incremental" (media type "file") and "AI-Consolidated" (media type "filec").
At the beginning I did a Full into the "Fulltape" pool on LTO-8. After that, daily AI incrementals went to disk in the AI-Incremental pool, plus daily consolidation jobs to AI-Consolidated.
This ran error-free and smoothly for 8 months. Then "Always Incremental Max Full Age" was reached, and I expected a full backup in AI-Consolidated, which I wanted to migrate back to tape afterwards.
Alas, this full consolidation throws an error and fails to change the read device to the tape.
Is this a bug in Bareos?

here is the error:

19-Jul 13:27 hueper-dir JobId 63962: Start Virtual Backup JobId 63962,
Job=BETTERONE-AI-MULTIPOLSTER.2024-07-19_13.27.23_47
19-Jul 13:27 hueper-dir JobId 63962: Bootstrap records written to
/var/lib/bareos/hueper-dir.restore.8.bsr
19-Jul 13:27 hueper-dir JobId 63962: Consolidating JobIds
59638,63851,62915,62939,62963 containing 215949 files
19-Jul 13:27 hueper-dir JobId 63962: Connected Storage daemon at
schorsch-sd.ad.hueper.de:9103, encryption: TLS_CHACHA20_POLY1305_SHA256
TLSv1.3
19-Jul 13:27 hueper-dir JobId 63962: Encryption:
TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
19-Jul 13:27 hueper-dir JobId 63962: Using Device "Bareos-AII0001" to read.
19-Jul 13:27 hueper-dir JobId 63962: Using Device "Bareos-AIC0001" to write.
19-Jul 13:27 schorsch-sd JobId 63962: Volume "BO-AI-Consolidated-31509"
previously written, moving to end of data.
19-Jul 13:27 schorsch-sd JobId 63962: Ready to append to end of Volume
"BO-AI-Consolidated-31509" size=238
19-Jul 13:27 schorsch-sd JobId 63962: stored/acquire.cc:157 Changing
read device. Want Media Type="LTO-8" have="File"
device="Bareos-AII0001" (/bareos-data/BO-AI-Incremental)
19-Jul 13:27 schorsch-sd JobId 63962: Releasing device "Bareos-AII0001"
(/bareos-data/BO-AI-Incremental).
19-Jul 13:27 schorsch-sd JobId 63962: Fatal error: stored/acquire.cc:214
No suitable device found to read Volume "B00006L8"
19-Jul 13:27 schorsch-sd JobId 63962: Releasing device "Bareos-AIC0001"
(/bareos-data/BO-AI-Consolidated).
19-Jul 13:27 schorsch-sd JobId 63962: Releasing device "Bareos-AII0001"
(/bareos-data/BO-AI-Incremental).
19-Jul 13:27 hueper-dir JobId 63962: Replicating deleted files from
jobids 59638,63851,62915,62939,62963 to jobid 63962
19-Jul 13:27 hueper-dir JobId 63962: Error: Bareos hueper-dir
23.0.3~pre135.a9e3d95ca (28May24):
Build OS: Debian GNU/Linux 11 (bullseye)
JobId: 63962
Job: BETTERONE-AI-MULTIPOLSTER.2024-07-19_13.27.23_47
Backup Level: Virtual Full
Client: "betterone-fd" 23.0.3~pre95.0aeaf0d6d
(15Apr24) 13.2-RELEASE,freebsd
FileSet: "betterone-ai-multipolster" 2021-02-12 12:02:55
Pool: "BO-AI-Consolidated" (From Job Pool's
NextPool resource)
Catalog: "MyCatalog" (From Client resource)
Storage: "Bareos-AIC" (From Storage from Pool's
NextPool resource)
Scheduled time: 19-Jul-2024 13:27:23
Start time: 23-Mai-2024 19:06:26
End time: 23-Mai-2024 19:23:02
Elapsed time: 16 mins 36 secs
Priority: 25
Allow Mixed Priority: yes
SD Files Written: 0
SD Bytes Written: 0 (0 B)
Rate: 0.0 KB/s
Volume name(s):
Volume Session Id: 9
Volume Session Time: 1721383710
Last Volume Bytes: 0 (0 B)
SD Errors: 1
SD termination status: Fatal Error
Accurate: yes
Bareos binary info: Bareos community build (UNSUPPORTED): Get
professional support from https://www.bareos.com
Job triggered by: User
Termination: *** Backup Error ***

Kind Regards
Tom

Bruno Friedmann (bruno-at-bareos)

Jul 22, 2024, 3:33:14 AM
to bareos-users
This setup will not work: if your first full is on tape, the consolidation won't be able to have read and write storages of the same media type.

The tape should instead be used to create a VirtualFull copy of the consolidated jobs, for example.

Thomas Kempf

Jul 22, 2024, 3:48:59 AM
to bareos...@googlegroups.com
Hello Bruno,
I thought I was trying to do what you proposed: write the full consolidation job, including the first full read from tape, to the consolidation pool, and then migrate it all back to tape. So I thought there would be no need to write to the tape during the consolidated full.
As with incremental consolidation, data is read from media type (file) and written to media type (filec). At least, that's how I interpret the log:
> 19-Jul 13:27 hueper-dir JobId 63962: Using Device "Bareos-AII0001"
> to read.
> 19-Jul 13:27 hueper-dir JobId 63962: Using Device "Bareos-AIC0001"
> to write.
Or am I missing something here?

Bruno Friedmann (bruno-at-bareos)

Jul 22, 2024, 8:50:00 AM
to bareos-users
Well, it tries (for a good reason: your configuration) to also consolidate the full, so it fails when it tries to access the data stored on tape:

> 19-Jul 13:27 schorsch-sd JobId 63962: Fatal error: stored/acquire.cc:214
> No suitable device found to read Volume "B00006L8"

AI works with two distinct storages, but they must have the same media type. So in your case, fulls and incrementals would be on disk, and tape can then be used for a consolidated Virtual Full (say, every week or month), as described in the documentation.
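Following the pattern in the linked documentation, the long-term copy can be a VirtualFull job that reads the consolidated jobs and is then flagged as an archive job, so the next consolidation ignores it. A rough sketch (the job, client, fileset, and pool names are placeholders):

```
Job {
  Name = "AI-Longterm"
  Type = Backup
  Level = VirtualFull
  Client = betterone-fd                  # placeholder client
  FileSet = "betterone-ai-multipolster"  # placeholder fileset
  Accurate = yes
  Pool = AI-Consolidated                 # read consolidated jobs from disk
  # The virtual full is written to this pool's Next Pool (the tape pool).
  Run Script {
    Console = "update jobid=%i jobtype=A"  # mark as archive so consolidation skips it
    Runs When = After
    Runs On Client = No
    Runs On Failure = No
  }
  Messages = Standard
}
```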

Thomas Kempf

Jul 22, 2024, 10:13:24 AM
to bareos...@googlegroups.com
OK, what I don't understand is the necessity of having the same media type for read access; I would just read from tape.
Wouldn't it be possible to read the full from one device (media type), consolidate it with the incrementals from another device (media type), and write the consolidated full data to the consolidated pool?
That way you could avoid having to keep a full of all AI jobs on disk at the same time...


Bruno Friedmann (bruno-at-bareos)

Jul 23, 2024, 2:57:29 AM
to bareos-users
I would say it is because it is designed the way it is: the full also moves along the timeline, as documented, so from time to time it needs to be read and rewritten.

To cover your need, you may want to achieve almost what AI does, but without the AI facilities: for example, run a virtual full, say each month, with that one going to tape, and then recycle your previous full and incrementals.
Check the "Max Virtual Full Interval" parameter, for example.

Just a rough idea.
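Such a plain (non-AI) job could roll a new full onto tape periodically via that directive. A minimal sketch, with placeholder names:

```
Job {
  Name = "backup-example"
  Type = Backup
  Level = Incremental
  Client = example-fd                  # placeholder
  FileSet = "example-fileset"          # placeholder
  Pool = Disk                          # incrementals and fulls live on disk
  Next Pool = LTO                      # the virtual full is written to tape
  Max Virtual Full Interval = 30 days  # upgrade to a VirtualFull monthly
  Messages = Standard
}
```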

Thomas Kempf

Jul 23, 2024, 4:04:45 AM
to bareos...@googlegroups.com
A pity that it is not possible. Don't you agree that this would be a good strategy when disk space is critical, and thus a good enhancement for Bareos?

For me, it would be okay to write one whole full into the consolidation pool on disk during consolidation, when "Always Incremental Max Full Age" is hit, and then migrate it back to tape. You could set the timespan so that each month only one full is consolidated, then migrate it away from disk before the next one is scheduled.

Anyhow, thank you for your explanations and proposals. I'll check my options then, especially "Max Virtual Full Interval", or gather more disk space...

Kind Regards
Tom
