To give more detail on the experiment I mentioned above: I have a test case with about 200 MB of data. I created a cron job that copies small amounts of new data into the file set every 5 minutes. Incremental backups also run every 5 minutes, so when I go to restore files I can map exactly which files line up with which incremental jobs and verify that the scheme is working properly. Consolidation happens every half hour and full backups every hour.
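For context, the scheduling side can be sketched roughly like this (resource names here are placeholders, and directives such as Pool/Storage are left out for brevity, so my real config may differ slightly):

Schedule {
  Name = "Every5Min"
  Run = Incremental hourly at 0:00
  Run = Incremental hourly at 0:05
  # ... one Run line per 5-minute slot, up to 0:55
}

Schedule {
  Name = "EveryHalfHour"
  Run = hourly at 0:00
  Run = hourly at 0:30
}

# Consolidation is handled by a separate job of Type = Consolidate
Job {
  Name = "Consolidate"
  Type = Consolidate
  Client = bareos-fd
  FileSet = "FileSetTest"
  Schedule = "EveryHalfHour"
  Messages = Standard
}

The backup job itself is defined as follows: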
Job {
  Name = "BackupClient1"
  Client = bareos-fd
  FileSet = "FileSetTest"
  Type = Backup
  #Level = Incremental
  #Storage = File
  Messages = Standard
  #JobDefs = "DefaultJob"

  # Always Incremental settings
  AlwaysIncremental = yes
  AlwaysIncrementalJobRetention = 27 min   # incrementals older than this will be consolidated
  AlwaysIncrementalMaxFullAge = 57 min     # if the full backup is older than this, the consolidation will be rolled into it
  Accurate = yes
}
Since the data I'm backing up plus the consolidations comes to roughly 240 MB, I gave the AI-Consolidated pool about 360 MB of space:
Maximum Volume Bytes = 15 MB
Maximum Volumes = 24
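Those two directives live in the Pool resource, which, sketched roughly (the Label Format is hypothetical; the Storage name is the one that appears in the error below), looks like this:

Pool {
  Name = AI-Consolidated
  Pool Type = Backup
  Storage = FileStorageCons1          # storage writing to C:/bareos-storage
  Maximum Volume Bytes = 15 MB        # 24 volumes x 15 MB = 360 MB total
  Maximum Volumes = 24
  Label Format = "AI-Consolidated-"   # hypothetical; my real pool may label volumes differently
}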
When Bareos tries to do a second full backup, it starts creating more AI-Consolidated volumes, but it eventually runs out of space because of the Maximum Volumes limit. The backup job gives me this message:
bareos-sd JobId 35: Job BackupClient1.2020-10-09_19.10.26_46 is waiting. Cannot find any appendable volumes.
Please use the "label" command to create a new Volume for:
Storage: "FileStorageCons1" (C:/bareos-storage)
Pool: AI-Consolidated
Media type: FileCons
The job then hangs indefinitely.
Ideally, I'd like to recycle/prune/purge the volumes as they are read, to conserve space. If anyone has any ideas on how to do this, please let me know.
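To make the question concrete, what I have in mind is adding something like the following to the AI-Consolidated Pool resource shown above (the values are illustrative guesses; I haven't confirmed this combination actually frees space during consolidation):

  Auto Prune = yes            # prune volumes with expired retention when an appendable volume is needed
  Volume Retention = 1 hour   # illustrative guess at a retention window
  Recycle = yes               # allow purged volumes to be reused instead of labeling new ones
  Action On Purge = Truncate  # mark purged volumes so their files can be truncated to reclaim disk space

If retention-based pruning isn't the right mechanism for an always-incremental setup, a pointer to whatever is would be much appreciated.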