Hi. I'm trying to implement the Always Incremental backup scheme, but I've run into an issue.
At the bottom is my config for one client. I have the same config for all 10 clients, except that there is only a single Consolidate job rather than one per client. That means that, per client, I have two storage daemon device files, two director storage files and two director pool files.
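To make the per-client duplication concrete, this is the file layout for one client (client1.com shown, matching the configs pasted below; the other nine clients follow the same naming):

  bareos-sd:  device/FileStorage1-client1.com.conf
              device/FileStorage2-client1.com.conf
  bareos-dir: storage/File1-client1.com.conf
              storage/File2-client1.com.conf
              pool/AI-Incremental-client1.com.conf
              pool/AI-Consolidated-client1.com.conf
              job/backup-client1.com.conf

plus the single job/Consolidate.conf that is shared by all clients.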
The problem occurred when the Consolidate job queued a VirtualFull backup for my second client.
These are the logs from the database for the job with jobid=111:
"""
10-Aug 13:00 bareos-dir JobId 111: Connected Storage daemon at bareos.backups.com:9103, encryption: AES256-GCM-SHA384
10-Aug 13:00 bareos-dir JobId 111: Using Device "FileStorage1-client2.com" to read.
10-Aug 13:00 bareos-dir JobId 111: Using Device "FileStorage2-client2.com" to write.
10-Aug 13:00 bareos-sd JobId 111: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=AI-Consolidated-client2.com-00 from dev="FileStorage2-client2.com" (/var/lib/bareos/storage/client2.com) to "FileStorage1-client2.com" (/var/lib/bareos/storage/client2.com)
10-Aug 13:00 bareos-sd JobId 111: Warning: stored/acquire.cc:331 Read acquire: stored/label.cc:264 Could not reserve volume AI-Consolidated-client2.com-00 on "FileStorage1-client2.com" (/var/lib/bareos/storage/client2.com)
10-Aug 13:00 bareos-sd JobId 111: Please mount read Volume "AI-Consolidated-client2.com-00" for:
    Job:          backup-client2.com.2019-08-10_13.00.03_06
    Storage:      "FileStorage1-client2.com" (/var/lib/bareos/storage/client2.com)
    Pool:         AI-Incremental-client2.com
    Media type:   File
10-Aug 13:00 bareos-dir JobId 111: Start Virtual Backup JobId 111, Job=backup-client2.com.2019-08-10_13.00.03_06
10-Aug 13:00 bareos-dir JobId 111: Consolidating JobIds 97,27
10-Aug 13:00 bareos-dir JobId 111: Bootstrap records written to /var/lib/bareos/bareos-dir.restore.6.bsr
10-Aug 13:00 bareos-dir JobId 111: Connected Storage daemon at bareos.backups.com:9103, encryption: AES256-GCM-SHA384
10-Aug 13:00 bareos-dir JobId 111: Using Device "FileStorage1-client2.com" to read.
10-Aug 13:00 bareos-dir JobId 111: Using Device "FileStorage2-client2.com" to write.
10-Aug 13:00 bareos-sd JobId 111: Warning: stored/vol_mgr.cc:548 Need volume from other drive, but swap not possible. Status: read=0 num_writers=0 num_reserve=1 swap=0 vol=AI-Consolidated-client2.com-00 from dev="FileStorage2-client2.com" (/var/lib/bareos/storage/client2.com) to "FileStorage1-client2.com" (/var/lib/bareos/storage/client2.com)
10-Aug 13:00 bareos-sd JobId 111: Warning: stored/acquire.cc:331 Read acquire: stored/label.cc:264 Could not reserve volume AI-Consolidated-client2.com-00 on "FileStorage1-client2.com" (/var/lib/bareos/storage/client2.com)
10-Aug 13:00 bareos-sd JobId 111: Please mount read Volume "AI-Consolidated-client2.com-00" for:
    Job:          backup-client2.com.2019-08-10_13.00.03_06
    Storage:      "FileStorage1-client2.com" (/var/lib/bareos/storage/client2.com)
    Pool:         AI-Incremental-client2.com
    Media type:   File
"""
After this happened, the job stayed in running status. A couple of other jobs (incrementals, I think) still went through OK. After a while (a day, maybe two) Bareos stopped working: every job except the one in question just sat in the queue.
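Next time it gets stuck I can collect more state; I assume the useful bits would be something like this from bconsole (storage and pool names taken from the logs above, assuming the client2.com resources mirror the client1.com config below):

  status dir
  status storage=File1-client2.com
  status storage=File2-client2.com
  list volumes pool=AI-Incremental-client2.com
  list volumes pool=AI-Consolidated-client2.com
  messages

Please say if different output would be more useful.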
I tried to replicate this locally in Vagrant, but I can't reproduce it.
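In the Vagrant box I trigger everything by hand from bconsole: a few incremental runs for one test client followed by a manual Consolidate (job names as in the client1.com config below; I assume Always Incremental Job Retention has to be shortened there so the consolidation actually selects jobs):

  run job=backup-client1.com level=Incremental yes
  run job=Consolidate yes

Maybe triggering it manually like this simply doesn't hit the same path as the scheduled runs in production.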
What am I doing wrong? Why did this happen? And how can I replicate this locally?
Any help is very much appreciated!
Other:
We're using Bareos with client-initiated connection, TLS enabled, and data encryption enabled.
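For context, the director-side Client resource looks roughly like this (a simplified sketch; name, address and password are placeholders, TLS and encryption directives omitted, the client-initiated-connection directives are the relevant part):

Client {
  Name = client1.com-fd
  Address = client1.com
  Password = "[password]"
  Heartbeat Interval = 60 seconds
  Connection From Director To Client = no
  Connection From Client To Director = yes
}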
I've tried to add all relevant info without it being too much, but perhaps I've missed something important. Please let me know if that's the case.
Storage daemon config:
STORAGE:
BOF - storage/bareos-sd.conf
Storage {
Name = bareos-sd
Maximum Concurrent Jobs = 20
Heartbeat Interval = 60 seconds
}
EOF
DEVICES:
BOF - device/FileStorage1-client1.com.conf
Device {
Name = FileStorage1-client1.com
Media Type = File
Archive Device = /var/lib/bareos/storage/client1.com
LabelMedia = yes;
Random Access = yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = no;
}
EOF
BOF - device/FileStorage2-client1.com.conf
Device {
Name = FileStorage2-client1.com
Media Type = File
Archive Device = /var/lib/bareos/storage/client1.com
LabelMedia = yes;
Random Access = yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = no;
}
EOF
Director daemon config:
STORAGE:
BOF - storage/File1-client1.com.conf
Storage {
Name = File1-client1.com
Address = bareos.backups.com
Password = "[password]"
Device = FileStorage1-client1.com
Media Type = File
Heartbeat Interval = 60 seconds
}
EOF
BOF - storage/File2-client1.com.conf
Storage {
Name = File2-client1.com
Address = bareos.backups.com
Password = "[password]"
Device = FileStorage2-client1.com
Media Type = File
Heartbeat Interval = 60 seconds
}
EOF
POOLS:
BOF - pool/AI-Incremental-client1.com.conf
Pool {
Name = AI-Incremental-client1.com
Pool Type = Backup
Recycle = yes
AutoPrune = no
Volume Use Duration = 23 hours
Label Format = "AI-Incremental-client1.com-${NumVols:p/2/0/r}"
Storage = File1-client1.com
Next Pool = AI-Consolidated-client1.com
}
EOF
BOF - pool/AI-Consolidated-client1.com.conf
Pool {
Name = AI-Consolidated-client1.com
Pool Type = Backup
Recycle = yes
AutoPrune = no
Volume Use Duration = 23 hours
Label Format = "AI-Consolidated-client1.com-${NumVols:p/2/0/r}"
Storage = File2-client1.com
}
EOF
JOBS:
BOF - job/backup-client1.com.conf
Job {
Name = "
backup-client1.com"
JobDefs =
backup_client1.com Pool = AI-Incremental-client1.com
Incremental Backup Pool = AI-Incremental-client1.com
Full Backup Pool = AI-Consolidated-client1.com
Type = Backup
Level = Incremental
Accurate = yes
Prune Volumes = yes
Schedule = "NightlyBackup" [Every night at 3 in the morning]
Priority = 20
Always Incremental = yes
Always Incremental Max Full Age = 14 days
Always Incremental Job Retention = 7 days
Always Incremental Keep Number = 7
}
EOF
BOF - job/Consolidate.conf
Job {
Name = "Consolidate"
Type = Consolidate
Accurate = yes
Prune Volumes = yes
JobDefs = DefaultJob
Schedule = "DailyConsolidate" [Every day at 1 in the afternoon]
Max Full Consolidations = 4
}
EOF
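The Schedule resources referenced above are nothing special; sketched here rather than pasted, they amount to roughly this (the exact Run lines may differ slightly):

Schedule {
  Name = "NightlyBackup"
  Run = Incremental mon-sun at 03:00
}

Schedule {
  Name = "DailyConsolidate"
  Run = mon-sun at 13:00
}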
Thanks!