Since recently I have the issue that a long-running full backup (15 TB) only runs until the daily schedule starts the expected incremental job.
Earlier, when everything was working, the incremental job wanted to start a full (as it didn't find one), but got blocked by the "no duplicates" directive.
This no longer works.
Now I have the situation that the running full job gets silently killed and the new, earlier incremental ("now wants to be a full") job takes over.
This is unfortunate, as until now I had no Max Run Time configured, which should mean "forever". Now I've set it to 48h and will see tonight whether the full runs through.
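As far as I understand the docs, with Allow Duplicate Jobs = no there are three further directives that decide which of the two duplicate jobs gets cancelled. A sketch with the documented defaults (not my actual config):

Job {
  ...
  Allow Duplicate Jobs = no
  Cancel Lower Level Duplicates = no   # yes would cancel whichever duplicate has the lower level
  Cancel Queued Duplicates = no        # yes would cancel a duplicate that is still queued
  Cancel Running Duplicates = no       # yes would cancel the already running job
}

With all three at their default "no", the new job should be the one that gets cancelled, so the running full being killed looks like the opposite of what I'd expect.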
Here are my job config and my JobDefs:
Job {
  Name = "AIbackup-omvserver"
  Client = "omvserver"
  FileSet = omvserver
  Type = Backup
  Level = Incremental
  Schedule = "AISchedule"
  Storage = FileCons
  Priority = 50
  Messages = Standard
  Pool = AI-Incremental
  Max Run Time = 48h
  Spool Attributes = yes
  Maximum Concurrent Jobs = 1
  Full Backup Pool = AI-Consolidated
  Incremental Backup Pool = AI-Incremental
  Accurate = yes
  Allow Mixed Priority = no
  Allow Duplicate Jobs = no
  Always Incremental = yes
  Always Incremental Job Retention = 7 days
  Always Incremental Keep Number = 7
  Always Incremental Max Full Age = 11 days
  Max Virtual Full Interval = 14 days
  Run Script {
    Console = ".bvfs_update jobid=%i"
    RunsWhen = After
    RunsOnClient = No
  }
  Run Script {
    Command = "/var/lib/bareos/scripts/jobcheck.sh"
    RunsWhen = Before
    RunsOnClient = No
    Fail job on error = No
  }
  Run Script {
    Command = "rm -f /var/lib/bareos/scripts/job.running"
    RunsWhen = After
    RunsOnClient = No
    Fail job on error = No
  }
}
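jobcheck.sh itself is not included above; a minimal sketch of a lock-file guard matching the job.running cleanup script (assuming that is roughly what it does, the real script may differ):

#!/bin/sh
# Refuse to start while a previous job's lock file still exists,
# otherwise create the lock for the job that is about to run.
LOCK=/var/lib/bareos/scripts/job.running
if [ -e "$LOCK" ]; then
    echo "$LOCK exists - another job still seems to be running" >&2
    exit 1
fi
touch "$LOCK"

(Note that with Fail job on error = No, a non-zero exit from this Before script would not actually stop the job.)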
JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Incremental
  Client = nasbackup
  FileSet = "Catalog"                        # selftest fileset (#13)
  Schedule = "WeeklyCycle"
  Storage = File
  Messages = Standard
  Pool = AI-Incremental-Catalog
  Priority = 10
  Write Bootstrap = "/var/lib/bareos/%c.bsr"
  Run Script {
    Console = ".bvfs_update jobid=%i"
    RunsWhen = After
    RunsOnClient = No
  }
  Full Backup Pool = AI-Consolidated-Catalog # write Full Backups into "Full" Pool (#05)
}
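For diagnosing, these standard bconsole commands show what the director did with the two jobs (the jobid is just an example):

list jobs client=omvserver limit=10
llist jobid=1234
status director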