Hi,
I am currently experimenting with an Always Incremental (AI) configuration on our setup with an autoloader and two LTO tape drives (I spool all transfers, i.e. D2D2T).
Most of it works fine in this test, but after a few days my consolidation jobs strangely start using a tape from the consolidation pool to store consolidated incrementals, leaving them at level "Incremental".
When a job then wants to consolidate one of these incrementals (stored on a tape from the consolidation pool), the system runs into something like a deadlock, because it wants to read from and write to the same tape: reading the incremental from it while writing the consolidated job back onto it. This leads to the following error:
20-Jun 00:01 bareos-dir JobId 664: Start Consolidate JobId 664, Job=ai-consolidate-testserver.2020-06-20_00.01.00_52
20-Jun 00:01 bareos-dir JobId 664: Looking at always incremental job ai-inc-testserver
20-Jun 00:01 bareos-dir JobId 664: ai-inc-testserver: considering jobs older than 13-Jun-2020 00:01:02 for consolidation.
20-Jun 00:01 bareos-dir JobId 664: before ConsolidateFull: jobids: 641,660,629
20-Jun 00:01 bareos-dir JobId 664: check full age: full is 07-Jun-2020 20:07:53, allowed is 08-Jun-2020 00:01:02
20-Jun 00:01 bareos-dir JobId 664: Full is older than AlwaysIncrementalMaxFullAge -> also consolidating Full jobid 641
20-Jun 00:01 bareos-dir JobId 664: after ConsolidateFull: jobids: 641,660,629
20-Jun 00:01 bareos-dir JobId 664: ai-inc-testserver: Start new consolidation
20-Jun 00:01 bareos-dir JobId 664: Using Catalog "MyCatalog"
20-Jun 00:01 bareos-dir JobId 664: Job queued. JobId=665
20-Jun 00:01 bareos-dir JobId 664: Consolidating JobId 665 started.
20-Jun 00:01 bareos-dir JobId 664: BAREOS 19.2.7 (16Apr20): 20-Jun-2020 00:01:02
JobId: 664
Job: ai-consolidate-testserver.2020-06-20_00.01.00_52
Scheduled time: 20-Jun-2020 00:01:00
Start time: 20-Jun-2020 00:01:02
End time: 20-Jun-2020 00:01:02
Termination: Consolidate OK
20-Jun 00:01 bareos-dir JobId 665: Start Virtual Backup JobId 665, Job=ai-inc-testserver.2020-06-20_00.01.02_53
20-Jun 00:01 bareos-dir JobId 665: Consolidating JobIds 641,660,629
20-Jun 00:01 bareos-dir JobId 665: Bootstrap records written to /var/lib/bareos/bareos-dir.restore.2.bsr
20-Jun 00:01 bareos-dir JobId 665: Connected Storage daemon at xxxxxxxxxxxxxxxxxxxx:9103, encryption: TLS_CHACHA20_POLY1305_SHA256
20-Jun 00:01 bareos-dir JobId 665: Using Device "lto8-0" to read.
20-Jun 00:01 bareos-dir JobId 665: Using Device "lto8-1" to write.
20-Jun 00:01 bareos-sd JobId 665: 3307 Issuing autochanger "unload slot 10, drive 0" command.
20-Jun 00:05 bareos-sd JobId 665: Warning: Volume "TAP011M8" wanted on "lto8-0" (/dev/tape/by-id/scsi-35000e111c23a90b5-nst) is in use by device "lto8-1" (/dev/tape/by-id/scsi-35000e111c23a90bf-nst)
20-Jun 00:05 bareos-sd JobId 665: Warning: stored/acquire.cc:290 Read open device "lto8-0" (/dev/tape/by-id/scsi-35000e111c23a90b5-nst) Volume "TAP011M8" failed: ERR=backends/generic_tape_device.cc:141 Unable to open device "lto8-0" (/dev/tape/by-id/scsi-35000e111c23a90b5-nst): ERR=No medium found
At this point nothing moves forward anymore, because TAP011M8 was already loaded in "lto8-1" for writing and could not be moved to "lto8-0" for reading.
In this example, TAP011M8 is in the consolidation pool and contains JobId 660; JobIds 641 and 629 are on different tapes in the incremental pool.
So it looks like the system wants the same tape in the dedicated read drive and in the dedicated write drive at the same time.
What did I do wrong?
Is there a way to bring the system into a state where it uses the tape drives more intelligently, perhaps in parallel: first reading everything into the spool, and only then despooling the whole thing?
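To make the structure clearer, here is a condensed sketch (not a working configuration, just the resource names from my config below) of the pool chain behind the failing Virtual Full:

```
# Condensed view of the pool chain involved in JobId 665:
Pool {
  Name = ai-inc-testserver                # JobIds 641 and 629 live here
  Next Pool = ai-consolidated-testserver  # Virtual Full writes here
}
Pool {
  Name = ai-consolidated-testserver       # TAP011M8 holds JobId 660
}
# JobId 665 must read 641,660,629 and write into
# ai-consolidated-testserver, so TAP011M8 is needed both as a
# read source (for JobId 660) and as a write candidate.
```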
Maybe you have an idea...
Thanks in advance...
Director Config:
***************
Jobs:
*****
Job {
Name = "ai-inc-testserver"
Description = "ai-inc-testserver Job"
Type = Backup
Level = Incremental
Client = "testserver"
FileSet = "fs-windows-testserver-u"
Storage = "Tape"
Accurate = yes
Spool Data = yes
Write Bootstrap = "/var/lib/bareos/%c.bsr"
Messages = "Standard"
Priority = 15
Schedule = "ai-testserver-schedule"
Always Incremental = yes
Always Incremental Job Retention = 7 days
Always Incremental Keep Number = 7
Always Incremental Max Full Age = 12 days
Pool = "ai-inc-testserver" #ai pool
Incremental Backup Pool = "ai-inc-testserver" #ai pool
Full Backup Pool = "ai-consolidated-testserver" #consolidated pool
}
Job {
Name = "ai-consolidate-testserver"
Type = Consolidate
Client = "testserver"
FileSet = "fs-windows-testserver-u"
Storage = "Tape"
Accurate = yes
Spool Data = yes
Write Bootstrap = "/var/lib/bareos/%c.bsr"
Messages = "Standard"
Priority = 25
Schedule = "ai-testserver-consolidated-schedule"
Pool = "ai-consolidated-testserver"
Incremental Backup Pool = "ai-inc-testserver" #ai pool
Full Backup Pool = "ai-consolidated-testserver" #consolidated pool
Prune Volumes = yes
Max Full Consolidations = 1
}
Pools:
******
Pool {
Name = ai-inc-testserver
Pool Type = Backup
Storage = Tape
Recycle = yes
AutoPrune = yes
Recycle Pool = Scratch
Scratch Pool = Scratch
Maximum Block Size = 1048576
Volume Use Duration = 23h
Volume Retention = 16 days
File Retention = 16 days
Job Retention = 16 days
Maximum Volumes = 5
Next Pool = ai-consolidated-testserver
}
Pool {
Name = ai-consolidated-testserver
Pool Type = Backup
Storage = Tape
Recycle = yes
AutoPrune = yes
Recycle Pool = Scratch
Scratch Pool = Scratch
Maximum Block Size = 1048576
Volume Use Duration = 23h
Volume Retention = 18 days
File Retention = 18 days
Job Retention = 18 days
Maximum Volumes = 5 # maximum number of tapes in use - may need adjusting based on the backup volume
}
Schedules:
Schedule {
Name = "ai-testserver-schedule"
Run = Incremental daily at 19:00
}
Schedule {
Name = "ai-testserver-consolidated-schedule"
Run = Incremental daily at 00:01
}
Director {
Name = bareos-dir
QueryFile = "/usr/lib/bareos/scripts/query.sql"
Maximum Concurrent Jobs = 10
Password = "secret pass" # Console password
Messages = Daemon
Auditing = yes
Optimize For Speed = yes
}
Storage {
Name = Tape
Address = xxx.xxxx.xx # N.B. Use a fully qualified name here (do not use "localhost" here).
Password = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
Device = changer
Media Type = LTO-8
Auto Changer = yes
Collect Statistics = yes
Maximum Concurrent Jobs = 2
}
Storage Config:
***************
Storage {
Name = bareos-sd
Maximum Concurrent Jobs = 20
}
Autochanger {
Name = "changer"
Changer Device = /dev/tape/by-id/scsi-35000e111c23a90b8
Device = lto8-0, lto8-1
Changer Command = "/usr/lib/bareos/scripts/mtx-changer %c %o %S %a %d"
}
Device {
Name = "lto8-0"
DeviceType = Tape
DriveIndex = 0
ArchiveDevice = /dev/tape/by-id/scsi-35000e111c23a90b5-nst
MediaType = LTO-8
LabelMedia = no
Check Labels = no
AutoChanger = yes # default: no
AutomaticMount = yes # default: no
Maximum File Size = 50GB # default: 1000000000 (1GB)
Always Open = yes
Removable Media = yes # Media can be removed from the device - like tapes
Random Access = no
Spool Directory = /120TB/bareosspool-lto0
Maximum Spool Size = 109951162777600 # 100TB
Collect Statistics = yes
}
Device {
Name = "lto8-1"
DeviceType = Tape
DriveIndex = 1
ArchiveDevice = /dev/tape/by-id/scsi-35000e111c23a90bf-nst
MediaType = LTO-8
LabelMedia = no
Check Labels = no
AutoChanger = yes # default: no
AutomaticMount = yes # default: no
Maximum File Size = 50GB # default: 1000000000 (1GB)
Always Open = yes
Removable Media = yes # Media can be removed from the device - like tapes
Random Access = no
Spool Directory = /120TB/bareosspool-lto1
Maximum Spool Size = 109951162777600 # 100TB
Collect Statistics = yes
}