Virtual Full after Always Incremental and consolidate: wrong read storage


Tim Banchi

Oct 26, 2017, 12:16:46 PM
to bareos-users
Dear bareos-user,

I'm at a loss trying to create a working VirtualFull job after Always Incremental and Consolidate. I have read the documentation and many forum posts here, but the VirtualFull job always picks the wrong storage: I should get a different read storage and write storage.

Always Incremental and Consolidate themselves work as expected (two devices on one storage daemon; I read the chapters on multiple storage devices and concurrent disk jobs, so I think that part is fine).

My planned setup:
Always Incremental and Consolidate to local disk on the Bareos director server (pavlov), and a VirtualFull backup to tape on another server/storage daemon (delaunay).

I always get:
...
2017-10-26 17:24:13 pavlov-dir JobId 269: Start Virtual Backup JobId 269, Job=pavlov_sys_ai_vf.2017-10-26_17.24.11_04
2017-10-26 17:24:13 pavlov-dir JobId 269: Consolidating JobIds 254,251,252,255,256,257
2017-10-26 17:24:13 pavlov-dir JobId 269: Bootstrap records written to /var/lib/bareos/pavlov-dir.restore.1.bsr
2017-10-26 17:24:14 delaunay-sd JobId 269: Fatal error: Device reservation failed for JobId=269:
2017-10-26 17:24:14 pavlov-dir JobId 269: Fatal error:
Storage daemon didn't accept Device "pavlov-file-consolidate" because:
3924 Device "pavlov-file-consolidate" not in SD Device resources or no matching Media Type.
2017-10-26 17:24:14 pavlov-dir JobId 269: Error: Bareos pavlov-dir 16.2.4 (01Jul16):
...

While a consolidating VirtualFull job (run by the Consolidate job) succeeds:
....
Using Catalog "MyCatalog"
2017-10-26 13:51:39 pavlov-dir JobId 254: Start Virtual Backup JobId 254, Job=pavlov_sys_ai.2017-10-26_13.51.37_40
2017-10-26 13:51:39 pavlov-dir JobId 254: Consolidating JobIds 248,245,246,250
2017-10-26 13:51:39 pavlov-dir JobId 254: Bootstrap records written to /var/lib/bareos/pavlov-dir.restore.4.bsr
2017-10-26 13:51:40 pavlov-dir JobId 254: Using Device "pavlov-file" to read.
2017-10-26 13:51:40 pavlov-dir JobId 254: Using Device "pavlov-file-consolidate" to write.
2017-10-26 13:51:40 pavlov-sd JobId 254: Ready to read from volume "ai_consolidate-0031" on device "pavlov-file" (/mnt/xyz).
2017-10-26 13:51:40 pavlov-sd JobId 254: Volume "ai_consolidate-0023" previously written, moving to end of data.
2017-10-26 13:51:40 pavlov-sd JobId 254: Ready to append to end of Volume "ai_consolidate-0023" size=7437114
2017-10-26 13:51:40 pavlov-sd JobId 254: Forward spacing Volume "ai_consolidate-0031" to file:block 0:215.
2017-10-26 13:51:40 pavlov-sd JobId 254: End of Volume at file 0 on device "pavlov-file" (/mnt/xyz), Volume "ai_consolidate-0031"
2017-10-26 13:51:40 pavlov-sd JobId 254: Ready to read from volume "ai_inc-0030" on device "pavlov-file" (/mnt/xyz).
2017-10-26 13:51:40 pavlov-sd JobId 254: Forward spacing Volume "ai_inc-0030" to file:block 0:1517550.
2017-10-26 13:51:40 pavlov-sd JobId 254: Elapsed time=00:00:01, Transfer rate=7.128 M Bytes/second
2017-10-26 13:51:40 pavlov-dir JobId 254: Joblevel was set to joblevel of first consolidated job: Full
2017-10-26 13:51:41 pavlov-dir JobId 254: Bareos pavlov-dir 16.2.4 (01Jul16):
Build OS: x86_64-pc-linux-gnu ubuntu Ubuntu 16.04 LTS
JobId: 254
Job: pavlov_sys_ai.2017-10-26_13.51.37_40
Backup Level: Virtual Full
Client: "pavlov-fd" 16.2.4 (01Jul16) x86_64-pc-linux-gnu,ubuntu,Ubuntu 16.04 LTS,xUbuntu_16.04,x86_64
FileSet: "linux_system" 2017-10-19 16:11:21
Pool: "disk_ai_consolidate" (From Job Pool's NextPool resource)
Catalog: "MyCatalog" (From Client resource)
Storage: "pavlov-file-consolidate" (From Storage from Pool's NextPool resource)
Scheduled time: 26-Oct-2017 13:51:37
Start time: 26-Oct-2017 13:48:10
End time: 26-Oct-2017 13:48:11
Elapsed time: 1 sec
Priority: 10
SD Files Written: 138
SD Bytes Written: 7,128,227 (7.128 MB)
Rate: 7128.2 KB/s
Volume name(s): ai_consolidate-0023
Volume Session Id: 18
Volume Session Time: 1509016221
Last Volume Bytes: 14,582,726 (14.58 MB)
SD Errors: 0
SD termination status: OK
Accurate: yes
Termination: Backup OK

2017-10-26 13:51:41 pavlov-dir JobId 254: purged JobIds 248,245,246,250 as they were consolidated into Job 254
You have messages.
....


I tried different things: adding and removing the Storage attribute from the jobs, etc. I think I followed the examples in the manual and online, but alas, the job never gets the correct read storage. AFAIK the pools (and not the jobs) should define the different storages. Also, the Media Type differs (File vs. LTO), so the job should pick the right storage, but it just does not.
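To make the intended chain explicit, here is a condensed sketch (all names are the resources defined in the full configuration below):

```conf
# Intended pool -> storage chain (sketch of the config below):
#
#   disk_ai               Storage = pavlov-file                  (AI incrementals)
#    \-> Next Pool = disk_ai_consolidate
#   disk_ai_consolidate   Storage = pavlov-file-consolidate      (consolidated fulls)
#    \-> Next Pool = tape_automated
#   tape_automated        Storage = delaunay_HP_G2_Autochanger   (VirtualFull to tape)
```

So a VirtualFull running from disk_ai_consolidate should read on pavlov and write on delaunay.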

my configuration:

A) director pavlov (to disk storage daemon + director)
1) template for always incremental jobs
JobDefs {
Name = "default_ai"
Type = Backup
Level = Incremental
Client = pavlov-fd
Storage = pavlov-file
Messages = Standard
Priority = 10
Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXX\" %c-%n"
Maximum Concurrent Jobs = 7

#always incremental config
Pool = disk_ai
Incremental Backup Pool = disk_ai
Full Backup Pool = disk_ai_consolidate
Accurate = yes
Always Incremental = yes
Always Incremental Job Retention = 20 seconds #7 days
Always Incremental Keep Number = 2 #7
Always Incremental Max Full Age = 1 minutes # 14 days
}


2) template for virtual full jobs, should run on read storage pavlov and write storage delaunay:
JobDefs {
Name = "default_ai_vf"
Type = Backup
Level = VirtualFull
Messages = Standard
Priority = 13
Accurate = yes
Pool = disk_ai_consolidate

#I tried different settings below, nothing worked
#Full Backup Pool = disk_ai_consolidate
#Virtual Full Backup Pool = tape_automated
#Incremental Backup Pool = disk_ai
#Next Pool = tape_automated
#Storage = delaunay_HP_G2_Autochanger
#Storage = pavlov-file

# run after Consolidate
Run Script {
console = "update jobid=%i jobtype=A"
Runs When = After
Runs On Client = No
Runs On Failure = No
}

Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXX\" %c-%n"
}

3) consolidate job
Job {
Name = ai_consolidate
Type = Consolidate
Accurate = yes
Max Full Consolidations = 1
Client = pavlov-fd #value which should be ignored by Consolidate job
FileSet = "none" #value which should be ignored by Consolidate job
Pool = disk_ai_consolidate #value which should be ignored by Consolidate job
Incremental Backup Pool = disk_ai_consolidate
Full Backup Pool = disk_ai_consolidate
# JobDefs = DefaultJob
# Level = Incremental
Schedule = "ai_consolidate"
# Storage = pavlov-file-consolidate #commented out for VirtualFull-Tape testing
Messages = Standard
Priority = 10
Write Bootstrap = "|/usr/local/bin/bareos-messages.sh \"[Bootstrap] %d: %j (jobid %i)\" %i \"it@XXXXXX\" %c-%n"
}

4) always incremental job for client pavlov (works)
Job {
Name = "pavlov_sys_ai"
JobDefs = "default_ai"
Client = "pavlov-fd"
FileSet = linux_system
Schedule = manual
}


5) virtualfull job for pavlov (doesn't work)
Job {
Name = "pavlov_sys_ai_vf"
JobDefs = "default_ai_vf"
Client = "pavlov-fd"
FileSet = linux_system
Schedule = manual
#Storage = pavlov-file
#Next Pool = tape_automated #doesn't matter whether commented or not
}

6) pool always incremental
Pool {
Name = disk_ai
Pool Type = Backup
Recycle = yes # Bareos can automatically recycle Volumes
AutoPrune = yes # Prune expired volumes
Volume Retention = 4 weeks
Maximum Volume Bytes = 30G # Limit Volume size to something reasonable
Maximum Volumes = 200 # Limit number of Volumes in Pool
Label Format = "ai_inc-" # Volumes will be labeled "ai_inc-<volume-id>"
Volume Use Duration = 23h
Storage = pavlov-file
Next Pool = disk_ai_consolidate
}

7) pool always incremental consolidate
Pool {
Name = disk_ai_consolidate
Pool Type = Backup
Recycle = yes # Bareos can automatically recycle Volumes
AutoPrune = yes # Prune expired volumes
Volume Retention = 4 weeks
Maximum Volume Bytes = 30G # Limit Volume size to something reasonable
Maximum Volumes = 200 # Limit number of Volumes in Pool
Label Format = "ai_consolidate-" # Volumes will be labeled "ai_consolidate-<volume-id>"
Volume Use Duration = 23h
Storage = pavlov-file-consolidate
Next Pool = tape_automated
}

8) pool tape_automated (for virtualfull jobs to tape)
Pool {
Name = tape_automated
Pool Type = Backup
Storage = delaunay_HP_G2_Autochanger
Recycle = yes # Bareos can automatically recycle Volumes
AutoPrune = yes # Prune expired volumes
Recycle Oldest Volume = yes
RecyclePool = Scratch
Maximum Volume Bytes = 0
Volume Retention = 4 weeks
Cleaning Prefix = "CLN"
Catalog Files = yes
}
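As a sanity check, the resolved pool chain can be inspected from bconsole (assuming the `show pool=<name>` form of the show command; the "expect" comments below describe what the output should contain if the chain above is in effect):

```
*show pool=disk_ai_consolidate
# expect: Storage = pavlov-file-consolidate, Next Pool = tape_automated
*show pool=tape_automated
# expect: Storage = delaunay_HP_G2_Autochanger
```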

9) 1st storage device for disk backup (writes always incremental jobs + other normal jobs)
Storage {
Name = pavlov-file
Address = pavlov.XX # N.B. Use a fully qualified name here (do not use "localhost" here).
Password = "X"
Maximum Concurrent Jobs = 1
Device = pavlov-file
Media Type = File
TLS Certificate = X
TLS Key = X
TLS CA Certificate File = X
TLS DH File = X
TLS Enable = X
TLS Require = X
TLS Verify Peer = X
TLS Allowed CN = pavlov.X
}

10) 2nd storage device for disk backup (consolidates AI jobs)
Storage {
Name = pavlov-file-consolidate
Address = pavlov.X # N.B. Use a fully qualified name here (do not use "localhost" here).
Password = "X"
Maximum Concurrent Jobs = 1
Device = pavlov-file-consolidate
Media Type = File
TLS Certificate = X
TLS Key = X
TLS CA Certificate File = X
TLS DH File = X
TLS Enable = yes
TLS Require = yes
TLS Verify Peer = yes
TLS Allowed CN = pavlov.X
}

11) 3rd storage device for tape backup
Storage {
Name = delaunay_HP_G2_Autochanger
Address = "delaunay.XX"
Password = "X"
Device = "HP_G2_Autochanger"
Media Type = LTO
Autochanger = yes
TLS Certificate = X
TLS Key = X
TLS CA Certificate File = X
TLS DH File = X
TLS Enable = yes
TLS Require = yes
TLS Verify Peer = yes
TLS Allowed CN = delaunay.X
}


B) storage daemon pavlov (to disk)
1) to disk storage daemon

Storage {
Name = pavlov-sd
Maximum Concurrent Jobs = 20

# remove comment from "Plugin Directory" to load plugins from specified directory.
# if "Plugin Names" is defined, only the specified plugins will be loaded,
# otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
#
# Plugin Directory = /usr/lib/bareos/plugins
# Plugin Names = ""
TLS Certificate = X
TLS Key = X
TLS CA Certificate File = X
TLS DH File = X
TLS Enable = yes
TLS Require = yes
TLS Verify Peer = yes
TLS Allowed CN = pavlov.X
TLS Allowed CN = edite.X
TLS Allowed CN = delaunay.X
}

2) to disk device (AI + others)
Device {
Name = pavlov-file
Media Type = File
Maximum Open Volumes = 1
Maximum Concurrent Jobs = 1
Archive Device = /mnt/xyz #(same for both)
LabelMedia = yes; # lets Bareos label unlabeled media
Random Access = yes;
AutomaticMount = yes; # when device opened, read it
RemovableMedia = no;
AlwaysOpen = no;
Description = "File device. A connecting Director must have the same Name and MediaType."
}

3) consolidate to disk
Device {
Name = pavlov-file-consolidate
Media Type = File
Maximum Open Volumes = 1
Maximum Concurrent Jobs = 1
Archive Device = /mnt/xyz #(same for both)
LabelMedia = yes; # lets Bareos label unlabeled media
Random Access = yes;
AutomaticMount = yes; # when device opened, read it
RemovableMedia = no;
AlwaysOpen = no;
Description = "File device. A connecting Director must have the same Name and MediaType."
}

C) to tape storage daemon (different server)
1) allowed director
Director {
Name = pavlov-dir
Password = "[md5]X"
Description = "Director, who is permitted to contact this storage daemon."
TLS Certificate = X
TLS Key = /X
TLS CA Certificate File = X
TLS DH File = X
TLS Enable = yes
TLS Require = yes
TLS Verify Peer = yes
TLS Allowed CN = pavlov.X
}


2) storage daemon config
Storage {
Name = delaunay-sd
Maximum Concurrent Jobs = 20
Maximum Network Buffer Size = 32768
# Maximum Network Buffer Size = 65536

# remove comment from "Plugin Directory" to load plugins from specified directory.
# if "Plugin Names" is defined, only the specified plugins will be loaded,
# otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
#
# Plugin Directory = /usr/lib/bareos/plugins
# Plugin Names = ""
TLS Certificate = X
TLS Key = X
TLS DH File = X
TLS CA Certificate File = X
TLS Enable = yes
TLS Require = yes
TLS Verify Peer = yes
TLS Allowed CN = pavlov.X
TLS Allowed CN = edite.X
}


3) autochanger config
Autochanger {
Name = "HP_G2_Autochanger"
Device = Ultrium920
Changer Device = /dev/sg5
Changer Command = "/usr/lib/bareos/scripts/mtx-changer %c %o %S %a %d"
}

4) device config
Device {
Name = "Ultrium920"
Media Type = LTO
Archive Device = /dev/st2
Autochanger = yes
LabelMedia = no
AutomaticMount = yes
AlwaysOpen = yes
RemovableMedia = yes
Maximum Spool Size = 50G
Spool Directory = /var/lib/bareos/spool
Maximum Block Size = 2097152
# Maximum Block Size = 4194304
Maximum Network Buffer Size = 32768
Maximum File Size = 50G
}

Jon SCHEWE

Oct 30, 2017, 10:00:29 AM
to bareos...@googlegroups.com
Your setup looks very close to mine; I am doing the same thing that you
want to do. My other job is called "offsite", but it's the same idea.
The scripts that run before the jobs make sure the appropriate USB
drives are attached.

I set both next pool and virtual full backup pool and that seems to work.

JobDefs {
  Name = "AlwaysIncremental"
  Type = Backup
  Level = Incremental
  Schedule = "WeeklyCycle"
  Storage = File
  Messages = Standard
  Priority = 10
  Write Bootstrap = "/mnt/bareos-file/bootstrap/%c.bsr"
  Pool = AI-Incremental
  Full Backup Pool = AI-Consolidated                

  Accurate = yes
  Always Incremental = yes
  Always Incremental Job Retention = 7 days
  Always Incremental Keep Number = 14

  RunScript {
    RunsOnClient = no
    RunsWhen = Before
    FailJobOnError = yes
    Command = "/etc/bareos/check-local-backup-disk.sh"
  }
}

JobDefs {
  Name = "DefaultJob"
  Type = Backup
  Level = Full
  Client = gemini-fd
  FileSet = "SelfTest"                     # selftest fileset (#13)
  Schedule = "WeeklyCycle"
  Storage = File
  Messages = Standard
  Pool = Full
  Priority = 10
  Write Bootstrap = "/mnt/bareos-file/bootstrap/%c.bsr"

  RunScript {
    RunsOnClient = no
    RunsWhen = Before
    FailJobOnError = yes
    Command = "/etc/bareos/check-local-backup-disk.sh"
  }

}
JobDefs {
  Name = "OffsiteJob"
  Type = Backup
  Level = VirtualFull
  Client = gemini-fd
  FileSet = "SelfTest"                     # selftest fileset (#13)
  Schedule = "OffsiteSchedule"
  Storage = Offsite
  Messages = Standard
  Pool = AI-Consolidated
  Incremental Backup Pool = AI-Incremental
  Next Pool = Offsite
  Virtual Full Backup Pool = Offsite
  Priority = 10
  Accurate = yes
  Write Bootstrap = "/mnt/bareos-file/bootstrap/%c.bsr"

  RunScript {
    RunsOnClient = no
    RunsWhen = Before
    FailJobOnError = yes
    Command = "/etc/bareos/check-offsite-backup-disk.sh"
  }
  RunScript {
    console = "update jobid=%i jobtype=A"
    RunsOnClient = no
    RunsOnFailure = No
    RunsWhen = After
    FailJobOnError = yes
  }
 
}

Job {
  Name = "backup-gemini-fd"
  JobDefs = "AlwaysIncremental"
  Client = "gemini-fd"
  FileSet = "gemini-all"
  ClientRunBeforeJob = "/etc/bareos/before-backup.sh"
}

Job {
  Name = "offsite-gemini-fd"
  JobDefs = "OffsiteJob"
  Client = "gemini-fd"
  FileSet = "gemini-all"
}
Job {
  Client = gemini-fd
  Name = "Consolidate"
  Type = "Consolidate"
  Accurate = "yes"
  JobDefs = "DefaultJob"
  FileSet = "LinuxAll"

  Max Full Consolidations = 1
}

Storage {
  Name = File
  Address = gemini                # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "XXXXXXXXX"
  Device = FileStorage
  Media Type = File

  # TLS setup
  ...
}


Storage {
  Name = Offsite
  Address = gemini                # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "XXXXXX"
  Device = OffsiteStorage
  Media Type = File

  # TLS setup
  ...
}
--
Research Scientist
Raytheon BBN Technologies
5775 Wayzata Blvd, Ste 630
Saint Louis Park, MN, 55416
Office: 952-545-5720

Tim Banchi

Nov 6, 2017, 9:40:04 AM
to bareos-users
Hi Jon,

Thank you, but unfortunately it doesn't work. Same problem as before.

Two questions:
1) You use the same storage/device both for Always Incremental and for Consolidate. According to the manual (chapter 23.3.3), at least two storages are needed. How does that work out in practice?

2) Could you also post your pool configurations? I tried configuring Storage and Next Pool in the jobs and commenting them out in the pools; I thought this might work out better. But then I get the error message: No Next Pool specification found in Pool "disk_ai_consolidate". As soon as I add the Next Pool (being offsite/tape_automated), I get the same error message again ...

Jon SCHEWE

Nov 6, 2017, 12:12:20 PM
to bareos-users
> Hi Jon,
>
> thank you, but unfortunately it doesn't work. Same problem as before.
>
> Two questions:
> 1) you use the same storage/device for both always incremental and for consolidate. according to the manual (chapter 23.3.3) at least 2 storages are needed). How does that work out in practice?
I'm not sure, it just seems to be working.
> 2) could you also post your pool configurations? I tried to configure storage and next pool in the jobs, and comment them in the pools. I thought this might problably work out better. But then I get the error message: No Next Pool specification found in Pool "disk_ai_consolidate". As soon as I add the next pool (being offsite/tape_automated), I get the same error message again ...
>

Pool {
  Name = AI-Consolidated
  Pool Type = Backup
  Recycle = yes
  Auto Prune = yes
  Volume Retention = 360 days
  Maximum Volume Bytes = 50G
  Label Format = "AI-Consolidated-"
  Volume Use Duration = 23h
  Storage = File
  Action On Purge = Truncate
  Next Pool = Offsite
}

Pool {
  Name = AI-Incremental
  Pool Type = Backup
  Recycle = yes
  Auto Prune = yes
  Volume Retention = 360 days
  Maximum Volume Bytes = 50G
  Label Format = "AI-Incremental-"
  Volume Use Duration = 23h
  Storage = File
  Next Pool = AI-Consolidated
  Action On Purge = Truncate
}

Pool {
  Name = Full
  Pool Type = Backup
  Recycle = yes                       # Bareos can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 365 days         # How long should the Full Backups be kept? (#06)
  Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
  Maximum Volumes = 100               # Limit number of Volumes in Pool
  Label Format = "Full-"              # Volumes will be labeled "Full-<volume-id>"
  Storage = File
  Action On Purge = Truncate
}

Pool {
  Name = Offsite
  Pool Type = Backup
  Recycle = yes                       # Bareos can automatically recycle Volumes
  Next Pool = Offsite
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 60 days          # How long should the Full Backups be kept? (#06)
  Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
  Maximum Volumes = 100               # Limit number of Volumes in Pool
  Label Format = "Offsite-"           # Volumes will be labeled "Offsite-<volume-id>"
  Storage = Offsite
  Action On Purge = Truncate
}

Something that I've realized since I set this up is that I may need to
change the recycling of my offsite volumes to not reuse files and
instead delete them when they are recycled. This is because the drive
that is currently local may not have the file that bareos is looking for
to append to.

Tim Banchi

Nov 7, 2017, 4:54:25 AM
to bareos-users
Thank you. It still doesn't work, same error message. However, when I change the storage daemon to the same machine where the consolidated + AI backups are stored, it works! So I assume this is a bug; I will file a bug report.
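For the record, the only variant that worked here, in sketch form (the `pavlov-tape` name is hypothetical, and it assumes the tape hardware could be attached to pavlov): both the read and the write device are served by the same storage daemon.

```conf
# Workaround sketch, NOT my production config: the VirtualFull apparently
# cannot read from one SD and write to another, so the write device has
# to live on the same storage daemon (pavlov-sd) as the read device.
Storage {
  Name = pavlov-tape              # hypothetical resource name
  Address = pavlov.XX
  Device = "HP_G2_Autochanger"    # the drive would have to be attached to pavlov
  Media Type = LTO
  Autochanger = yes
  # Password/TLS directives as in the other Storage resources
}
Pool {
  Name = tape_automated
  Pool Type = Backup
  Storage = pavlov-tape           # was: delaunay_HP_G2_Autochanger
  # remaining directives unchanged
}
```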


> Something that I've realized since I set this up is that I may need to
> change the recycling of my offsite volumes to not reuse files and
> instead delete them when they are recycled. This is because the drive
> that is currently local may not have the file that bareos is looking for
> to append to.
>
Could this be solved with a shorter Volume Use Duration?

Tim Banchi

Nov 7, 2017, 6:15:04 AM
to bareos-users
Voilà, the bug report is here: https://bugs.bareos.org/view.php?id=874

Anthony Melentev

Nov 14, 2017, 2:03:17 PM
to bareos-users
On Thursday, October 26, 2017, at 21:16:46 UTC+5, Tim Banchi wrote:
I've faced the same problem, and after some research I found this reply from a developer: https://groups.google.com/d/msg/bareos-users/CKOO-Zd9CdE/D-thqZiyGFkJ

So it is not a software bug; the feature is simply not implemented. Maybe it's worth mentioning in the documentation.

Tim Banchi

unread,
Nov 15, 2017, 4:56:31 AM11/15/17
to bareos-users
Hi Anthony,
Thanks a lot. It's not mentioned in the documentation, and I wonder what the reasons are that this is not possible. I will update my bug report.
