Proper setup for always incremental and virtual full backups

Jon SCHEWE

Sep 14, 2017, 10:11:58 AM
to bareos...@googlegroups.com
I've got the always incremental backup strategy working. It backs up to
the pool AI-Incremental, and then the consolidate job goes to
AI-Consolidated.

Now I would like to add a virtual full backup for offsite storage. Below
is my configuration. I'm unsure about the Pool setting in the virtual
full JobDefs and about the Next Pool for the virtual full.

Pool {
  Name = AI-Consolidated
  Pool Type = Backup
  Recycle = yes
  Auto Prune = yes
  Volume Retention = 360 days
  Maximum Volume Bytes = 50G
  Label Format = "AI-Consolidated-"
  Volume Use Duration = 23h
  Storage = File
  Action On Purge = Truncate
  Next Pool = Offsite  ### Is this right???
}

Pool {
  Name = AI-Incremental
  Pool Type = Backup
  Recycle = yes
  Auto Prune = yes
  Volume Retention = 360 days
  Maximum Volume Bytes = 50G
  Label Format = "AI-Incremental-"
  Volume Use Duration = 23h
  Storage = File
  Next Pool = AI-Consolidated
  Action On Purge = Truncate
}


Pool {
  Name = Offsite
  Pool Type = Backup
  Recycle = yes                    
  Next Pool = Offsite
  AutoPrune = yes                  
  Volume Retention = 60 days
  Maximum Volume Bytes = 50G
  Maximum Volumes = 100        
  Label Format = "Offsite-"         
  Storage = Offsite
  Action On Purge = Truncate
}


JobDefs {
  Name = "AlwaysIncremental"
  Type = Backup
  Level = Incremental
  Schedule = "WeeklyCycle"
  Storage = File
  Messages = Standard
  Priority = 10
  Write Bootstrap = "/mnt/bareos-file/bootstrap/%c.bsr"
  Pool = AI-Incremental
  Full Backup Pool = AI-Consolidated

  Accurate = yes
  Always Incremental = yes
  Always Incremental Job Retention = 7 days
  Always Incremental Keep Number = 14
}
JobDefs {
  Name = "OffsiteJob"
  Type = Backup
  Level = VirtualFull
  Client = gemini-fd
  FileSet = "SelfTest"
  Schedule = "OffsiteSchedule"
  Storage = Offsite
  Messages = Standard
  Pool = AI-Consolidated  #### What should go here? AI-Incremental, AI-Consolidated, or something else?
  Virtual Full Backup Pool = Offsite ### Is this right?
  Priority = 10
  Accurate = yes
  Write Bootstrap = "/mnt/bareos-file/bootstrap/%c.bsr"

  RunScript {
    console = "update jobid=%i jobtype=A"
    RunsOnClient = no
    RunsOnFailure = No
    RunsWhen = After
    FailJobOnError = yes
  }

}
Job {
  Name = "backup-gemini-fd"
  JobDefs = "AlwaysIncremental"
  Client = "gemini-fd"
  FileSet = "gemini-all"
}
Job {
  Name = "offsite-gemini-fd"
  JobDefs = "OffsiteJob"
  Client = "gemini-fd"
  FileSet = "gemini-all"
}

Thank you

- Jon Schewe

--
Research Scientist
Raytheon BBN Technologies
5775 Wayzata Blvd, Ste 630
Saint Louis Park, MN, 55416
Office: 952-545-5720

Douglas K. Rand

Sep 14, 2017, 10:38:50 AM
to bareos...@googlegroups.com
On 09/14/17 09:11, Jon SCHEWE wrote:
> I've got the always incremental backup strategy working. It backs up to
> the pool AI-Incremental and then the consolidate job goes to
> AI-Consolidated.
>
> Now I would like to add virtual full backup for offsite. Below is what I
> have for configuration. I'm unsure about the setting of the pool in the
> virtual full job defs and the next pool for the virtual full.

Yes, you want the name of your consolidated pool in the JobDefs for your
offsite jobs. I also set Full Backup Pool and Incremental Backup Pool, along
with Next Pool.

Next Pool is, I believe, the key for the virtual full jobs to work: it
determines where the virtual full is written. I think, but am not sure, that
setting the Full and Incremental pools tells the virtual job where to find
the source backups.
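
If it helps, a quick way to sanity-check how the director resolves those pools (the job name here is just taken from your posted config) is from bconsole:

```
# Show the merged job resource; check Pool, Next Pool, and the
# Full/Incremental Backup Pool directives it actually picked up.
*show job=offsite-gemini-fd

# Run one virtual full by hand and watch the job report.
*run job=offsite-gemini-fd level=VirtualFull yes
*messages
```

In the job report the destination pool should be listed as coming from the job's NextPool resource; if the consolidated pool shows up there instead, the Next Pool setting isn't being picked up.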

Here is my setup for doing virtual full backups to tape from my on-disk always
incrementals. We also have a host named gemini, so I'll use it for the
samples. (I wonder if star names are the most popular naming scheme among
computer geeks.)

Note that our pool names don't follow the samples in the docs:
* "consolidated" instead of "AI-Consolidated"
* "always-incr" instead of "AI-Incremental"


job {
  name = "gemini-offsite"
  client = "gemini"
  job defs = "ai-offsite"
}

job defs {
  name = "ai-offsite"
  type = backup
  level = virtual full
  schedule = "offsite"
  # Gotta turn "virtual full" jobs into Archive jobs so we don't try to use
  # the backup for the next "virtual full" job.
  run script {
    console = "update jobid=%i jobtype=A"
    runs when = After
    runs on client = No
  }
  file set = "standard"
  accurate = yes
  messages = "standard"
  write bootstrap = "/var/db/bareos/bootstrap/%c.bsr"
  pool = "consolidated"
  full backup pool = "consolidated"
  incremental backup pool = "always-incr"
  spool data = yes
  spool attributes = yes
  next pool = "offsite"
}

pool {
  # No "Label Format" disables automatic volume labeling
  name = offsite
  pool type = Backup # Sigh, case sensitive
  storage = lto6-1
  file retention = 30 days
  job retention = 90 days
  volume retention = 90 days
  maximum volume bytes = 2500 GB
  volume use duration = 48 hours
  recycle oldest volume = yes
}

storage {
  name = lto6-1
  address = bareos-sd.meridian-enviro.com
  password = "******"
  device = lto6-1
  maximum concurrent jobs = 1
  media type = "lto-6"
}

Hope this helps. When you get it working I think you'll really like the setup.

Jon SCHEWE

Sep 25, 2017, 9:25:49 AM
to bareos...@googlegroups.com
That didn't work. My offsite job ran last night and wrote to the
AI-Consolidated pool rather than the Offsite pool. Here are my JobDefs:
JobDefs {
  Name = "OffsiteJob"
  Type = Backup
  Level = VirtualFull
  Client = gemini-fd
  FileSet = "SelfTest"                # selftest fileset (#13)
  Schedule = "OffsiteSchedule"
  Storage = Offsite
  Messages = Standard
  Pool = AI-Consolidated
  Incremental Backup Pool = AI-Incremental
  Next Pool = Offsite
  Virtual Full Backup Pool = Offsite
  Priority = 10
  Accurate = yes
  Write Bootstrap = "/mnt/bareos-file/bootstrap/%c.bsr"

  RunScript {
    RunsOnClient = no
    RunsWhen = Before
    FailJobOnError = yes
    Command = "/etc/bareos/check-offsite-backup-disk.sh"
  }
  RunScript {
    console = "update jobid=%i jobtype=A"
    RunsOnClient = no
    RunsOnFailure = No
    RunsWhen = After
    FailJobOnError = yes
  }

}

And the jobs that ran:
Job {
  Name = "offsite-andromeda"
  JobDefs = "OffsiteJob"
  Client = "andromeda-fd"
  FileSet = "andromeda"
}
Job {
  Name = "offsite-gemini-fd"
  JobDefs = "OffsiteJob"
  Client = "gemini-fd"
  FileSet = "gemini-all"
}

It seems that the jobs ran at level "Full" rather than "Virtual Full".

Douglas K. Rand

Sep 25, 2017, 9:58:33 AM
to bareos...@googlegroups.com
I'm not using the "Virtual Full Backup Pool" directive. I'm not sure whether
that is affecting things, mostly because I don't know what it does and the
manual doesn't seem to say either. But I think that "Next Pool", which you
have, is the key for setting the destination of virtual full backups.

What is your Offsite pool and storage defined as?

I'm not setting Storage in the JobDefs; I do it in the Pool instead. Again,
I'm not sure that matters.

In short, I don't see what is causing the problem from what you posted. Sorry.



Jon SCHEWE

Sep 25, 2017, 10:39:06 AM
to bareos...@googlegroups.com
When you run your backups and look at the messages, what does the level
say? Mine say "Backup Level: Full". I would have expected
"Virtual Full".

Pool {
  Name = Offsite
  Pool Type = Backup
  Recycle = yes                      # Bareos can automatically recycle Volumes
  Next Pool = Offsite
  AutoPrune = yes                    # Prune expired volumes
  Volume Retention = 60 days         # How long should the Full Backups be kept? (#06)
  Maximum Volume Bytes = 50G         # Limit Volume size to something reasonable
  Maximum Volumes = 100              # Limit number of Volumes in Pool
  Label Format = "Offsite-"          # Volumes will be labeled "Offsite-<volume-id>"
  Storage = Offsite
  Action On Purge = Truncate
}
Storage {
  Name = Offsite
  Address = gemini                   # N.B. Use a fully qualified name here (do not use "localhost").
  Password = "XXXXXXX"
  Device = OffsiteStorage
  Media Type = File
}
Douglas K. Rand

Sep 25, 2017, 10:52:33 AM
to bareos...@googlegroups.com
Yes, my Backup Level is Virtual Full:

Build OS: amd64-portbld-freebsd11.1 freebsd 11.1-RELEASE-p1
JobId: 36416
Job: gemini-offsite.2017-09-23_11.48.02_37
Backup Level: Virtual Full
Client: "gemini" 15.2.2 (16Nov15)
amd64-portbld-freebsd10.2,freebsd,10.2-RELEASE-p14
FileSet: "standard" 2016-10-25 19:20:00
Pool: "offsite" (From Job's NextPool resource)
Catalog: "catalog" (From Client resource)
Storage: "lto6-1" (From Storage from Job's NextPool resource)
Scheduled time: 23-Sep-2017 11:48:02
Start time: 22-Sep-2017 21:12:42
End time: 22-Sep-2017 21:12:57
Elapsed time: 15 secs
Priority: 10
SD Files Written: 245,633
SD Bytes Written: 2,778,402,119 (2.778 GB)
Rate: 185226.8 KB/s
Volume name(s): ND0014
Volume Session Id: 34
Volume Session Time: 1506185271
Last Volume Bytes: 591,369,763,219 (591.3 GB)
SD Errors: 0
SD termination status: OK
Accurate: yes
Termination: Backup OK


And I don't see anything in your Pool settings that jumps out at me.

Do you get the appropriate devices at the start of the job? In my log I have:

23-Sep 14:36 bareos JobId 36416: Using Device "disk-0" to read.
23-Sep 14:36 bareos JobId 36416: Using Device "lto6-1" to write.

For me the "disk-{0..14}" devices are my usual daily on-disk storage. And the
"lto6-1" device is where I send my offsite backups.

Here is the full log from that job:

23-Sep 14:35 bareos JobId 36416: Start Virtual Backup JobId 36416, Job=gemini-offsite.2017-09-23_11.48.02_37
23-Sep 14:35 bareos JobId 36416: Consolidating JobIds 34672,35855,32770,32883,32982,33070,33180,33352,33438,33524,33637,33719,33832,33947,34153,34353,34449,34600,34758,34898,35060,35300,35455,35543,35631,35719,35928,36088
23-Sep 14:36 bareos JobId 36416: Bootstrap records written to /var/db/bareos/bareos.restore.34.bsr
23-Sep 14:36 bareos JobId 36416: Using Device "disk-0" to read.
23-Sep 14:36 bareos JobId 36416: Using Device "lto6-1" to write.
23-Sep 14:36 bareos-sd JobId 36416: Ready to read from volume "incr-0889" on device "disk-0" (/local-project/tmp/bareos/backups).
23-Sep 14:36 bareos-sd JobId 36416: Spooling data ...
23-Sep 14:36 bareos-sd JobId 36416: Forward spacing Volume "incr-0889" to file:block 0:2343709518.
23-Sep 14:36 bareos-sd JobId 36416: End of Volume at file 0 on device "disk-0" (/local-project/tmp/bareos/backups), Volume "incr-0889"
23-Sep 14:36 bareos-sd JobId 36416: Ready to read from volume "incr-0935" on device "disk-0" (/local-project/tmp/bareos/backups).
23-Sep 14:36 bareos-sd JobId 36416: Forward spacing Volume "incr-0935" to file:block 1:51625413.
23-Sep 14:36 bareos-sd JobId 36416: End of Volume at file 1 on device "disk-0" (/local-project/tmp/bareos/backups), Volume "incr-0935"
23-Sep 14:36 bareos-sd JobId 36416: Ready to read from volume "incr-0986" on device "disk-0" (/local-project/tmp/bareos/backups).
23-Sep 14:36 bareos-sd JobId 36416: Forward spacing Volume "incr-0986" to file:block 0:737681179.
23-Sep 14:36 bareos-sd JobId 36416: End of Volume at file 0 on device "disk-0" (/local-project/tmp/bareos/backups), Volume "incr-0986"
23-Sep 14:36 bareos-sd JobId 36416: Ready to read from volume "incr-1013" on device "disk-0" (/local-project/tmp/bareos/backups).
23-Sep 14:36 bareos-sd JobId 36416: Forward spacing Volume "incr-1013" to file:block 0:3943724168.
23-Sep 14:36 bareos-sd JobId 36416: End of Volume at file 0 on device "disk-0" (/local-project/tmp/bareos/backups), Volume "incr-1013"
23-Sep 14:36 bareos-sd JobId 36416: Ready to read from volume "incr-1034" on device "disk-0" (/local-project/tmp/bareos/backups).
23-Sep 14:36 bareos-sd JobId 36416: Forward spacing Volume "incr-1034" to file:block 0:1292880712.
23-Sep 14:36 bareos-sd JobId 36416: End of Volume at file 0 on device "disk-0" (/local-project/tmp/bareos/backups), Volume "incr-1034"
23-Sep 14:36 bareos-sd JobId 36416: Ready to read from volume "incr-1049" on device "disk-0" (/local-project/tmp/bareos/backups).
23-Sep 14:36 bareos-sd JobId 36416: Forward spacing Volume "incr-1049" to file:block 0:950912635.
23-Sep 14:36 bareos-sd JobId 36416: Committing spooled data to Volume "ND0014". Despooling 2,797,605,711 bytes ...
23-Sep 14:36 bareos-sd JobId 36416: Despooling elapsed time = 00:00:19, Transfer rate = 147.2 M Bytes/second
23-Sep 14:36 bareos-sd JobId 36416: Elapsed time=00:00:37, Transfer rate=75.09 M Bytes/second
23-Sep 14:36 bareos-sd JobId 36416: Sending spooled attrs to the Director. Despooling 79,177,233 bytes ...
23-Sep 14:37 bareos JobId 36416: Joblevel was set to joblevel of first consolidated job: Full
23-Sep 14:37 bareos JobId 36416: Bareos bareos 16.2.5 (03Mar17):
23-Sep 14:37 bareos JobId 36416: console command: run AfterJob "update jobid=36416 jobtype=A
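
For what it's worth, one way to confirm the whole chain after the fact (the pool name and jobid here are just the ones from this log) is to check that the virtual full landed on the offsite volumes and was retyped as an Archive job:

```
# Did the virtual full write to the offsite pool's volumes?
*list volumes pool=offsite

# Did the AfterJob console command retype it? Type should now be A.
*llist jobid=36416
```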

Jon SCHEWE

Sep 25, 2017, 12:06:48 PM
to bareos...@googlegroups.com
Well, my most recent job, which was kicked off manually, seems to have
worked. I do see the appropriate devices in there now. I guess I'll
need to see what happens with future jobs.

Thank you for the configuration help and log entries.