Always Incremental: Copy Job for the longterm pool

Toni Burger

Aug 7, 2020, 9:35:03 AM
to bareos-users
Hi again, 

I set up an Always Incremental Bareos configuration, as described here: https://docs.bareos.org/master/TasksAndConcepts/AlwaysIncrementalBackupScheme.html

Most of it is working correctly, except for the last step.
What I want to achieve:

1) Incremental backups from my clients -> go to the AI pool [OK]
2) Consolidate the incrementals every ~month -> go to the AI-Consolidated pool [OK]
3) Make a VirtualFull backup every 3 months -> go to the AI-Longterm pool [OK]
4) Copy all jobs from AI-Longterm to AI-Longterm-Extern, manually triggered [not working]

Steps 1-3 are OK. I get a VirtualFull backup in the AI-Longterm pool:

Choose a query (1-21): 14
Enter Volume name: AI-Longterm-0011
+-------+-----------------+---------------------+------+-------+---------+---------+--------+
| jobid | name            | starttime           | type | level | files   | gb      | status |
+-------+-----------------+---------------------+------+-------+---------+---------+--------+
|     4 | VirtualLongTerm | 2020-08-06 22:27:05 | A    | F     | 247,188 | 417.346 | T      |
+-------+-----------------+---------------------+------+-------+---------+---------+--------+
*


This is the pool definition for my AI-Longterm-Pool: 

Pool {
  Name = AI-Longterm
  Pool Type = Backup
  Next Pool = AI-Longterm-Extern
  Recycle = yes                       # Bareos can automatically recycle Volumes
  Auto Prune = yes                    # Prune expired volumes
  Volume Retention = 7 months         # How long should jobs be kept?
  Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
  Label Format = "AI-Longterm-"
  Volume Use Duration = 23h
  Storage = File
}

And this is my AI-Longterm-Extern pool:

Pool {
  Name = AI-Longterm-Extern
  Pool Type = Backup
  Recycle = yes                       # Bareos can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 2 years          # Full backups are stored for 2 years
  Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
  Volume Use Duration = 23h
  Storage = ExternFile
  Label Format = AI-Longterm-Extern-
}

And here is the copy job I defined:

Job {
  Name = "LongtermCopyToExtern"
  Type = Copy
  Selection Type = PoolUncopiedJobs
  Messages = Standard
  Pool = AI-Longterm
  Level = Full
  Full Backup Pool = AI-Longterm
  Write Bootstrap = "/media/bareosExternBackup/bareos/bootstrap/LongtermCopyToExtern-%n.bsr"
}
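
Since step 4 is supposed to be triggered manually, I start the copy from bconsole like this (a minimal example; the trailing "yes" just auto-confirms the run prompt):

*run job=LongtermCopyToExtern yes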


Sadly, when I run it, no jobs are found to copy:

07-Aug 15:18 storageSrv-dir JobId 11: No JobIds found to copy.
07-Aug 15:18 storageSrv-dir JobId 11: Bareos storageSrv-dir 19.2.7 (16Apr20):
  Build OS:               Linux-3.10.0-1062.18.1.el7.x86_64 debian Debian GNU/Linux 10 (buster)
  Current JobId:          11
  Current Job:            LongtermCopyToExtern.2020-08-07_15.18.54_04
  Catalog:                "MyCatalog" (From Client resource)
  Start time:             07-Aug-2020 15:18:56
  End time:               07-Aug-2020 15:18:56
  Elapsed time:           0 secs
  Priority:               10
  Bareos binary info:     bareos.org build: Get official binaries and vendor support on bareos.com
  Termination:            Copying -- no files to copy


Maybe it's because the jobs in AI-Longterm are of type Archive? How can I select them?

Or what else am I missing?

Thanks a lot. 
Toni

Toni Burger

Aug 10, 2020, 10:39:25 AM
to bareos-users
Hello again, 
Now I'm sure: my longterm jobs are marked with the Archive job type, as described in the manual (via the Run Script step). Archive jobs are not selected by the "PoolUncopiedJobs" selection type, so no copies are created.

I can select these jobs with my own SQL query:

SELECT DISTINCT Job.JobId, Job.StartTime
FROM Job, Pool
WHERE Pool.Name = 'AI-Longterm'
  AND Pool.PoolId = Job.PoolId
  AND Job.Type = 'A'
  AND Job.JobStatus IN ('T','W')
  AND Job.Level = 'F'
  AND Job.JobBytes > 0
  AND Job.JobId NOT IN (
        SELECT PriorJobId FROM Job
        WHERE Type IN ('B','C')
          AND JobStatus IN ('T','W')
          AND PriorJobId != 0
      )
ORDER BY Job.StartTime;
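
For completeness, this is how such a query can be wired into the copy job itself (a sketch; I'm assuming the SQLQuery selection type here, which takes the statement in Selection Pattern and expects it to return JobIds):

Job {
  Name = "LongtermCopyToExtern"
  Type = Copy
  Selection Type = SQLQuery
  Selection Pattern = "SELECT DISTINCT Job.JobId FROM Job,Pool WHERE Pool.Name='AI-Longterm' AND Pool.PoolId=Job.PoolId AND Job.Type='A' AND Job.JobStatus IN ('T','W') AND Job.Level='F' AND Job.JobBytes>0 AND Job.JobId NOT IN (SELECT PriorJobId FROM Job WHERE Type IN ('B','C') AND JobStatus IN ('T','W') AND PriorJobId!=0) ORDER BY Job.StartTime"
  Pool = AI-Longterm
  Messages = Standard
  Write Bootstrap = "/media/bareosExternBackup/bareos/bootstrap/LongtermCopyToExtern-%n.bsr"
}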

But doing this, more questions came up. What is the purpose of Archive jobs? Archive jobs are not available for restore via bconsole, and the manual doesn't really describe them at all.


Am I trying to solve my scenario in completely the wrong direction?

Best regards
Toni

Toni Burger

Aug 13, 2020, 10:40:33 AM
to bareos-users
Hi again, 
I have now tried some different approaches, but can't get to a solution :(.

1) I used my SQL selection query from above.
-> Bareos selects the jobs, but doesn't copy them. It says there is already a copy for the job. That's right ... because the job I'm trying to copy is marked as Archive :(

2) The next idea was to define a second VirtualLongTerm job that uses my "ExternPool". But this also failed, because the target pool is taken from the "Next Pool" parameter, which is set in AI-Consolidated, and it is not allowed to define multiple Next Pools in a pool configuration.

Now I can't find any way to get an off-site longterm copy :/. I'm also open to a completely different solution :) if there is one.

In short: 
1) I want to use Always Incremental backups
2) I need a longterm pool for VirtualFull backups
3) I need a way to get the VirtualFull backups onto an external / off-site storage

Best regards
Toni

Brock Palen

Aug 13, 2020, 9:41:29 PM
to Toni Burger, bareos-users
Toni,

I can tell you what I do. It's probably not everything you want, but it works for me.

I use Always Incremental, but only ever on the main server.

I use an archive job to a different pool. This makes a VirtualFull copy of how that fileset looks right now.

I do this monthly, and it has bailed me out a few times: a tape gets eaten, a fat-fingered command, etc.


You can easily set up a second storage daemon on the remote site and use a pool defined there as the target for your archive job. In my case it's the same SD but a different tape loader.
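
Roughly like this (only a sketch; the resource name and address are made up, and the password has to match the remote SD's configuration):

Storage {
  Name = RemoteSite
  Address = sd.remote.example.com     # hypothetical storage daemon on the remote site
  Password = "secret"                 # must match the Director resource on that SD
  Device = RemoteFileStorage
  Media Type = RemoteFile
}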

Then, if you ever need to 'recover' from your archive job, you do the following:

# make the archive job a backup job
update jobid=<jobid> jobtype=B

# re-run the normal job for the host as a VirtualFull.
# This pulls the files from the archive and rebuilds a new Full for the primary job.
run job=<normal job for host> level=VirtualFull

This will purge your archive job (it just got consolidated into the new Full for the primary job), so I recommend you manually run your archive job again to build a new archive.
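
Something like this (job name taken from my config below; since I keep the job disabled, it has to be enabled first):

enable job=archive-mlds-host
run job=archive-mlds-host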

It's imperfect: your daily incrementals are only on the prod site, so your off-site full is always behind, unless you burn a lot of media and do a full copy of all data over the network every time you do this.

You could maybe augment this by replicating your incremental volumes outside of Bareos, with rsync or something; this way you can pull back volumes. You could also try doing a Copy job on just the incrementals.

In general, Copy jobs with AI don't work the way they should, IMHO. Whatever you do: test, test, test recovery from your remote site.


Job {
  Name = "archive-mlds-host"
  JobDefs = "DefaultArchive"
  FileSet = "mls_std"
  Client = "mlds"
  Enabled = no
}

JobDefs {
  Name = "DefaultArchive"
  Type = Backup
  Level = VirtualFull
  Client = myth-fd
  Storage = T-LTO4
  Messages = Standard
  Allow Mixed Priority = yes
  Priority = 4
  # Allow Duplicate Jobs = no   # see: https://bugs.bareos.org/view.php?id=792 -- can't use this setting
  Write Bootstrap = "/var/lib/bareos/%c.bsr"
  Spool Data = no
  Accurate = yes

  Virtual Full Backup Pool = LTO4
  Next Pool = LTO4
  Pool = AI-Consolidated

  Run Script {
    console = "update jobid=%i jobtype=A"
    Runs When = After
    Runs On Client = No
    Runs On Failure = No
  }
  Enabled = no
}


Brock Palen
1 (989) 277-6075
bro...@mlds-networks.com
www.mlds-networks.com
Websites, Linux, Hosting, Joomla, Consulting

Toni Burger

Aug 19, 2020, 7:05:34 AM
to bareos-users
Hi Brock, 
thanks for your response. If I understand it correctly, your setup is very similar to (the same as) the one described in https://docs.bareos.org/TasksAndConcepts/AlwaysIncrementalBackupScheme.html -> the AI-Longterm pool.

I didn't know that I could recover the primary (full) backup from the archive as you described. That's interesting.

As it looks, there is no way to copy an existing longterm pool again to an off-site pool. So ... I will also use your approach and make the longterm pool the off-site pool. I think it's better to have a "simple" configuration with fewer redundant pools instead of a complicated one with more pools :/.
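
Roughly, the plan is (only a sketch, reusing my existing ExternFile storage and the retention from my old extern pool):

Pool {
  Name = AI-Longterm
  Pool Type = Backup
  Storage = ExternFile                # off-site storage instead of the local File storage
  Volume Retention = 2 years
  Maximum Volume Bytes = 50G
  Label Format = "AI-Longterm-"
}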

Thanks. 
Best regards. 
Toni