Backup to disk and then stage/move to tape


Ronny Egner

Aug 21, 2015, 1:23:13 AM
to bareos...@googlegroups.com
Hi List,

I would like to back up my data to disk first and then, after some time,
migrate/stage/move the data to tape.
The important bit here is that the data that was stored on disk before is
*moved*, not copied, and the disk space is freed in the process.

I would appreciate a sample if possible.

Thanks.

Kind regards
Ronny Egner
--
Ronny Egner
Oracle Certified Master 11g (OCM)

Mobile: +49 170 8139903
EMail: ronny...@ronnyegner-consulting.de

lst_...@kwsoft.de

Aug 21, 2015, 3:39:07 AM
to bareos...@googlegroups.com

Quote from Ronny Egner <ronny...@ronnyegner-consulting.de>:

> Hi List,
>
> I would like to back up my data to disk first and then, after some time,
> migrate/stage/move the data to tape.
> The important bit here is that the data that was stored on disk before is
> *moved*, not copied, and the disk space is freed in the process.

We do something similar: a backup-to-disk pool with a short retention
time, plus a weekly full backup that is copied to tape with a much
longer retention time. With this approach the disk space is not "freed",
but it is soon available for reuse by subsequent backups.
If you really want to free the disk space for non-backup use, you must
run a migration job and configure Bareos to actually delete the disk
volumes when they are purged (Action On Purge = Truncate).

http://doc.bareos.org/master/html/bareos-manual-main-reference.html#x1-23900021
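
A minimal sketch of how that could look. The pool, storage, and job
names here are hypothetical, and the directives should be checked
against the manual before use:

----------
Pool {
  Name = DiskPool
  Pool Type = Backup
  Storage = File
  Volume Retention = 14 days
  Migration Time = 7 days        # jobs older than this become eligible
  Next Pool = TapePool           # migration destination
  Action On Purge = Truncate     # actually release disk space on purge
}

Job {
  Name = MigrateDiskToTape
  Type = Migrate
  Pool = DiskPool
  Selection Type = PoolTime      # select jobs via the pool's Migration Time
  Messages = Standard
}
----------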

Regards

Andreas


Vladislav Solovei

Feb 17, 2018, 12:31:03 PM
to bareos-users
One more question about job migration.

Is it possible to automatically truncate a volume from which I have previously migrated all jobs? After migration the volume contains no jobs, but the volume file size is not reduced. I can truncate the volume manually, but perhaps it is possible to do this automatically?

Dan

Feb 18, 2018, 11:17:49 PM
to bareos-users
Vladislav -

The migration job is designed to do exactly what you have requested: it copies the jobs from your disk pool(s) to your tape pool(s) and then purges the original disk backups. You asked for an example; I can provide a copy job example, as I don't use migrate. The two job types are identical except that a copy job does not purge the original jobs when it completes.

The job below selects all backup jobs completed in the past 30 days that have not yet been copied forward. I run it more often than every 30 days, but I set that limit so that if a job fails to copy once, it is retried on the next run for up to 30 days. I haven't tested this as a migration job, so I am assuming that PriorJobId is set the same way and that the query will still work.

----------
Job {
  Name = Copy-All
  Type = Copy
  Messages = Standard
  Priority = 35
  Pool = Full
  Selection Type = SQL Query
  Selection Pattern = "SELECT J.JobId
    FROM Job J
    WHERE J.Type = 'B'
      AND J.JobStatus IN ('T','W')
      AND J.JobBytes > 0
      AND NOT EXISTS (SELECT 1
            FROM Media M, JobMedia JM
            WHERE JM.JobId = J.JobId
              AND M.MediaId = JM.MediaId
              AND M.MediaType = 'FileCopy')
      AND J.JobId NOT IN (SELECT PriorJobId
            FROM Job
            WHERE Type IN ('B','C')
              AND Job.JobStatus IN ('T','W')
              AND PriorJobId != 0)
      AND J.RealEndTime > DATE_ADD(now(), INTERVAL -30 DAY)
      AND J.EndTime > DATE_ADD(now(), INTERVAL -30 DAY)
    ;"
  Next Pool = DR-Copy
  Schedule = CopySched
}
----------
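
For the original move-rather-than-copy requirement, a migration variant of the same job would presumably differ only in its type. This is an untested sketch: the selection query is the one from the copy job above, and per the earlier replies the disk pool would additionally need Action On Purge = Truncate to reclaim the space:

----------
Job {
  Name = Migrate-All
  Type = Migrate                  # purges the source jobs after a successful move
  Messages = Standard
  Priority = 35
  Pool = Full
  Selection Type = SQL Query
  Selection Pattern = "..."       # same query as in the copy job above
  Next Pool = DR-Copy
  Schedule = CopySched
}
----------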

As for the automatic truncate: my testing on 16.2 (I haven't tested 17.2) shows that auto-prune does NOT truncate the volumes. You can set a short retention period on the disk pool to minimize the time that wasted space sits in your storage, but there is a danger that the migration job fails and the volume then expires and is overwritten before it is migrated.

If I were doing this, I might use RunAfterJob to run a script after the migration job that purges and truncates all empty volumes. Make sure that you understand the query in the script example below and are confident that it will never purge active backup data; the purge command is dangerous that way. Here is a sample script that reads the database credentials from the catalog configuration and then queries for a list of 'empty' volumes to purge.

----------
#!/bin/bash
# Grab the database credentials from the existing catalog configuration
catalogFile=$(find /etc/bareos/bareos-dir.d/catalog/ -type f)
dbUser=$(grep dbuser "$catalogFile" | grep -o '".*"' | sed 's/"//g')
dbPwd=$(grep dbpassword "$catalogFile" | grep -o '".*"' | sed 's/"//g')

# Query for volumes no longer in use (excluding DR copy volumes)
emptyVols=$(mysql bareos -u "$dbUser" -p"$dbPwd" -se "SELECT m.VolumeName FROM bareos.Media m WHERE m.MediaType <> 'FileCopy' AND m.VolStatus NOT IN ('Append','Purged') AND NOT EXISTS (SELECT 1 FROM bareos.JobMedia jm WHERE jm.MediaId = m.MediaId);")

# Submit each volume to bconsole for purging and truncation
for volName in $emptyVols
do
  poolName=$(mysql bareos -u "$dbUser" -p"$dbPwd" -se "SELECT p.Name FROM bareos.Pool p WHERE p.PoolId = (SELECT m.PoolId FROM bareos.Media m WHERE m.VolumeName = '$volName');")
  storageName=$(mysql bareos -u "$dbUser" -p"$dbPwd" -se "SELECT s.Name FROM bareos.Storage s WHERE s.StorageId = (SELECT m.StorageId FROM bareos.Media m WHERE m.VolumeName = '$volName');")
  /bin/bconsole << EOD
purge volume=$volName action=Truncate pool=$poolName storage=$storageName yes
quit
EOD
done

exit 0
----------
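
As an aside, newer Bareos releases ship a dedicated bconsole truncate command; if your version has it, the per-volume loop above could potentially be replaced by purging and then truncating all purged volumes in one pass. The volume, pool, and storage names below are placeholders:

----------
purge volume=File-0001 pool=DiskPool storage=File yes
truncate volstatus=Purged pool=DiskPool storage=File yes
----------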

Hope that gives you a starting point.

Dan

Dan

Feb 18, 2018, 11:24:01 PM
to bareos-users
Ronny -

This was meant for you too! You can see in my example job that I am moving any jobs that are less than 30 days old and have not already been moved. You can easily extend that query to something like '14 days < job age < 30 days' if you don't want to migrate jobs to tape immediately, giving you a minimum time as well as a maximum.
