Re: [bareos-users] New to bareos had limited success


Brock Palen

unread,
Jun 25, 2020, 10:31:15 AM
to Waanie Simon, bareos-users
If you are using disk volumes, you probably want to enable auto-labeling so new volumes get created as needed.
Note that Bareos tries to preserve as much data as possible, and with disk volumes I find it likes to fill the disk and eventually fail.

I run an ‘admin’ job that just runs a script on a schedule (you could use cron etc.) that checks volumes so they get correctly pruned and then truncated (made zero size) to free disk space. You will need to modify it for your pool names:

#!/bin/bash
# prune every volume in the pool so expired ones get marked Purged
for x in $(echo "list volumes pool=AI-Consolidated" | bconsole | grep -v "list volumes" | grep AI-Consolid | awk -F\| '{print $3}')
do
echo "prune volume=$x yes"
done | bconsole

# actually free up disk space
echo "truncate volstatus=Purged pool=AI-Consolidated yes" \
| bconsole
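For reference, the admin job wrapping a script like that might look roughly like this in the Director config. The job name, schedule, JobDefs, and script path here are all made up for illustration; an Admin job still needs the usual mandatory Job directives even though it moves no data:

```
Job {
  Name = "prune-and-truncate"
  Type = Admin
  # Mandatory directives; largely ignored for Admin jobs but must be present
  JobDefs = "DefaultJob"
  Schedule = "WeeklyCycleAfterBackup"
  # Run the pruning script on the Director host before the (empty) job
  RunScript {
    RunsWhen = Before
    RunsOnClient = no
    Command = "/usr/local/sbin/prune-volumes.sh"
  }
}
```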


As for the very large backup, a few ideas:

* Use globs and break it into multiple jobs (this won’t impact restores).
* The number of files, rather than the size of the data, will dictate scan time for incrementals.
Test scan time with estimate: estimate job=<jobname> accurate=yes level=Incremental
* Fulls are dominated by bandwidth.
** Compression will cause CPU to peak and limit performance if not IO/network bound.
** If using compression, look at the low-CPU compression trade-off options.
** Or don’t compress on the client, but use a migrate job with a compress filter to compress everything on the backup server.
* FD compression is single threaded; if you break the backup into multiple jobs with globs you can run several at a time.
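As a sketch of the glob-splitting idea, something like the following FileSet could cover part of the home volume, with a sibling FileSet (e.g. "home-n-to-z") covering the rest. The directory names are invented, and the wild/exclude matching semantics are subtle, so check this against the FileSet documentation before relying on it:

```
FileSet {
  Name = "home-a-to-m"
  Include {
    Options {
      Signature = MD5
      # Descend only into top-level home dirs starting with a-m
      Wild Dir = "/home/[a-m]*"
    }
    Options {
      # Exclude every other directory
      Exclude = yes
      RegexDir = ".*"
    }
    File = /home
  }
}
```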

You're going to want to benchmark all along your system; I like dstat over iostat/top etc. for monitoring. But a 90 TB single-volume backup will take some time for a full. If you have the space on your server, maybe look at Always Incremental, so you never actually take a full backup of that volume again, though you will copy 90 TB of data on your SD every few days depending on settings, just like multiple fulls.
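If you go the Always Incremental route, the relevant Job directives look roughly like this. The names, pools, and retention values are placeholders, and you also need a separate Consolidate job and suitably configured pools, so treat this as a sketch and follow the Always Incremental chapter of the docs:

```
Job {
  Name = "home-ai"
  JobDefs = "linux-daily-inc"
  Accurate = yes                              # required for Always Incremental
  Always Incremental = yes
  Always Incremental Job Retention = 7 days   # consolidate incrementals older than this
  Always Incremental Keep Number = 7          # but always keep this many incrementals
  Always Incremental Max Full Age = 30 days
  Pool = AI-Incremental
  Full Backup Pool = AI-Longterm
}

Job {
  Name = "Consolidate"
  Type = Consolidate
  # Scans for Always Incremental jobs that need merging on the SD
  Max Full Consolidations = 1
}
```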

Myself, I have ‘archive’ jobs where I take a VirtualFull of each of my jobs into a different volume on different media monthly, for safety. That bailed me out a lot when I was learning.

Brock Palen
1 (989) 277-6075
bro...@mlds-networks.com
www.mlds-networks.com
Websites, Linux, Hosting, Joomla, Consulting



> On Jun 24, 2020, at 4:05 PM, Waanie Simon <iriw...@gmail.com> wrote:
>
> Hi all
>
> I am working on building a backup solution with Bareos. The system will replace Arkeia, which is no longer supported. Arkeia backups were quite easy to install and monitor.
>
> I have installed bareos-director and bareos-sd on the same server. They are running on the same network interface.
>
> My initial backups went fine, but as time went on the backups would queue up, including the catalog backup.
> I sometimes see that the system is asking for a volume to be labeled, but labeling doesn't always fix the problem.
>
> The clients are mostly Linux VMs and Proxmox physical servers.
>
> One of the harder ones to back up is our file server, with a home volume of 90 TB. This is something we could never back up, and it has become a source of many problems.
>
> How would you back up such a volume? Should it be split? Since there is a lot of static content in there, should it be archived?
>
> I am not sure if the config files are set correctly.
>
> I have incremental backups happening daily and differentials happening weekly but I often have full backups happening multiple times during a month.
>
> Currently we have no tape library in place so all is running on disk.
>
> I thought I could add some config snippets to give an idea of what I have going on here.
>
>
> client
>
>
> Client {
> Name = dr001-fd
> Address = 10.60.100.12
> Password = <passwd>
> }
>
> Job
>
> Job {
> Name = dr001-Daily-inc
> JobDefs = linux-daily-inc
> Type = backup
> Messages = Standard
> Client = dr001-fd
> }
>
>
> JobDefs {
> Name = linux-daily-inc
> Type = Backup
> Level = Incremental
> Storage = File
> Pool = Incremental
> FileSet = LinuxAll
> }
>
> Each job has its own schedule. Is this necessary?
>
> Schedule {
> Name = dr001-daily-inc
> Description = Incremental Daily for dr001-fd
> Run = daily at 11:00
> }
>
>
> storage under the bareos-dir.d folder
> File.conf file
>
> Storage {
> Name = File
> Address = ctbackup.cape.saao.ac.za # N.B. Use a fully qualified name here (do not use "localhost" here).
> Password = <password>
> Device = FileStorage
> Media Type = File
> Maximum Concurrent Jobs = 5
> }
>
>
> The Pools are configured as
>
> Pool {
> Name = Incremental
> Pool Type = Backup
> Recycle = yes # Bareos can automatically recycle Volumes
> AutoPrune = yes # Prune expired volumes
> Volume Retention = 30 days # How long should the Incremental Backups be kept? (#12)
> Maximum Volume Bytes = 150G # Limit Volume size to something reasonable
> Maximum Volumes = 30 # Limit number of Volumes in Pool
> Label Format = "Incremental-" # Volumes will be labeled "Incremental-<volume-id>"
> }
>
> Pool {
> Name = Differential
> Pool Type = Backup
> Recycle = yes # Bareos can automatically recycle Volumes
> AutoPrune = yes # Prune expired volumes
> Volume Retention = 90 days # How long should the Differential Backups be kept? (#09)
> Maximum Volume Bytes = 100G # Limit Volume size to something reasonable
> Maximum Volumes = 60 # Limit number of Volumes in Pool
> Label Format = "Differential-" # Volumes will be labeled "Differential-<volume-id>"
> }
>
> Pool {
> Name = Full
> Pool Type = Backup
> Recycle = yes # Bareos can automatically recycle Volumes
> AutoPrune = yes # Prune expired volumes
> Volume Retention = 365 days # How long should the Full Backups be kept? (#06)
> Maximum Volume Bytes = 350G # Limit Volume size to something reasonable
> Maximum Volumes = 100 # Limit number of Volumes in Pool
> Label Format = "Full-" # Volumes will be labeled "Full-<volume-id>"
> }
>
>
> Unfortunately, the numbers here have been a bit of a thumb suck (guesswork).
>
> The storage config looks like this
>
>
> Devices
>
> Device {
> Name = FileStorage
> Media Type = File
> Archive Device = /data1/bareos/FileStorage
> LabelMedia = yes; # lets Bareos label unlabeled media
> Random Access = yes;
> AutomaticMount = yes; # when device opened, read it
> RemovableMedia = no;
> Collect Statistics = yes
> AlwaysOpen = no;
> Description = "File device. A connecting Director must have the same Name and MediaType."
> }
>
>
> bareos-sd.conf
>
> Storage {
> Name = bareos-sd
> Maximum Concurrent Jobs = 20
>
> # remove comment from "Plugin Directory" to load plugins from specified directory.
> # if "Plugin Names" is defined, only the specified plugins will be loaded,
> # otherwise all storage plugins (*-sd.so) from the "Plugin Directory".
> #
> # Plugin Directory = "/usr/lib/bareos/plugins"
> # Plugin Names = ""
> Collect Device Statistics = yes
> Collect Job Statistics = yes
> #Statistics Collect Interval = 60
> }
>
>
>
> I know that my largest backups will probably not work, since the capacity I can write to is only about 70 TB.
>
> I have the web GUI working, but there is no easy way to see progress on a job.
>
> Any improvements would be appreciated
>
> Regards
> Waanie
>
> --
> You received this message because you are subscribed to the Google Groups "bareos-users" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to bareos-users...@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/bareos-users/c8c2054a-3d75-4bad-9a97-f2674e08c4b0o%40googlegroups.com.

Waanie Simon

unread,
Jun 25, 2020, 12:21:12 PM
to bareos-users
Hi Brock

Thanks for the quick response

The filling up of disk space seems to be the cause of my failing jobs. I will apply compression to save disk space.

I have a question about the globs you mention. I can't say I know what they are. How would you apply this in a config file?

The large volume I picked is the home folder for users. So I thought of creating a jobdef that includes a certain number of folders, and then a second and even a third. Then creating jobs for each of them separately.

What is your opinion about this?

Regards
Waanie

Brock Palen

unread,
Jun 25, 2020, 1:07:02 PM
to Waanie Simon, bareos-users
Bareos will keep labeling new volumes into the future if you don’t force recycling of them. That’s at least my experience: even though I have auto prune on, I have to run prune on all volumes to get them to auto purge, and then truncate to get them recycled (truncate is probably not required, but it helped in my case). Compression will save space, but it could slow things down, and it won’t solve the issue of Bareos eating space until it’s all used. YMMV how you handle this. Go about checking how the older volumes are treated with

list volumes pool=<poolname>
prune volume=<volume>

etc. If your setup is like mine, volumes will not get pruned automatically, thus the need for my admin job to force it. FYI, I don’t think this is the way Bareos is supposed to work, but it works that way for me and probably does for others also.


As for “globs”, I was thinking of classic Unix globs,

e.g. ls abc*.txt

In the FileSet config Include section you can use wildcards:
https://docs.bareos.org/Configuration/Director.html#config-Dir_Fileset_Include_Options_Wilddir

You can set up jobs that use these wild options; that way, if top-level directories are added later, you don’t miss them.

It will be a huge list if your $HOME is anything like ours, but you can use

echo "estimate job=<jobname> level=<Full|Incremental> listing" | bconsole > listing.txt

to have Bareos build the list of what it would back up for that job, to make sure nothing is missed.

Note that if you ever change the Job definition, it will trigger a new full backup. So plan for growth.



Brock Palen
1 (989) 277-6075
bro...@mlds-networks.com
www.mlds-networks.com
Websites, Linux, Hosting, Joomla, Consulting





Stefan Fuhrmann

unread,
Jun 26, 2020, 4:41:56 PM
to bareos...@googlegroups.com

Ahoi,

I found the attached scripts somewhere. Edit them to your needs and let us know how they work!


greets

Stefan


On 24.06.2020 at 22:05, Waanie Simon wrote:
1-prune-all-volumes-Full.sh
2-prune-all-volumes-Incr.sh
3-prune-all-volumes-Diff.sh
4-delete-purged-volumes.sh
5-prune-all-volumesInError-Incr.sh
6-delete-InError-volumes.sh
readme

Stefan Fuhrmann

unread,
Jun 26, 2020, 4:49:54 PM
to bareos...@googlegroups.com

Ahoi,


Attaching files isn't working, sorry.

Get it from my cloud:

https://next.nopanic.systems/index.php/s/xjyStrP9M7QNoZz
