Dedicated restore device


Jurgen Goedbloed

Feb 11, 2021, 5:18:26 AM
to bareos-users
Hi all,

We're using bareos-19.2 and are backing up servers to one storage device.
We have set up a maximum of 4 concurrent running backup jobs and this works fine.

Sometimes we need to urgently restore something while 4 backup jobs are running and other backup jobs have been queued.

In our current setup, when I schedule a restore job, it waits until all 4 running jobs are finished, then executes the restore job, and then continues with the other queued backup jobs. A snippet of the configuration:

Storage configuration in director:

Storage {
  Name = store1
  Address = 172.20.36.21
  Password = "secret"
  Device = store1
  Media Type = File
  Maximum Concurrent Jobs = 4
}
Storage {
  Name = restore
  Address = 172.20.36.21
  Password = "secret"
  Device = restore
  Media Type = File
}

Device definition on storage daemon:

Device {
  Name = restore
  Media Type = File
  Archive Device = /backup/bareos-store1
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Spool Directory = /backup/bareos-spool
  Maximum Spool Size = 100g
}
Device {
  Name = store1
  Media Type = File
  Archive Device = /backup/bareos-store1
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
  Description = "Bareos store1: Almere outside openstack"
  Spool Directory = /backup/bareos-spool
  Maximum Spool Size = 100g
  Maximum Concurrent Jobs = 4
}


All backup jobs use the 'store1' storage device.
The restore job uses the 'restore' storage device. Restore job definition:
Job {
  Name = "Restore"
  Type = Restore
  Client = prd-mgmt-bareosstore1
  Storage = restore
  Pool = daily
  Fileset = "restore"
  Messages = Standard
  Allow Mixed Priority = yes
  Priority = 5
  Where = /tmp/bareos-restore
}
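
For reference, a bconsole invocation of this job with an explicit storage override would look roughly like this (untested sketch; restorejob= and storage= are parameters of the bconsole restore command, and the file selection still happens interactively afterwards):

  * restore client=prd-mgmt-bareosstore1 restorejob=Restore storage=restore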

There is no storage defined in any pool config (there was, but I removed it from the pool definition and reloaded the bareos director).

In the log of the restore job I still see that the 'store1' device is used, which explains the waiting. But why does the restore job not use the 'restore' device that I set in the job definition?

Is it possible to create a restore job that will be executed immediately?

Thanks in advance!
/Jurgen

Kjetil Torgrim Homme

Feb 12, 2021, 11:41:19 AM
to bareos...@googlegroups.com
Jurgen Goedbloed <jurg...@gmail.com> writes:

> We're using bareos-19.2 and are backing up servers to one storage
> device. We have set up a maximum of 4 concurrent running backup jobs
> and this works fine.
>
> Sometimes we need to urgently restore something while 4 backup jobs
> are running and other backup jobs have been queued.
>
> In our current setup, when I schedule a restore job, it waits until
> all 4 running jobs are finished, then executes the restore job, and then
> continues with the other queued backup jobs.

> There is no storage defined in any pool config (there was, but I
> removed it from the pool definition and reloaded bareos director)
>
> In the log of the restore job I still see that the 'store1' device is
> used, which explains why it will wait, but why does the restore job
> not use the 'restore' device as I have set in the job definition?
>
> Is it possible to create a restore job that will be executed
> immediately?

I have not been able to do this either. Bareos strongly prefers to use
the same storage on which the volume was last mounted, or something like
that.

When running restores, I run them with storage=RestoreRobot1 or
storage=RestoreDisk. The really annoying thing is that a restore job
which requires both tape and disk will abort after reading from tape,
since storage=RestoreRobot1 also overrides which storage to use for the
differential and incremental parts of the restore (and a disk file
cannot be mounted on the tape robot). So I have to run the restore as
two jobs: once off the tape with storage=RestoreRobot1, and once for
the remaining jobs (using restore type 3, comma-separated jobids)
with storage=RestoreDisk.
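
An untested sketch of that two-step workflow in bconsole (the jobids,
storage names, and restore path are illustrative; "restore type 3" is
the selection-menu entry for entering a comma-separated list of JobIds):

First restore the full backup off the tape, forcing the tape device:

  * restore jobid=1234 storage=RestoreRobot1 where=/tmp/restore

then restore the differential and incremental jobs from disk:

  * restore jobid=1235,1236 storage=RestoreDisk where=/tmp/restore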

Please let me know if it is possible to fix this.

--
Kjetil T. Homme
Redpill Linpro AS

Jurgen Goedbloed

Feb 18, 2021, 4:59:42 AM
to bareos-users
> When running restores, I run them with storage=RestoreRobot1 or
> storage=RestoreDisk.

Thanks Kjetil. I also have a 'storage=' directive in my restore job, but in my case this storage is not used.

I am restoring with bareos-webui, but I assume you are restoring from bconsole (command line)?

Kjetil Torgrim Homme

Feb 23, 2021, 2:05:34 PM
to bareos...@googlegroups.com
Jurgen Goedbloed <jurg...@gmail.com> writes:

> When running restores, I run them with storage=RestoreRobot1 or
> storage=RestoreDisk.
>
> Thanks Kjetil. I also have a 'storage=' directive in my restore job,
> but in my case this storage is not used.

Right, me too, but this default value from the job definition is ignored
(the Storage of the Pool the tape is in takes precedence, I think); I
have to override it for the specific job instance.

Hmm, I could test this: change the pool for the media temporarily
before running the restore. Unfortunately, I would then need to create
extra RestorePools as well as RestoreStorages.
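
An untested sketch of that experiment in bconsole (the volume name and
RestorePool are made up; 'daily' is the pool from the earlier job
definition). Move the volume into a pool whose Storage points at the
restore device, run the restore, and move it back afterwards:

  * update volume=Full-0001 pool=RestorePool
  (run the restore)
  * update volume=Full-0001 pool=daily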

> I am restoring with bareos-webui, but I assume you are restoring from
> bconsole (command line)?

Yes, I am.

Jurgen Goedbloed

Mar 5, 2021, 3:10:54 AM
to bareos-users
I see. From bconsole you can override the storage device used for restoring, while in bareos-webui you cannot.

Is this something that can be changed, so that a restore from bareos-webui forces Bareos to use the storage device defined in the restore job?