Troubleshooting/solving job "waiting on max Storage jobs"


James Youngman

Jun 4, 2021, 4:27:01 PM6/4/21
to bareos-users
I recently added a spool directory and was hoping that my incremental jobs would be able to spool concurrently, since all my incremental jobs together should add up to much less than the capacity of the spool directory.

Yet all except the running job are "waiting on max Storage jobs". What do I need to change? Suggestions & advice appreciated.

* status dir
bareos-dir Version: 16.2.6 (02 June 2017) x86_64-pc-linux-gnu debian Debian GNU/Linux buster/sid
Daemon started 01-Jun-21 17:31. Jobs: run=36, running=4 mode=0 db=postgresql
 Heap: heap=73,728 smbytes=454,055 max_bytes=1,456,735 bufs=1,632 max_bufs=2,701

Scheduled Jobs:
Level          Type     Pri  Scheduled          Name               Volume
===================================================================================
Full           Backup    11  04-Jun-21 21:10    BackupCatalog      RYGTC2L6
Full           Backup    10  05-Jun-21 21:00    backup-terminator-fd-everything RYGTC2L6
Full           Backup    10  05-Jun-21 21:00    backup-substrate-fd-all RYGTC2L6
Full           Backup    10  05-Jun-21 21:00    backup-jupiter-fd-home RYGTC2L6
Full           Backup    10  05-Jun-21 21:00    backup-jupiter-fd-all RYGTC2L6
Full           Backup    10  05-Jun-21 21:00    backup-horizon-fd-all RYGTC2L6
Full           Backup    10  05-Jun-21 21:00    backup-Big-in-Japan-fd-all RYGTC2L6
====

Running Jobs:
Console connected at 04-Jun-21 21:03
 JobId Level   Name                       Status
======================================================================
    37 Increme  backup-jupiter-fd-home.2021-06-04_21.00.00_36 is running
    38 Increme  backup-jupiter-fd-all.2021-06-04_21.00.00_37 is waiting on max Storage jobs
    39 Increme  backup-horizon-fd-all.2021-06-04_21.00.00_38 is waiting on max Storage jobs
    40 Increme  backup-Big-in-Japan-fd-all.2021-06-04_21.00.00_39 is waiting on max Storage jobs
====

Terminated Jobs:
 JobId  Level    Files      Bytes   Status   Finished        Name 
====================================================================
    27  Incr        828    226.4 G  OK       03-Jun-21 21:53 backup-jupiter-fd-all
    28  Incr         34    246.5 M  OK       03-Jun-21 21:53 backup-horizon-fd-all
    29  Full         85    653.1 M  OK       03-Jun-21 21:58 BackupCatalog
    30                1    365.5 M  OK       04-Jun-21 09:57 RestoreFiles
    31                1    2.987 K  OK       04-Jun-21 10:09 RestoreFiles
    32  Full    192,023    13.08 G  OK       04-Jun-21 13:44 backup-Big-in-Japan-fd-all
    33  Full     52,027    8.347 G  Error    04-Jun-21 13:54 backup-Big-in-Japan-fd-all
    34  Full    192,156    13.17 G  OK       04-Jun-21 14:11 backup-Big-in-Japan-fd-all
    35  Incr        223    182.6 M  OK       04-Jun-21 21:02 backup-terminator-fd-everything
    36  Incr        155    126.9 M  OK       04-Jun-21 21:02 backup-substrate-fd-all


Client Initiated Connections (waiting for jobs):
Connect time        Protocol            Authenticated       Name                                    
====================================================================================================
====



I don't understand why those jobs are "waiting on max Storage jobs".


Here is bareos-sd.conf:
Storage {
  Name = bareos-sd
  Maximum Concurrent Jobs = 20
}


Here is /etc/bareos/bareos-dir.d/storage/Tape.conf:

Storage {
  Name = Tape
  Address  = jupiter
  Password = ",,,"
  Device = autochanger-0
  Auto Changer = yes
  Media Type = LTO
  Maximum Concurrent Jobs = 20
}

Here is /etc/bareos/bareos-dir.d/director/bareos-dir.conf:

Director {                            # define myself
  Name = bareos-dir
  QueryFile = "/usr/lib/bareos/scripts/query.sql"
  Maximum Concurrent Jobs = 10
  Password = "hunter2"         # Console password
  Messages = Daemon
  Auditing = yes
}

Here is /etc/bareos/bareos-sd.d/autochanger/autochanger-0.conf :

Autochanger {
  Name = "autochanger-0"
  Changer Device = /dev/changer0
  Device = tapedrive-0
  Changer Command = "/usr/lib/bareos/scripts/mtx-changer %c %o %S %a %d"
}

Here is /etc/bareos/bareos-sd.d/device/tapedrive-0.conf:

Device {
    Name = "tapedrive-0"
    DeviceType = tape

    # default:0, only required if the autoloader have multiple drives.
    DriveIndex = 0

    ArchiveDevice = /dev/tape/by-id/scsi-35000e11118ec6001-nst
    MediaType = LTO
    AutoChanger = yes                       # default: no
    AutomaticMount = yes                    # default: no
    MaximumFileSize = 10GB                  # default: 1000000000 (1GB)

    # Default maximum block size is 64,512 bytes (126 * 512).
    Maximum block size = 524288
    Spool Directory = /var/spool/bareos
    Maximum Spool Size = 820GB
    Maximum Job Spool Size = 400GB
}
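
[Editor's note for later readers: one setting that is easy to overlook here, offered as a guess rather than something confirmed in this thread, is that the Device resource in the storage daemon has its own Maximum Concurrent Jobs directive, which defaults to 1. The Storage resources above allow 20 jobs, but with the device-level default only one job can use tapedrive-0, and therefore its spool directory, at a time. A sketch of the relevant addition to /etc/bareos/bareos-sd.d/device/tapedrive-0.conf (untested; the value 4 is illustrative):

Device {
    Name = "tapedrive-0"
    ...

    # Allow several jobs to share this drive so they can spool
    # concurrently; despooling to tape still happens one job at a time.
    # The default is 1, which serializes jobs on the device.
    Maximum Concurrent Jobs = 4
}

After changing this, the storage daemon needs to be restarted for the new limit to take effect.]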

Rick Sutphin

Jun 4, 2021, 4:29:30 PM6/4/21
to bareos...@googlegroups.com

Just ran into this. All of my backups failed with "waiting on max storage jobs".  Restarting postgres, bareos-dir, bareos-sd & bareos-fd resolved the problem.


Rick Sutphin
Project Manager
Delta Technologies, Inc.
P.O. Box 2301
Tallahassee, FL 32316-2301
Ofc: 850.575.3977
Fax: 850.575.3908
Cell: 850.251.2345
https://delta-tech.com

Licenses: EF-20000414, ES-0000212 (FL) & LVU405002 (GA)
