Having issues with incrementals and virtual fulls in Bacula and wondering about Bareos

John Lockard

Jul 30, 2025, 5:24:43 PM
to bareos...@googlegroups.com
I'm running into this issue with Virtual Full backups.  I think I know what the issue is but am not 100% positive and want to bounce this idea...

I have a Full backup, then some number of Incrementals.  Every week I run a Virtual Full, rolling in all but the most recent 10 Incremental backups.
My Virtual Fulls are failing with "No files found to read / Found 0 files to consolidate into Virtual Full":

29-Jul 02:36 bacula-dir JobId 5622: Start Virtual Backup JobId 5622, Job=Taco-Data-A.2025-07-25_21.15.00_33
29-Jul 02:36 bacula-dir JobId 5622: Consolidating JobIds=5400,4698,4728,4758,4788,4818
29-Jul 02:36 bacula-dir JobId 5622: No files found to read. No bootstrap file written.
29-Jul 02:36 bacula-dir JobId 5622: Found 0 files to consolidate into Virtual Full.
29-Jul 02:36 bacula-dir JobId 5622: Fatal error: Could not get or create the FileSet record.

What I think is happening is that the data on that disk volume isn't changing (no modified files, no new files, no deleted files), so when an incremental backup runs it stores nothing on the backup "tape".

05-Jul 20:19 bacula-dir JobId 4728: Start Backup JobId 4728, Job=Taco-Data-A.2025-06-30_19.10.00_42
05-Jul 20:19 bacula-dir JobId 4728: Connected to Storage "FileChanger" at si-scott.miserver.it.umich.edu:9103 with TLS
05-Jul 20:19 bacula-dir JobId 4728: Using Device "FileChanger-Dev10" to write.
05-Jul 20:19 bacula-dir JobId 4728: Connected to Client "taco" at taco.si.umich.edu:9102 with TLS
05-Jul 20:19 taco JobId 4728: Connected to Storage at si-scott.miserver.it.umich.edu:9103 with TLS
05-Jul 20:19 bacula-dir JobId 4728: Sending Accurate information to the FD.
05-Jul 20:23 bacula-sd JobId 4728: Elapsed time=00:01:42, Transfer rate=0  Bytes/second
05-Jul 20:23 bacula-sd JobId 4728: Sending spooled attrs to the Director. Despooling 0 bytes ...
05-Jul 20:23 bacula-dir JobId 4728: Bacula bacula-dir 15.0.3 (25Mar25):
  Build OS:               x86_64-pc-linux-gnu ubuntu 24.04
  JobId:                  4728
  Job:                    Taco-Data-A.2025-06-30_19.10.00_42
  Backup Level:           Incremental, since=2025-06-25 09:16:04
  Client:                 "taco" 15.0.3 (25Mar25) x86_64-pc-linux-gnu,ubuntu,22.04
  FileSet:                "Taco-Data-A" 2025-05-02 15:31:06
  Pool:                   "Taco-Incr" (From Job IncPool override)
  Catalog:                "MyCatalog" (From Client resource)
  Storage:                "FileChanger" (From Pool resource)
  Scheduled time:         30-Jun-2025 19:10:00
  Start time:             05-Jul-2025 20:19:45
  End time:               05-Jul-2025 20:23:12
  Elapsed time:           3 mins 27 secs
  Priority:               10
  FD Files Written:       0
  SD Files Written:       0
  FD Bytes Written:       0 (0 B)
  SD Bytes Written:       0 (0 B)
  Rate:                   0.0 KB/s
  Software Compression:   None
  Comm Line Compression:  None
  Snapshot/VSS:           no
  Encryption:             no
  Accurate:               yes
  Volume name(s):         
  Volume Session Id:      619
  Volume Session Time:    1749606283
  Last Volume Bytes:      2,295 (2.295 KB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK

So, when it comes time to consolidate, it grabs each incremental, finds there's nothing to read, and the Virtual Full fails.  I can find these jobs in my Job table, but there are no joining entries in Media or JobMedia.
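
For reference, a catalog query along these lines (just a sketch, assuming the standard Bacula/Bareos catalog schema) lists incrementals that terminated OK but have no JobMedia rows, which is exactly what the consolidation can't read back:

SELECT Job.JobId, Job.Name, Job.StartTime, Job.JobFiles, Job.JobBytes
  FROM Job
  LEFT JOIN JobMedia ON JobMedia.JobId = Job.JobId
 WHERE Job.Level = 'I'          -- Incremental
   AND Job.JobStatus = 'T'      -- terminated normally
   AND JobMedia.JobId IS NULL   -- no volume record was ever written for the job
 ORDER BY Job.StartTime;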

If I scan through the "tape" on the filesystem, I find backups only for jobs which actually wrote files.

The end result is that I have a Full backup which is getting increasingly older, Virtual Fulls which fail, so no new Full backup, and once the Full backup hits its prune date its entries get pruned from the File table.  After that, the database still knows there was a successful Full, and incrementals still run, but now every file the incremental comes across is "brand new" and my incremental backup is basically a Full backup.

I'm wondering if this was ever seen in Bareos and, if so, how it was "fixed"?

Thanks,
-John

-- 
- Adaptability -- Analytical --- Ideation ---- Input ----- Belief - 
-------------------------------------------------------------------
         John M. Lockard |  U of Michigan - School of Information
          Unix Sys Admin |      Suite 205 | 309 Maynard Street
      jloc...@umich.edu |        Ann Arbor, MI  48104-2211
 www.umich.edu/~jlockard |     734-615-8776 | 734-763-9677 FAX
-------------------------------------------------------------------
- The University of Michigan will never ask you for your password -

Bruno Friedmann (bruno-at-bareos)

Jul 31, 2025, 3:58:55 AM
to bareos-users
I'm guessing that what you're looking for has been in Bareos since 2022.

Brock Palen

Jul 31, 2025, 1:49:23 PM
to Bruno Friedmann (bruno-at-bareos), bareos-users
John,
When I ran into this issue I did a simple fix, which was to have the Bareos job drop a file so that every backup always has at least one changed file.

Bruno says this is no longer an issue, so I might test that, but the fix is simple:

Add a script to run when the job runs:

Client Run Before Job = "/etc/bareos/timestamp.sh /mnt/media/bareos.txt" # needed to keep things consolidating

https://docs.bareos.org/Configuration/Director.html#config-Dir_Job_RunScript
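
In context it just sits in the Job resource of the Director config (a sketch only; the Job name here is taken from your log and the path is a placeholder):

Job {
  Name = "Taco-Data-A"
  ...
  # touch a file inside the backed-up path so every Incremental writes something
  Client Run Before Job = "/etc/bareos/timestamp.sh /mnt/media/bareos.txt"
}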

The timestamp.sh is simple:


#!/bin/bash
# Write the current date into the file given as the first argument, so the
# backed-up path always contains at least one changed file.
date > "$1"

This ensures there is at least one file being updated in that path every day.


Brock Palen
bro...@mlds-networks.com
www.mlds-networks.com
Websites, Linux, Hosting, Joomla, Consulting

