Issues switching between dedupe storage and tape/standard disk


Brock Palen

Mar 4, 2026, 1:00:28 PM
to bareos-users
I’m starting to move to a dedupe storage backend on BTRFS. My current system uses disk volumes with migration jobs to tape. These are almost all Always Incremental (AI) jobs.

When consolidations started happening, it appears there is different behavior, and I’m not sure writes are happening as expected.

I got the notification “Bareos: Backup Unknown term code of sch-hp-desktop-fd Virtual Full”, and now I see the line:


04-Mar 06:57 myth-sd JobId 116215: stored/acquire.cc:156 Changing read device. Want Media Type="File" have="File-Dedupe"

My tape migrations have always allowed it to switch from File to LTO to pull in the blocks that were migrated to tape, so I expected this to just work, but it appears that if a dedupe device is opened and then not used, it throws an error:

04-Mar 06:57 myth-sd JobId 116215: stored/acquire.cc:156 Changing read device. Want Media Type="File" have="File-Dedupe"
device="BtrfsStorage1" (/mnt/bareos)
04-Mar 06:57 myth-sd JobId 116215: Releasing device "BtrfsStorage1" (/mnt/bareos).
04-Mar 06:57 myth-sd: ERROR in backends/dedupable_device.cc:459 Trying to flush dedup volume when none are open.
04-Mar 06:57 myth-sd JobId 116215: Fatal error: Failed to flush device "BtrfsStorage1" (/mnt/bareos).
04-Mar 06:57 myth-sd JobId 116215: Fatal error: stored/acquire.cc:206 No suitable device found to read Volume "AI-Incremental-2304"
04-Mar 06:57 myth-sd JobId 116215: Fatal error: stored/mount.cc:885 Cannot open Dev="BtrfsStorage1" (/mnt/bareos), Vol=AI-Incremental-2304
04-Mar 06:57 myth-sd JobId 116215: End of all volumes.
04-Mar 06:57 myth-sd JobId 116215: Fatal error: stored/mac.cc:698 Fatal append error on device "BtrfsStorage" (/mnt/bareos): ERR=Attempt to read past end of tape or file.


Full Job Log
Note the autoxflate settings: I use client compression and previously didn’t inflate on the SD. I want to keep client compression (to minimize bandwidth over the WAN) but decompress so the dedupe and BTRFS compression can take over. I don’t think that is part of this issue.
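For context, the autoxflate lines in the log below come from device directives roughly like these (directive names per the Bareos SD device documentation; resource names, paths, and the dedupe device type are from my setup and may differ — the autoxflate-sd plugin also has to be loaded in the SD):

```cfg
Device {
  Name = BtrfsStorage1
  Media Type = File-Dedupe
  Archive Device = /mnt/bareos
  # Inflate records on the way to the device (OUT: SD->inflate->DEV),
  # so raw data hits the volume and dedupe/BTRFS compression can work.
  Auto Inflate = out
  # Re-compress on the way back to the SD (IN: DEV->deflate->SD).
  Auto Deflate = in
  Auto Deflate Algorithm = GZIP
}
```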



04-Mar 06:57 myth-dir JobId 116215: Version: 25.0.3~pre7.511272b66 (18 February 2026) Ubuntu 24.04.4 LTS
04-Mar 06:57 myth-dir JobId 116215: Start Virtual Backup JobId 116215, Job=sch-hp-desktop-Users-No-Pictures.2026-03-04_04.00.05_24
04-Mar 06:57 myth-dir JobId 116215: Bootstrap records written to /var/lib/bareos/myth-dir.restore.48.bsr
04-Mar 06:57 myth-dir JobId 116215: Consolidating JobIds 116137,115099,115141 containing 9 files
04-Mar 06:57 myth-dir JobId 116215: Connected Storage daemon at myth.sheptechllc.com:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
04-Mar 06:57 myth-dir JobId 116215: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
04-Mar 06:57 myth-dir JobId 116215: Using Device "FileStorage" to read.
04-Mar 06:57 myth-sd JobId 116215: Using just in time reservation for job 116215
04-Mar 06:57 myth-dir JobId 116215: Using Device "JustInTime Device" to write.
04-Mar 06:57 myth-sd JobId 116215: Moving to end of data on volume "AI-Consol-Dedup-2928"
04-Mar 06:57 myth-sd JobId 116215: Ready to append to end of Volume "AI-Consol-Dedup-2928" at file=0.
04-Mar 06:57 myth-sd JobId 116215: stored/acquire.cc:156 Changing read device. Want Media Type="File-Dedupe" have="File"
device="FileStorage" (/mnt/bacula)
04-Mar 06:57 myth-sd JobId 116215: Releasing device "FileStorage" (/mnt/bacula).
04-Mar 06:57 myth-sd JobId 116215: Media Type change. New read device "BtrfsStorage1" (/mnt/bareos) chosen.
04-Mar 06:57 myth-sd JobId 116215: Ready to read from volume "AI-Consol-Dedup-2918" on device "BtrfsStorage1" (/mnt/bareos).
04-Mar 06:57 myth-sd JobId 116215: autoxflate-sd: Compressor on device BtrfsStorage is GZIP
04-Mar 06:57 myth-sd JobId 116215: autoxflate-sd: BtrfsStorage OUT:[SD->inflate=yes->deflate=no->DEV] IN:[DEV->inflate=no->deflate=yes->SD]
04-Mar 06:57 myth-sd JobId 116215: autoxflate-sd: Compressor on device BtrfsStorage is GZIP
04-Mar 06:57 myth-sd JobId 116215: autoxflate-sd: BtrfsStorage OUT:[SD->inflate=yes->deflate=no->DEV] IN:[DEV->inflate=no->deflate=yes->SD]
04-Mar 06:57 myth-sd JobId 116215: Spooling data ...
04-Mar 06:57 myth-sd JobId 116215: Forward spacing Volume "AI-Consol-Dedup-2918" to file:block 0:40942.
04-Mar 06:57 myth-sd JobId 116215: End of Volume at file 0 on device "BtrfsStorage1" (/mnt/bareos), Volume "AI-Consol-Dedup-2918"
04-Mar 06:57 myth-sd JobId 116215: stored/acquire.cc:156 Changing read device. Want Media Type="File" have="File-Dedupe"
device="BtrfsStorage1" (/mnt/bareos)
04-Mar 06:57 myth-sd JobId 116215: Releasing device "BtrfsStorage1" (/mnt/bareos).
04-Mar 06:57 myth-sd: ERROR in backends/dedupable_device.cc:459 Trying to flush dedup volume when none are open.
04-Mar 06:57 myth-sd JobId 116215: Fatal error: Failed to flush device "BtrfsStorage1" (/mnt/bareos).
04-Mar 06:57 myth-sd JobId 116215: Fatal error: stored/acquire.cc:206 No suitable device found to read Volume "AI-Incremental-2304"
04-Mar 06:57 myth-sd JobId 116215: Fatal error: stored/mount.cc:885 Cannot open Dev="BtrfsStorage1" (/mnt/bareos), Vol=AI-Incremental-2304
04-Mar 06:57 myth-sd JobId 116215: End of all volumes.
04-Mar 06:57 myth-sd JobId 116215: Fatal error: stored/mac.cc:698 Fatal append error on device "BtrfsStorage" (/mnt/bareos): ERR=Attempt to read past end of tape or file.

04-Mar 06:57 myth-sd JobId 116215: Elapsed time=00:00:01, Transfer rate=210.0 K Bytes/second
04-Mar 06:57 myth-sd JobId 116215: Releasing device "BtrfsStorage" (/mnt/bareos).
04-Mar 06:57 myth-sd JobId 116215: Releasing device "FileStorage5" (/mnt/bacula).
04-Mar 06:57 myth-sd JobId 116215: autoxflate-sd: inflate ratio: 378.56%
04-Mar 06:57 myth-sd JobId 116215: autoxflate-sd: deflate ratio: 26.42%
04-Mar 06:57 myth-dir JobId 116215: Replicating deleted files from jobids 116137,115099,115141 to jobid 116215
04-Mar 06:57 myth-dir JobId 116215: Error: Bareos myth-dir 25.0.3~pre7.511272b66 (18Feb26):
OS Information: Ubuntu 24.04.4 LTS
JobId: 116215
Job: sch-hp-desktop-Users-No-Pictures.2026-03-04_04.00.05_24
Backup Level: Virtual Full
Client: "sch-hp-desktop-fd" 24.0.1~pre27.250812184 (24Jan25) Microsoft Windows 8 (build 9200), 64-bit,Windows-native
FileSet: "Windows All Users No Pictures" 2019-03-24 21:32:52
Pool: "AI-Consolidated-Dedupe" (From Job Pool's NextPool resource)
Catalog: "myth_catalog" (From Client resource)
Storage: "BtrfsStorage" (From Storage from Pool's NextPool resource)
Scheduled time: 04-Mar-2026 04:00:05
Start time: 11-Feb-2026 01:08:16
End time: 11-Feb-2026 01:13:08
Elapsed time: 4 mins 52 secs
Priority: 8
Allow Mixed Priority: no
SD Files Written: 4
SD Bytes Written: 210,067 (210.0 KB)
Rate: 0.7 KB/s
Volume name(s):
Volume Session Id: 151
Volume Session Time: 1772410707
Last Volume Bytes: 0 (0 B)
SD Errors: 1
SD termination status: Fatal Error
Accurate: yes
Bareos binary info: Bareos community build (UNSUPPORTED): Get professional support from https://www.bareos.com
Job triggered by: User
Termination: *** Backup Error ***

04-Mar 06:57 myth-dir JobId 116215: Rescheduled Job sch-hp-desktop-Users-No-Pictures.2026-03-04_04.00.05_24 at 04-Mar-2026 06:57 to re-run in 1800 seconds (04-Mar-2026 07:27).


Brock Palen
bro...@mlds-networks.com
www.mlds-networks.com
Websites, Linux, Hosting, Joomla, Consulting



Bruno Friedmann (bruno-at-bareos)

Mar 5, 2026, 7:28:07 AM
to bareos-users
Hi, I'm not sure I picked up all the details, but basically a Virtual Full of AI volumes only supports one media type for the source volumes and one for the VF.
So you can't mix File and File-Dedupe; the incremental and consolidated volumes need to all have the same media type and be accessible by the same storage, with all devices pointing to the same archive device location.
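In other words, both the incremental pool and the consolidation pool have to resolve to the same storage and media type, roughly like this (example names):

```cfg
Pool {
  Name = AI-Incremental-Dedupe
  Pool Type = Backup
  Storage = BtrfsStorage            # same storage...
  Next Pool = AI-Consolidated-Dedupe
}
Pool {
  Name = AI-Consolidated-Dedupe
  Pool Type = Backup
  Storage = BtrfsStorage            # ...and the same Media Type (File-Dedupe)
}
```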

Hope this clarifies things a bit.

Brock Palen

Mar 5, 2026, 3:27:42 PM
to Bruno Friedmann (bruno-at-bareos), bareos-users
OK. I was trying to do this so jobs would slowly migrate over to the dedupe storage, letting me then retire the old File storage.

Just to be sure: what if File wasn’t in the mix? What if it was just LTO and File-Dedupe, where LTO is populated by Migrate jobs from File-Dedupe when it gets full?

Will that work like before? Or will this same problem arise?
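(For reference, the setup I mean is roughly this in the Director config; the names and the threshold are just examples:)

```cfg
Job {
  Name = migrate-dedupe-to-tape
  Type = Migrate
  Pool = AI-Consolidated-Dedupe     # source pool (File-Dedupe)
  Selection Type = PoolOccupancy    # run when the pool fills up
}
Pool {
  Name = AI-Consolidated-Dedupe
  Pool Type = Backup
  Storage = BtrfsStorage
  Next Pool = Tape                  # Migrate writes to the LTO pool
  Migration High Bytes = 4 TB       # example occupancy threshold
}
```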







Bruno Friedmann (bruno-at-bareos)

Mar 9, 2026, 4:33:53 AM
to bareos-users
The consolidation will fail the same way, I fear. The incrementals and the consolidation should be on the same storage with the same media type.

Just a rough thought: maybe bcopy can help to manually migrate the previous File volumes to File-Dedupe storage.
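Something along these lines, run on the SD host with both devices defined in its config (the volume and device names are examples, and I'm writing the flags from memory — check `bcopy -?` against your version before relying on them):

```shell
# Copy everything on the old File volume onto a dedupe volume.
# The two positional arguments are the input and output archive devices
# from the SD configuration.
bcopy -c /etc/bareos/bareos-sd.d \
      -i AI-Incremental-2304 \
      -o AI-Incremental-Dedupe-0001 \
      FileStorage BtrfsStorage1
```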

Brock Palen

Mar 9, 2026, 12:04:17 PM
to Bruno Friedmann (bruno-at-bareos), bareos-users
Thanks. I’m backing out my changes to get things working again, and then I’ll be much more careful with my test client.

BTW, I’m also sometimes getting:
09-Mar 10:12 myth-sd JobId 116490: Ready to read from volume "AI-Consol-Dedup-2908" on device "BtrfsStorage" (/mnt/bareos).
09-Mar 10:12 myth-sd JobId 116490: Forward spacing Volume "AI-Consol-Dedup-2908" to file:block 0:42890.
09-Mar 10:12 myth-sd JobId 116490: Fatal error: autoxflate-sd: compress.deflate_buffer was not setup missing bSdEventSetupRecordTranslation call?

I’m guessing this is also related to my attempts to decompress so the dedupe can work. I’ll try to move through this much more slowly, to make sure that consolidations, full consolidations, and tape migrations all work before moving on to other clients. I thought I had this working until consolidation went sideways.






Brock Palen

Mar 13, 2026, 11:24:30 PM
to Bruno Friedmann (bruno-at-bareos), bareos-users
OK, I can confirm after further testing that it is not possible to use the dedupe backend with Always Incremental jobs where you later migrate to tape media. It gives the same error when the system tries to swap away from the dedupe backend:

2026-03-13 21:39:41 myth-dir JobId 116868: Using Device "BtrfsStorage" to read.
2026-03-13 21:39:41 myth-sd JobId 116868: Using just in time reservation for job 116868
2026-03-13 21:39:41 myth-dir JobId 116868: Using Device "JustInTime Device" to write.
2026-03-13 21:39:41 myth-sd JobId 116868: Moving to end of data on volume "AI-Consol-Dedup-2920"
2026-03-13 21:39:41 myth-sd JobId 116868: Ready to append to end of Volume "AI-Consol-Dedup-2920" at file=0.
2026-03-13 21:39:41 myth-sd JobId 116868: stored/acquire.cc:156 Changing read device. Want Media Type="LTO5" have="File-Dedupe"
device="BtrfsStorage" (/mnt/bareos)
2026-03-13 21:39:41 myth-sd JobId 116868: Releasing device "BtrfsStorage" (/mnt/bareos).
2026-03-13 21:39:41 myth-sd: ERROR in backends/dedupable_device.cc:459 Trying to flush dedup volume when none are open.
2026-03-13 21:39:41 myth-sd JobId 116868: Fatal error: Failed to flush device "BtrfsStorage" (/mnt/bareos).
2026-03-13 21:39:41 myth-sd JobId 116868: Fatal error: stored/acquire.cc:206 No suitable device found to read Volume "GU0589L6"
2026-03-13 21:39:41 myth-sd JobId 116868: Releasing device "BtrfsStorage1" (/mnt/bareos).
2026-03-13 21:39:41 myth-sd JobId 116868: Releasing device "Tand-LTO5" (/dev/tape/by-id/scsi-HUL810A6YV-nst).
2026-03-13 21:39:41 myth-dir JobId 116868: Replicating deleted files from jobids 116345,116705,115609 to jobid 116868
2026-03-13 21:39:42 myth-dir JobId 116868: Error: Bareos myth-dir 25.0.3~pre7.511272b66 (18Feb26):


While using the regular File backend it works just fine:

11-Mar 23:16 myth-dir JobId 116756: Using Device "FileStorage" to read.
11-Mar 23:16 myth-sd JobId 116756: Using just in time reservation for job 116756
11-Mar 23:16 myth-dir JobId 116756: Using Device "JustInTime Device" to write.
11-Mar 23:16 myth-sd JobId 116756: Moving to end of data on volume "AI-Consolidated-2224"
11-Mar 23:16 myth-sd JobId 116756: Ready to append to end of Volume "AI-Consolidated-2224" size=21854296445
11-Mar 23:16 myth-sd JobId 116756: stored/acquire.cc:156 Changing read device. Want Media Type="LTO5" have="File"
device="FileStorage" (/mnt/bacula)
11-Mar 23:16 myth-sd JobId 116756: Releasing device "FileStorage" (/mnt/bacula).
11-Mar 23:16 myth-sd JobId 116756: Media Type change. New read device "Tand-LTO5" (/dev/tape/by-id/scsi-HUL810A6YV-nst) chosen.
11-Mar 23:16 myth-sd JobId 116756: 3307 Issuing autochanger "unload slot 1, drive 0" command.

Should this be considered a bug? Or is it just a quirk that it works at all?





Sebastian Sura

Mar 16, 2026, 2:52:11 AM
to bareos...@googlegroups.com
Hi Brock,

I would expect this to work. Could you create an issue for it on
https://github.com/bareos/bareos/issues?

Kind Regards,
Sebastian Sura

On 14.03.26 at 04:24, 'Brock Palen' via bareos-users wrote:
--
Sebastian Sura sebasti...@bareos.com
Bareos GmbH & Co. KG Phone: +49 221 630693-0
https://www.bareos.com
Sitz der Gesellschaft: Köln | Amtsgericht Köln: HRA 29646
Komplementär: Bareos Verwaltungs-GmbH
Geschäftsführer: Stephan Dühr, Jörg Steffens, Philipp Storz

Brock Palen

Mar 16, 2026, 12:27:09 PM
to Sebastian Sura, bareos...@googlegroups.com
Done https://github.com/bareos/bareos/issues/2584

BTW, I didn’t include this in that issue or open a second one about:

14-Mar 16:27 myth-sd JobId 117120: Fatal error: autoxflate-sd: compress.deflate_buffer was not setup missing bSdEventSetupRecordTranslation call?


I don’t know if this is related. autoxflate is new to me; I make heavy use of FD compression but was trying to inflate on the SD to help the dedupe and BTRFS zstd do their thing, since all jobs are currently compressed and many clients are on limited or (AWS) expensive bandwidth.
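For what it’s worth, the FD-side compression I want to keep is just the standard FileSet option, roughly (contents abbreviated; the path is an example):

```cfg
FileSet {
  Name = "Windows All Users No Pictures"
  Include {
    Options {
      Signature = MD5
      Compression = GZIP   # compress on the client, before the WAN
    }
    File = "C:/Users"
  }
}
```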




