Unable to get BAREOS to work with s3 compatible object store


James Pulver

Oct 13, 2025, 11:59:54 AM
to bareos-users
I've been trying for weeks to get this to work. I have now managed to set up the dplcompat plugin, and have been able to follow troubleshooting like https://github.com/bareos/bareos/discussions/2063 from the command line - the CLI commands all work. But when I try to run a backup to the s3 storage device in BAREOS, it just hangs. No errors are noted, status in the director shows the job running forever, and status on the storage shows:

Device status:

Device "obj" (ObjectStorage) is being acquired with:
    Volume:      Full-0011
    Pool:        Full
    Media type:  obj
Backend connection is working.
Inflight chunks: 0
Pending IO flush requests:
   /Full-0011/0000 - 230 (try=2)
    Device is being initialized.
    Total Bytes=230 Blocks=1 Bytes/block=230
    Positioned at File=0 Block=229
==
====

Used Volume status:
Full-0011 on device "obj" (ObjectStorage)
    Reader=0 writers=0 reserves=1 volinuse=1
====

====

It never progresses. I'm really not sure how to proceed.

Andreas Rogge

Oct 16, 2025, 6:19:31 AM
to bareos...@googlegroups.com
Hi James,

On 13.10.25 at 17:59, James Pulver wrote:
> I've been trying for weeks to get this to work. I now have managed to
> set up the dplcompat plugin, and have been able to follow
> troubleshooting like https://github.com/bareos/bareos/discussions/2063
> from the command line - CLI commands all work. But when I try and run a
> backup to the s3 storage device in BAREOS it just hangs. No errors
> noted, and status in director shows the job running forever, and status
> on the storage shows
>
> Device status:
>
> Device "obj" (ObjectStorage) is being acquired with:
>     Volume:      Full-0011
>     Pool:        Full
>     Media type:  obj
> Backend connection is working.
This means the backend connection test was successful.
> Inflight chunks: 0
> Pending IO flush requests:
>    /Full-0011/0000 - 230 (try=2)
This means chunk 0000 (i.e. the first part of the volume) should be
written to the backend storage. Its size is 230 bytes, so probably the
volume header. This operation failed for some reason and we are now in
try 2.

> It never progresses. I'm really not sure how to proceed.
I think the way to go here is to enable debug logging and tracing in the
SD so you'll see what happens with the backend calls.

You can just do "setdebug trace=1 storage=your-storage level=130" in
bconsole. It will print the path to the trace-file that will be created
on the SD.
In that tracefile you should be able to see what is happening between
Bareos and your script and maybe figure out where it fails.
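For example (a sketch only; "your-storage" is a placeholder for your storage resource name, and level=0 turns the extra output back off afterwards):

```
# in bconsole: enable SD debug output and tracing
setdebug storage=your-storage level=130 trace=1

# reproduce the hanging backup, then switch it off again
setdebug storage=your-storage level=0 trace=0
```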
You can also send it here (or to me personally, if you don't want to
share that publicly) and I'll see if I can help.

Best Regards,
Andreas
--
Andreas Rogge andrea...@bareos.com
Bareos GmbH & Co. KG Phone: +49 221-630693-86
http://www.bareos.com

Registered office: Köln | Amtsgericht Köln: HRA 29646
General partner: Bareos Verwaltungs-GmbH
Managing directors: Stephan Dühr, Jörg Steffens, Philipp Storz

James Pulver

Oct 16, 2025, 4:01:29 PM
to bareos-users

Here's a sanitized version of what I get:
bareos-sd (50): stored/dir_cmd.cc:399-0 level=130 trace=1 timestamp=0 tracefilename=/var/lib/bareos/bareos-sd.trace
bareos-sd (100): lib/jcr.cc:378-0 Destruct JobControlRecord
bareos-sd (100): lib/jcr.cc:268-0 FreeCommonJcr: 7fbc7c05daa0
bareos-sd (100): lib/tls_openssl_private.cc:89-0 Destruct TlsOpenSslPrivate
bareos-sd (100): lib/bsock.cc:137-0 Destruct BareosSocket
bareos-sd (100): lib/bsock.cc:85-0 Construct BareosSocket
bareos-sd (100): lib/tls_openssl_private.cc:58-0 Construct TlsOpenSslPrivate
bareos-sd (100): lib/tls_openssl_private.cc:637-0 Set tcp filedescriptor: <5>
bareos-sd (100): lib/tls_openssl_private.cc:655-0 Set protocol: <>
bareos-sd (100): lib/tls_openssl_private.cc:577-0 Set ca_certfile: <>
bareos-sd (100): lib/tls_openssl_private.cc:583-0 Set ca_certdir: <>
bareos-sd (100): lib/tls_openssl_private.cc:589-0 Set crlfile_: <>
bareos-sd (100): lib/tls_openssl_private.cc:595-0 Set certfile_: <>
bareos-sd (100): lib/tls_openssl_private.cc:601-0 Set keyfile_: <>
bareos-sd (100): lib/tls_openssl_private.cc:619-0 Set dhfile_: <>
bareos-sd (100): lib/tls_openssl_private.cc:643-0 Set cipherlist: <>
bareos-sd (100): lib/tls_openssl_private.cc:649-0 Set ciphersuites: <>
bareos-sd (100): lib/tls_openssl_private.cc:625-0 Set Verify Peer: <false>
bareos-sd (100): lib/tls_openssl_private.cc:631-0 Set ktls: <false>
bareos-sd (100): lib/tls_openssl_private.cc:501-0 psk_server_cb. identitiy: R_DIRECTOR bareos-dir.
bareos-sd (100): lib/tls_openssl_private.cc:522-0 psk_server_cb. result: 32.
bareos-sd (50): lib/bnet.cc:143-0 TLS server negotiation established.
bareos-sd (110): stored/socket_server.cc:97-0 Conn: Hello Director bareos-dir calling
bareos-sd (110): stored/socket_server.cc:113-0 Got a DIR connection at 16-Oct-2025 15:50:40
bareos-sd (100): lib/jcr.cc:185-0 Construct JobControlRecord
bareos-sd (50): lib/cram_md5.cc:106-0 send: auth cram-md5 <1910067158.1760644240@R_STORAGE::bareos-sd> ssl=2
bareos-sd (100): lib/cram_md5.cc:167-0 cram-get received: auth cram-md5 <273648795.1760644240@R_DIRECTOR::bareos-dir> ssl=2
bareos-sd (50): lib/cram_md5.cc:61-0 my_name: <R_STORAGE::bareos-sd> - challenge_name: <R_DIRECTOR::bareos-dir>
bareos-sd (99): lib/cram_md5.cc:232-0 sending resp to challenge: a/xIaRRst6sme9ROL+/zHC
bareos-sd (90): stored/dir_cmd.cc:276-0 Message channel init completed.
bareos-sd (100): stored/job.cc:90-0 <dird: JobId=283 job=tmp-s3.2025-10-16_15.50.38_35 job_name=tmp-s3 client_name=lnx100-fd type=66 level=70 FileSet=tmp NoAttr=0 SpoolAttr=0 FileSetMD5=1C+cV9JM99s729My3DVDfD SpoolData=0 PreferMountedVols=1 SpoolSize=0 rerunning=0 VolSessionId=0 VolSessionTime=0 Quota=0 Protocol=0 BackupFormat=Native
bareos-sd (100): stored/job.cc:109-0 rerunning=0 VolSesId=0 VolSesTime=0 Protocol=0
bareos-sd (50): stored/job.cc:155-283 Quota set as 0
bareos-sd (50): stored/job.cc:166-283 >dird jid=283: 3000 OK Job SDid=4 SDtime=1760633557 Authorization=
bareos-sd (100): lib/bsock.cc:85-0 Construct BareosSocket
bareos-sd (100): lib/tls_openssl_private.cc:58-0 Construct TlsOpenSslPrivate
bareos-sd (100): lib/tls_openssl_private.cc:637-0 Set tcp filedescriptor: <7>
bareos-sd (100): lib/tls_openssl_private.cc:655-0 Set protocol: <>
bareos-sd (100): lib/tls_openssl_private.cc:577-0 Set ca_certfile: <>
bareos-sd (100): lib/tls_openssl_private.cc:583-0 Set ca_certdir: <>
bareos-sd (100): lib/tls_openssl_private.cc:589-0 Set crlfile_: <>
bareos-sd (100): lib/tls_openssl_private.cc:595-0 Set certfile_: <>
bareos-sd (100): lib/tls_openssl_private.cc:601-0 Set keyfile_: <>
bareos-sd (100): lib/tls_openssl_private.cc:619-0 Set dhfile_: <>
bareos-sd (100): lib/tls_openssl_private.cc:643-0 Set cipherlist: <>
bareos-sd (100): lib/tls_openssl_private.cc:649-0 Set ciphersuites: <>
bareos-sd (100): lib/tls_openssl_private.cc:625-0 Set Verify Peer: <false>
bareos-sd (100): lib/tls_openssl_private.cc:631-0 Set ktls: <false>
bareos-sd (100): lib/tls_openssl_private.cc:501-0 psk_server_cb. identitiy: R_JOB tmp-s3.2025-10-16_15.50.38_35.
bareos-sd (100): lib/tls_openssl_private.cc:522-0 psk_server_cb. result: 39.
bareos-sd (50): lib/bnet.cc:143-0 TLS server negotiation established.
bareos-sd (110): stored/socket_server.cc:97-0 Conn: Hello Start Job tmp-s3.2025-10-16_15.50.38_35
bareos-sd (110): stored/socket_server.cc:101-0 Got a FD connection at 16-Oct-2025 15:50:40
bareos-sd (50): stored/fd_cmds.cc:116-0 Found Job tmp-s3.2025-10-16_15.50.38_35
bareos-sd (50): lib/cram_md5.cc:106-0 send: auth cram-md5 <178672579.1760644240@R_STORAGE::bareos-sd> ssl=2
bareos-sd (100): lib/cram_md5.cc:167-0 cram-get received: auth cram-md5 <710187007.1760644240@R_CLIENT::lnxcmp-fd> ssl=2
bareos-sd (50): lib/cram_md5.cc:61-0 my_name: <R_STORAGE::bareos-sd> - challenge_name: <R_CLIENT::lnxcmp-fd>
bareos-sd (99): lib/cram_md5.cc:232-0 sending resp to challenge: S0/TKR/RWX/Ju7t327/y0D
bareos-sd (50): stored/job.cc:199-283 tmp-s3.2025-10-16_15.50.38_35 waiting 1800 sec for FD to contact SD key=
bareos-sd (50): stored/fd_cmds.cc:141-0 OK Authentication jid=283 Job tmp-s3.2025-10-16_15.50.38_35
bareos-sd (50): stored/job.cc:213-283 Auth=1 canceled=0
bareos-sd (120): stored/fd_cmds.cc:169-283 Start run Job=tmp-s3.2025-10-16_15.50.38_35
bareos-sd (110): stored/fd_cmds.cc:210-283 <filed: append open session
bareos-sd (120): stored/fd_cmds.cc:291-283 Append open session: append open session
bareos-sd (110): stored/fd_cmds.cc:302-283 >filed: 3000 OK open ticket = 4
bareos-sd (110): stored/fd_cmds.cc:210-283 <filed: append data 4
bareos-sd (120): stored/fd_cmds.cc:255-283 Append data: append data 4
bareos-sd (110): stored/fd_cmds.cc:257-283 <filed: append data 4
bareos-sd (100): lib/bsock.cc:90-283 Copy Contructor BareosSocket
bareos-sd (100): stored/block.cc:137-283 created new block of blocksize 1048576 (dev->max_block_size)
bareos-sd (50): stored/askdir.cc:211-283 DirFindNextAppendableVolume: reserved=1 Vol=
bareos-sd (50): stored/askdir.cc:261-283 >dird CatReq Job=tmp-s3.2025-10-16_15.50.38_35 FindMedia=1 pool_name=Full media_type=s3 unwanted_volumes=
bareos-sd (50): stored/askdir.cc:117-283 <dird 1000 OK VolName=Full-0011 VolJobs=0 VolFiles=0 VolBlocks=0 VolBytes=0 VolMounts=0 VolErrors=0 VolWrites=0 MaxVolBytes=53687091200 VolCapacityBytes=0 VolStatus=Append Slot=0 MaxVolJobs=0 MaxVolFiles=0 InChanger=0 VolReadTime=0 VolWriteTime=0 EndFile=0 EndBlock=0 LabelType=0 MediaId=11 EncryptionKey= MinBlocksize=0 MaxBlocksize=0
bareos-sd (50): stored/askdir.cc:147-283 DoGetVolumeInfo return true slot=0 Volume=Full-0011, VolminBlocksize=0 VolMaxBlocksize=0
bareos-sd (50): stored/askdir.cc:151-283 setting dcr->VolMinBlocksize(0) to vol.VolMinBlocksize(0)
bareos-sd (50): stored/askdir.cc:154-283 setting dcr->VolMaxBlocksize(0) to vol.VolMaxBlocksize(0)
bareos-sd (50): stored/askdir.cc:272-283 Call reserve_volume for write. Vol=Full-0011
bareos-sd (50): stored/askdir.cc:279-283 DirFindNextAppendableVolume return true. vol=Full-0011
bareos-sd (100): stored/append.cc:272-283 Start append data. res=1
bareos-sd (100): stored/acquire.cc:418-283 acquire_append device is dplcompat
bareos-sd (100): stored/mount.cc:608-283 No swap_dev set
bareos-sd (50): stored/askdir.cc:186-283 >dird CatReq Job=tmp-s3.2025-10-16_15.50.38_35 GetVolInfo VolName=Full-0011 write=1
bareos-sd (50): stored/askdir.cc:117-283 <dird 1000 OK VolName=Full-0011 VolJobs=0 VolFiles=0 VolBlocks=0 VolBytes=0 VolMounts=0 VolErrors=0 VolWrites=0 MaxVolBytes=53687091200 VolCapacityBytes=0 VolStatus=Append Slot=0 MaxVolJobs=0 MaxVolFiles=0 InChanger=0 VolReadTime=0 VolWriteTime=0 EndFile=0 EndBlock=0 LabelType=0 MediaId=11 EncryptionKey= MinBlocksize=0 MaxBlocksize=0
bareos-sd (50): stored/askdir.cc:147-283 DoGetVolumeInfo return true slot=0 Volume=Full-0011, VolminBlocksize=0 VolMaxBlocksize=0
bareos-sd (50): stored/askdir.cc:151-283 setting dcr->VolMinBlocksize(0) to vol.VolMinBlocksize(0)
bareos-sd (50): stored/askdir.cc:154-283 setting dcr->VolMaxBlocksize(0) to vol.VolMaxBlocksize(0)
bareos-sd (100): stored/autochanger.cc:125-283 Device "s3" (ObjectStorage) is not attached to an autochanger
bareos-sd (100): stored/dev.cc:510-283 open dev: type=2080388896 archive_device_string="s3" (ObjectStorage) vol=Full-0011 mode=OPEN_READ_WRITE
bareos-sd (100): stored/dev.cc:528-283 call OpenDevice mode=OPEN_READ_WRITE
bareos-sd (100): stored/dev.cc:593-283 open archive: mode=OPEN_READ_WRITE open(ObjectStorage/Full-0011, 00000002, 0640)
bareos-sd (120): backends/dplcompat_device.cc:272-283 CheckRemoteConnection called
bareos-sd (130): backends/crud_storage.cc:221-283 test_connection called
bareos-sd (130): backends/crud_storage.cc:229-283 testconnection returned 0
== Output ==
s3://test-backup-pool/ (bucket):
   Location:  default
   Payer:     BucketOwner
   Ownership: none
   Versioning:none
   Expiration rule: none
   Block Public Access: none
   Policy:    none
   CORS:      none
   ACL:       s3_0001: FULL_CONTROL
============
bareos-sd (100): backends/chunked_device.cc:106-283 New allocated buffer of 262144000 bytes at 7fbc5c5ff010
bareos-sd (120): backends/dplcompat_device.cc:332-283 Reading chunk Full-0011/0000
bareos-sd (130): backends/crud_storage.cc:245-283 stat Full-0011/0000 called
bareos-sd (130): backends/crud_storage.cc:253-283 stat returned 1
== Output ==
============
bareos-sd (110): backends/crud_storage.cc:260-283 stat returned 1
bareos-sd (100): backends/dplcompat_device.cc:339-283 Running ""/usr/lib/bareos/scripts/s3cmd-wrapper.sh" stat "Full-0011" "0000"" returned 1
bareos-sd (100): stored/dev.cc:605-283 open failed: stored/dev.cc:600 Could not open: ObjectStorage/Full-0011
bareos-sd (100): stored/dev.cc:614-283 open dev: disk fd=-1 opened
bareos-sd (100): stored/dev.cc:534-283 preserve=20127762566 fd=-1
bareos-sd (100): stored/dev.cc:453-283 setting minblocksize to 64512, maxblocksize to label_block_size=64512, on device "s3" (ObjectStorage)
bareos-sd (100): stored/block.cc:137-283 created new block of blocksize 64512 (dev->max_block_size)
bareos-sd (100): stored/dev.cc:467-283 created new block of buf_len: 64512 on device "s3" (ObjectStorage)
bareos-sd (100): stored/dev.cc:510-283 open dev: type=2080388896 archive_device_string="s3" (ObjectStorage) vol=Full-0011 mode=OPEN_READ_WRITE
bareos-sd (100): stored/dev.cc:528-283 call OpenDevice mode=OPEN_READ_WRITE
bareos-sd (100): stored/dev.cc:593-283 open archive: mode=OPEN_READ_WRITE open(ObjectStorage/Full-0011, 00000002, 0640)
bareos-sd (120): backends/dplcompat_device.cc:272-283 CheckRemoteConnection called
bareos-sd (130): backends/crud_storage.cc:221-283 test_connection called
bareos-sd (130): backends/crud_storage.cc:229-283 testconnection returned 0
== Output ==
s3://test-backup-pool/ (bucket):
   Location:  default
   Payer:     BucketOwner
   Ownership: none
   Versioning:none
   Expiration rule: none
   Block Public Access: none
   Policy:    none
   CORS:      none
   ACL:       s3_0001: FULL_CONTROL
============
bareos-sd (100): stored/dev.cc:614-283 open dev: disk fd=0 opened
bareos-sd (100): stored/dev.cc:534-283 preserve=20127762346 fd=0
bareos-sd (130): stored/label.cc:520-283 Start CreateVolumeLabel()
bareos-sd (100): stored/dev.cc:879-283 Clear volhdr vol=

Volume Label:
Id                : Bareos 2.0 immortal
VerNo             : 20
VolName           : Full-0011
PrevVolName       :
VolFile           : 0
LabelType         : VOL_LABEL
LabelSize         : 0
PoolName          : Full
MediaType         : s3
PoolType          : Backup
HostName          : bareos
Date label written: 16-Oct-2025 15:50
bareos-sd (130): stored/label.cc:410-283 Wrote label of 194 bytes to "s3" (ObjectStorage)
bareos-sd (130): stored/label.cc:415-283 Call WriteBlockToDev()
bareos-sd (130): stored/label.cc:423-283  Wrote block to device
bareos-sd (100): backends/chunked_device.cc:345-283 Enqueueing chunk 0 of volume Full-0011 (230 bytes)
bareos-sd (100): backends/chunked_device.cc:164-283 Started new IO-thread threadid=0x00007fbc7affd640
bareos-sd (100): backends/chunked_device.cc:164-283 Started new IO-thread threadid=0x00007fbc7b7fe640
bareos-sd (100): backends/chunked_device.cc:164-283 Started new IO-thread threadid=0x00007fbc7a7fc640
bareos-sd (100): backends/chunked_device.cc:164-283 Started new IO-thread threadid=0x00007fbc79ffb640
bareos-sd (100): backends/chunked_device.cc:361-283 Allocated chunk io request of 48 bytes at 7fbc7c046930
bareos-sd (100): backends/chunked_device.cc:1082-283 storage is pending, as there are queued write requests for previous volumes.
bareos-sd (100): backends/chunked_device.cc:417-0 Flushing chunk 0 of volume Full-0011 by thread 0x00007fbc7affd640
bareos-sd (120): backends/dplcompat_device.cc:285-0 Flushing chunk Full-0011/0000
bareos-sd (100): backends/chunked_device.cc:226-0 Creating inflight file /var/lib/bareos/Full-0011@0000%inflight for volume Full-0011, chunk 0
bareos-sd (100): backends/dplcompat_device.cc:290-0 Could not acquire inflight lease for Full-0011/0000
bareos-sd (100): backends/chunked_device.cc:449-0 Enqueueing chunk 0 of volume Full-0011 for retry of upload later
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests
bareos-sd (100): backends/chunked_device.cc:1078-283 volume Full-0011 is pending, as there are queued write requests

I'm not sure what to make of it, except that it still seems like it cannot actually write an object out to s3.

Bruno Friedmann (bruno-at-bareos)

Oct 20, 2025, 4:14:01 AM
to bareos-users
Hi James,

Did you really complete all the steps described in the discussion?
Because you shouldn't be seeing returns like this afterwards:

Running ""/usr/lib/bareos/scripts/s3cmd-wrapper.sh" stat "Full-0011" "0000"" returned 1

Please carefully recheck all the steps, and try to create an object manually. Once this works, doing a label manually in Bareos should also work (volume created and identified).
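A minimal bconsole sketch of that manual label (storage and volume names here are placeholders - use your own; the Full pool is the one from your status output):

```
# in bconsole: label a fresh volume on the S3-backed device by hand
label storage=your-s3-storage pool=Full volume=Full-0012

# the new volume should then show up in the catalog
list volumes pool=Full
```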

Then the backup should run without any glitches.

It's certainly just a question of a detail, but which one ;-)

James Pulver

Oct 22, 2025, 11:48:56 AM
to bareos-users
As far as I can tell, yes, I've completed all the steps:
[root@lnx bareos]# sudo -u bareos ./bareoss3test.sh testconnection

s3://test-backup-pool/ (bucket):
   Location:  default
   Payer:     BucketOwner
   Ownership: none
   Versioning:none
   Expiration rule: none
   Block Public Access: none
   Policy:    none
   CORS:      none
   ACL:       0001: FULL_CONTROL
[root@lnx bareos]# echo "Hello, World!" | sudo -u bareos ./bareoss3test.sh upload test 0000
upload: '<stdin>' -> 's3://test-backup-pool/test/0000' (0 bytes in 0.3 seconds, -1.00 B/s) [1 of 1]
[root@lnx bareos]# sudo -u bareos ./bareoss3test.sh list test
0000 14
[root@lnx bareos]# sudo -u bareos ./bareoss3test.sh stat test 0000
14
[root@lnx bareos]# sudo -u bareos ./bareoss3test.sh download test 0000
Hello, World!
[root@lnx bareos]# sudo -u bareos ./bareoss3test.sh remove test 0000
delete: 's3://test-backup-pool/test/0000'

These all succeeded as shown. A manual label from bconsole fails, however.

Bruno Friedmann (bruno-at-bareos)

Oct 23, 2025, 4:48:21 AM
to bareos-users
Hi James, as I said previously, it seems your bareoss3test.sh has all the needed configuration set up, and something was not carried over into your Bareos configuration; otherwise the label would also work.

James Pulver

Oct 27, 2025, 12:58:00 PM
to bareos-users
I just added 
# Log the expanded command to /tmp
    echo "$s3cmd_prog ${s3cmd_common_args[*]} $*" >> /tmp/run_s3cmd_command.log

to the s3cmd wrapper script.
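Roughly, that line sits in the wrapper's command runner. The sketch below is only illustrative: the function name and the exec line are hypothetical, and only $s3cmd_prog and the s3cmd_common_args array are taken from the actual script.

```
# hypothetical command runner inside the wrapper (sketch only);
# s3cmd_prog and s3cmd_common_args are set earlier in the real script
run_s3cmd() {
    # log the fully expanded command line before executing it
    echo "$s3cmd_prog ${s3cmd_common_args[*]} $*" >> /tmp/run_s3cmd_command.log
    "$s3cmd_prog" "${s3cmd_common_args[@]}" "$@"
}
```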

The log shows that the last command written to it is
/usr/bin/s3cmd --no-progress --config /etc/bareos/s3cmd.cfg info s3://test-backup-pool

I then tested that command by running
sudo -u bareos -s
/usr/bin/s3cmd --no-progress --config /etc/bareos/s3cmd.cfg info s3://test-backup-pool
Which immediately returns:
s3://test-backup-pool/ (bucket):
  Location:  default
  Payer:     BucketOwner
  Ownership: none
  Versioning:none
  Expiration rule: none
  Block Public Access: none
  Policy:    none
  CORS:      none
  ACL:       usr_0001: FULL_CONTROL

So as far as I can tell, my /etc/bareos/s3cmd.cfg is correct, and I can run s3cmd with that config as the bareos user successfully. But the actual backup just hangs after that command - nothing else happens that I can see. I'm not sure what it's waiting to get back, or what else to check.

Bruno Friedmann (bruno-at-bareos)

Oct 28, 2025, 6:46:47 AM
to bareos-users
Hi James,

What you didn't show here is the configuration of your device,

and maybe you have an option line that creates an "invisible error".
For example, with a local MinIO S3 server and the default example config file,

```
  Device Options = "iothreads=4"
                   ",ioslots=2"
                   ",chunksize=262144000"
                   ",program=s3cmd-wrapper.sh"
                   ",s3cfg=/etc/bareos/s3cfg"
                   ",bucket=bareos"
                   ",storage_class=STANDARD_IA"
```

won't work, because MinIO won't accept the storage_class. Whatever tests I do with s3cmd, this error only becomes visible during the label of the volume, with a returned 403.

(this becomes visible if you activate debug level and trace on the storage with)
setdebug level=500 trace=1 timestamp=1 storage=dplcompat 

So the fix is easy: remove storage_class from the device configuration, or find the right value that needs to be there.

With the trace activated you will have detailed information available and can maybe understand what's going wrong in your setup.

My bet is still that you have an option in the bareos-sd device which is not present in the s3cfg and which is failing.

James Pulver

Oct 31, 2025, 10:25:39 AM
to bareos-users
I don't have a storage class - here's the device definition:

Device {
 Name = s3
 MediaType = "s3"
 ArchiveDevice = Object Storage
 DeviceType = dplcompat
 Device Options = "iothreads=4"
                  ",ioslots=2"
                  ",chunksize=262144000"
                  ",program=/usr/lib/bareos/scripts/s3cmd-wrapper.sh"
                  ",s3cfg=/etc/bareos/s3cmd.cfg"
                  ",bucket=test-backup-pool"
 LabelMedia = yes
 RandomAccess = yes
 AutomaticMount = yes
 RemovableMedia = no
 AlwaysOpen = no
}


Andreas Rogge

Nov 4, 2025, 4:12:44 AM
to bareos...@googlegroups.com
Hi James,

sorry for the delay - I had vacation and other things going on.

On 16.10.25 at 22:01, James Pulver wrote:
> bareos-sd (100): backends/chunked_device.cc:1082-283 storage is pending,
> as there are queued write requests for previous volumes.
> bareos-sd (100): backends/chunked_device.cc:417-0 Flushing chunk 0 of
> volume Full-0011 by thread 0x00007fbc7affd640
> bareos-sd (120): backends/dplcompat_device.cc:285-0 Flushing chunk
> Full-0011/0000
> bareos-sd (100): backends/chunked_device.cc:226-0 Creating inflight
> file /var/lib/bareos/Full-0011@0000%inflight for volume Full-0011, chunk 0
> bareos-sd (100): backends/dplcompat_device.cc:290-0 Could not acquire
> inflight lease for Full-0011/0000

Could you check if you have files matching the pattern
/var/lib/bareos/*@*%inflight lying around when the SD is stopped?
If that's the case, please delete these, start the SD and see if things
have improved.
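Something along these lines (a sketch only, assuming the default working directory /var/lib/bareos from your trace and a systemd unit named bareos-sd - adjust for your system):

```
# stop the SD, then look for leftover inflight lease files
systemctl stop bareos-sd
ls -l /var/lib/bareos/*@*%inflight 2>/dev/null

# if any are listed, remove them and start the SD again
rm -fv /var/lib/bareos/*@*%inflight
systemctl start bareos-sd
```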

James Pulver

Nov 5, 2025, 3:46:50 PM
to bareos-users
Thank you so much! That appears to have made it finally work - at least in my toy test environment! I never would have figured that out on my own with the directions I was finding or the logging / errors / whatever.

James Pulver
