I have exactly the same issue.
I have turned on verbose monitoring on the S3 bucket I am using for the droplet backend, and it is not being hit by Bareos at all.
When I try the bucket with awscli or with S3FS, it works without any issues with exactly the same credentials, but it doesn't work through the S3 droplet backend.
AWS doesn't report any GET/PUT/HEAD requests, and it doesn't return any 5xx or 4xx errors.
I also tried adding "https://" to the host address in the profile, but then it returns:
17-May-2018 13:36:00.402878 17-May-2018 13:36:00.402918 bareos-sd (850): message.c:1129-3865 DIRECTOR for following msg: bareos-sd: ERROR in droplet_device.c:103 error: src/addrlist.c:638: dpl_addrlist_add: cannot lookup host https: Unknown host
17-May-2018 13:36:00.402979 bareos-storage-lon-sd (10): droplet_device.c:103-3865 ERROR: error: src/profile.c:293: conf_cb_func: error parsing address list
17-May-2018 13:36:00.403006 bareos-sd (850): message.c:858-3865 Enter dispatch_message type=4 msg=bareos-storage-lon-sd: ERROR in droplet_device.c:103 error: src/profile.c:293: conf_cb_func: error parsing address list
17-May-2018 13:36:00.403036 bareos-sd (850): message.c:1129-3865 DIRECTOR for following msg: bareos-sd: ERROR in droplet_device.c:103 error: src/profile.c:293: conf_cb_func: error parsing address list
17-May-2018 13:36:00.403096 bareos-sd (100): droplet_device.c:704-3865 droplet_device.c:703 Failed to create a new context using config profile=/etc/bareos/bareos-sd.d/device/droplet/droplet.profile,bucket=name-of-my-bucket,iothreads=1,ioslots=1,chunksize=100M
I have also tried switching use_https to False, but it didn't change anything; exactly the same mount loop was happening.
This is what setdebug level=999 for the storage daemon reports when the loop happens:
16-May-2018 09:55:49.133920 bareos-sd (100): droplet_device.c:292-0 Flushing chunk /S3DropletTest-Full-2886/0001
16-May-2018 09:55:49.158134 bareos-sd (100): chunked_device.c:413-0 Enqueueing chunk 1 of volume S3DropletTest-Full-2886 for retry of upload later
16-May-2018 09:55:49.166766 bareos-sd (100): chunked_device.c:413-0 Enqueueing chunk 0 of volume S3DropletTest-Full-2886 for retry of upload later
16-May-2018 09:59:15.203428 bareos-sd (100): chunked_device.c:398-0 Flushing chunk 0 of volume S3DropletTest-Full-2906 by thread 0x6100007f86617fa7
16-May-2018 09:59:15.203475 bareos-sd (100): droplet_device.c:292-0 Flushing chunk /S3DropletTest-Full-2906/0000
16-May-2018 09:59:15.231959 bareos-sd (100): chunked_device.c:398-0 Flushing chunk 1 of volume S3DropletTest-Full-2906 by thread 0x3900007f8660ff97
16-May-2018 09:59:15.231995 bareos-sd (100): droplet_device.c:292-0 Flushing chunk /S3DropletTest-Full-2906/0001
16-May-2018 09:59:15.271106 bareos-sd (100): chunked_device.c:413-0 Enqueueing chunk 0 of volume S3DropletTest-Full-2906 for retry of upload later
16-May-2018 09:59:15.306617 bareos-sd (100): chunked_device.c:413-0 Enqueueing chunk 1 of volume S3DropletTest-Full-2906 for retry of upload later
At first I used 5 threads with 5 slots and 3 jobs in parallel, so I thought I had hit the S3 bucket request limits, but after I changed to 1 thread with 1 slot the issue remained exactly the same - that was when I started checking the AWS monitoring for the buckets.
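For completeness, the iothreads/ioslots values come from the Device Options string of the storage daemon's device resource. This is a sketch of my current setup, matching the values visible in the debug output above (names other than the bucket and profile path are placeholders):

```
Device {
  Name = S3_Droplet
  Media Type = S3_Object
  Device Type = droplet
  Device Options = "profile=/etc/bareos/bareos-sd.d/device/droplet/droplet.profile,bucket=name-of-my-bucket,iothreads=1,ioslots=1,chunksize=100M"
  Label Media = yes
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
  Always Open = no
}
```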
I'd love to get this resolved so I can move on.
If needed, I can provide obfuscated configs related to the droplet backend.
Best Regards,
Jan.