bareos 19.2.7-2 on CentOS 7: S3/droplet to AWS and Ceph both fail


JAMES BELLINGER

Nov 10, 2020, 10:51:45 AM
to bareos-users

I am testing bareos 19.2.7-2 on CentOS Linux release 7.4.1708, including the bareos-storage-droplet RPM.

We want to evaluate how well we can back up to the cloud, or to a Ceph server of our own.  I have tried both, and both fail.

The credentials are valid, and were tested independently.  (Unless droplet wants them specially encoded??)

Since other people have been able to get this working, I assume there's some pilot error in the configuration.  (Other backup tests worked.)

I would be grateful for any suggestions as to what I'm doing wrong.
jim

To illustrate, this is the AWS configuration:

        /etc/bareos/bareos-sd.d/device/awstest.conf
Device {
  Name = awstest
  Media Type = S3_Object2
  Archive Device = "bareostestuw"       # This doesn't work when I use "bareos-test-uw" either
  # testing:
  Device Options = "profile=/etc/bareos/bareos-sd.d/device/droplet/awstest.profile,bucket=bareos-test-uw,chunksize=100M,iothreads=0,retries=1"
  Device Type = droplet
  Label Media = yes                    # lets Bareos label unlabeled media
  Random Access = yes
  Automatic Mount = yes                # when device opened, read it
  Removable Media = no
  Always Open = no
  Description = "S3 device"
  Maximum Concurrent Jobs = 1
}

        /etc/bareos/bareos-sd.d/device/droplet/awstest.profile
host = bareos-test-uw.s3.amazonaws.com
use_https = true
backend = s3
aws_region = us-east-2
aws_auth_sign_version = 4
access_key = "redacted"
secret_key = "redacted"
pricing_dir = ""


        /etc/bareos/bareos-dir.d/storage/awstest.conf
Storage {
  Name = awstest
  Address = bareos-test-sd.icecube.wisc.edu       # N.B. Use a fully qualified name here (do not use "localhost" here).
  Password = "redacted"
  Device = awstest
  Media Type = S3_Object2
}

Etc.

The job fails in WriteNewVolumeLabel with
10-Nov 09:00 bareos-sd: ERROR in backends/droplet_device.cc:111 error: src/conn.c:389: init_ssl_conn: SSL connect error: 0: 0
10-Nov 09:00 bareos-sd: ERROR in backends/droplet_device.cc:111 error: src/conn.c:392: init_ssl_conn: SSL certificate verification status: 0: ok
10-Nov 09:00 bareos-sd JobId 131: Warning: stored/label.cc:390 Open device "awstest" (bareostestuw) Volume "cfull0010" failed: ERR=stored/dev.cc:747 Could not open: bareostestuw/cfull0010, ERR=Success


============

When I test this on our Ceph system, the result is quite similar, and I have access to the system logs.
The Ceph server logs include "HEAD", suggesting that it is failing when trying to get bucket information.  The logged error is 403 (Forbidden).


2020-11-09 15:15:42.020 7f84290ea700  1 ====== starting new request req=0x7f84290e37f0 =====
2020-11-09 15:15:42.020 7f84290ea700  1 ====== req done req=0x7f84290e37f0 op status=0 http_status=403 latency=0s ======
2020-11-09 15:15:42.020 7f84290ea700  1 civetweb: 0x55d654014000: 10.128.108.133 - - [09/Nov/2020:15:15:41 -0600] "HEAD / HTTP/1.1" 403 231 - -

The Bareos messages say it is failing in both WriteNewVolumeLabel and MountNextWriteVolume.
Warning: stored/label.cc:390 Open device "uwcephS3" (baretest) Volume "cfull0009" failed: ERR=stored/dev.cc:747 Could not open: baretest/cfull0009, ERR=Success
Warning: stored/label.cc:390 Open device "uwcephS3" (baretest) Volume "cfull0009" failed: ERR=stored/dev.cc:747 Could not open: baretest/cfull0009, ERR=Success
Warning: stored/mount.cc:275 Open device "uwcephS3" (baretest) Volume "cfull0009" failed: ERR=stored/dev.cc:747 Could not open: baretest/cfull0009, ERR=Success

Frank Ueberschar

Nov 11, 2020, 10:07:06 AM
to JAMES BELLINGER, bareos-users

You may want to switch on the trace file on the storage daemon; that gives you more debugging output:

i.e. "setdebug trace=1 level=200 storage=awstest"

Additionally, you may want to try the current nightly build, because we added some other improvements to the droplet backend device: https://download.bareos.org/bareos/experimental/nightly/


Best, Frank



JAMES BELLINGER

Nov 11, 2020, 10:59:18 AM
to bareos-users
The central part of the trace follows.  The message about the SSL connection seems ambiguous:
ERROR: error: src/conn.c:392: init_ssl_conn: SSL certificate verification status: 0: ok

I take this to mean that somehow it is not handling the credentials properly.




bareos-sd (150): stored/mount.cc:84-144 Enter mount_next_volume(release=0) dev="awstest" (bareos-test-uw)
bareos-sd (150): stored/mount.cc:97-144 mount_next_vol retry=0
bareos-sd (100): stored/mount.cc:673-144 No swap_dev set
bareos-sd (50): stored/askdir.cc:179-144 >dird CatReq Job=TestOpt.2020-11-11_09.14.54_26 GetVolInfo VolName=cfull0010 write=1
bareos-sd (50): stored/askdir.cc:106-144 <dird 1000 OK VolName=cfull0010 VolJobs=0 VolFiles=0 VolBlocks=0 VolBytes=0 VolMounts=0 VolErrors=0 VolWrites=0 MaxVolBytes=53687091200 VolCapacityBytes=0 VolStatus=Append Slot=0 MaxVolJobs=0 MaxVolFiles=0 InChanger=0 VolReadTime=0 VolWriteTime=0 EndFile=0 EndBlock=0 LabelType=0 MediaId=10 EncryptionKey= MinBlocksize=0 MaxBlocksize=0
bareos-sd (50): stored/askdir.cc:141-144 DoGetVolumeInfo return true slot=0 Volume=cfull0010, VolminBlocksize=0 VolMaxBlocksize=0
bareos-sd (50): stored/askdir.cc:144-144 setting dcr->VolMinBlocksize(0) to vol.VolMinBlocksize(0)
bareos-sd (50): stored/askdir.cc:147-144 setting dcr->VolMaxBlocksize(0) to vol.VolMaxBlocksize(0)
bareos-sd (150): stored/mount.cc:134-144 After find_next_append. Vol=cfull0010 Slot=0
bareos-sd (100): stored/autochanger.cc:136-144 Device "awstest" (bareos-test-uw) is not an autochanger
bareos-sd (150): stored/mount.cc:204-144 autoLoadDev returns 0
bareos-sd (150): stored/mount.cc:239-144 want vol=cfull0010 devvol= dev="awstest" (bareos-test-uw)
bareos-sd (100): stored/dev.cc:646-144 open dev: type=6 dev_name="awstest" (bareos-test-uw) vol=cfull0010 mode=OPEN_READ_WRITE
bareos-sd (100): stored/dev.cc:665-144 call OpenDevice mode=OPEN_READ_WRITE
bareos-sd (190): stored/dev.cc:1097-144 Enter mount
bareos-sd (100): stored/dev.cc:741-144 open disk: mode=OPEN_READ_WRITE open(bareos-test-uw/cfull0010, 00000002, 0640)
bareos-sd (10): backends/droplet_device.cc:111-144 ERROR: error: src/conn.c:389: init_ssl_conn: SSL connect error: 0: 0
bareos-sd (10): backends/droplet_device.cc:111-144 ERROR: error: src/conn.c:392: init_ssl_conn: SSL certificate verification status: 0: ok
bareos-sd (100): backends/droplet_device.cc:344-144 check_path(device="awstest" (bareos-test-uw), bucket=bareos-test-uw, path=/): DPL_FAILURE
bareos-sd (100): backends/droplet_device.cc:364-144 CheckRemote("awstest" (bareos-test-uw)): failed
bareos-sd (100): backends/chunked_device.cc:673-144 setup_chunk failed, as remote device is not available
bareos-sd (100): stored/dev.cc:748-144 open failed: stored/dev.cc:747 Could not open: bareos-test-uw/cfull0010, ERR=Success

bareos-sd (100): stored/dev.cc:757-144 open dev: disk fd=-1 opened
bareos-sd (100): stored/dev.cc:673-144 preserve=33376204060 fd=-1
bareos-sd (150): stored/mount.cc:794-144 Create volume label
bareos-sd (100): stored/dev.cc:590-144 setting minblocksize to 64512, maxblocksize to label_block_size=64512, on device "awstest" (bareos-test-uw)
bareos-sd (150): stored/label.cc:365-144 write_volume_label()
bareos-sd (150): stored/label.cc:382-144 New VolName=cfull0010
bareos-sd (100): stored/dev.cc:646-144 open dev: type=6 dev_name="awstest" (bareos-test-uw) vol=cfull0010 mode=OPEN_READ_WRITE
bareos-sd (100): stored/dev.cc:665-144 call OpenDevice mode=OPEN_READ_WRITE
bareos-sd (190): stored/dev.cc:1097-144 Enter mount
bareos-sd (100): stored/dev.cc:741-144 open disk: mode=OPEN_READ_WRITE open(bareos-test-uw/cfull0010, 00000002, 0640)
bareos-sd (10): backends/droplet_device.cc:111-144 ERROR: error: src/conn.c:389: init_ssl_conn: SSL connect error: 0: 0
bareos-sd (10): backends/droplet_device.cc:111-144 ERROR: error: src/conn.c:392: init_ssl_conn: SSL certificate verification status: 0: ok
bareos-sd (100): backends/droplet_device.cc:344-144 check_path(device="awstest" (bareos-test-uw), bucket=bareos-test-uw, path=/): DPL_FAILURE
bareos-sd (100): backends/droplet_device.cc:364-144 CheckRemote("awstest" (bareos-test-uw)): failed
bareos-sd (100): backends/chunked_device.cc:673-144 setup_chunk failed, as remote device is not available

JAMES BELLINGER

Nov 19, 2020, 4:20:38 PM
to bareos-users
I checked SSL access directly using the MinIO Client on the bareos-sd server.  That works.
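
For anyone wanting to run the same kind of check, something along these lines works with a current mc (the alias name is arbitrary; older mc releases use "mc config host add" instead of "mc alias set"):

$ mc alias set awstest https://s3.amazonaws.com ACCESS_KEY SECRET_KEY
$ mc ls awstest/bareos-test-uw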

I have to assume that I'm doing something wrong, and would appreciate extra eyes on the configuration.

James Bellinger


Dmitry Ponkin

Nov 20, 2020, 1:13:44 AM
to bareos-users
I'm having the same issue with AWS S3 at the moment. No SSL errors though.
HEAD in the logs, too. This is how the droplet backend checks whether the bucket exists and whether a connection can be established.
According to the comments, libdroplet can return unexpected results in some cases, but I don't understand how that is possible, considering that my configuration worked flawlessly for several months and then just "broke".
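
For comparison, that probe can be reproduced outside Bareos, e.g. with the awscli (bucket name is just an example); a HEAD request against a concrete bucket is the closest equivalent:

$ aws s3api head-bucket --bucket my-backup-bucket && echo "bucket reachable"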

Dmitry Ponkin

Nov 20, 2020, 1:53:49 AM
to bareos-users
I used dplsh to test it out, and sure enough, passing / to getattr yields DPL_FAILURE:

bucket_name:
bucket_name:/> getattr /
status: DPL_FAILURE (-1)
bucket_name:/> getattr -r test.file
last-modified=
accept-ranges=bytes
etag=
date=
server=AmazonS3
content-type=
x-amz-request-id=
content-length=
x-amz-id-2=

That explains why even the nightly has the same issue, but it does not explain why it worked before.

Dmitry Ponkin

Nov 20, 2020, 2:03:46 AM
to bareos-users
https://github.com/bareos/bareos/blob/9c59c6460c7ddcdc361c6fb03fa01a2fd2aaa28e/core/src/stored/backends/droplet_device.cc#L385

And here's where the backend shoots itself in the foot. I'm considering putting an empty file in every bucket I use for backups and using its name to check if the bucket is accessible. I need the backend to work ASAP.
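
For illustration, seeding such a marker object would be a one-liner with the awscli (bucket and object names are placeholders; put-object without --body creates an empty object):

$ aws s3api put-object --bucket my-backup-bucket --key marker.bareos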

Frank Ueberschar

Nov 20, 2020, 10:44:40 AM
to bareos...@googlegroups.com

At the beginning of each job, the droplet SD backend probes the S3 host by requesting the attributes of a bucket with the name "/".

Your two reports probably describe two separate issues:

1. It cannot establish a connection with a TLS handshake?

2. Requesting the attributes of the bucket "/" does not work (anymore) with AWS?

To figure out more details, I've started a PR with two minor changes: a more verbose SSL error message, and trying to get the attributes of the bucket "bareos-test/" instead of "/". See https://github.com/bareos/bareos/pull/674/commits for details.

I hope we will have testing binaries at the beginning of next week here: https://download.bareos.org/bareos/experimental/CD/


Thanks, Frank



Dmitry Ponkin

Nov 20, 2020, 11:11:05 AM
to bareos-users
I just patched 19.2.7 to use marker.bareos and put that file into every bucket. It's a subpar solution at best and should not be merged.

I suggest finding another way to check the connection. libdroplet can output the list of accessible buckets; this could be used instead, as it doesn't require any input and thus avoids any issues related to incorrect or unstable behavior.
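
For reference, that listing corresponds to the S3 ListAllMyBuckets call (a plain GET on the service endpoint); with the awscli the rough equivalent is:

$ aws s3api list-buckets --query 'Buckets[].Name' --output text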

Andreas Rogge

Nov 20, 2020, 11:32:21 AM
to bareos...@googlegroups.com
On 20.11.20 at 17:11, Dmitry Ponkin wrote:
> I just patched 19.2.7 to use marker.bareos and put that file into every
> bucket. It's a subpar solution at best and should not be merged.

If you didn't upgrade Bareos or libdroplet and it suddenly broke, I guess it was the other end breaking compatibility. Do you have any idea what change on Amazon's side broke access to "/" in the buckets?

Listing the buckets is not sufficient, because the client might see a bucket but not have (write) access to it.

Best Regards,
Andreas


Dmitry Ponkin

Nov 20, 2020, 11:40:14 AM
to bareos-users
Yes, I didn't update anything. The issue started presenting itself a month ago; after several attempts, the jobs would eventually succeed. Last night was the first time none of them worked.
No idea what might have caused it on Amazon's side. They usually warn about any sort of breaking changes months in advance.

>  Listing the Buckets is not sufficient, because the client might see a bucket, but does not have (write) access to it.  
Both the current (supposedly broken) method and the one proposed in the PR succeed without ever checking write permissions either, don't they?

Either way, sorry for being antagonistic towards the solution in the PR; it just feels like a project of this scale deserves something better thought out. I'll let you know if I come up with anything myself.

andr...@gmail.com

Nov 23, 2020, 7:41:11 AM
to bareos-users
I've experienced this too with AWS S3 and bareos-sd 18.2.5; it stopped working in the same pattern Dmitry described. What I find a bit odd is that it works when HTTPS is disabled in the droplet profile:
use_https = false
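
For what it's worth, the TLS endpoint itself can be checked independently of libdroplet, e.g. (the endpoint here is just an example):

$ openssl s_client -connect s3.us-east-2.amazonaws.com:443 -servername s3.us-east-2.amazonaws.com < /dev/null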

--
Andrei

JAMES BELLINGER

Nov 23, 2020, 10:26:01 AM
to bareos-users
For me it fails whether use_https is true or false.

Frank Ueberschar

Nov 24, 2020, 3:49:28 AM
to bareos...@googlegroups.com

Feel free to check the binaries: https://download.bareos.org/bareos/experimental/CD/PR-674/

Best, Frank



JAMES BELLINGER

Nov 26, 2020, 7:26:16 PM
to bareos-users
I installed the binaries and tried the AWS test (yesterday wasn't good; our test bucket is in AWS us-east-1).


bareos-sd (150): stored/mount.cc:240-218 want vol=cfull0010 devvol= dev="awstest" (bareos-test-uw)
bareos-sd (100): stored/dev.cc:619-218 open dev: type=6 dev_name="awstest" (bareos-test-uw) vol=cfull0010 mode=OPEN_READ_WRITE
bareos-sd (100): stored/dev.cc:638-218 call OpenDevice mode=OPEN_READ_WRITE
bareos-sd (190): stored/dev.cc:1071-218 Enter mount
bareos-sd (100): stored/dev.cc:715-218 open disk: mode=OPEN_READ_WRITE open(bareos-test-uw/cfull0010, 00000002, 0640)
bareos-sd (10): backends/droplet_device.cc:113-218 ERROR: error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
bareos-sd (10): backends/droplet_device.cc:113-218 ERROR: error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 140591459467264: ok
bareos-sd (10): backends/droplet_device.cc:113-218 ERROR: error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
bareos-sd (10): backends/droplet_device.cc:113-218 ERROR: error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 140591459467264: ok
bareos-sd (100): backends/droplet_device.cc:357-218 check_path: path=<bareos-test/> (device="awstest" (bareos-test-uw), bucket=bareos-test-uw): Result DPL_FAILURE

bareos-sd (10): backends/droplet_device.cc:113-218 ERROR: error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
bareos-sd (10): backends/droplet_device.cc:113-218 ERROR: error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 140591459467264: ok
bareos-sd (10): backends/droplet_device.cc:113-218 ERROR: error: ../../../../../core/src/droplet/libdroplet/src/conn.c:392: init_ssl_conn: SSL connect error: 0 (error:00000005:lib(0):func(0):DH lib)
bareos-sd (10): backends/droplet_device.cc:113-218 ERROR: error: ../../../../../core/src/droplet/libdroplet/src/conn.c:395: init_ssl_conn: SSL certificate verification status: 140591459467264: ok
bareos-sd (100): backends/droplet_device.cc:357-218 Retry: check_path: path=<bareos-test/> (device="awstest" (bareos-test-uw), bucket=bareos-test-uw): Result DPL_FAILURE

Our ceph system is having issues right now and I won't be able to test that until tomorrow.
jim

JAMES BELLINGER

Nov 27, 2020, 12:23:37 PM
to bareos-users
Our Ceph server is back.  A section of the bareos-sd.trace and the matching section of the Ceph log follow:

bareos-sd (150): stored/mount.cc:205-224 autoLoadDev returns 0
bareos-sd (150): stored/mount.cc:240-224 want vol=cfull0009 devvol= dev="uwcephS3" (bareos-bucket)
bareos-sd (100): stored/dev.cc:619-224 open dev: type=6 dev_name="uwcephS3" (bareos-bucket) vol=cfull0009 mode=OPEN_READ_WRITE
bareos-sd (100): stored/dev.cc:638-224 call OpenDevice mode=OPEN_READ_WRITE
bareos-sd (190): stored/dev.cc:1071-224 Enter mount
bareos-sd (100): stored/dev.cc:715-224 open disk: mode=OPEN_READ_WRITE open(bareos-bucket/cfull0009, 00000002, 0640)
bareos-sd (100): backends/droplet_device.cc:357-224 check_path: path=<bareos-test/> (device="uwcephS3" (bareos-bucket), bucket=baretest): Result DPL_EPERM
bareos-sd (100): backends/droplet_device.cc:357-224 Retry: check_path: path=<bareos-test/> (device="uwcephS3" (bareos-bucket), bucket=baretest): Result DPL_EPERM
bareos-sd (100): backends/droplet_device.cc:357-224 Retry: check_path: path=<bareos-test/> (device="uwcephS3" (bareos-bucket), bucket=baretest): Result DPL_EPERM
bareos-sd (100): backends/droplet_device.cc:357-224 Retry: check_path: path=<bareos-test/> (device="uwcephS3" (bareos-bucket), bucket=baretest): Result DPL_EPERM
bareos-sd (100): backends/droplet_device.cc:357-224 Retry: check_path: path=<bareos-test/> (device="uwcephS3" (bareos-bucket), bucket=baretest): Result DPL_EPERM
bareos-sd (100): backends/droplet_device.cc:397-224 Cannot reach host: 128.104.8.31:443 (DPL_EPERM)
 bareos-sd (100): backends/chunked_device.cc:655-224 setup_chunk failed, as remote device is not available
bareos-sd (100): stored/dev.cc:722-224 open failed: stored/dev.cc:721 Could not open: bareos-bucket/cfull0009, ERR=Success
bareos-sd (100): stored/dev.cc:731-224 open dev: disk fd=-1 opened


2020-11-27 11:11:22.823 7f84290ea700  1 civetweb: 0x55d654014000: 10.128.108.133 - - [27/Nov/2020:11:11:22 -0600] "GET /?delimiter=/ HTTP/1.1" 403 431 - -
2020-11-27 11:11:27.977 7f84290ea700  1 ====== starting new request req=0x7f84290e37f0 =====
2020-11-27 11:11:27.977 7f84290ea700  1 ====== req done req=0x7f84290e37f0 op status=0 http_status=403 latency=0s ======
2020-11-27 11:11:27.977 7f84290ea700  1 civetweb: 0x55d654014000: 10.128.108.133 - - [27/Nov/2020:11:11:27 -0600] "GET /?delimiter=/ HTTP/1.1" 403 431 - -
2020-11-27 11:11:33.133 7f84290ea700  1 ====== starting new request req=0x7f84290e37f0 =====
2020-11-27 11:11:33.133 7f84290ea700  1 ====== req done req=0x7f84290e37f0 op status=0 http_status=403 latency=0s ======
2020-11-27 11:11:33.133 7f84290ea700  1 civetweb: 0x55d654014000: 10.128.108.133 - - [27/Nov/2020:11:11:33 -0600] "GET /?delimiter=/ HTTP/1.1" 403 431 - -
2020-11-27 11:11:33.177 7f84290ea700  1 ====== starting new request req=0x7f84290e37f0 =====
2020-11-27 11:11:33.177 7f84290ea700  1 ====== req done req=0x7f84290e37f0 op status=0 http_status=403 latency=0s ======
2020-11-27 11:11:33.177 7f84290ea700  1 civetweb: 0x55d654014000: 10.128.108.133 - - [27/Nov/2020:11:11:33 -0600] "HEAD /bareos-test HTTP/1.1" 403 231 - -
2020-11-27 11:11:38.329 7f84290ea700  1 ====== starting new request req=0x7f84290e37f0 =====
2020-11-27 11:11:38.329 7f84290ea700  1 ====== req done req=0x7f84290e37f0 op status=0 http_status=403 latency=0s ======
2020-11-27 11:11:38.329 7f84290ea700  1 civetweb: 0x55d654014000: 10.128.108.133 - - [27/Nov/2020:11:11:38 -0600] "HEAD /bareos-test HTTP/1.1" 403 231 - -

Frank Ueberschar

Nov 30, 2020, 4:06:04 AM
to bareos...@googlegroups.com

Both logs show an HTTP authentication error. From a distance, I would suggest double-checking this line in the droplet config:

aws_auth_sign_version = 2
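
For context, that option selects between the two S3 signature schemes, and which value works depends on the endpoint; in a droplet profile it is one of:

aws_auth_sign_version = 4    # AWS signature version 4 (what AWS S3 expects)
aws_auth_sign_version = 2    # AWS signature version 2 (for S3 implementations that only accept v2)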



JAMES BELLINGER

Nov 30, 2020, 9:08:43 AM
to bareos-users
Progress!  With
aws_auth_sign_version = 4  # must be 4 for AWS S3, must be 2 for CEPH Object
I can write to our Ceph store.

$ mc ls ceph/cfull0009
[2020-11-30 07:34:27 CST]  42KiB 0000

JAMES BELLINGER

Dec 2, 2020, 1:22:14 PM
to bareos-users
And I can write to AWS now, with use_https set to false.

host = s3.amazonaws.com
use_https = false

backend = s3
aws_region = us-east-2
aws_auth_sign_version = 4
access_key = "REDACTED"
secret_key = "REDACTED"
pricing_dir = ""
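
For a quick check from outside Bareos, the volume should show up in the bucket as a "directory" of chunk objects (bucket and volume names here are the ones from the earlier tests), e.g.:

$ aws s3 ls s3://bareos-test-uw/cfull0010/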


