I am attempting to configure non-Amazon S3 deep storage for my Druid cluster. I've configured my common.runtime.properties file with the following properties for S3 storage and commented out the local storage properties:
Property | Default | Description | Required
druid.s3.accessKey | | S3 access key. | Must be set.
druid.s3.secretKey | | S3 secret key. | Must be set.
druid.storage.bucket | | Bucket to store in. | Must be set.
druid.storage.baseKey | | Base key prefix to use, i.e. what directory. | Must be set.
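For completeness, my understanding from the docs is that deep storage also has to be switched to S3 explicitly via `druid.storage.type`; a minimal sketch of what I believe the relevant common.runtime.properties entries should look like is below (the bucket name, prefix, and credentials are placeholders; please correct me if I am missing a property):

```properties
# Switch deep storage from local to S3 (my assumption: without this,
# Druid keeps using druid.storage.type=local)
druid.storage.type=s3
druid.storage.bucket=my-druid-bucket      # placeholder bucket name
druid.storage.baseKey=druid/segments      # placeholder key prefix

# Credentials (placeholders)
druid.s3.accessKey=MY_ACCESS_KEY
druid.s3.secretKey=MY_SECRET_KEY
```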
The docs make no mention of a property for specifying the S3 host, so based on searching past posts in this group I also added this property:
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.173"]
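Since I am running 0.10.0 rather than 0.6.x, I am not sure this property is still honored; my understanding is that newer Druid versions load bundled extensions via `druid.extensions.loadList` instead. A sketch of what I believe the 0.10.0-style equivalent would be:

```properties
# Assumption: 0.9+/0.10-style extension loading, which replaced
# druid.extensions.coordinates for bundled extensions
druid.extensions.loadList=["druid-s3-extensions"]
```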
and created a file conf/druid/_common/jets3t.properties that contains the following:
# Uncomment to use s3 compatibility mode on GCE
s3service.s3-endpoint-http-port=443
s3service.disable-dns-buckets=true
s3service.https-only=false
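Reading that file back, I notice it never actually names the non-Amazon endpoint. From the jets3t configuration documentation, the host appears to be set with `s3service.s3-endpoint`, so I suspect the file should look more like the following (the hostname is a placeholder for my storage service):

```properties
# Placeholder hostname for my non-Amazon S3-compatible service
s3service.s3-endpoint=s3.example.internal
s3service.s3-endpoint-http-port=443
s3service.disable-dns-buckets=true
s3service.https-only=false
```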
Despite all of this, after restarting all of the Druid processes, none of my data is stored in S3 when I run my batch indexing job. Druid still appears to be using local deep storage, since I can successfully query the data after loading, and I see no errors during ingestion.
Any suggestions on how to load data into non-Amazon S3 deep storage would be much appreciated. I am running Druid version 0.10.0.