Error Access Denied S3 deepstorage Druid 0.13.0


srg

Jan 9, 2019, 12:41:16 AM
to Druid User
Hi guys,

I had a Druid cluster running 0.12.3 with S3 deep storage, and two days ago I upgraded to 0.13.0 so I could benefit from the coordinator's new automatic compaction functionality.

The problem is that the AWS S3 access is no longer working. I tried configuring the AWS region both as an environment variable and as a JVM property / runtime property for the middleManager, as described in the official documentation (http://druid.io/docs/latest/development/extensions-core/s3.html), but I still get the error

Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied

I don't have S3 storage encryption enabled, nor am I using an S3 proxy.

Can someone help? Please find attached the task log (logging is set to TRACE).

Thank you!

Sergio

Brief excerpt of the error (see the attached file for the full task log):
2019-01-09T05:15:45,127 ERROR [task-runner-0-priority-0] org.apache.druid.indexing.common.task.IndexTask - Encountered exception in BUILD_SEGMENTS.
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.io.IOException: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 922ADAAE8B054BD9; S3 Extended Request ID: HHh4OLBf2F2KnB2+Vn1EVRVi/2qfgahVNjsR0yBmhiXMbEOWKKfh42D7j5d0PjfN0Amasg8NMXA=), S3 Extended Request ID: HHh4OLBf2F2KnB2+Vn1EVRVi/2qfgahVNjsR0yBmhiXMbEOWKKfh42D7j5d0PjfN0Amasg8NMXA=
at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]
at org.apache.druid.indexing.common.task.IndexTask.generateAndPublishSegments(IndexTask.java:1098) ~[druid-indexing-service-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.indexing.common.task.IndexTask.run(IndexTask.java:466) [druid-indexing-service-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner$SingleTaskBackgroundRunnerCallable.call(SingleTaskBackgroundRunner.java:421) [druid-indexing-service-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.indexing.overlord.SingleTaskBackgroundRunner$SingleTaskBackgroundRunnerCallable.call(SingleTaskBackgroundRunner.java:393) [druid-indexing-service-0.13.0-incubating.jar:0.13.0-incubating]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_191]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.io.IOException: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 922ADAAE8B054BD9; S3 Extended Request ID: HHh4OLBf2F2KnB2+Vn1EVRVi/2qfgahVNjsR0yBmhiXMbEOWKKfh42D7j5d0PjfN0Amasg8NMXA=), S3 Extended Request ID: HHh4OLBf2F2KnB2+Vn1EVRVi/2qfgahVNjsR0yBmhiXMbEOWKKfh42D7j5d0PjfN0Amasg8NMXA=
at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299) ~[guava-16.0.1.jar:?]
at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286) ~[guava-16.0.1.jar:?]
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) ~[guava-16.0.1.jar:?]
at org.apache.druid.segment.realtime.appenderator.BatchAppenderatorDriver.pushAndClear(BatchAppenderatorDriver.java:141) ~[druid-server-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.realtime.appenderator.BatchAppenderatorDriver.pushAllAndClear(BatchAppenderatorDriver.java:124) ~[druid-server-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.indexing.common.task.IndexTask.generateAndPublishSegments(IndexTask.java:1060) ~[druid-indexing-service-0.13.0-incubating.jar:0.13.0-incubating]
... 7 more


error-S3-druid-0.13.log

Eric Graham

Jan 9, 2019, 8:01:59 AM
to druid...@googlegroups.com
Hi Sergio,

Can you please send me your conf/druid/_common/common.runtime.properties?

Best regards,
Eric



Eric Graham
Solutions Engineer - Imply





Rommel Garcia

Jan 9, 2019, 9:07:28 AM
to druid...@googlegroups.com
Sergio,

Check the overlord log as well to see if the druid-s3 extension is loading properly and whether it's the right version for Druid 0.13.

Rommel Garcia
Director, Field Engineering
rommel...@imply.io
404.502.9672

srg

Jan 9, 2019, 9:03:02 PM
to Druid User
Thank you! I should have spotted that the error was coming from the overlord process (it's written in the log).
Anyhow, I checked, and the overlord is loading the S3 extension from Druid 0.13, as seen in the log:

2019-01-10T01:15:53,692 INFO [main] org.apache.druid.guice.JsonConfigurator - Loaded class[class org.apache.druid.guice.ExtensionsConfig] from props[druid.extensions.] as [ExtensionsConfig{searchCurrentClassloader=true, directory='/opt/druid/druid-0.13.0/extensions', useExtensionClassloaderFirst=false, hadoopDependenciesDir='/opt/druid/druid-0.13.0/hadoop-dependencies', hadoopContainerDruidClasspath='null', addExtensionsToHadoopContainer=false, loadList=[postgresql-metadata-storage, druid-s3-extensions]}]
2019-01-10T01:15:53,696 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [postgresql-metadata-storage] for class [interface org.apache.druid.cli.CliCommandCreator]
2019-01-10T01:15:53,708 INFO [main] org.apache.druid.initialization.Initialization - added URL[file:/opt/druid/druid-0.13.0/extensions/postgresql-metadata-storage/postgresql-metadata-storage-0.13.0-incubating.jar] for extension[postgresql-metadata-storage]
2019-01-10T01:15:53,709 INFO [main] org.apache.druid.initialization.Initialization - added URL[file:/opt/druid/druid-0.13.0/extensions/postgresql-metadata-storage/postgresql-9.4.1208.jre7.jar] for extension[postgresql-metadata-storage]
2019-01-10T01:15:53,711 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-s3-extensions] for class [interface org.apache.druid.cli.CliCommandCreator]
2019-01-10T01:15:53,711 INFO [main] org.apache.druid.initialization.Initialization - added URL[file:/opt/druid/druid-0.13.0/extensions/druid-s3-extensions/druid-s3-extensions-0.13.0-incubating.jar] for extension[druid-s3-extensions]
2019-01-10T01:15:53,762 DEBUG [main] com.google.inject.internal.BytecodeGen - Loading class org.apache.druid.cli.GuiceRunnable FastClass with sun.misc.Launcher$AppClassLoader@6d5380c2
2019-01-10T01:15:53,771 TRACE [main] org.hibernate.validator.internal.metadata.aggregated.BeanMetaDataImpl - Members of the default group sequence for bean org.apache.druid.guice.ModulesConfig are: [interface javax.validation.groups.Default].
2019-01-10T01:15:53,772 INFO [main] org.apache.druid.guice.JsonConfigurator - Loaded class[class org.apache.druid.guice.ModulesConfig] from props[druid.modules.] as [ModulesConfig{excludeList=[]}]
2019-01-10T01:15:53,814 DEBUG [main] com.google.inject.internal.BytecodeGen - Loading class org.apache.druid.server.emitter.EmitterModule FastClass with sun.misc.Launcher$AppClassLoader@6d5380c2
2019-01-10T01:15:53,882 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [postgresql-metadata-storage] for class [interface org.apache.druid.initialization.DruidModule]
2019-01-10T01:15:53,884 INFO [main] org.apache.druid.initialization.Initialization - Adding implementation [org.apache.druid.metadata.storage.postgresql.PostgreSQLMetadataStorageModule] for class [interface org.apache.druid.initialization.DruidModule] from local file system extension
2019-01-10T01:15:53,885 INFO [main] org.apache.druid.initialization.Initialization - Loading extension [druid-s3-extensions] for class [interface org.apache.druid.initialization.DruidModule]
2019-01-10T01:15:53,886 INFO [main] org.apache.druid.initialization.Initialization - Adding implementation [org.apache.druid.storage.s3.S3StorageDruidModule] for class [interface org.apache.druid.initialization.DruidModule] from local file system extension
2019-01-10T01:15:53,887 INFO [main] org.apache.druid.initialization.Initialization - Adding implementation [org.apache.druid.firehose.s3.S3FirehoseDruidModule] for class [interface org.apache.druid.initialization.DruidModule] from local file system extension

It seems to be loading the right extension... Maybe the mirror I used to download Druid was not good?

srg

Jan 9, 2019, 9:13:22 PM
to Druid User
Hi Eric,

here you go:

#
# Licensed to Metamarkets Group Inc. (Metamarkets) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. Metamarkets licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#

# Extensions specified in the load list will be loaded by Druid
# We are using local fs for deep storage - not recommended for production - use S3, HDFS, or NFS instead
# We are using local derby for the metadata store - not recommended for production - use MySQL or Postgres instead

# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
druid.extensions.loadList=["postgresql-metadata-storage", "druid-s3-extensions"]
druid.extensions.directory=/opt/druid/druid-0.13.0/extensions

# Hadoop dependencies are bundled in the druid installation package
druid.extensions.hadoopDependenciesDir=/opt/druid/druid-0.13.0/hadoop-dependencies

# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true

# Enable Double column storage - ONLY for druid v0.11.0 or later
druid.indexing.doubleStorage=double

#
# Zookeeper
#

druid.zk.service.host=MY_ZK_IP
druid.zk.paths.base=/druid

#
# Metadata storage
#

# For PostgreSQL (make sure to additionally include the Postgres extension):
druid.metadata.storage.type=postgresql
druid.metadata.storage.connector.connectURI=jdbc:postgresql://MY_PSQL_IP:5432/my_druid_database
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=MY_PSQL_PASS

#
# Deep storage
#

# For local disk (only viable in a cluster if this is a network mount):

# For S3:
druid.storage.type=s3
druid.storage.bucket=my-druid-bucket
druid.storage.baseKey=deepstore
druid.storage.storageDirectory=my-druid-bucket/deepstore
druid.s3.accessKey=MY_ACCESS_KEY
druid.s3.secretKey=MY_SECRET_KEY

#
# Indexing service logs
#

# For S3:
druid.indexer.logs.type=s3
druid.indexer.logs.s3Bucket=my-druid-bucket
druid.indexer.logs.s3Prefix=logs

# For local disk (only viable in a cluster if this is a network mount):


#
# Service discovery
#

druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator

#
# Monitoring
#

# # druid.monitoring.monitors=["org.apache.druid.java.util.metrics.JvmMonitor", "org.apache.druid.java.util.metrics.SysMonitor"]
# # druid.emitter=logging
# # druid.emitter.logging.logLevel=info
# # 
#
# JavaScript
#

druid.javascript.enabled=False

#
# SQL
#
# Druid SQL is a built-in SQL layer and an alternative to Druid's native
# JSON-based query language, and is powered by a parser and planner based on
# Apache Calcite. Druid SQL translates SQL into native Druid queries on the
# query broker (the first node you query), which are then passed down to data
# nodes as native Druid queries. Other than the (slight) overhead of
# translating SQL on the broker, there isn't an additional performance penalty
# versus native queries.

druid.sql.enable = True



Thank you for taking the time to have a look at it.

Kind regards,
Sergio

Eric Graham

Jan 10, 2019, 9:40:20 AM
to druid...@googlegroups.com
What is the path to s3? Are you using s3a or s3n? Did you establish an IAM role to access the s3 bucket? 


Eric Graham
Solutions Engineer - Imply


srg

Jan 10, 2019, 6:25:47 PM
to Druid User
The path is:


I'm using the default s3. Anyhow, I thought s3a and s3n were Hadoop bindings for S3, and I'm not using any Hadoop ingestion. Do we also need to specify the s3 type with native batch indexing?

I did establish an IAM role to access the Druid buckets.

thanks a lot!

Jihoon Son

Jan 10, 2019, 7:55:04 PM
to Druid User
Hi Sergio,

I'm pretty sure this is a missing or wrong AWS region issue. Would you please double-check that the region you specified is valid? It looks like it's 'ap-southeast-2' from your task log.
Please note that Druid uses DefaultAwsRegionProviderChain, which is implemented like this:

public DefaultAwsRegionProviderChain() {
  super(new AwsEnvVarOverrideRegionProvider(),
        new AwsSystemPropertyRegionProvider(),
        new AwsProfileRegionProvider(),
        new InstanceMetadataRegionProvider());
}
Please make sure that the region you provided is the first one returned by this chain.

Also, I'm seeing this line.

> 2019-01-09T05:15:18,597 INFO [main] org.apache.druid.cli.CliPeon - * druid.indexer.runner.javaOpts: -Daws.region=ap-southeast-2

Did you intentionally leave only this one option? Usually javaOpts includes other configurations, like memory or timezone settings.

Jihoon

srg

Jan 10, 2019, 11:00:39 PM
to Druid User
Hi Jihoon,

thank you for your help!
I'm confident that 'ap-southeast-2' is a valid region as we have all our S3 buckets in that region. I also checked online on https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions, and that is a valid AWS region. I also checked on https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region and defined the following in the _common/common.runtime.properties

druid.storage.type=s3
druid.storage.bucket=MY_BUCKET
druid.storage.baseKey=MY_BASEKEY
druid.storage.storageDirectory=MY_BUCKET/MY_BASEKEY
druid.s3.accessKey=MY_ACCESS_KEY
druid.s3.secretKey=MY_SECRET_KEY
druid.s3.endpoint.url=s3.ap-southeast-2.amazonaws.com
druid.s3.endpoint.signingRegion=ap-southeast-2

Even when specifying the druid.s3.endpoint.url and druid.s3.endpoint.signingRegion parameters, I get the same access error.

How can I debug the DefaultAwsRegionProviderChain, besides enabling TRACE or DEBUG in log4j2.xml?

I did intentionally leave only this option in druid.indexer.runner.javaOpts, as I specify the other JVM options (memory, timezone, GC) in the jvm.config file. I did that because the documentation says that this parameter is going to be deprecated...
Not sure it is a good practice...

Does Zookeeper need to know the AWS_REGION as well?

cheers,
Sergio

Eric Graham

Jan 11, 2019, 10:27:44 AM
to druid...@googlegroups.com
Hi Sergio,

Can you send me additional details on your cluster? Did you copy the common.runtime.properties to all other server types in the cluster (Data/Query/Master)? Did you restart the processes after adding your S3 changes and extensions?
Can you also add the rest of the information to the common.runtime.properties for:

druid.storage.bucket=MY_BUCKET
druid.storage.baseKey=MY_BASEKEY

You mentioned an earlier version was working. Did you save the old common.runtime.properties before upgrading?

Eric



Eric Graham
Solutions Engineer - Imply


Jihoon Son

Jan 11, 2019, 2:18:02 PM
to Druid User
Eric,
From the attached task log, I believe he copied common.runtime.properties to the middleManager and restarted it. The task log shows all properties and configurations, and they look valid.

Sergio, thank you for providing details. 
I'm surprised that it doesn't work even with the configurations below.

druid.s3.endpoint.url=s3.ap-southeast-2.amazonaws.com
druid.s3.endpoint.signingRegion=ap-southeast-2

I think 'druid.s3.endpoint.url' was not configured at first and you added it later, since I don't see this configuration in the attached task log.
Would you please check that this is properly configured in the latest task log? Or, if you can, it would be great if you could provide the latest task log.

Regarding DefaultAwsRegionProviderChain, I don't think there's a good way to debug it, but I believe it works as expected. It searches for the configured AWS region in the order of env var, system property, AWS profile, and then instance metadata.
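If you want to see which region the chain actually resolves on a given machine, a small standalone snippet like the one below can help. This is only a sketch (the class name is made up, and it assumes the AWS Java SDK v1 is on the classpath); it's not something Druid runs itself.

import com.amazonaws.regions.DefaultAwsRegionProviderChain;

public class PrintResolvedRegion
{
  public static void main(String[] args)
  {
    // Resolves the region in the same order described above:
    // env var (AWS_REGION) -> system property (aws.region) -> AWS profile -> EC2 instance metadata.
    // Throws an SdkClientException if no provider in the chain can supply a region.
    System.out.println(new DefaultAwsRegionProviderChain().getRegion());
  }
}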

> I did intentionally leave only this option in druid.indexer.runner.javaOpts, as I specify the other JVM options (memory, timezone, GC) in the jvm.config file. I did that because the documentation says that this parameter is going to be deprecated... Not sure it is a good practice...

Ah, I think 'druid.indexer.runner.javaOptsArray' is the recommended one instead of javaOpts (http://druid.io/docs/latest/configuration/index.html#middlemanager-configuration). I think the task will use the default JVM options if you don't provide them.
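For reference, a sketch of what that could look like in middleManager/runtime.properties (a JSON array of strings; the memory/timezone values here are only placeholders, adjust them to your setup):

druid.indexer.runner.javaOptsArray=["-server","-Xmx2g","-Duser.timezone=UTC","-Dfile.encoding=UTF-8","-Daws.region=ap-southeast-2"]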

> Does Zookeeper need to know the AWS_REGION as well?

No, Zookeeper doesn't need it.

Thanks,
Jihoon

srg

Jan 13, 2019, 12:30:11 AM
to Druid User
Hi Eric and Jihoon,

please find attached the latest task log and all the conf files for both the previously running Druid version (0.12.3) and the latest (0.13.0). Thank you for helping out!

I also run the java processes with the following command (if that helps in figuring out the error):

/usr/bin/java $(/bin/cat /opt/druid/druid-0.12.3/conf/druid/coordinator/jvm.config | /usr/bin/xargs) -cp /opt/druid/druid-0.12.3/conf/druid/_common:/opt/druid/druid-0.12.3/conf/druid/coordinator:/opt/druid/druid-0.12.3/lib/* io.druid.cli.Main server coordinator

and the following for the latest (0.13.0):

/usr/bin/java $(/bin/cat /opt/druid/druid-0.13.0/conf/druid/coordinator/jvm.config | /usr/bin/xargs) -cp /opt/druid/druid-0.13.0/conf/druid/_common:/opt/druid/druid-0.13.0/conf/druid/coordinator:/opt/druid/druid-0.13.0/lib/* org.apache.druid.cli.Main server coordinator


cheers,
Sergio
log.txt
conf-0.12.3.zip
conf-0.13.0.zip

Jihoon Son

Jan 14, 2019, 5:20:18 PM
to Druid User
Hi Sergio,

It's really strange. I tested both endpoint.signingRegion and the system property (aws.region) with the 0.13.0 version and verified that it works.
I basically added only the configurations below to common.runtime.properties for S3.

# For S3:
druid.storage.type=s3
druid.storage.bucket=jihoon-ap-southeast-2-test
druid.storage.baseKey=segments
druid.s3.accessKey=ACCESS_KEY
druid.s3.secretKey=SECRET_KEY

# For S3:
druid.indexer.logs.type=s3
druid.indexer.logs.s3Bucket=jihoon-ap-southeast-2-test
druid.indexer.logs.s3Prefix=indexing-logs


For endpoint.signingRegion, I added two more configurations to common.runtime.properties.

druid.s3.endpoint.url=s3.ap-southeast-2.amazonaws.com
druid.s3.endpoint.signingRegion=ap-southeast-2


For system property, I removed endpoint.signingRegion from common.runtime.properties and added '-Daws.region=ap-southeast-2' at the end of javaOpts in middleManager/runtime.properties as below.

druid.indexer.runner.javaOpts=-server -XX:+UseG1GC -Xmx6g  -XX:-MaxFDLimit -XX:MaxDirectMemorySize=4G -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager -Daws.region=ap-southeast-2


Would you please double-check that your accessKey and secretKey are valid and that there are no other region configurations on your machine?
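One way to sanity-check the keys outside of Druid is to try the same credentials with the AWS CLI against the same bucket and prefix, e.g. uploading a small test object (a sketch; the bucket, prefix, and region below are just the placeholder values used earlier in this thread):

AWS_ACCESS_KEY_ID=MY_ACCESS_KEY AWS_SECRET_ACCESS_KEY=MY_SECRET_KEY aws s3 cp ./test.txt s3://my-druid-bucket/deepstore/test.txt --region ap-southeast-2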

Jihoon

Sergio Pintaldi

Jan 14, 2019, 7:45:18 PM
to druid...@googlegroups.com
Hi Jihoon,

Thank you so much for testing it! I had to revert back to Druid 0.12.3, and everything is back to normal (I haven't changed my AWS credentials). I double-checked the region options and they look okay.

So I'm not sure what I did wrong... Perhaps some error in parsing the AWS credentials in the AWS jar bundled with Druid 0.13.0 (my secret key has some special characters, including /)?

Thank you for the support! I will soon retest everything from scratch with a new S3 bucket and see if I can make it work.

Sergio


Jihoon Son

Jan 14, 2019, 7:53:45 PM
to Druid User
Sad to hear that you had to revert to 0.12.3. :(


> Perhaps some error in parsing the AWS credentials in the AWS jar bundled with Druid 0.13.0 (my secret key has some special characters, including /)?

Druid does nothing with the secret key. It just passes the secret key to the official AWS client. I think it should work unless there's a bug in the AWS client.

Jihoon

Sergio Pintaldi

Jan 15, 2019, 1:40:56 AM
to druid...@googlegroups.com
Yeah... anyhow, just for clarity, could you tell me which IAM permissions you gave to your role?

thanks!
Sergio



Jihoon Son

Jan 15, 2019, 4:08:03 PM
to Druid User
I tested ingesting data from S3 on my laptop, configured with S3 deep storage.

Jihoon

srg

Jan 17, 2019, 10:33:23 PM
to Druid User
Hey Jihoon,

I did some extensive testing and found out that Druid 0.13.0 needs FULL access to the S3 bucket it is pointing at (all permissions). Specifically:

{
    "Version": "xxxx-xx-xx",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucketByTags",
                "s3:GetLifecycleConfiguration",
                "s3:GetBucketTagging",
                "s3:GetInventoryConfiguration",
                "s3:GetObjectVersionTagging",
                "s3:ListBucketVersions",
                "s3:GetBucketLogging",
                "s3:GetAccelerateConfiguration",
                "s3:GetBucketPolicy",
                "s3:GetObjectVersionTorrent",
                "s3:GetObjectAcl",
                "s3:GetEncryptionConfiguration",
                "s3:GetBucketRequestPayment",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectTagging",
                "s3:GetMetricsConfiguration",
                "s3:GetBucketPublicAccessBlock",
                "s3:GetBucketPolicyStatus",
                "s3:ListBucketMultipartUploads",
                "s3:GetBucketWebsite",
                "s3:GetBucketVersioning",
                "s3:GetBucketAcl",
                "s3:GetBucketNotification",
                "s3:GetReplicationConfiguration",
                "s3:ListMultipartUploadParts",
                "s3:GetObject",
                "s3:GetObjectTorrent",
                "s3:GetBucketCORS",
                "s3:GetAnalyticsConfiguration",
                "s3:GetObjectVersionForReplication",
                "s3:GetBucketLocation",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::MYBUCKET/*",
                "arn:aws:s3:::MYBUCKET"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:PutAnalyticsConfiguration",
                "s3:PutAccelerateConfiguration",
                "s3:DeleteObjectVersion",
                "s3:ReplicateTags",
                "s3:RestoreObject",
                "s3:CreateBucket",
                "s3:ReplicateObject",
                "s3:PutEncryptionConfiguration",
                "s3:DeleteBucketWebsite",
                "s3:AbortMultipartUpload",
                "s3:PutBucketTagging",
                "s3:PutLifecycleConfiguration",
                "s3:PutBucketAcl",
                "s3:PutObjectTagging",
                "s3:DeleteObject",
                "s3:DeleteBucket",
                "s3:PutBucketVersioning",
                "s3:PutObjectAcl",
                "s3:DeleteObjectTagging",
                "s3:PutBucketPublicAccessBlock",
                "s3:PutMetricsConfiguration",
                "s3:PutReplicationConfiguration",
                "s3:PutObjectVersionTagging",
                "s3:DeleteObjectVersionTagging",
                "s3:PutBucketCORS",
                "s3:DeleteBucketPolicy",
                "s3:PutInventoryConfiguration",
                "s3:PutObject",
                "s3:PutBucketNotification",
                "s3:ObjectOwnerOverrideToBucketOwner",
                "s3:PutBucketWebsite",
                "s3:PutBucketRequestPayment",
                "s3:PutBucketLogging",
                "s3:PutObjectVersionAcl",
                "s3:PutBucketPolicy",
                "s3:ReplicateDelete"
            ],
            "Resource": [
                "arn:aws:s3:::MYBUCKET/*",
                "arn:aws:s3:::MYBUCKET"
            ]
        }
    ]
}

Before, with Druid 0.12.3, I had only:

s3:GetObject *
s3:PutObject *

One last question: I notice there are two properties in druid-0.13.0/quickstart/tutorial/conf/druid/_common/common.runtime.properties that are not present in the cluster config file druid-0.13.0/conf/druid/_common/common.runtime.properties:

druid.host=localhost
#
# Security
#
druid.server.hiddenProperties=["druid.s3.accessKey","druid.s3.secretKey","druid.metadata.storage.connector.password"]

Should they be included in the cluster common.runtime.properties as well?

thanks,
Sergio

Jihoon Son

Jan 18, 2019, 5:17:30 PM
to Druid User
Hi Sergio,

thanks for further testing. I tested with the IAM policy below, but it worked... though I had to add some new JVM configurations to the overlord and the middleManager to set aws.region to ap-southeast-2.
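Concretely, that meant appending the system property line below to the JVM config of those two processes (a sketch; the file paths assume the default layout used earlier in this thread, i.e. conf/druid/overlord/jvm.config and conf/druid/middleManager/jvm.config):

-Daws.region=ap-southeast-2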

{
    "Version": "xxxx-xx-xx",
    "Statement": [
        {
            "Sid": "StmtId",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::my_bucket/*",
        }
    ]
}

Jihoon

Jihoon Son

Jan 18, 2019, 5:19:36 PM
to Druid User
For druid.server.hiddenProperties, AFAICT, it is used only by the '/status/properties' API, which is not documented. It would be fine without it unless you're going to use that API.

Jihoon

Adrian S

Feb 4, 2019, 10:38:18 AM
to Druid User
For us, the following extra permissions (in addition to the ones we use for version 0.12.3) were required to get things working:
        {
            "Action": [
                "s3:GetBucketAcl"
            ],
            "Resource": [
                "arn:aws:s3:::MYBUCKET"
            ],
            "Effect": "Allow"
        }

Jihoon Son

Feb 4, 2019, 6:26:00 PM
to Druid User
Hmm, I'm not sure what I missed.

Adrian, could you please post your full permission settings if possible?

Jihoon


Jihoon Son

Feb 4, 2019, 8:43:16 PM
to Druid User
I tested again, and GetBucketAcl must be allowed if 'druid.storage.disableAcl' is false.
I'll add documentation for this.
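In other words, you can either allow that one extra action on the bucket ARN (as in Adrian's snippet above) or turn the ACL behaviour off; a sketch of the second option in common.runtime.properties, assuming you don't need Druid to set object ACLs:

druid.storage.disableAcl=true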

Thanks Adrian and Sergio!
Jihoon

Jihoon Son

Feb 5, 2019, 1:05:29 PM
to Druid User

srg

Feb 7, 2019, 4:34:26 PM
to Druid User
No problem! Thank you!