S3 Deep Storage with fileSessionCredentials - AccessDenied


Varaga

Apr 26, 2018, 7:32:30 AM
to Druid User
Hi Everyone,

    I'm trying to integrate S3 for deep storage. We use IAM AssumeRole with temporary STS tokens fetched every hour.
    I have the plumbing in place to write these tokens into the ~/.aws/credentials file on the Mesos workers. The S3 buckets are created, and the bucket policy grants read/write access to the assumed role.
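As a rough sketch of that token-refresh plumbing (illustrative Python, not the poster's actual refresher; the key names are the standard shared-credentials-file keys, everything else is made up):

```python
import configparser
import os
import tempfile

def write_session_credentials(path, access_key, secret_key, session_token,
                              profile="default"):
    """Write temporary STS credentials in the shared-credentials file format
    that a ProfileCredentialsProvider expects."""
    creds = configparser.ConfigParser()
    creds[profile] = {
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
        # the session token is what distinguishes temporary STS
        # credentials from long-lived keys
        "aws_session_token": session_token,
    }
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        creds.write(f)

# Demo against a scratch directory; an hourly refresher would target
# os.path.expanduser("~/.aws/credentials") on each Mesos worker instead.
demo_path = os.path.join(tempfile.mkdtemp(), "credentials")
write_session_credentials(demo_path, "ASIA-EXAMPLE", "secret-example", "token-example")
```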

    The common.runtime.properties file is configured as below.
    
# For S3:
druid.storage.type=s3
druid.storage.bucket=<bucket>
druid.storage.baseKey=segments/sample-data
druid.s3.fileSessionCredentials=<home>/.aws/credentials

#
# Indexing service logs
#
druid.indexer.logs.type=file
druid.indexer.logs.directory=/mnt/sample-data/indexing-logs

       I have an index task whose ioConfig is specified to read from a CSV file, as below.
  
{
  "type" : "index",
  "spec" : {
    "dataSchema" : {
      "dataSource" : "caliper-sample-s3-1field",
      "parser" : {
        "type" : "string",
        "parseSpec" : {
          "format" : "csv",
          "hasHeaderRow": "true",
          "dimensionsSpec" : {
            "dimensions" : [
              "state_id",
              "timestamp"
            ]
          },
          "timestampSpec": {
            "column": "timestamp",
            "format": "iso"
          }
        }
      },
      "metricsSpec" : [],
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "day",
        "queryGranularity" : "none",
        "rollup" : false
      }
    },
    "ioConfig" : {
      "type" : "index",
      "firehose" : {
        "type" : "local",
        "baseDir" : "/var/druid/data/Caliper/",
        "filter": "*"
      },
      "appendToExisting" : false
    },
    "tuningConfig" : {
      "type" : "index",
      "targetPartitionSize" : 5000000,
      "maxRowsInMemory" : 25000,
      "forceExtendableShardSpecs" : true
    }
  }
}

       What I'm not sure about is whether I need a jets3t.properties file. I do have this properties file on the classpath.

       s3service.https-only=true
       s3service.s3-endpoint=s3.amazonaws.com
       s3service.s3-endpoint-https-port=443

       When I submit the task, I get the following exception:
      
.....
2018-04-26T10:15:13,122 INFO [appenderator_merge_0] io.druid.storage.s3.S3DataSegmentPusher - Pushing [/tmp/druid6316416650554449011index.zip] to bucket[<bucket>] and key[segments/sample-data/caliper-sample-s3-1field/2017-10-02T00:00:00.000Z_2017-10-03T00:00:00.000Z/2018-04-26T10:07:55.708Z/0/index.zip].
.....
java.io.IOException: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 1F19F4126C6F6CED; S3 Extended Request ID: Vt6qlAavGXJWsThesmy6KebvrYKXLm59YYpipXyRkDOB4l6JJIvNJs+WqZlnf6LAc7T8ixydEjI=), S3 Extended Request ID: Vt6qlAavGXJWsThesmy6KebvrYKXLm59YYpipXyRkDOB4l6JJIvNJs+WqZlnf6LAc7T8ixydEjI=
        at io.druid.storage.s3.S3DataSegmentPusher.push(S3DataSegmentPusher.java:125) ~[?:?]
        at io.druid.segment.realtime.appenderator.AppenderatorImpl.lambda$mergeAndPush$3(AppenderatorImpl.java:655) ~[druid-server-0.13.0-SNAPSHOT.jar:0.13.0-SNAPSHOT]
        at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:86) ~[java-util-0.13.0-SNAPSHOT.jar:0.13.0-SNAPSHOT]
        at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:114) ~[java-util-0.13.0-SNAPSHOT.jar:0.13.0-SNAPSHOT]
        at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:104) ~[java-util-0.13.0-SNAPSHOT.jar:0.13.0-SNAPSHOT]
        at io.druid.segment.realtime.appenderator.AppenderatorImpl.mergeAndPush(AppenderatorImpl.java:646) ~[druid-server-0.13.0-SNAPSHOT.jar:0.13.0-SNAPSHOT]
        at io.druid.segment.realtime.appenderator.AppenderatorImpl.lambda$push$0(AppenderatorImpl.java:536) ~[druid-server-0.13.0-SNAPSHOT.jar:0.13.0-SNAPSHOT]
        at com.google.common.util.concurrent.Futures$1.apply(Futures.java:713) [guava-16.0.1.jar:?]
        at com.google.common.util.concurrent.Futures$ChainingListenableFuture.run(Futures.java:861) [guava-16.0.1.jar:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 1F19F4126C6F6CED; S3 Extended Request ID: Vt6qlAavGXJWsThesmy6KebvrYKXLm59YYpipXyRkDOB4l6JJIvNJs+WqZlnf6LAc7T8ixydEjI=)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638) ~[aws-java-sdk-bundle-1.11.199.jar:?]
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1303) ~[aws-java-sdk-bundle-1.11.199.jar:?]
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1055) ~[aws-java-sdk-bundle-1.11.199.jar:?]
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743) ~[aws-java-sdk-bundle-1.11.199.jar:?]
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717) ~[aws-java-sdk-bundle-1.11.199.jar:?]
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699) ~[aws-java-sdk-bundle-1.11.199.jar:?]
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667) ~[aws-java-sdk-bundle-1.11.199.jar:?]
        at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649) ~[aws-java-sdk-bundle-1.11.199.jar:?]
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) ~[aws-java-sdk-bundle-1.11.199.jar:?]
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4229) ~[aws-java-sdk-bundle-1.11.199.jar:?]
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4176) ~[aws-java-sdk-bundle-1.11.199.jar:?]
        at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1720) ~[aws-java-sdk-bundle-1.11.199.jar:?]
        at io.druid.storage.s3.S3DataSegmentPusher.uploadFileIfPossible(S3DataSegmentPusher.java:182) ~[?:?]
        at io.druid.storage.s3.S3DataSegmentPusher.lambda$push$0(S3DataSegmentPusher.java:111) ~[?:?]
        at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:86) ~[java-util-0.13.0-SNAPSHOT.jar:0.13.0-SNAPSHOT]
        at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:114) ~[java-util-0.13.0-SNAPSHOT.jar:0.13.0-SNAPSHOT]
        at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:104) ~[java-util-0.13.0-SNAPSHOT.jar:0.13.0-SNAPSHOT]
        at io.druid.storage.s3.S3Utils.retryS3Operation(S3Utils.java:83) ~[?:?]
        at io.druid.storage.s3.S3DataSegmentPusher.push(S3DataSegmentPusher.java:109) ~[?:?]


    I'm not sure why the DefaultAWSCredentialsProviderChain or the ProfileCredentialsProvider is not being picked up.

Can you help here?

Best regards
varaga

Chakravarthy varaga

Apr 26, 2018, 10:26:38 AM
to druid...@googlegroups.com

Any help here guys?

--
You received this message because you are subscribed to the Google Groups "Druid User" group.
To unsubscribe from this group and stop receiving emails from it, send an email to druid-user+unsubscribe@googlegroups.com.
To post to this group, send email to druid...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/druid-user/39097e7e-77d7-40a4-a974-cf0c38a8aedc%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Chakravarthy varaga

Apr 26, 2018, 3:20:08 PM
to druid...@googlegroups.com
Hey Druiders,

        I presume some of you have run into this. Can you point me in a direction here?

Best Regards
Varaga

Varaga

Apr 27, 2018, 6:34:55 AM
to Druid User

It's worth informing you that I am using a 0.13.0-SNAPSHOT distribution built locally (it includes my PR fix for ZooKeeper).

I figured out that this latest build uses the aws-java-sdk implementation. The problem seems to be that the bucket region comes back as unidentified.

It's a bit of a pain to go through the code to find out what property needs to be set or how the code behaves. It would be great if the documentation were updated when new features are added.

Some debug logs worth a look:


2018-04-27T09:18:46,231 INFO [appenderator_merge_0] io.druid.storage.s3.S3DataSegmentPusher - Pushing [/tmp/druid400080604628751070index.zip] to bucket[<bucket>] and key[segments/sample-data/caliper-sample-s3-1field/2017-09-30T00:00:00.000Z_2017-10-01T00:00:00.000Z/2018-04-27T09:10:50.350Z/0/index.zip].
2018-04-27T09:18:46,232 DEBUG [appenderator_merge_0] com.amazonaws.services.s3.internal.Mimetypes - Recognised extension 'zip', mimetype is: 'application/zip'
2018-04-27T09:18:46,232 DEBUG [appenderator_merge_0] com.amazonaws.services.s3.AmazonS3Client - Bucket region cache doesn't have an entry for <<bucket>>. Trying to get bucket region from Amazon S3.

.........
.............
........
2018-04-27T09:18:46,241 DEBUG [appenderator_merge_0] com.amazonaws.thirdparty.apache.http.impl.execchain.MainClientExec - Connection can be kept alive for 60000 MILLISECONDS
2018-04-27T09:18:46,241 DEBUG [appenderator_merge_0] com.amazonaws.thirdparty.apache.http.impl.conn.PoolingHttpClientConnectionManager - Connection [id: 1][route: {s}->https://s3.amazonaws.com:443] can be kept alive for 60.0 seconds
2018-04-27T09:18:46,241 DEBUG [appenderator_merge_0] com.amazonaws.thirdparty.apache.http.impl.conn.PoolingHttpClientConnectionManager - Connection released: [id: 1][route: {s}->https://s3.amazonaws.com:443][total kept alive: 1; route allocated: 1 of 50; total allocated: 1 of 50]
2018-04-27T09:18:46,241 DEBUG [appenderator_merge_0] com.amazonaws.request - Received successful response: 200, AWS Request ID: null
2018-04-27T09:18:46,241 DEBUG [appenderator_merge_0] com.amazonaws.requestId - x-amzn-RequestId: not available
2018-04-27T09:18:46,241 DEBUG [appenderator_merge_0] com.amazonaws.requestId - AWS Request ID: not available
2018-04-27T09:18:46,241 DEBUG [appenderator_merge_0] com.amazonaws.services.s3.AmazonS3Client - Not able to derive region of the hmheng-data-services/druid-segments/dev from the HEAD Bucket requests.
2018-04-27T09:18:46,241 DEBUG [appenderator_merge_0] com.amazonaws.services.s3.AmazonS3Client - Region for <bucket> is null

Jihoon Son

Apr 27, 2018, 8:11:13 PM
to druid...@googlegroups.com
Hi Varaga,

druid.s3.fileSessionCredentials=<home>/.aws/credentials is not a valid configuration. If you have the 'credentials' file under /your/home/.aws/, Druid should be aware of it automatically.
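The default provider chain Jihoon refers to tries credential sources in a fixed order. A simplified sketch of that order (illustrative Python, not the SDK's actual code):

```python
def resolve_credentials_source(env, credentials_file_exists, has_instance_profile):
    """Simplified sketch of the AWS SDK's DefaultAWSCredentialsProviderChain:
    environment variables win, then the shared credentials file under the
    *process user's* home, then the EC2/ECS instance profile."""
    if env.get("AWS_ACCESS_KEY_ID") and env.get("AWS_SECRET_ACCESS_KEY"):
        return "environment"
    if credentials_file_exists:
        return "profile-file"
    if has_instance_profile:
        return "instance-profile"
    raise RuntimeError("no credentials found in any provider")
```

Note that "the credentials file" here means the file under the home directory of the user running the JVM, which matters later in this thread.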

Would you check that the region is set properly?
Jihoon


On Friday, April 27, 2018 at 3:34 AM, Varaga <chakrav...@gmail.com> wrote:

Chakravarthy varaga

Apr 28, 2018, 2:59:09 AM
to druid...@googlegroups.com
Thanks Jihoon

    The S3 credentials file is picked up with that configuration, and you can see that in the attached logs. The region is also set in the config file under the ~/.aws/ directory; however, that config file may not have been picked up.
    Also, it was the InstanceProfileProvider that was handling the credentials.

    The process runs as root in an in-house Ubuntu base image that sets the root user's home to a location on a Mesos mount.

Are you suggesting running the process as its own user, with the STS tokens fetched as that user, without specifying the fileSessionCredentials property?


Chakravarthy varaga

Apr 30, 2018, 6:26:01 AM
to druid...@googlegroups.com
Hi Jihoon,

     druid.s3.fileSessionCredentials=~/.aws/credentials
     was what was set.

     I tried setting the region through the environment (an exported variable) as well, but it didn't work. With the aws-cli assume-role command, the config file is fetched and written as ~/.aws/config.
     This should be auto-detected by the AWS client; however, I see that it is not!

     How else do you recommend setting the region? Also, I'm not sure that this is even the issue!
   

2018-04-30T10:10:19,008 DEBUG [appenderator_merge_0] com.amazonaws.thirdparty.apache.http.headers - http-outgoing-4 << Transfer-Encoding: chunked
2018-04-30T10:10:19,008 DEBUG [appenderator_merge_0] com.amazonaws.thirdparty.apache.http.headers - http-outgoing-4 << Server: AmazonS3
2018-04-30T10:10:19,008 DEBUG [appenderator_merge_0] com.amazonaws.thirdparty.apache.http.impl.execchain.MainClientExec - Connection can be kept alive for 60000 MILLISECONDS
2018-04-30T10:10:19,008 DEBUG [appenderator_merge_0] com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser - Parsing XML response document with handler: class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$AccessControlListHandler
2018-04-30T10:10:19,008 DEBUG [appenderator_merge_0] com.amazonaws.thirdparty.apache.http.wire - http-outgoing-4 << "23c[\r][\n]"
2018-04-30T10:10:19,008 DEBUG [appenderator_merge_0] com.amazonaws.thirdparty.apache.http.wire - http-outgoing-4 << "<?xml version="1.0" encoding="UTF-8"?>[\n]"
2018-04-30T10:10:19,008 DEBUG [appenderator_merge_0] com.amazonaws.thirdparty.apache.http.wire - http-outgoing-4 << "<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>55a2a99348b42f91bd02796243cb3c4be7fc15c422defec115804ab3c93608e8</ID><DisplayName>bedrock-nonprod-aws</DisplayName></Owner><AccessControlList><Grant><Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser"><ID>55a2a99348b42f91bd02796243cb3c4be7fc15c422defec115804ab3c93608e8</ID><DisplayName>bedrock-nonprod-aws</DisplayName></Grantee><Permission>FULL_CONTROL</Permission></Grant></AccessControlList></AccessControlPolicy>[\r][\n]"
2018-04-30T10:10:19,008 DEBUG [appenderator_merge_0] com.amazonaws.thirdparty.apache.http.wire - http-outgoing-4 << "0[\r][\n]"
2018-04-30T10:10:19,008 DEBUG [appenderator_merge_0] com.amazonaws.thirdparty.apache.http.wire - http-outgoing-4 << "[\r][\n]"
2018-04-30T10:10:19,009 DEBUG [appenderator_merge_0] com.amazonaws.thirdparty.apache.http.impl.conn.PoolingHttpClientConnectionManager - Connection [id: 4][route: {s}->https://s3.amazonaws.com:443] can be kept alive for 60.0 seconds
2018-04-30T10:10:19,009 DEBUG [appenderator_merge_0] com.amazonaws.thirdparty.apache.http.impl.conn.PoolingHttpClientConnectionManager - Connection released: [id: 4][route: {s}->https://s3.amazonaws.com:443][total kept alive: 1; route allocated: 1 of 50; total allocated: 1 of 50]
2018-04-30T10:10:19,009 DEBUG [appenderator_merge_0] com.amazonaws.request - Received successful response: 200, AWS Request ID: F0245F222F551DF8
2018-04-30T10:10:19,009 DEBUG [appenderator_merge_0] com.amazonaws.requestId - x-amzn-RequestId: not available
2018-04-30T10:10:19,009 DEBUG [appenderator_merge_0] com.amazonaws.requestId - AWS Request ID: F0245F222F551DF8
2018-04-30T10:10:19,009 INFO [appenderator_merge_0] io.druid.storage.s3.S3DataSegmentPusher - Pushing [/tmp/druid3309108450302028265index.zip] to bucket[<Bucket Name Masked>] and key[segments/sample-data/caliper-sample-s3-1field/2017-09-30T00:00:00.000Z_2017-10-01T00:00:00.000Z/2018-04-30T10:01:57.504Z/0/index.zip].
2018-04-30T10:10:19,009 DEBUG [appenderator_merge_0] com.amazonaws.services.s3.internal.Mimetypes - Recognised extension 'zip', mimetype is: 'application/zip'
2018-04-30T10:10:19,009 DEBUG [appenderator_merge_0] com.amazonaws.services.s3.AmazonS3Client - Bucket region cache doesn't have an entry for <Bucket Name Masked>. Trying to get bucket region from Amazon S3.

.............
2018-04-30T10:10:19,020 DEBUG [appenderator_merge_0] com.amazonaws.services.s3.AmazonS3Client - Not able to derive region of the <Bucket Name Masked> from the HEAD Bucket requests.
.............

2018-04-30T10:10:19,020 DEBUG [appenderator_merge_0] com.amazonaws.services.s3.AmazonS3Client - Region for <Bucket Name Masked> is null
2018-04-30T10:10:19,020 DEBUG [appenderator_merge_0] com.amazonaws.request - Sending Request: PUT https://s3.amazonaws.com <Bucket Name Masked>/segments/sample-data/caliper-sample-s3-1field/2017-09-30T00%3A00%3A00.000Z_2017-10-01T00%3A00%3A00.000Z/2018-04-30T10%3A01%3A57.504Z/0/index.zip Headers: (x-amz-grant-full-control: id="", id="<Masked>", User-Agent: aws-sdk-java/1.11.199 Linux/4.9.76-3.78.amzn1.x86_64 Java_HotSpot(TM)_64-Bit_Server_VM/25.161-b12 java/1.8.0_161, amz-sdk-invocation-id: 2048b4ca-c713-1757-1fd2-49ef46e3a3a5, Content-Length: 37654, Content-MD5: 28eKWEVL6KllzpPX/Tkg5A==, Content-Type: application/zip, )
2018-04-30T10:10:19,020 DEBUG [appenderator_merge_0] com.amazonaws.auth.AWS4Signer - AWS4 Canonical Request: '"PUT
<Bucket Name Masked>segments/sample-data/caliper-sample-s3-1field/2017-09-30T00%3A00%3A00.000Z_2017-10-01T00%3A00%3A00.000Z/2018-04-30T10%3A01%3A57.504Z/0/index.zip

amz-sdk-invocation-id:2048b4ca-c713-1757-1fd2-49ef46e3a3a5
amz-sdk-retry:0/0/500
content-length:37654
content-md5:28eKWEVL6KllzpPX/Tkg5A==
content-type:application/zip
host:s3.amazonaws.com
user-agent:aws-sdk-java/1.11.199 Linux/4.9.76-3.78.amzn1.x86_64 Java_HotSpot(TM)_64-Bit_Server_VM/25.161-b12 java/1.8.0_161
x-amz-content-sha256:UNSIGNED-PAYLOAD
x-amz-date:20180430T101019Z
x-amz-grant-full-control:id="<Masked>", id="<Masked>"
x-amz-security-token:FQoDYXdzEOv/....................................<Masked>

amz-sdk-invocation-id;amz-sdk-retry;content-length;content-md5;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-grant-full-control;x-amz-security-token
UNSIGNED-PAYLOAD"
2018-04-30T10:10:19,020 DEBUG [appenderator_merge_0] com.amazonaws.auth.AWS4Signer - AWS4 String to Sign: '"AWS4-HMAC-SHA256
20180430T101019Z
20180430/us-east-1/s3/aws4_request

    




Chakravarthy varaga

Apr 30, 2018, 9:08:00 AM
to druid...@googlegroups.com
My common.runtime.properties file:

druid.storage.type=s3
druid.storage.bucket=<Bucket Masked>
druid.storage.baseKey=segments/sample-data
druid.s3.fileSessionCredentials=~/.aws/credentials
druid.s3.endpoint.url=s3.amazonaws.com
druid.s3.endpoint.serviceName=s3
druid.s3.endpoint.signingRegion=us-east-1

Jihoon Son

Apr 30, 2018, 9:04:31 PM
to druid...@googlegroups.com
Hi Varaga,

Which version of Druid are you using?

I also set my client region in the /my/home/.aws/config file. Or, you should be able to set the region with 'export AWS_REGION=us-east-1'.
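The region lookup Jihoon describes can be sketched as follows (illustrative only, not the SDK's real resolution logic, which consults more sources):

```python
import configparser

def resolve_region(env, config_path):
    """Simplified sketch of client region resolution: the AWS_REGION
    environment variable wins, otherwise the 'region' key of the [default]
    profile in ~/.aws/config, otherwise None -- at which point the SDK is
    left probing the bucket for its region, as in the logs above."""
    if env.get("AWS_REGION"):
        return env["AWS_REGION"]
    cp = configparser.ConfigParser()
    cp.read(config_path)  # silently yields an empty parser if the file is absent
    return cp.get("default", "region", fallback=None)
```

If both sources resolve against the wrong home directory (as turned out to be the case in this thread), the function's None branch is what you see as "Region for <bucket> is null".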

Jihoon

On Monday, April 30, 2018 at 6:08 AM, Chakravarthy varaga <chakrav...@gmail.com> wrote:



Chakravarthy varaga

May 1, 2018, 4:05:11 AM
to druid...@googlegroups.com
Hi Jihoon

     I have sorted this out. I had to set the AWS_CREDENTIALS_PROPERTIES_FILE environment variable to the credentials file.

     The problem was that ~ (home) was the Mesos slave directory where the token was fetched, while the credentials provider in the Java process was resolving 'user.home' for the tokens, and that resolved to the process user's home (root inside the container).

     Thanks for your prompt responses. I guess some documentation around this would have been great. I'm using the 0.13.0-SNAPSHOT version.
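The mismatch can be pictured as two different expansions of the same '~' (paths below are illustrative, not the poster's exact layout):

```python
def expand_home(path, home):
    """Expand a leading '~' against an explicit home directory, mimicking
    how '~/.aws/credentials' resolves differently per process."""
    return home + path[1:] if path.startswith("~") else path

# Where the token refresher wrote the credentials (Mesos agent sandbox)...
agent_home = "/mnt/mesos/sandbox"
# ...versus user.home of the Druid JVM running as root inside the container:
process_home = "/root"

# The JVM resolves the configured path against *its* home...
assert expand_home("~/.aws/credentials", process_home) == "/root/.aws/credentials"
# ...which is not where the tokens actually landed:
assert expand_home("~/.aws/credentials", agent_home) == "/mnt/mesos/sandbox/.aws/credentials"
```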

Best Regards
Varaga

Jihoon Son

May 1, 2018, 1:42:17 PM
to druid...@googlegroups.com
Glad to hear that you solved the issue!

Jihoon

On Tuesday, May 1, 2018 at 1:05 AM, Chakravarthy varaga <chakrav...@gmail.com> wrote: