--
You received this message because you are subscribed to the Google Groups "Druid User" group.
To unsubscribe from this group and stop receiving emails from it, send an email to druid-user+unsubscribe@googlegroups.com.
To post to this group, send email to druid...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/druid-user/af0f8fac-4d61-4e17-afca-28edd8c6429e%40googlegroups.com.
For example, this branch adds Hadoop 2.8 support: https://github.com/druid-io/druid/compare/0.11.0...hoesler:feature/hadoop2.8
git clone https://github.com/hoesler/druid.git
cd druid
git checkout 47290406a5fa01200545ab0825e7500dafdcfaba
mvn clean package -DskipTests
The build creates the following files:
distribution/target/druid-0.11.0-bin.tar.gz
distribution/target/mysql-metadata-storage-0.11.0.tar.gz
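The two artifacts can then be deployed by unpacking the distribution and placing the MySQL metadata-storage extension in its extensions/ directory. A minimal sketch; the directory layout is an assumption based on the artifact names above, and stand-in tarballs are fabricated first so the commands run end to end without a real Maven build:

```shell
#!/bin/sh
set -eu
# Fabricate stand-in tarballs matching the build artifact names above,
# so the deployment commands below can be exercised as-is.
mkdir -p distribution/target druid-0.11.0/extensions stage/mysql-metadata-storage
tar -czf distribution/target/druid-0.11.0-bin.tar.gz druid-0.11.0
tar -czf distribution/target/mysql-metadata-storage-0.11.0.tar.gz -C stage mysql-metadata-storage
rm -rf druid-0.11.0 stage

# The actual deployment steps: unpack the distribution, then unpack the
# MySQL extension into its extensions/ directory.
tar -xzf distribution/target/druid-0.11.0-bin.tar.gz
tar -xzf distribution/target/mysql-metadata-storage-0.11.0.tar.gz -C druid-0.11.0/extensions
ls druid-0.11.0/extensions
```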
Use the druid-hdfs-storage extension with an S3 storage directory; this should work the same way as S3 deep storage. Relevant part of _common/common.runtime.properties:
druid.extensions.loadList=["druid-s3-extensions", "mysql-metadata-storage", "druid-hdfs-storage"]
#druid.storage.type=s3
#druid.storage.bucket=${S3_BUCKET}
#druid.storage.baseKey=druid/segments
druid.s3.accessKey=${S3_ACCESS_KEY_ID}
druid.s3.secretKey=${S3_SECRET_ACCESS_KEY}
druid.storage.type=hdfs
druid.storage.storageDirectory=s3a://${S3_BUCKET}/druid/segments
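The ${...} tokens above are shell-style placeholders for the reader to fill in; Druid itself does not expand them. One way to render real values is a heredoc with environment substitution (a sketch; file path and variable values are assumptions):

```shell
#!/bin/sh
set -eu
# Placeholder values; replace with your real bucket and credentials.
export S3_BUCKET=my-druid-bucket
export S3_ACCESS_KEY_ID=AKIAEXAMPLE
export S3_SECRET_ACCESS_KEY=exampleSecret

# Render the properties file; the unquoted heredoc expands ${VAR} tokens.
mkdir -p _common
cat > _common/common.runtime.properties <<EOF
druid.extensions.loadList=["druid-s3-extensions", "mysql-metadata-storage", "druid-hdfs-storage"]
druid.s3.accessKey=${S3_ACCESS_KEY_ID}
druid.s3.secretKey=${S3_SECRET_ACCESS_KEY}
druid.storage.type=hdfs
druid.storage.storageDirectory=s3a://${S3_BUCKET}/druid/segments
EOF
grep storageDirectory _common/common.runtime.properties
```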
Have Hadoop use S3a. Relevant part of _common/core-site.xml:
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.${AWS_REGION}.amazonaws.com</value>
</property>
<property>
  <name>fs.s3.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
</property>
<property>
  <name>fs.s3n.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
</property>
<property>
  <name>fs.s3a.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
</property>
<property>
  <name>fs.s3a.access.key</name>
  <value>${S3_ACCESS_KEY_ID}</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>${S3_SECRET_ACCESS_KEY}</value>
</property>
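The same substitution trick works for core-site.xml, and it is worth checking that the rendered file is well-formed XML before handing it to Hadoop. A sketch under assumed values (the full file also needs the enclosing <configuration> element, which the snippet above omits; only two of the properties are repeated here for brevity):

```shell
#!/bin/sh
set -eu
export AWS_REGION=us-east-1

# Render a minimal core-site.xml; the unquoted heredoc expands ${AWS_REGION}.
mkdir -p _common
cat > _common/core-site.xml <<EOF
<configuration>
  <property>
    <name>fs.s3a.endpoint</name>
    <value>s3.${AWS_REGION}.amazonaws.com</value>
  </property>
  <property>
    <name>fs.s3a.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  </property>
</configuration>
EOF

# Sanity check: parse the result; a non-zero exit means broken XML.
python3 -c "import xml.etree.ElementTree as ET; ET.parse('_common/core-site.xml')" \
  && echo "core-site.xml is well-formed"
```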
Thanks to https://github.com/hoesler