Hi
I installed a Druid Realtime node and started the server with the Kafka firehose. Everything runs fine, except that it fails to persist the segments it receives to deep storage.
--------
Our deep storage is HDFS.
Druid Version : 0.6.105
Hadoop Version : hadoop-1.0.2
runtime.properties
--------------------------
druid.extensions.coordinates=["io.druid.extensions:druid-kafka-eight:0.6.105","io.druid.extensions:druid-hdfs-storage:0.6.105"]
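For completeness, these are the deep-storage properties I understand are needed alongside the extension (the namenode host/port and path below are placeholders for our setup, not the literal values):

```
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://namenode:9000/druid/segments
```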
-----
The exception in Logs:
14/06/20 16:42:36 ERROR RealtimePlumber: Failed to persist merged index [realtime_17]:
{class=io.druid.segment.realtime.plumber.RealtimePlumber, exceptionType=class java.io.IOException, exceptionMessage=No FileSystem for scheme: hdfs, interval=2014-06-20T12:00:00.000Z/2014-06-20T13:00:00.000Z}
java.io.IOException: No FileSystem for scheme: hdfs
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2304)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2311)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:90)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2350)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2332)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:369)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at io.druid.storage.hdfs.HdfsDataSegmentPusher.push(HdfsDataSegmentPusher.java:77)
at io.druid.segment.realtime.plumber.RealtimePlumber$4.doRun(RealtimePlumber.java:349)
at io.druid.common.guava.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
I also tried adding the Hadoop client jar to the classpath, but the exception persists, and the node appears to fall back to persisting the segments in a local directory.
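In case the problem is with how I am launching the node: my understanding is that "No FileSystem for scheme: hdfs" usually means the jar carrying core-default.xml (which maps the hdfs scheme to DistributedFileSystem) and/or the Hadoop conf directory is not on the classpath. This is roughly how I would expect the launch command to look; the install paths are placeholders for my environment:

```shell
# hadoop-core-1.0.2.jar bundles core-default.xml, which registers the
# "hdfs" scheme; the conf dir supplies our core-site.xml / hdfs-site.xml.
java -cp "lib/*:config/realtime:/opt/hadoop-1.0.2/hadoop-core-1.0.2.jar:/opt/hadoop-1.0.2/conf" \
  -Ddruid.realtime.specFile=realtime.spec \
  io.druid.cli.Main server realtime
```

Is including hadoop-core and the conf directory like this the right approach, or does the druid-hdfs-storage extension expect the Hadoop dependencies to come from somewhere else?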
Is the druid-hdfs-storage extension for 0.6.105 incompatible with our Hadoop version?
Could you please let me know what is missing here?
Thanks
Narayan