2015-01-03 11:54:07,110 WARN [task-runner-0] io.druid.indexing.common.index.YeOldePlumberSchool - Failed to merge and upload
java.io.IOException: No FileSystem for scheme: hdfs
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2304)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2311)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:90)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2350)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2332)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:369)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    at io.druid.storage.hdfs.HdfsDataSegmentPusher.push(HdfsDataSegmentPusher.java:75)
    at io.druid.indexing.common.task.IndexTask$2.push(IndexTask.java:390)
    at io.druid.indexing.common.index.YeOldePlumberSchool$1.finishJob(YeOldePlumberSchool.java:179)
    at io.druid.indexing.common.task.IndexTask.generateSegment(IndexTask.java:444)
    at io.druid.indexing.common.task.IndexTask.run(IndexTask.java:198)
    at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:218)
    at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:197)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
2015-01-03 11:54:07,114 INFO [task-runner-0] io.druid.indexing.common.index.YeOldePlumberSchool - Deleting Index File[/tmp/persistent/task/index_trafficbase_2015-01-03T11:53:36.676Z/work/trafficbase_2014-12-22T00:00:00.000Z_2014-12-23T00:00:00.000Z_2015-01-03T11:53:36.686Z_0/trafficbase_2014-12-22T00:00:00.000Z_2014-12-23T00:00:00.000Z_2015-01-03T11:53:36.686Z/spill0]
2015-01-03 11:54:07,115 INFO [task-runner-0] io.druid.indexing.common.task.IndexTask - Task[index_trafficbase_2015-01-03T11:53:36.676Z] interval[2014-12-22T00:00:00.000Z/2014-12-23T00:00:00.000Z] partition[0] took in 99,999 rows (99,999 processed, 0 unparseable, 0 thrown away) and output 99,405 rows
2015-01-03 11:54:07,117 ERROR [task-runner-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[IndexTask{id=index_trafficbase_2015-01-03T11:53:36.676Z, type=index, dataSource=trafficbase}]
java.lang.RuntimeException: java.io.IOException: No FileSystem for scheme: hdfs
    at com.google.common.base.Throwables.propagate(Throwables.java:160)
    at io.druid.indexing.common.index.YeOldePlumberSchool$1.finishJob(YeOldePlumberSchool.java:189)
    at io.druid.indexing.common.task.IndexTask.generateSegment(IndexTask.java:444)
    at io.druid.indexing.common.task.IndexTask.run(IndexTask.java:198)
    at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:218)
    at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:197)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: No FileSystem for scheme: hdfs
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2304)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2311)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:90)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2350)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2332)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:369)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    at io.druid.storage.hdfs.HdfsDataSegmentPusher.push(HdfsDataSegmentPusher.java:75)
    at io.druid.indexing.common.task.IndexTask$2.push(IndexTask.java:390)
    at io.druid.indexing.common.index.YeOldePlumberSchool$1.finishJob(YeOldePlumberSchool.java:179)
    ... 8 more
I searched for a solution on the groups and noticed that when starting a node that uses HDFS, I should include the Hadoop configuration files on the classpath. The problem is that since I'm using a remote Hadoop cluster, I don't have the Hadoop configuration files on the local servers; they live on the servers hosting the Hadoop cluster. So I wonder: can I use a remote HDFS as deep storage, or do I have to deploy a local Hadoop instance and use its HDFS?
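
To make the question concrete, the following standalone check is roughly what I have in mind: reach the remote HDFS directly from a Druid host that has no local Hadoop install, by setting the NameNode address in code instead of relying on core-site.xml. The host name, port, and path are made up, and setting fs.hdfs.impl is only a guess at a workaround for "No FileSystem for scheme: hdfs" based on similar reports, so please correct me if this is the wrong direction.

// Assumes hadoop-common and hadoop-hdfs jars are on the classpath.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RemoteHdfsCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Point at the remote NameNode directly instead of reading a local core-site.xml.
    // (Hypothetical host and port.)
    conf.set("fs.defaultFS", "hdfs://remote-namenode.example.com:8020");
    // Guess at a workaround for "No FileSystem for scheme: hdfs": name the hdfs
    // implementation explicitly in case the FileSystem service metadata is not found.
    conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");
    FileSystem fs = FileSystem.get(conf);
    // Hypothetical deep-storage path on the remote cluster.
    System.out.println(fs.exists(new Path("/druid/segments")));
  }
}

If something like this works, would it be enough to put an equivalent minimal core-site.xml (just fs.defaultFS) on the Druid classpath, or does the indexing task really need the full configuration copied from the cluster?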