Hi Fangjin,
I stopped all the nodes, dropped the tables in MySQL, cleaned the "consumers" and "druid" directories in ZooKeeper, and finally removed the temp directory ("rm -rf /tmp/*") on every node.
After finishing these steps I restarted all the nodes and started loading the data through indexing, but I still get this exception on the historical node...
--
You received this message because you are subscribed to the Google Groups "Druid Development" group.
To unsubscribe from this group and stop receiving emails from it, send an email to druid-developm...@googlegroups.com.
To post to this group, send email to druid-de...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/druid-development/417d92ea-fb9c-49c7-99ab-33b85dee6374%40googlegroups.com.
2014-04-18 08:50:56,175 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Converting v8[/tmp/persistent/task/index_wikipedia_2014-04-18T08:50:46.657Z/work/wikipedia_2013-08-31T00:00:00.000Z_2013-09-01T00:00:00.000Z_2014-04-18T08:50:46.670Z_0/wikipedia_2013-08-31T00:00:00.000Z_2013-09-01T00:00:00.000Z_2014-04-18T08:50:46.670Z/spill0/v8-tmp] to v9[/tmp/persistent/task/index_wikipedia_2014-04-18T08:50:46.657Z/work/wikipedia_2013-08-31T00:00:00.000Z_2013-09-01T00:00:00.000Z_2014-04-18T08:50:46.670Z_0/wikipedia_2013-08-31T00:00:00.000Z_2013-09-01T00:00:00.000Z_2014-04-18T08:50:46.670Z/spill0]
2014-04-18 08:50:56,177 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[dim_anonymous.drd]
2014-04-18 08:50:56,182 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Dimension[anonymous] is single value, converting...
2014-04-18 08:50:56,203 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[dim_city.drd]
2014-04-18 08:50:56,204 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Dimension[city] is single value, converting...
2014-04-18 08:50:56,204 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[dim_continent.drd]
2014-04-18 08:50:56,204 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Dimension[continent] is single value, converting...
2014-04-18 08:50:56,207 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[dim_country.drd]
2014-04-18 08:50:56,208 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Dimension[country] is single value, converting...
2014-04-18 08:50:56,208 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[dim_language.drd]
2014-04-18 08:50:56,208 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Dimension[language] is single value, converting...
2014-04-18 08:50:56,209 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[dim_namespace.drd]
2014-04-18 08:50:56,209 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Dimension[namespace] is single value, converting...
2014-04-18 08:50:56,209 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[dim_newpage.drd]
2014-04-18 08:50:56,209 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Dimension[newpage] is single value, converting...
2014-04-18 08:50:56,210 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[dim_page.drd]
2014-04-18 08:50:56,213 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Dimension[page] is single value, converting...
2014-04-18 08:50:56,214 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[dim_region.drd]
2014-04-18 08:50:56,214 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Dimension[region] is single value, converting...
2014-04-18 08:50:56,214 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[dim_robot.drd]
2014-04-18 08:50:56,215 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Dimension[robot] is single value, converting...
2014-04-18 08:50:56,215 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[dim_unpatrolled.drd]
2014-04-18 08:50:56,215 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Dimension[unpatrolled] is single value, converting...
2014-04-18 08:50:56,216 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[dim_user.drd]
2014-04-18 08:50:56,216 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Dimension[user] is single value, converting...
2014-04-18 08:50:56,216 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[index.drd]
2014-04-18 08:50:56,219 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[inverted.drd]
2014-04-18 08:50:56,219 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[met_added_LITTLE_ENDIAN.drd]
2014-04-18 08:50:56,226 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[met_count_LITTLE_ENDIAN.drd]
2014-04-18 08:50:56,227 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[met_deleted_LITTLE_ENDIAN.drd]
2014-04-18 08:50:56,227 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[met_delta_LITTLE_ENDIAN.drd]
2014-04-18 08:50:56,228 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[spatial.drd]
2014-04-18 08:50:56,231 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Processing file[time_LITTLE_ENDIAN.drd]
2014-04-18 08:50:56,239 INFO [task-runner-0] io.druid.segment.IndexIO$DefaultIndexIOHandler - Skipped files[[index.drd, inverted.drd, spatial.drd]]
9.623: [Full GC9.623: [Tenured: 26657K->31422K(786432K), 0.1514110 secs] 85856K->31422K(1022400K), [Perm : 37566K->37566K(37568K)], 0.1515960 secs] [Times: user=0.15 sys=0.00, real=0.15 secs]
2014-04-18 08:50:56,606 WARN [task-runner-0] io.druid.indexing.common.index.YeOldePlumberSchool - Failed to merge and upload
java.io.IOException: failure to login
    at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:490)
    at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:452)
    at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:1494)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1395)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
    at io.druid.storage.hdfs.HdfsDataSegmentPusher.push(HdfsDataSegmentPusher.java:82)
    at io.druid.indexing.common.task.IndexTask$2.push(IndexTask.java:339)
    at io.druid.indexing.common.index.YeOldePlumberSchool$1.finishJob(YeOldePlumberSchool.java:156)
    at io.druid.indexing.common.task.IndexTask.generateSegment(IndexTask.java:395)
    at io.druid.indexing.common.task.IndexTask.run(IndexTask.java:153)
    at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:216)
    at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:195)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
Caused by: javax.security.auth.login.LoginException: unable to find LoginModule class: org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule
    at javax.security.auth.login.LoginContext.invoke(LoginContext.java:800)
    at javax.security.auth.login.LoginContext.access$000(LoginContext.java:203)
    at javax.security.auth.login.LoginContext$4.run(LoginContext.java:690)
    at javax.security.auth.login.LoginContext$4.run(LoginContext.java:688)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:687)
    at javax.security.auth.login.LoginContext.login(LoginContext.java:595)
    at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:471)
    ... 16 more
Hi Tao,
druid.storage.type and storageDirectory are used by the nodes that create the segments, so you need to specify them in the runtime.properties of the indexing service rather than the historical nodes. Historical nodes only learn the location of segments from the segment metadata stored in MySQL, so they don't need these properties. It seems your segments are still being created in local storage instead of HDFS. Once you add these properties and reindex the data, you should see the segments being created in HDFS instead of local storage.
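For reference, a minimal sketch of what those properties might look like in the indexing service's runtime.properties. The NameNode host, port, and directory below are placeholders, not values from this thread; substitute your own:

```properties
# Tell segment-creating nodes to push segments to HDFS deep storage
druid.storage.type=hdfs
# Placeholder path; must point at your own HDFS cluster and directory
druid.storage.storageDirectory=hdfs://namenode:9000/druid/segments
```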
Inline.
Tao
...
20T16:23:02.488Z] to overlord[http://<hostname>:<port>/druid/indexer/v1/action]: LockListAction{}
2014-04-20 16:23:25,654 INFO [task-runner-0] io.druid.indexing.common.task.HadoopIndexTask - Setting version to: 2014-04-20T16:23:02.489Z
2014-04-20 16:23:25,963 ERROR [task-runner-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[HadoopIndexTask{id=index_hadoop_wikipedia_2014-04-20T16:23:02.488Z, type=index_hadoop, dataSource=wikipedia}]
java.lang.RuntimeException: java.lang.RuntimeException: class org.apache.hadoop.security.ShellBasedUnixGroupsMapping not org.apache.hadoop.security.GroupMappingServiceProvider
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:899)
    at org.apache.hadoop.security.Groups.<init>(Groups.java:48)
    at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:140)
    at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:205)
    at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:184)
    at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:236)
    at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:466)
    at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:452)
    at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:1494)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1395)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
    at io.druid.storage.hdfs.HdfsDataSegmentPusher.getPathForHadoop(HdfsDataSegmentPusher.java:70)
    at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:178)
    at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:216)
    at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:195)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.RuntimeException: class org.apache.hadoop.security.ShellBasedUnixGroupsMapping not org.apache.hadoop.security.GroupMappingServiceProvider
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:893)
    ... 19 more
2014-04-20 16:23:25,973 INFO [task-runner-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: { "id" : "index_hadoop_wikipedia_2014-04-20T16:23:02.488Z", "status" : "FAILED", "duration" : 12598 }
I also have some other questions.
This thread is about running the doc example "Loading Your Data (part 1)" (http://druid.io/docs/0.6.73/Tutorial:-Loading-Your-Data-Part-1.html).
1. It doesn't mention how to start the realtime node, so I used the command from part 2 to start it:
"java -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Ddruid.realtime.specFile=examples/indexing/wikipedia.spec -classpath lib/*:/home/taoluo/hadoop-lib/*:/home/taoluo/software/hadoop-2.2.0/etc/hadoop:config/realtime io.druid.cli.Main server realtime"
I added hadoop-client.jar and the Hadoop configuration files to the classpath.
Is that right? And must "druid.realtime.specFile" always be given? If I want to add another data source to Druid, should I define another spec file for that data source and restart the realtime node?
2. The one difference from the steps in the doc "Loading Your Data (part 1)" is that I want to use HDFS as deep storage.
In my understanding, the realtime node aggregates the index into "segments" in deep storage for the historical node to read.
So we should be able to see these segments in HDFS. Is that right?
But I got the exception quoted above when I loaded the data. Could it be that the Hadoop configuration is wrong?
Inline.
Tao
...io.druid.segment.loading.SegmentLoadingException: Exception loading segment[wikipedia_2013-08-31T0
Hi, Fangjin:
2014-04-21 02:26:09,242 INFO [task-runner-0] io.druid.indexing.common.task.IndexTask - Task[index_wikipedia_2014-04-21T02:25:58.617Z] interval[2013-08-31T00:00:00.000Z/2013-09-01T00:00:00.000Z] partition[0] took in 5 rows (5 processed, 0 unparseable, 0 thrown away) and output 5 rows
2014-04-21 02:26:09,245 ERROR [task-runner-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[IndexTask{id=index_wikipedia_2014-04-21T02:25:58.617Z, type=index, dataSource=wikipedia}]
java.lang.RuntimeException: java.io.IOException: failure to login
    at com.google.common.base.Throwables.propagate(Throwables.java:160)
    at io.druid.indexing.common.index.YeOldePlumberSchool$1.finishJob(YeOldePlumberSchool.java:165)
    at io.druid.indexing.common.task.IndexTask.generateSegment(IndexTask.java:395)
    at io.druid.indexing.common.task.IndexTask.run(IndexTask.java:153)
    at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:216)
    at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:195)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.IOException: failure to login
    at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:490)
    at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:452)
    at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:1494)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1395)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:238)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
    at io.druid.storage.hdfs.HdfsDataSegmentPusher.push(HdfsDataSegmentPusher.java:77)
    at io.druid.indexing.common.task.IndexTask$2.push(IndexTask.java:339)
    at io.druid.indexing.common.index.YeOldePlumberSchool$1.finishJob(YeOldePlumberSchool.java:156)
    ... 8 more
Caused by: javax.security.auth.login.LoginException: unable to find LoginModule class: org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule
    at javax.security.auth.login.LoginContext.invoke(LoginContext.java:800)
    at javax.security.auth.login.LoginContext.access$000(LoginContext.java:203)
    at javax.security.auth.login.LoginContext$4.run(LoginContext.java:690)
    at javax.security.auth.login.LoginContext$4.run(LoginContext.java:688)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:687)
    at javax.security.auth.login.LoginContext.login(LoginContext.java:595)
    at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:471)
    ... 18 more
2014-04-21 02:26:09,251 INFO [task-runner-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Removing task directory: /tmp/persistent/task/index_wikipedia_2014-04-21T02:25:58.617Z/work
2014-04-21 02:26:09,264 INFO [task-runner-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: { "id" : "index_wikipedia_2014-04-21T02:25:58.617Z", "status" : "FAILED", "duration" : 694 }
Thanks,
Tao
...
2014-04-25 05:52:29,787 ERROR [task-runner-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Uncaught Throwable while running task[IndexTask{id=index_wikipedia_2014-04-25T05:52:18.495Z, type=index, dataSource=wikipedia}]
java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FileSystem
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:363)
    at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
    at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2400)
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2411)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:166)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:351)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)
    at io.druid.storage.hdfs.HdfsDataSegmentPusher.push(HdfsDataSegmentPusher.java:77)
    at io.druid.indexing.common.task.IndexTask$2.push(IndexTask.java:339)
    at io.druid.indexing.common.index.YeOldePlumberSchool$1.finishJob(YeOldePlumberSchool.java:156)
    at io.druid.indexing.common.task.IndexTask.generateSegment(IndexTask.java:395)
    at io.druid.indexing.common.task.IndexTask.run(IndexTask.java:153)
    at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:216)
    at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:195)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
This is very confusing; the class is right there in "druid-hdfs-storage-0.6.52.jar":
jar tf /home/taoluo/.m2/repository/io/druid/extensions/druid-hdfs-storage/0.6.52/druid-hdfs-storage-0.6.52.jar | grep "org/apache/hadoop/fs/FileSystem"
org/apache/hadoop/fs/FileSystem$1.class
org/apache/hadoop/fs/FileSystem$2.class
org/apache/hadoop/fs/FileSystem$3.class
org/apache/hadoop/fs/FileSystem$4.class
org/apache/hadoop/fs/FileSystem$5.class
org/apache/hadoop/fs/FileSystem$Cache$ClientFinalizer.class
org/apache/hadoop/fs/FileSystem$Cache$Key.class
org/apache/hadoop/fs/FileSystem$Cache.class
org/apache/hadoop/fs/FileSystem$Statistics.class
org/apache/hadoop/fs/FileSystem.class
org/apache/hadoop/fs/FileSystemLinkResolver.class
But why can't Druid find it?
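A note on why this can happen (my reading, not a confirmed diagnosis of this setup): `NoClassDefFoundError` means the class was not visible to the particular classloader that tried to resolve it, not that the class is absent from the machine. A jar can contain the class and still be invisible if it is not on the search path of the loader doing the lookup. A minimal, self-contained sketch of that behavior, with no relation to Druid's actual loaders:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class ClassLoaderDemo {
    public static void main(String[] args) throws Exception {
        // A loader with no URLs and a null parent: it delegates only to the
        // bootstrap loader, so it sees core JDK classes but nothing from the
        // application classpath.
        ClassLoader isolated = new URLClassLoader(new URL[0], null);

        // Core classes still resolve via the bootstrap loader.
        System.out.println(Class.forName("java.util.ArrayList", false, isolated).getName());

        // This very class sits on the application classpath, yet the isolated
        // loader cannot see it -- present in a jar, invisible to this loader.
        try {
            Class.forName("ClassLoaderDemo", false, isolated);
        } catch (ClassNotFoundException e) {
            System.out.println("not visible: " + e.getMessage());
        }
    }
}
```

So the question to check would be whether the loader running `HdfsDataSegmentPusher` actually has the hadoop jars (or the bundled extension jar) on its path, not merely whether the class exists somewhere on disk.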
...