druid-0.10.1 does not work

better...@gmail.com

Oct 6, 2017, 7:55:26 AM
to Druid User
Previously I used druid-0.10.0 and it worked normally. Recently I upgraded Druid to 0.10.1 (Hadoop version is 2.7.3), but indexing failed. The index error log is below:
2017-10-06 18:46:25.025 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Job job_1499451117562_0045 running in uber mode : false
2017-10-06 18:46:25.025 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 0% reduce 0%
2017-10-06 18:46:25.025 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Job job_1499451117562_0045 failed with state FAILED due to: Application application_1499451117562_0045 failed 2 times due to AM Container for appattempt_1499451117562_0045_000002 exited with exitCode: 1
For more detailed output, check the application tracking page, then click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1499451117562_0045_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
    at org.apache.hadoop.util.Shell.run(Shell.java:479)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
2017-10-06 18:46:25.025 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Counters: 0
2017-10-06 18:46:25.025 INFO [task-runner-0-priority-0] io.druid.indexer.JobHelper - Deleting path[var/druid/hadoop-tmp/test_fund_trade_records_hadoop_dist_phone/2017-10-06T184613.791+0800_61b2e90d5a954c4788efdab027ccd30e]
2017-10-06 18:46:25.025 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[HadoopIndexTask{id=index_hadoop_test_fund_trade_records_hadoop_dist_phone_2017-10-06T18:46:13.749+08:00, type=index_hadoop, dataSource=test_fund_trade_records_hadoop_dist_phone}]
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
    at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]
    at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:218) ~[druid-indexing-service-0.10.1.jar:0.10.1]
    at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:224) ~[druid-indexing-service-0.10.1.jar:0.10.1]
    at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.10.1.jar:0.10.1]
    at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.10.1.jar:0.10.1]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_77]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_77]
    at
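
(The YARN diagnostics above only show the generic "exit code 1"; the real cause is in the ApplicationMaster container log. One way to pull it, assuming YARN log aggregation is enabled on the cluster:

    yarn logs -applicationId application_1499451117562_0045
)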


Lawrence Huang

Oct 6, 2017, 5:40:35 PM
to Druid User
Maybe related: I managed to fix this issue by adding the hadoop-aws jar to the Hadoop dependencies. I was getting the following error during Hadoop batch ingestion:

java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found

I fixed it by placing hadoop-aws-2.7.3.jar (https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-aws/2.7.3) at hadoop-dependencies/hadoop-client/2.7.3/hadoop-aws-2.7.3.jar.
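
For reference, a rough sketch of those steps, assuming a stock Druid 0.10.1 layout with hadoop-dependencies/hadoop-client/2.7.3 under the Druid install directory (DRUID_HOME below is just a placeholder for your install path):

    # Fetch hadoop-aws 2.7.3 from Maven Central (same artifact as the link above)
    curl -O https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-aws/2.7.3/hadoop-aws-2.7.3.jar
    # Drop it next to the other Hadoop 2.7.3 client jars used by the indexing task
    cp hadoop-aws-2.7.3.jar "$DRUID_HOME"/hadoop-dependencies/hadoop-client/2.7.3/

The jars in that directory are only picked up when the Hadoop index task's hadoopDependencyCoordinates (or druid.indexer.task.defaultHadoopCoordinates) points at org.apache.hadoop:hadoop-client:2.7.3, so it is worth double-checking that setting as well.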