Failing ingestion jobs with java.lang.OutOfMemoryError: PermGen space


Saurabh Tyagi

Feb 13, 2018, 10:19:20 AM
to Druid User
Hi,

My machine has around 40 GB of free space and 8 cores.

I am trying to run Druid with deep storage = HDFS; apart from that, everything seems to be the default configuration.


This was the error I was receiving before adding
"mapreduce.reduce.java.opts" : "-Xmx5g -XX:PermSize=256M -XX:MaxPermSize=5g",
 "mapreduce.map.java.opts" : "-Xmx5g -XX:PermSize=256M -XX:MaxPermSize=5g"

into my wikiticker-index.json:


INFO [task-runner-0-priority-0] org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://oser402528.wal-mart.com:8188/ws/v1/timeline/
Exception in thread "main" java.lang.OutOfMemoryError: PermGen space
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
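
(For context: in a Druid Hadoop index task spec, Hadoop job properties like these go under the jobProperties block of the tuningConfig. Roughly, assuming the standard wikiticker quickstart layout:

  "tuningConfig" : {
    "type" : "hadoop",
    "jobProperties" : {
      "mapreduce.map.java.opts" : "-Xmx5g -XX:PermSize=256M -XX:MaxPermSize=5g",
      "mapreduce.reduce.java.opts" : "-Xmx5g -XX:PermSize=256M -XX:MaxPermSize=5g"
    }
  }

Note that these settings only apply to the map and reduce containers that YARN launches, not to the Druid task process that submits the job.)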



After adding the PermSize settings, I was getting:

2018-02-13T12:34:43,350 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2018-02-13T12:34:45,363 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.JobSubmitter - Cleaning up the staging area /user/styagi/.staging/job_1516231012101_60365
Exception in thread "main" java.lang.OutOfMemoryError: PermGen space




I have experimented with a lot of configuration changes, but nothing works. This problem is proving very tough to solve.


Thanks,
Saurabh


Saurabh Tyagi

Feb 13, 2018, 10:20:45 AM
to Druid User
I have been desperately searching for an answer for 8-9 hours without any results. Any help will be appreciated.

Gian Merlino

Feb 13, 2018, 11:06:22 AM
to druid...@googlegroups.com
Hi Saurabh,

I guess from this error that you are using an older version of Java and an older version of Druid, since PermGen doesn't exist in Java 8, and newer versions of Druid require Java 8. Have you tried using the latest Druid along with a newer Java?
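
(Side note: in Java 8 the permanent generation was removed entirely in favor of Metaspace, so -XX:PermSize / -XX:MaxPermSize are no longer honored; the rough Java 8 counterpart, if class metadata ever becomes a limit, would be something like -XX:MaxMetaspaceSize=256m.)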

Gian


Saurabh Tyagi

Feb 13, 2018, 12:49:42 PM
to Druid User
Hi Gian,

I am aware that Java 8 doesn't have the PermGen problem, but I am working in an environment where Java 7 along with Druid 0.9 is the only choice.

Now it is failing with this error:

2018-02-13T17:44:58,359 INFO [task-runner-0-priority-0] io.druid.indexing.common.task.HadoopIndexTask - Starting a hadoop determine configuration job...
2018-02-13T17:44:58,413 INFO [task-runner-0-priority-0] io.druid.indexer.path.StaticPathSpec - Adding paths[quickstart/wikiticker-2015-09-12-sampled.json]
2018-02-13T17:44:59,432 INFO [task-runner-0-priority-0] io.druid.indexer.path.StaticPathSpec - Adding paths[quickstart/wikiticker-2015-09-12-sampled.json]
2018-02-13T17:44:59,804 INFO [task-runner-0-priority-0] org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://oser402528.wal-mart.com:8188/ws/v1/timeline/
2018-02-13T17:45:00,082 INFO [task-runner-0-priority-0] org.apache.hadoop.hdfs.DFSClient - Created HDFS_DELEGATION_TOKEN token 5893277 for svcrcs on ha-hdfs:prod16ha
2018-02-13T17:45:00,109 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.security.TokenCache - Got dt for hdfs://prod16ha; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:prod16ha, Ident: (HDFS_DELEGATION_TOKEN token 5893277 for svcrcs)
2018-02-13T17:45:00,279 WARN [task-runner-0-priority-0] org.apache.hadoop.mapreduce.JobResourceUploader - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2018-02-13T17:45:00,287 WARN [task-runner-0-priority-0] org.apache.hadoop.mapreduce.JobResourceUploader - No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
2018-02-13T17:45:00,909 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2018-02-13T17:45:02,006 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.JobSubmitter - Cleaning up the staging area /user/svcrcs/.staging/job_1517538669309_125656
2018-02-13T17:45:43,783 WARN [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn - Client session timed out, have not heard from server in 38953ms for sessionid 0x3611dd1de1a0198
2018-02-13T17:45:44,036 INFO [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn - Client session timed out, have not heard from server in 38953ms for sessionid 0x3611dd1de1a0198, closing socket connection and attempting reconnect
Exception in thread "main" java.lang.OutOfMemoryError: PermGen space
2018-02-13T17:45:44,638 INFO [main-EventThread] org.apache.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
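
(Worth noting: the "Exception in thread "main"" above is thrown inside the indexing task's own JVM while it is submitting the Hadoop job, so the mapreduce.*.java.opts in the spec only raise PermGen for the remote map/reduce containers, not for this process. On Java 7, the PermGen of the task JVM itself comes from the peon's Java options, e.g. in the middleManager's runtime.properties something roughly like:

  druid.indexer.runner.javaOpts=-server -Xmx2g -XX:MaxPermSize=256m -Duser.timezone=UTC -Dfile.encoding=UTF-8

This is only a sketch; the exact flags depend on how the task is launched.)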





Saurabh Tyagi

Feb 13, 2018, 12:52:02 PM
to Druid User
Latest logs:

2018-02-13T17:50:56,835 INFO [task-runner-0-priority-0] io.druid.indexing.common.task.HadoopIndexTask - Starting a hadoop determine configuration job...
2018-02-13T17:50:56,885 INFO [task-runner-0-priority-0] io.druid.indexer.path.StaticPathSpec - Adding paths[quickstart/wikiticker-2015-09-12-sampled.json]
2018-02-13T17:50:58,005 INFO [Announcer-0] io.druid.curator.announcement.Announcer - Node[/u/users/svcrcs/item_performance/setup/druid/prod/announcements/oser402532.wal-mart.com:8100] dropped, reinstating.
2018-02-13T17:50:58,006 INFO [Announcer-0] io.druid.curator.announcement.Announcer - Node[/u/users/svcrcs/item_performance/setup/druid/prod/listeners/lookups/__default/oser402532.wal-mart.com:8100] dropped, reinstating.
2018-02-13T17:50:58,010 INFO [task-runner-0-priority-0] io.druid.indexer.path.StaticPathSpec - Adding paths[quickstart/wikiticker-2015-09-12-sampled.json]
2018-02-13T17:50:58,400 INFO [task-runner-0-priority-0] org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://oser402528.wal-mart.com:8188/ws/v1/timeline/
2018-02-13T17:50:58,670 INFO [task-runner-0-priority-0] org.apache.hadoop.hdfs.DFSClient - Created HDFS_DELEGATION_TOKEN token 5893328 for svcrcs on ha-hdfs:prod16ha
2018-02-13T17:50:58,693 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.security.TokenCache - Got dt for hdfs://prod16ha; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:prod16ha, Ident: (HDFS_DELEGATION_TOKEN token 5893328 for svcrcs)
2018-02-13T17:50:58,770 WARN [task-runner-0-priority-0] org.apache.hadoop.mapreduce.JobResourceUploader - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2018-02-13T17:50:58,780 WARN [task-runner-0-priority-0] org.apache.hadoop.mapreduce.JobResourceUploader - No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
2018-02-13T17:50:59,758 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2018-02-13T17:51:01,072 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.JobSubmitter - Cleaning up the staging area /user/svcrcs/.staging/job_1517538669309_125690
Exception in thread "main" java.lang.OutOfMemoryError: PermGen space


Saurabh Tyagi

Feb 14, 2018, 2:02:58 AM
to Druid User
After making some configuration changes, I received:


2018-02-14T06:53:30,325 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
11.781: [Full GC 499M->44M(148M), 0.5237640 secs]
   [Eden: 430.0M(563.0M)->0.0B(85.0M) Survivors: 17.0M->0.0B Heap: 499.6M(2048.0M)->44.2M(148.0M)]
 [Times: user=0.53 sys=0.13, real=0.52 secs] 
12.305: [Full GC 44M->40M(134M), 0.2729450 secs]
   [Eden: 0.0B(85.0M)->0.0B(76.0M) Survivors: 0.0B->0.0B Heap: 44.2M(148.0M)->40.2M(134.0M)]
 [Times: user=0.42 sys=0.01, real=0.27 secs] 
12.582: [Full GC 40M->39M(132M), 0.2370330 secs]
   [Eden: 1024.0K(76.0M)->0.0B(75.0M) Survivors: 0.0B->0.0B Heap: 40.2M(134.0M)->39.6M(132.0M)]
 [Times: user=0.36 sys=0.00, real=0.23 secs] 
12.819: [Full GC 39M->39M(132M), 0.2705140 secs]
   [Eden: 0.0B(75.0M)->0.0B(75.0M) Survivors: 0.0B->0.0B Heap: 39.6M(132.0M)->39.6M(132.0M)]
 [Times: user=0.40 sys=0.01, real=0.28 secs] 
2018-02-14T06:53:31,647 INFO [task-runner-0-priority-0] org.apache.hadoop.io.retry.RetryInvocationHandler - Exception while invoking getBlockLocations of class ClientNamenodeProtocolTranslatorPB over oser402531.wal-mart.com/10.224.183.38:8020. Trying to fail over immediately.
java.io.IOException: com.google.protobuf.ServiceException: java.lang.OutOfMemoryError: PermGen space
	at org.apache.hadoop.ipc.ProtobufHelper.getRemoteException(ProtobufHelper.java:47) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:260) ~[hadoop-hdfs-2.7.1.jar:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) [hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy188.getBlockLocations(Unknown Source) [?:?]
	at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1240) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1227) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getBlockLocations(DFSClient.java:1285) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:221) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:217) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:228) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:209) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:397) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat.getSplits(DelegatingInputFormat.java:115) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_71]
	at javax.security.auth.Subject.doAs(Subject.java:415) [?:1.7.0_71]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at io.druid.indexer.DetermineHashedPartitionsJob.run(DetermineHashedPartitionsJob.java:116) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:175) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_71]
	at java.lang.Thread.run(Thread.java:745) [?:1.7.0_71]
Caused by: com.google.protobuf.ServiceException: java.lang.OutOfMemoryError: PermGen space
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:271) ~[hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy187.getBlockLocations(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255) ~[hadoop-hdfs-2.7.1.jar:?]
	... 42 more
Caused by: java.lang.OutOfMemoryError: PermGen space
	at java.lang.ClassLoader.defineClass1(Native Method) ~[?:1.7.0_71]
	at java.lang.ClassLoader.defineClass(ClassLoader.java:800) ~[?:1.7.0_71]
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) ~[?:1.7.0_71]
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:449) ~[?:1.7.0_71]
	at java.net.URLClassLoader.access$100(URLClassLoader.java:71) ~[?:1.7.0_71]
	at java.net.URLClassLoader$1.run(URLClassLoader.java:361) ~[?:1.7.0_71]
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[?:1.7.0_71]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_71]
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[?:1.7.0_71]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[?:1.7.0_71]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[?:1.7.0_71]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$DatanodeInfoProto.initFields(HdfsProtos.java:5160) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$DatanodeInfoProto.<clinit>(HdfsProtos.java:6271) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto.<init>(HdfsProtos.java:14756) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto.<init>(HdfsProtos.java:14685) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto$1.parsePartialFrom(HdfsProtos.java:14881) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto$1.parsePartialFrom(HdfsProtos.java:14876) ~[hadoop-hdfs-2.7.1.jar:?]
	at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309) ~[protobuf-java-2.5.0.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto.<init>(HdfsProtos.java:21293) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto.<init>(HdfsProtos.java:21235) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto$1.parsePartialFrom(HdfsProtos.java:21365) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto$1.parsePartialFrom(HdfsProtos.java:21360) ~[hadoop-hdfs-2.7.1.jar:?]
	at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309) ~[protobuf-java-2.5.0.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto.<init>(ClientNamenodeProtocolProtos.java:1425) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto.<init>(ClientNamenodeProtocolProtos.java:1372) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$1.parsePartialFrom(ClientNamenodeProtocolProtos.java:1463) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$1.parsePartialFrom(ClientNamenodeProtocolProtos.java:1458) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$Builder.mergeFrom(ClientNamenodeProtocolProtos.java:1768) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$Builder.mergeFrom(ClientNamenodeProtocolProtos.java:1651) ~[hadoop-hdfs-2.7.1.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:337) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:170) ~[protobuf-java-2.5.0.jar:?]
2018-02-14T06:53:31,663 INFO [task-runner-0-priority-0] org.apache.hadoop.io.retry.RetryInvocationHandler - Exception while invoking getBlockLocations of class ClientNamenodeProtocolTranslatorPB over oser402527.wal-mart.com/10.224.183.35:8020 after 1 fail over attempts. Trying to fail over after sleeping for 1372ms.
org.apache.hadoop.ipc.RemoteException: Operation category READ is not supported in state standby
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1979)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1345)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1738)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:693)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)

	at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy187.getBlockLocations(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255) ~[hadoop-hdfs-2.7.1.jar:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) [hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy188.getBlockLocations(Unknown Source) [?:?]
	at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1240) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1227) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getBlockLocations(DFSClient.java:1285) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:221) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:217) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:228) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:209) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:397) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat.getSplits(DelegatingInputFormat.java:115) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_71]
	at javax.security.auth.Subject.doAs(Subject.java:415) [?:1.7.0_71]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at io.druid.indexer.DetermineHashedPartitionsJob.run(DetermineHashedPartitionsJob.java:116) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:175) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_71]
	at java.lang.Thread.run(Thread.java:745) [?:1.7.0_71]
2018-02-14T06:53:33,038 INFO [task-runner-0-priority-0] org.apache.hadoop.io.retry.RetryInvocationHandler - Exception while invoking getBlockLocations of class ClientNamenodeProtocolTranslatorPB over oser402531.wal-mart.com/10.224.183.38:8020 after 2 fail over attempts. Trying to fail over immediately.
java.io.IOException: com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$DatanodeInfoProto
	at org.apache.hadoop.ipc.ProtobufHelper.getRemoteException(ProtobufHelper.java:47) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:260) ~[hadoop-hdfs-2.7.1.jar:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) [hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy188.getBlockLocations(Unknown Source) [?:?]
	at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1240) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1227) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getBlockLocations(DFSClient.java:1285) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:221) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:217) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:228) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:209) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:397) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat.getSplits(DelegatingInputFormat.java:115) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_71]
	at javax.security.auth.Subject.doAs(Subject.java:415) [?:1.7.0_71]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at io.druid.indexer.DetermineHashedPartitionsJob.run(DetermineHashedPartitionsJob.java:116) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:175) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_71]
	at java.lang.Thread.run(Thread.java:745) [?:1.7.0_71]
Caused by: com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$DatanodeInfoProto
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:271) ~[hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy187.getBlockLocations(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255) ~[hadoop-hdfs-2.7.1.jar:?]
	... 42 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$DatanodeInfoProto
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto.<init>(HdfsProtos.java:14756) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto.<init>(HdfsProtos.java:14685) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto$1.parsePartialFrom(HdfsProtos.java:14881) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto$1.parsePartialFrom(HdfsProtos.java:14876) ~[hadoop-hdfs-2.7.1.jar:?]
	at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309) ~[protobuf-java-2.5.0.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto.<init>(HdfsProtos.java:21293) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto.<init>(HdfsProtos.java:21235) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto$1.parsePartialFrom(HdfsProtos.java:21365) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto$1.parsePartialFrom(HdfsProtos.java:21360) ~[hadoop-hdfs-2.7.1.jar:?]
	at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309) ~[protobuf-java-2.5.0.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto.<init>(ClientNamenodeProtocolProtos.java:1425) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto.<init>(ClientNamenodeProtocolProtos.java:1372) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$1.parsePartialFrom(ClientNamenodeProtocolProtos.java:1463) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$1.parsePartialFrom(ClientNamenodeProtocolProtos.java:1458) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$Builder.mergeFrom(ClientNamenodeProtocolProtos.java:1768) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$Builder.mergeFrom(ClientNamenodeProtocolProtos.java:1651) ~[hadoop-hdfs-2.7.1.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:337) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:170) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:882) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:161) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:875) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267) ~[protobuf-java-2.5.0.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:262) ~[hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy187.getBlockLocations(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255) ~[hadoop-hdfs-2.7.1.jar:?]
	... 42 more
2018-02-14T06:53:33,043 INFO [task-runner-0-priority-0] org.apache.hadoop.io.retry.RetryInvocationHandler - Exception while invoking getBlockLocations of class ClientNamenodeProtocolTranslatorPB over oser402527.wal-mart.com/10.224.183.35:8020 after 3 fail over attempts. Trying to fail over after sleeping for 4107ms.
org.apache.hadoop.ipc.RemoteException: Operation category READ is not supported in state standby
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1979)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1345)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1738)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:693)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)

	at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy187.getBlockLocations(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255) ~[hadoop-hdfs-2.7.1.jar:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) [hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy188.getBlockLocations(Unknown Source) [?:?]
	at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1240) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1227) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getBlockLocations(DFSClient.java:1285) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:221) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:217) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:228) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:209) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:397) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat.getSplits(DelegatingInputFormat.java:115) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_71]
	at javax.security.auth.Subject.doAs(Subject.java:415) [?:1.7.0_71]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at io.druid.indexer.DetermineHashedPartitionsJob.run(DetermineHashedPartitionsJob.java:116) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:175) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_71]
	at java.lang.Thread.run(Thread.java:745) [?:1.7.0_71]
2018-02-14T06:53:37,177 INFO [task-runner-0-priority-0] org.apache.hadoop.io.retry.RetryInvocationHandler - Exception while invoking getBlockLocations of class ClientNamenodeProtocolTranslatorPB over oser402531.wal-mart.com/10.224.183.38:8020 after 4 fail over attempts. Trying to fail over immediately.
java.io.IOException: com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$DatanodeInfoProto
	at org.apache.hadoop.ipc.ProtobufHelper.getRemoteException(ProtobufHelper.java:47) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:260) ~[hadoop-hdfs-2.7.1.jar:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) [hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy188.getBlockLocations(Unknown Source) [?:?]
	at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1240) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1227) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getBlockLocations(DFSClient.java:1285) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:221) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:217) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:228) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:209) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:397) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat.getSplits(DelegatingInputFormat.java:115) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_71]
	at javax.security.auth.Subject.doAs(Subject.java:415) [?:1.7.0_71]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at io.druid.indexer.DetermineHashedPartitionsJob.run(DetermineHashedPartitionsJob.java:116) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:175) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_71]
	at java.lang.Thread.run(Thread.java:745) [?:1.7.0_71]
Caused by: com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$DatanodeInfoProto
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:271) ~[hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy187.getBlockLocations(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255) ~[hadoop-hdfs-2.7.1.jar:?]
	... 42 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$DatanodeInfoProto
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto.<init>(HdfsProtos.java:14756) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto.<init>(HdfsProtos.java:14685) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto$1.parsePartialFrom(HdfsProtos.java:14881) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto$1.parsePartialFrom(HdfsProtos.java:14876) ~[hadoop-hdfs-2.7.1.jar:?]
	at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309) ~[protobuf-java-2.5.0.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto.<init>(HdfsProtos.java:21293) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto.<init>(HdfsProtos.java:21235) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto$1.parsePartialFrom(HdfsProtos.java:21365) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto$1.parsePartialFrom(HdfsProtos.java:21360) ~[hadoop-hdfs-2.7.1.jar:?]
	at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309) ~[protobuf-java-2.5.0.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto.<init>(ClientNamenodeProtocolProtos.java:1425) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto.<init>(ClientNamenodeProtocolProtos.java:1372) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$1.parsePartialFrom(ClientNamenodeProtocolProtos.java:1463) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$1.parsePartialFrom(ClientNamenodeProtocolProtos.java:1458) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$Builder.mergeFrom(ClientNamenodeProtocolProtos.java:1768) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$Builder.mergeFrom(ClientNamenodeProtocolProtos.java:1651) ~[hadoop-hdfs-2.7.1.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:337) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:170) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:882) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:161) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:875) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267) ~[protobuf-java-2.5.0.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:262) ~[hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy187.getBlockLocations(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255) ~[hadoop-hdfs-2.7.1.jar:?]
	... 42 more
2018-02-14T06:53:37,182 INFO [task-runner-0-priority-0] org.apache.hadoop.io.retry.RetryInvocationHandler - Exception while invoking getBlockLocations of class ClientNamenodeProtocolTranslatorPB over oser402527.wal-mart.com/10.224.183.35:8020 after 5 fail over attempts. Trying to fail over after sleeping for 7813ms.
org.apache.hadoop.ipc.RemoteException: Operation category READ is not supported in state standby
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1979)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1345)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1738)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:693)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)

	at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy187.getBlockLocations(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255) ~[hadoop-hdfs-2.7.1.jar:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) [hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy188.getBlockLocations(Unknown Source) [?:?]
	at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1240) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1227) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getBlockLocations(DFSClient.java:1285) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:221) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:217) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:228) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:209) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:397) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat.getSplits(DelegatingInputFormat.java:115) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_71]
	at javax.security.auth.Subject.doAs(Subject.java:415) [?:1.7.0_71]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at io.druid.indexer.DetermineHashedPartitionsJob.run(DetermineHashedPartitionsJob.java:116) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:175) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_71]
	at java.lang.Thread.run(Thread.java:745) [?:1.7.0_71]
2018-02-14T06:53:45,000 INFO [task-runner-0-priority-0] org.apache.hadoop.io.retry.RetryInvocationHandler - Exception while invoking getBlockLocations of class ClientNamenodeProtocolTranslatorPB over oser402531.wal-mart.com/10.224.183.38:8020 after 6 fail over attempts. Trying to fail over immediately.
java.io.IOException: com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$DatanodeInfoProto
	at org.apache.hadoop.ipc.ProtobufHelper.getRemoteException(ProtobufHelper.java:47) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:260) ~[hadoop-hdfs-2.7.1.jar:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) [hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy188.getBlockLocations(Unknown Source) [?:?]
	at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1240) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1227) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getBlockLocations(DFSClient.java:1285) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:221) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:217) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:228) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:209) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:397) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat.getSplits(DelegatingInputFormat.java:115) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_71]
	at javax.security.auth.Subject.doAs(Subject.java:415) [?:1.7.0_71]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at io.druid.indexer.DetermineHashedPartitionsJob.run(DetermineHashedPartitionsJob.java:116) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:175) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_71]
	at java.lang.Thread.run(Thread.java:745) [?:1.7.0_71]
Caused by: com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$DatanodeInfoProto
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:271) ~[hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy187.getBlockLocations(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255) ~[hadoop-hdfs-2.7.1.jar:?]
	... 42 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$DatanodeInfoProto
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto.<init>(HdfsProtos.java:14756) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto.<init>(HdfsProtos.java:14685) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto$1.parsePartialFrom(HdfsProtos.java:14881) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto$1.parsePartialFrom(HdfsProtos.java:14876) ~[hadoop-hdfs-2.7.1.jar:?]
	at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309) ~[protobuf-java-2.5.0.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto.<init>(HdfsProtos.java:21293) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto.<init>(HdfsProtos.java:21235) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto$1.parsePartialFrom(HdfsProtos.java:21365) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto$1.parsePartialFrom(HdfsProtos.java:21360) ~[hadoop-hdfs-2.7.1.jar:?]
	at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309) ~[protobuf-java-2.5.0.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto.<init>(ClientNamenodeProtocolProtos.java:1425) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto.<init>(ClientNamenodeProtocolProtos.java:1372) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$1.parsePartialFrom(ClientNamenodeProtocolProtos.java:1463) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$1.parsePartialFrom(ClientNamenodeProtocolProtos.java:1458) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$Builder.mergeFrom(ClientNamenodeProtocolProtos.java:1768) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$Builder.mergeFrom(ClientNamenodeProtocolProtos.java:1651) ~[hadoop-hdfs-2.7.1.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:337) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:170) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:882) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:161) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:875) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267) ~[protobuf-java-2.5.0.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:262) ~[hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy187.getBlockLocations(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255) ~[hadoop-hdfs-2.7.1.jar:?]
	... 42 more
2018-02-14T06:53:45,005 INFO [task-runner-0-priority-0] org.apache.hadoop.io.retry.RetryInvocationHandler - Exception while invoking getBlockLocations of class ClientNamenodeProtocolTranslatorPB over oser402527.wal-mart.com/10.224.183.35:8020 after 7 fail over attempts. Trying to fail over after sleeping for 8479ms.
org.apache.hadoop.ipc.RemoteException: Operation category READ is not supported in state standby
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1979)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1345)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1738)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:693)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)

	at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy187.getBlockLocations(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255) ~[hadoop-hdfs-2.7.1.jar:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) [hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy188.getBlockLocations(Unknown Source) [?:?]
	at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1240) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1227) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getBlockLocations(DFSClient.java:1285) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:221) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:217) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:228) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:209) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:397) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat.getSplits(DelegatingInputFormat.java:115) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_71]
	at javax.security.auth.Subject.doAs(Subject.java:415) [?:1.7.0_71]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at io.druid.indexer.DetermineHashedPartitionsJob.run(DetermineHashedPartitionsJob.java:116) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:175) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_71]
	at java.lang.Thread.run(Thread.java:745) [?:1.7.0_71]
[snip: the same two stack traces — the NoClassDefFoundError for HdfsProtos$DatanodeInfoProto and the "Operation category READ is not supported in state standby" RemoteException — repeat verbatim for fail over attempts 8 through 11]
2018-02-14T06:54:23,858 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.843Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/mem/max","value":10737418240,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"memKind":"heap"}]
2018-02-14T06:54:23,858 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.858Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/mem/committed","value":138412032,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"memKind":"heap"}]
2018-02-14T06:54:23,858 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.858Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/mem/used","value":46728096,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"memKind":"heap"}]
2018-02-14T06:54:23,859 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.858Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/mem/init","value":2147483648,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"memKind":"heap"}]
2018-02-14T06:54:23,859 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.859Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/mem/max","value":136314880,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"memKind":"nonheap"}]
2018-02-14T06:54:23,859 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.859Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/mem/committed","value":89784320,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"memKind":"nonheap"}]
2018-02-14T06:54:23,860 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.859Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/mem/used","value":89443008,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"memKind":"nonheap"}]
2018-02-14T06:54:23,860 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.860Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/mem/init","value":23527424,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"memKind":"nonheap"}]
2018-02-14T06:54:23,861 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.860Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/max","value":50331648,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"nonheap","poolName":"Code Cache"}]
2018-02-14T06:54:23,861 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.861Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/committed","value":3801088,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"nonheap","poolName":"Code Cache"}]
2018-02-14T06:54:23,861 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.861Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/used","value":3683904,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"nonheap","poolName":"Code Cache"}]
2018-02-14T06:54:23,862 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.861Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/init","value":2555904,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"nonheap","poolName":"Code Cache"}]
2018-02-14T06:54:23,862 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.862Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/max","value":-1,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"heap","poolName":"G1 Eden Space"}]
2018-02-14T06:54:23,862 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.862Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/committed","value":82837504,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"heap","poolName":"G1 Eden Space"}]
2018-02-14T06:54:23,863 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.862Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/used","value":7340032,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"heap","poolName":"G1 Eden Space"}]
2018-02-14T06:54:23,863 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.863Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/init","value":113246208,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"heap","poolName":"G1 Eden Space"}]
2018-02-14T06:54:23,864 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.863Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/max","value":-1,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"heap","poolName":"G1 Survivor Space"}]
2018-02-14T06:54:23,864 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.864Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/committed","value":0,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"heap","poolName":"G1 Survivor Space"}]
2018-02-14T06:54:23,864 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.864Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/used","value":0,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"heap","poolName":"G1 Survivor Space"}]
2018-02-14T06:54:23,865 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.864Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/init","value":0,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"heap","poolName":"G1 Survivor Space"}]
2018-02-14T06:54:23,865 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.865Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/max","value":10737418240,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"heap","poolName":"G1 Old Gen"}]
2018-02-14T06:54:23,865 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.865Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/committed","value":55574528,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"heap","poolName":"G1 Old Gen"}]
2018-02-14T06:54:23,865 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.865Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/used","value":41485216,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"heap","poolName":"G1 Old Gen"}]
2018-02-14T06:54:23,866 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.866Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/init","value":2034237440,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"heap","poolName":"G1 Old Gen"}]
2018-02-14T06:54:23,866 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.866Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/max","value":85983232,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"nonheap","poolName":"G1 Perm Gen"}]
2018-02-14T06:54:23,866 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.866Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/committed","value":85983232,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"nonheap","poolName":"G1 Perm Gen"}]
2018-02-14T06:54:23,867 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.866Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/used","value":85846280,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"nonheap","poolName":"G1 Perm Gen"}]
2018-02-14T06:54:23,867 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.867Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/pool/init","value":20971520,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"],"poolKind":"nonheap","poolName":"G1 Perm Gen"}]
2018-02-14T06:54:23,868 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.867Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/bufferpool/capacity","value":5277249,"bufferpoolName":"direct","dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"]}]
2018-02-14T06:54:23,868 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.868Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/bufferpool/used","value":5277249,"bufferpoolName":"direct","dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"]}]
2018-02-14T06:54:23,868 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.868Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/bufferpool/count","value":88,"bufferpoolName":"direct","dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"]}]
2018-02-14T06:54:23,869 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.869Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/bufferpool/capacity","value":0,"bufferpoolName":"mapped","dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"]}]
2018-02-14T06:54:23,869 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.869Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/bufferpool/used","value":0,"bufferpoolName":"mapped","dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"]}]
2018-02-14T06:54:23,870 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.869Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jvm/bufferpool/count","value":0,"bufferpoolName":"mapped","dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"]}]
2018-02-14T06:54:23,871 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.870Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"jetty/numOpenConnections","value":0,"dataSource":["wikiticker"],"id":["index_hadoop_wikiticker_2018-02-14T06:53:18.176Z"]}]
2018-02-14T06:54:23,872 INFO [MonitorScheduler-0] com.metamx.emitter.core.LoggingEmitter - Event [{"feed":"metrics","timestamp":"2018-02-14T06:54:23.871Z","service":"druid/middleManager","host":"oser402532.wal-mart.com:8100","metric":"segment/scan/pending","value":0}]
2018-02-14T06:54:25,794 INFO [task-runner-0-priority-0] org.apache.hadoop.io.retry.RetryInvocationHandler - Exception while invoking getBlockLocations of class ClientNamenodeProtocolTranslatorPB over oser402531.wal-mart.com/10.224.183.38:8020 after 12 fail over attempts. Trying to fail over immediately.
java.io.IOException: com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$DatanodeInfoProto
	at org.apache.hadoop.ipc.ProtobufHelper.getRemoteException(ProtobufHelper.java:47) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:260) ~[hadoop-hdfs-2.7.1.jar:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) [hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy188.getBlockLocations(Unknown Source) [?:?]
	at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1240) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1227) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getBlockLocations(DFSClient.java:1285) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:221) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:217) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:228) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:209) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:397) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat.getSplits(DelegatingInputFormat.java:115) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_71]
	at javax.security.auth.Subject.doAs(Subject.java:415) [?:1.7.0_71]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at io.druid.indexer.DetermineHashedPartitionsJob.run(DetermineHashedPartitionsJob.java:116) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:175) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_71]
	at java.lang.Thread.run(Thread.java:745) [?:1.7.0_71]
Caused by: com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$DatanodeInfoProto
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:271) ~[hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy187.getBlockLocations(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255) ~[hadoop-hdfs-2.7.1.jar:?]
	... 42 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$DatanodeInfoProto
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto.<init>(HdfsProtos.java:14756) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto.<init>(HdfsProtos.java:14685) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto$1.parsePartialFrom(HdfsProtos.java:14881) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto$1.parsePartialFrom(HdfsProtos.java:14876) ~[hadoop-hdfs-2.7.1.jar:?]
	at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309) ~[protobuf-java-2.5.0.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto.<init>(HdfsProtos.java:21293) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto.<init>(HdfsProtos.java:21235) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto$1.parsePartialFrom(HdfsProtos.java:21365) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto$1.parsePartialFrom(HdfsProtos.java:21360) ~[hadoop-hdfs-2.7.1.jar:?]
	at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309) ~[protobuf-java-2.5.0.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto.<init>(ClientNamenodeProtocolProtos.java:1425) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto.<init>(ClientNamenodeProtocolProtos.java:1372) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$1.parsePartialFrom(ClientNamenodeProtocolProtos.java:1463) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$1.parsePartialFrom(ClientNamenodeProtocolProtos.java:1458) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$Builder.mergeFrom(ClientNamenodeProtocolProtos.java:1768) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$Builder.mergeFrom(ClientNamenodeProtocolProtos.java:1651) ~[hadoop-hdfs-2.7.1.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:337) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:170) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:882) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:161) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:875) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267) ~[protobuf-java-2.5.0.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:262) ~[hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy187.getBlockLocations(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255) ~[hadoop-hdfs-2.7.1.jar:?]
	... 42 more
2018-02-14T06:54:25,799 INFO [task-runner-0-priority-0] org.apache.hadoop.io.retry.RetryInvocationHandler - Exception while invoking getBlockLocations of class ClientNamenodeProtocolTranslatorPB over oser402527.wal-mart.com/10.224.183.35:8020 after 13 fail over attempts. Trying to fail over after sleeping for 13799ms.
org.apache.hadoop.ipc.RemoteException: Operation category READ is not supported in state standby
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1979)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1345)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1738)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:693)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)

	at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy187.getBlockLocations(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255) ~[hadoop-hdfs-2.7.1.jar:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) [hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy188.getBlockLocations(Unknown Source) [?:?]
	at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1240) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1227) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getBlockLocations(DFSClient.java:1285) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:221) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:217) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:228) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:209) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:397) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat.getSplits(DelegatingInputFormat.java:115) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_71]
	at javax.security.auth.Subject.doAs(Subject.java:415) [?:1.7.0_71]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at io.druid.indexer.DetermineHashedPartitionsJob.run(DetermineHashedPartitionsJob.java:116) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:175) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_71]
	at java.lang.Thread.run(Thread.java:745) [?:1.7.0_71]
2018-02-14T06:54:39,602 INFO [task-runner-0-priority-0] org.apache.hadoop.io.retry.RetryInvocationHandler - Exception while invoking getBlockLocations of class ClientNamenodeProtocolTranslatorPB over oser402531.wal-mart.com/10.224.183.38:8020 after 14 fail over attempts. Trying to fail over immediately.
java.io.IOException: com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$DatanodeInfoProto
	at org.apache.hadoop.ipc.ProtobufHelper.getRemoteException(ProtobufHelper.java:47) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:260) ~[hadoop-hdfs-2.7.1.jar:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) [hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy188.getBlockLocations(Unknown Source) [?:?]
	at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1240) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1227) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getBlockLocations(DFSClient.java:1285) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:221) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:217) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:228) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:209) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:397) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat.getSplits(DelegatingInputFormat.java:115) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_71]
	at javax.security.auth.Subject.doAs(Subject.java:415) [?:1.7.0_71]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at io.druid.indexer.DetermineHashedPartitionsJob.run(DetermineHashedPartitionsJob.java:116) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:175) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_71]
	at java.lang.Thread.run(Thread.java:745) [?:1.7.0_71]
Caused by: com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$DatanodeInfoProto
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:271) ~[hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy187.getBlockLocations(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255) ~[hadoop-hdfs-2.7.1.jar:?]
	... 42 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$DatanodeInfoProto
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto.<init>(HdfsProtos.java:14756) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto.<init>(HdfsProtos.java:14685) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto$1.parsePartialFrom(HdfsProtos.java:14881) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlockProto$1.parsePartialFrom(HdfsProtos.java:14876) ~[hadoop-hdfs-2.7.1.jar:?]
	at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309) ~[protobuf-java-2.5.0.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto.<init>(HdfsProtos.java:21293) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto.<init>(HdfsProtos.java:21235) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto$1.parsePartialFrom(HdfsProtos.java:21365) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$LocatedBlocksProto$1.parsePartialFrom(HdfsProtos.java:21360) ~[hadoop-hdfs-2.7.1.jar:?]
	at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309) ~[protobuf-java-2.5.0.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto.<init>(ClientNamenodeProtocolProtos.java:1425) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto.<init>(ClientNamenodeProtocolProtos.java:1372) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$1.parsePartialFrom(ClientNamenodeProtocolProtos.java:1463) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$1.parsePartialFrom(ClientNamenodeProtocolProtos.java:1458) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$Builder.mergeFrom(ClientNamenodeProtocolProtos.java:1768) ~[hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetBlockLocationsResponseProto$Builder.mergeFrom(ClientNamenodeProtocolProtos.java:1651) ~[hadoop-hdfs-2.7.1.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:337) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:170) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:882) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:161) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:875) ~[protobuf-java-2.5.0.jar:?]
	at com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267) ~[protobuf-java-2.5.0.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:262) ~[hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy187.getBlockLocations(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255) ~[hadoop-hdfs-2.7.1.jar:?]
	... 42 more
2018-02-14T06:54:39,609 WARN [task-runner-0-priority-0] org.apache.hadoop.io.retry.RetryInvocationHandler - Exception while invoking class org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations over oser402527.wal-mart.com/10.224.183.35:8020. Not retrying because failovers (15) exceeded maximum allowed (15)
org.apache.hadoop.ipc.RemoteException: Operation category READ is not supported in state standby
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1979)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1345)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1738)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:693)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)

	at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy187.getBlockLocations(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:255) ~[hadoop-hdfs-2.7.1.jar:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) [hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy188.getBlockLocations(Unknown Source) [?:?]
	at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1240) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1227) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DFSClient.getBlockLocations(DFSClient.java:1285) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:221) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$1.doCall(DistributedFileSystem.java:217) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:228) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileBlockLocations(DistributedFileSystem.java:209) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:397) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat.getSplits(DelegatingInputFormat.java:115) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_71]
	at javax.security.auth.Subject.doAs(Subject.java:415) [?:1.7.0_71]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at io.druid.indexer.DetermineHashedPartitionsJob.run(DetermineHashedPartitionsJob.java:116) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:175) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_71]
	at java.lang.Thread.run(Thread.java:745) [?:1.7.0_71]
2018-02-14T06:54:39,612 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.JobSubmitter - Cleaning up the staging area /user/svcrcs/.staging/job_1517538669309_131356
2018-02-14T06:54:39,619 INFO [task-runner-0-priority-0] org.apache.hadoop.io.retry.RetryInvocationHandler - Exception while invoking delete of class ClientNamenodeProtocolTranslatorPB over oser402527.wal-mart.com/10.224.183.35:8020. Trying to fail over immediately.
org.apache.hadoop.ipc.RemoteException: Operation category WRITE is not supported in state standby
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1979)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1345)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1063)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:619)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)

	at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy187.delete(Unknown Source) ~[?:?]
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:540) ~[hadoop-hdfs-2.7.1.jar:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) [hadoop-common-2.7.1.jar:?]
	at com.sun.proxy.$Proxy188.delete(Unknown Source) [?:?]
	at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:2052) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:707) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:703) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714) [hadoop-hdfs-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:251) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_71]
	at javax.security.auth.Subject.doAs(Subject.java:415) [?:1.7.0_71]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) [hadoop-common-2.7.1.jar:?]
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) [hadoop-mapreduce-client-core-2.7.1.jar:?]
	at io.druid.indexer.DetermineHashedPartitionsJob.run(DetermineHashedPartitionsJob.java:116) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) [druid-indexing-hadoop-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_71]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_71]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_71]
	at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_71]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:175) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_71]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_71]
	at java.lang.Thread.run(Thread.java:745) [?:1.7.0_71]
81.086: [Full GC 53M->22M(77M), 0.2614580 secs]
   [Eden: 14.0M(75.0M)->0.0B(43.0M) Survivors: 0.0B->0.0B Heap: 53.2M(132.0M)->22.9M(77.0M)]
 [Times: user=0.36 sys=0.01, real=0.26 secs] 
81.348: [Full GC 22M->22M(76M), 0.2294280 secs]
   [Eden: 0.0B(43.0M)->0.0B(43.0M) Survivors: 0.0B->0.0B Heap: 22.9M(77.0M)->22.8M(76.0M)]
 [Times: user=0.32 sys=0.01, real=0.23 secs] 
81.579: [Full GC 22M->22M(76M), 0.2207680 secs]
   [Eden: 1024.0K(43.0M)->0.0B(42.0M) Survivors: 0.0B->0.0B Heap: 22.9M(76.0M)->22.8M(76.0M)]
 [Times: user=0.31 sys=0.00, real=0.22 secs] 
81.801: [Full GC 22M->22M(76M), 0.2209840 secs]
   [Eden: 0.0B(42.0M)->0.0B(42.0M) Survivors: 0.0B->0.0B Heap: 22.8M(76.0M)->22.8M(76.0M)]
 [Times: user=0.31 sys=0.00, real=0.22 secs] 
82.023: [Full GC 22M->22M(75M), 0.2008460 secs]
   [Eden: 1024.0K(42.0M)->0.0B(42.0M) Survivors: 0.0B->0.0B Heap: 22.8M(76.0M)->22.4M(75.0M)]
 [Times: user=0.30 sys=0.00, real=0.20 secs] 
82.225: [Full GC 22M->22M(75M), 0.2014550 secs]
   [Eden: 0.0B(42.0M)->0.0B(42.0M) Survivors: 0.0B->0.0B Heap: 22.4M(75.0M)->22.4M(75.0M)]
 [Times: user=0.30 sys=0.01, real=0.20 secs] 
82.427: [Full GC 22M->22M(75M), 0.2330120 secs]
   [Eden: 1024.0K(42.0M)->0.0B(42.0M) Survivors: 0.0B->0.0B Heap: 22.5M(75.0M)->22.4M(75.0M)]
 [Times: user=0.34 sys=0.00, real=0.23 secs] 
82.661: [Full GC 22M->22M(75M), 0.2307760 secs]
   [Eden: 0.0B(42.0M)->0.0B(42.0M) Survivors: 0.0B->0.0B Heap: 22.4M(75.0M)->22.4M(75.0M)]
 [Times: user=0.32 sys=0.00, real=0.23 secs] 
82.893: [Full GC 22M->22M(75M), 0.2367390 secs]
   [Eden: 0.0B(42.0M)->0.0B(42.0M) Survivors: 0.0B->0.0B Heap: 22.4M(75.0M)->22.4M(75.0M)]
 [Times: user=0.33 sys=0.00, real=0.23 secs] 
83.130: [Full GC 22M->22M(75M), 0.2201230 secs]
   [Eden: 0.0B(42.0M)->0.0B(42.0M) Survivors: 0.0B->0.0B Heap: 22.4M(75.0M)->22.4M(75.0M)]
 [Times: user=0.32 sys=0.01, real=0.22 secs] 
83.351: [Full GC 22M->22M(75M), 0.2300080 secs]
   [Eden: 0.0B(42.0M)->0.0B(42.0M) Survivors: 0.0B->0.0B Heap: 22.4M(75.0M)->22.3M(75.0M)]
 [Times: user=0.31 sys=0.00, real=0.23 secs] 
83.581: [Full GC 22M->22M(75M), 0.2427890 secs]
   [Eden: 0.0B(42.0M)->0.0B(42.0M) Survivors: 0.0B->0.0B Heap: 22.3M(75.0M)->22.3M(75.0M)]
 [Times: user=0.33 sys=0.01, real=0.24 secs] 
2018-02-14T06:54:42,380 ERROR [main] io.druid.cli.CliPeon - Error when starting up.  Failing.
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: PermGen space
	at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]
	at io.druid.indexing.worker.executor.ExecutorLifecycle.join(ExecutorLifecycle.java:211) ~[druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	at io.druid.cli.CliPeon.run(CliPeon.java:287) [druid-services-0.9.2.1.jar:0.9.2.1]
	at io.druid.cli.Main.main(Main.java:106) [druid-services-0.9.2.1.jar:0.9.2.1]
Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: PermGen space
	at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299) ~[guava-16.0.1.jar:?]
	at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286) ~[guava-16.0.1.jar:?]
	at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) ~[guava-16.0.1.jar:?]
	at io.druid.indexing.worker.executor.ExecutorLifecycle.join(ExecutorLifecycle.java:208) ~[druid-indexing-service-0.9.2.1.jar:0.9.2.1]
	... 2 more
Caused by: java.lang.OutOfMemoryError: PermGen space
	at java.lang.ClassLoader.defineClass1(Native Method) ~[?:1.7.0_71]
	at java.lang.ClassLoader.defineClass(ClassLoader.java:800) ~[?:1.7.0_71]
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) ~[?:1.7.0_71]
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:449) ~[?:1.7.0_71]
	at java.net.URLClassLoader.access$100(URLClassLoader.java:71) ~[?:1.7.0_71]
	at java.net.URLClassLoader$1.run(URLClassLoader.java:361) ~[?:1.7.0_71]
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[?:1.7.0_71]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_71]
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[?:1.7.0_71]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[?:1.7.0_71]
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[?:1.7.0_71]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[?:1.7.0_71]
	at java.lang.Class.forName0(Native Method) ~[?:1.7.0_71]
	at java.lang.Class.forName(Class.java:191) ~[?:1.7.0_71]
	at org.apache.logging.log4j.util.LoaderUtil.loadClass(LoaderUtil.java:122) ~[log4j-api-2.5.jar:2.5]
	at org.apache.logging.log4j.core.util.Loader.loadClass(Loader.java:228) ~[log4j-core-2.5.jar:2.5]
	at org.apache.logging.log4j.core.impl.ThrowableProxy.loadClass(ThrowableProxy.java:496) ~[log4j-core-2.5.jar:2.5]
	at org.apache.logging.log4j.core.impl.ThrowableProxy.toExtendedStackTrace(ThrowableProxy.java:617) ~[log4j-core-2.5.jar:2.5]
	at org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:163) ~[log4j-core-2.5.jar:2.5]
	at org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:165) ~[log4j-core-2.5.jar:2.5]
	at org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:138) ~[log4j-core-2.5.jar:2.5]
	at org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:117) ~[log4j-core-2.5.jar:2.5]
	at org.apache.logging.log4j.core.impl.Log4jLogEvent.getThrownProxy(Log4jLogEvent.java:482) ~[log4j-core-2.5.jar:2.5]
	at org.apache.logging.log4j.core.pattern.ExtendedThrowablePatternConverter.format(ExtendedThrowablePatternConverter.java:64) ~[log4j-core-2.5.jar:2.5]
	at org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:36) ~[log4j-core-2.5.jar:2.5]
	at org.apache.logging.log4j.core.layout.PatternLayout$PatternSerializer.toSerializable(PatternLayout.java:292) ~[log4j-core-2.5.jar:2.5]
	at org.apache.logging.log4j.core.layout.PatternLayout.toSerializable(PatternLayout.java:206) ~[log4j-core-2.5.jar:2.5]
	at org.apache.logging.log4j.core.layout.PatternLayout.toSerializable(PatternLayout.java:56) ~[log4j-core-2.5.jar:2.5]
	at org.apache.logging.log4j.core.layout.AbstractStringLayout.toByteArray(AbstractStringLayout.java:148) ~[log4j-core-2.5.jar:2.5]
	at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:112) ~[log4j-core-2.5.jar:2.5]
	at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:152) ~[log4j-core-2.5.jar:2.5]
	at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:125) ~[log4j-core-2.5.jar:2.5]
2018-02-14T06:54:42,385 INFO [Thread-105] io.druid.cli.CliPeon - Running shutdown hook
83.832: [GC pause (young), 0.0078120 secs]
   [Parallel Time: 5.7 ms, GC Workers: 28]
      [GC Worker Start (ms): Min: 83832.1, Avg: 83832.6, Max: 83833.0, Diff: 0.9]
      [Ext Root Scanning (ms): Min: 2.9, Avg: 3.9, Max: 4.9, Diff: 2.1, Sum: 110.1]
      [Update RS (ms): Min: 0.0, Avg: 0.0, Max: 0.2, Diff: 0.2, Sum: 1.0]
         [Processed Buffers: Min: 0, Avg: 2.4, Max: 14, Diff: 14, Sum: 66]
      [Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
      [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
      [Object Copy (ms): Min: 0.2, Avg: 0.3, Max: 1.1, Diff: 0.9, Sum: 7.1]
      [Termination (ms): Min: 0.0, Avg: 0.6, Max: 1.3, Diff: 1.3, Sum: 16.3]
      [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.9]
      [GC Worker Total (ms): Min: 4.4, Avg: 4.8, Max: 5.3, Diff: 0.9, Sum: 135.5]
      [GC Worker End (ms): Min: 83837.4, Avg: 83837.4, Max: 83837.4, Diff: 0.1]
   [Code Root Fixup: 0.0 ms]
   [Code Root Migration: 0.0 ms]
   [Clear CT: 1.1 ms]
   [Other: 1.0 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 0.8 ms]
      [Ref Enq: 0.0 ms]
      [Free CSet: 0.0 ms]
   [Eden: 1024.0K(42.0M)->0.0B(27.0M) Survivors: 0.0B->6144.0K Heap: 23.1M(75.0M)->33.1M(75.0M)]
 [Times: user=0.15 sys=0.00, real=0.00 secs] 
83.841: [Full GC 33M->22M(75M), 0.2291420 secs]
   [Eden: 1024.0K(27.0M)->0.0B(42.0M) Survivors: 6144.0K->0.0B Heap: 33.1M(75.0M)->22.4M(75.0M)]
 [Times: user=0.33 sys=0.01, real=0.22 secs] 
84.070: [Full GC 22M->22M(75M), 0.2277330 secs]
   [Eden: 0.0B(42.0M)->0.0B(42.0M) Survivors: 0.0B->0.0B Heap: 22.4M(75.0M)->22.4M(75.0M)]
 [Times: user=0.32 sys=0.00, real=0.23 secs] 
Exception in thread "Thread-105" 84.299: [Full GC 22M->22M(75M), 0.2299010 secs]
   [Eden: 1024.0K(42.0M)->0.0B(42.0M) Survivors: 0.0B->0.0B Heap: 22.4M(75.0M)->22.4M(75.0M)]
 [Times: user=0.35 sys=0.00, real=0.23 secs] 
84.529: [Full GC 22M->22M(75M), 0.2303740 secs]
   [Eden: 0.0B(42.0M)->0.0B(42.0M) Survivors: 0.0B->0.0B Heap: 22.4M(75.0M)->22.4M(75.0M)]
 [Times: user=0.37 sys=0.01, real=0.23 secs] 
84.760: [Full GC 22M->22M(75M), 0.2293500 secs]
   [Eden: 0.0B(42.0M)->0.0B(42.0M) Survivors: 0.0B->0.0B Heap: 22.4M(75.0M)->22.4M(75.0M)]
 [Times: user=0.33 sys=0.00, real=0.23 secs] 
84.990: [Full GC 22M->22M(75M), 0.2279290 secs]
   [Eden: 0.0B(42.0M)->0.0B(42.0M) Survivors: 0.0B->0.0B Heap: 22.4M(75.0M)->22.4M(75.0M)]
 [Times: user=0.35 sys=0.00, real=0.23 secs] 

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "Thread-105"
85.220: [Full GC 22M->22M(75M), 0.2413770 secs]
   [Eden: 1024.0K(42.0M)->0.0B(42.0M) Survivors: 0.0B->0.0B Heap: 22.4M(75.0M)->22.4M(75.0M)]
 [Times: user=0.37 sys=0.00, real=0.25 secs] 
85.461: [Full GC 22M->22M(75M), 0.2416380 secs]
   [Eden: 0.0B(42.0M)->0.0B(42.0M) Survivors: 0.0B->0.0B Heap: 22.4M(75.0M)->22.4M(75.0M)]
 [Times: user=0.33 sys=0.00, real=0.24 secs] 
2018-02-14T06:54:44,258 WARN [Thread-108] org.apache.hadoop.util.ShutdownHookManager - ShutdownHook 'ClientFinalizer' failed, java.lang.OutOfMemoryError: PermGen space
java.lang.OutOfMemoryError: PermGen space
85.705: [Full GC 22M->22M(75M), 0.2334750 secs]
   [Eden: 1024.0K(42.0M)->0.0B(42.0M) Survivors: 0.0B->0.0B Heap: 22.5M(75.0M)->22.4M(75.0M)]
 [Times: user=0.33 sys=0.01, real=0.23 secs] 
85.939: [Full GC 22M->22M(75M), 0.2290150 secs]
   [Eden: 0.0B(42.0M)->0.0B(42.0M) Survivors: 0.0B->0.0B Heap: 22.4M(75.0M)->22.4M(75.0M)]
 [Times: user=0.35 sys=0.00, real=0.23 secs] 
2018-02-14T06:54:44,724 WARN [Thread-3] org.apache.hadoop.util.ShutdownHookManager - ShutdownHook 'ClientFinalizer' failed, java.lang.OutOfMemoryError: PermGen space
java.lang.OutOfMemoryError: PermGen space
Heap
 garbage-first heap   total 76800K, used 22893K [0x000000057ae00000, 0x000000057f900000, 0x00000007fae00000)
  region size 1024K, 1 young (1024K), 0 survivors (0K)
 compacting perm gen  total 83968K, used 83965K [0x00000007fae00000, 0x0000000800000000, 0x0000000800000000)
   the space 83968K,  99% used [0x00000007fae00000, 0x00000007fffff778, 0x00000007fffff800, 0x0000000800000000)
No shared spaces configured.

Is it related to the Hadoop jars?


Jonathan Wei

unread,
Feb 14, 2018, 5:33:29 PM2/14/18
to druid...@googlegroups.com
Seems like it's the indexing task process that's hitting the PermGen limit. Maybe try adding `-XX:MaxPermSize=5g` as an option to `druid.indexer.runner.javaOpts` in your middleManager runtime.properties, then restart your cluster.

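A minimal sketch of what that change could look like; the sizes below are illustrative placeholders rather than recommendations, and the rest of the line should match whatever javaOpts your middleManager already uses:

```
# middleManager runtime.properties (illustrative values; exact path depends on your layout)
# These options are applied to the peon (indexing task) JVMs that the
# middleManager forks locally -- the processes whose PermGen is filling up.
druid.indexer.runner.javaOpts=-server -Xmx2g -XX:PermSize=256m -XX:MaxPermSize=1g -Duser.timezone=UTC -Dfile.encoding=UTF-8
```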

Saurabh Tyagi

unread,
Feb 15, 2018, 12:52:37 AM2/15/18
to Druid User
Hi,

This error is solved by adding the PermGen options to the middleManager javaOpts. I think the task JVM was unable to pick up the PermGen configuration from the ingestion spec.
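To spell out why the spec-level settings alone didn't help (my reading of this thread, not an authoritative statement of Druid internals): the two places these JVM options can live affect different processes.

```
# Where each PermGen setting takes effect (assumed mapping):
#
#   druid.indexer.runner.javaOpts (middleManager runtime.properties)
#       -> the local peon/task JVM, i.e. the process whose heap summary above
#          shows "compacting perm gen ... 99% used"
#
#   mapreduce.map.java.opts / mapreduce.reduce.java.opts (jobProperties in the
#   ingestion spec)
#       -> only the remote YARN map/reduce container JVMs, so raising PermGen
#          there cannot relieve the peon
```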



Gian Merlino

unread,
Feb 20, 2018, 2:50:05 AM2/20/18
to druid...@googlegroups.com
Hi Saurabh,

Glad to hear you got it solved, and I hope you are able to update to Java 8 before too long!

Gian
