Hadoop batch ingest job failed after upgrading to 0.7.1.1.

Lu Xuechao

May 10, 2015, 11:29:33 PM
to druid...@googlegroups.com
Hi Team,

I've run Hadoop batch ingest jobs successfully with 0.6.171, but after I upgraded to 0.7.1.1 the Hadoop job failed with this exception:


2015-05-11 03:00:41,693 ERROR [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.NoSuchMethodError: com.fasterxml.jackson.core.JsonFactory.requiresPropertyOrdering()Z
	at com.fasterxml.jackson.databind.ObjectMapper.<init>(ObjectMapper.java:457)
	at com.fasterxml.jackson.databind.ObjectMapper.<init>(ObjectMapper.java:389)
	at io.druid.jackson.DefaultObjectMapper.<init>(DefaultObjectMapper.java:43)
	at io.druid.jackson.DefaultObjectMapper.<init>(DefaultObjectMapper.java:33)
	at io.druid.jackson.JacksonModule.jsonMapper(JacksonModule.java:44)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.google.inject.internal.ProviderMethod.get(ProviderMethod.java:104)
	at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
	at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
	at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
	at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
	at com.google.inject.Scopes$1$1.get(Scopes.java:65)
	at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
	at com.google.inject.internal.FactoryProxy.get(FactoryProxy.java:54)
	at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38)
	at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62)
	at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:84)
	at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:254)
	at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46)
	at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1031)
	at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
	at com.google.inject.Scopes$1$1.get(Scopes.java:65)
	at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:40)
	at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38)
	at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62)
	at com.google.inject.internal.SingleMethodInjector.inject(SingleMethodInjector.java:83)
	at com.google.inject.internal.MembersInjectorImpl.injectMembers(MembersInjectorImpl.java:110)
	at com.google.inject.internal.MembersInjectorImpl$1.call(MembersInjectorImpl.java:75)
	at com.google.inject.internal.MembersInjectorImpl$1.call(MembersInjectorImpl.java:73)
	at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1024)
	at com.google.inject.internal.MembersInjectorImpl.injectAndNotify(MembersInjectorImpl.java:73)
	at com.google.inject.internal.Initializer$InjectableReference.get(Initializer.java:147)
	at com.google.inject.internal.Initializer.injectAll(Initializer.java:92)
	at com.google.inject.internal.InternalInjectorCreator.injectDynamically(InternalInjectorCreator.java:173)
	at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:109)
	at com.google.inject.Guice.createInjector(Guice.java:95)
	at com.google.inject.Guice.createInjector(Guice.java:72)
	at io.druid.guice.GuiceInjectors.makeStartupInjector(GuiceInjectors.java:57)
	at io.druid.indexer.HadoopDruidIndexerConfig.<clinit>(HadoopDruidIndexerConfig.java:95)
	at io.druid.indexer.DetermineHashedPartitionsJob$DetermineHashedPartitionsPartitioner.setConf(DetermineHashedPartitionsJob.java:396)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
	at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:678)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:747)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)

The Jackson version in 0.7.1.1 is 2.4.4, which does have the JsonFactory.requiresPropertyOrdering() method. I also checked the Druid Hadoop job configuration mapreduce.job.classpath.files; it lists the 2.4.4 Jackson version and no other version. What could be the cause?
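
One way to see which jar the class really comes from at runtime is to print its code source from a class run on the same classpath as the mapper (a minimal sketch; WhichJackson is just an illustrative name):

    import com.fasterxml.jackson.core.JsonFactory;

    // Prints the location of the jar that JsonFactory was actually loaded
    // from, to see whether Hadoop's bundled Jackson shadows Druid's 2.4.4.
    public class WhichJackson {
        public static void main(String[] args) {
            System.out.println(JsonFactory.class
                    .getProtectionDomain().getCodeSource().getLocation());
        }
    }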

thanks.
xulu

Xavier Léauté

May 11, 2015, 8:18:26 PM
to Lu Xuechao, druid...@googlegroups.com
Hi Xulu, what version of Hadoop are you running against, and what version of Jackson does your Hadoop distribution ship with?

Lu Xuechao

May 11, 2015, 10:09:46 PM
to druid...@googlegroups.com, lux...@gmail.com
I printed the classpath of the MapReduce job; it showed (unrelated jars omitted):

/apache/hadoop-2.4.0.2.1.2.0-402/share/hadoop/common/lib/jackson-core-2.2.3.jar
/apache/hadoop-2.4.0.2.1.2.0-402/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar
/hadoop/4/scratch/local/usercache/b_pulsar_coe/appcache/application_1426279437401_1370645/container_1426279437401_1370645_01_000137/jackson-core-2.4.4.jar
/hadoop/4/scratch/local/usercache/b_pulsar_coe/appcache/application_1426279437401_1370645/container_1426279437401_1370645_01_000137/jackson-mapper-asl-1.9.13.jar

The Hadoop jars came before Druid's jars, so the classes in jackson-core-2.2.3.jar were the ones loaded. I tried setting the property below in mapred-site.xml, but that made the MapReduce job fail with the exception below:

    <property>
        <name>mapreduce.job.user.classpath.first</name>
        <value>true</value>
    </property>

Application application_1426279437401_1372299 failed 2 times due to AM Container for appattempt_1426279437401_1372299_000002 exited with exitCode: 1 due to: Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:279)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
main : command provided 1
main : user is hadoop
main : requested yarn user is b_pulsar_coe
Container exited with a non-zero exit code 1
.Failing this attempt.. Failing the application.
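
For reference, the same property can apparently also be set per job through the jobProperties map of the Hadoop tuningConfig, instead of cluster-wide in mapred-site.xml (a minimal sketch assuming the 0.7.x spec format; the other tuningConfig fields are omitted):

    "tuningConfig": {
        "type": "hadoop",
        "jobProperties": {
            "mapreduce.job.user.classpath.first": "true"
        }
    }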

thanks.

Lu Xuechao

May 12, 2015, 5:03:59 AM
to druid...@googlegroups.com
Thanks for the help.

This issue was resolved by changing the Hadoop version in ~/.m2/repository/io/druid/druid/0.7.1.1/druid-0.7.1.1.pom to 2.4.0.
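
The edit looked roughly like this (a sketch only; the exact element depends on how that pom declares the Hadoop dependency):

    <!-- in druid-0.7.1.1.pom: bump hadoop-client to match the cluster -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.4.0</version>
    </dependency>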

Krzysztof Zarzycki

Jun 20, 2015, 8:54:33 AM
to druid...@googlegroups.com
Hi Lu, I have a similar problem on my cluster. If you could help me work around this issue, I'd be very grateful. Could you please describe exactly what you did to make it work?
1. You changed the version of hadoop-client from 2.3.0 to 2.4.0 in ~/.m2/repository/io/druid/druid/0.7.1.1/druid-0.7.1.1.pom
2. and then...?

Thank you! 
Krzysztof

Fangjin Yang

Jun 26, 2015, 4:13:01 PM
to druid...@googlegroups.com, k.zar...@gmail.com
Hi Krzysztof, do any of the suggestions mentioned in http://druid.io/docs/latest/operations/other-hadoop.html help?
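
For example, one option described there is to have the indexing task pull in a Hadoop client matching your cluster via hadoopDependencyCoordinates (a rough sketch; the empty sections are placeholders for your actual ingestion spec):

    {
        "type": "index_hadoop",
        "hadoopDependencyCoordinates": ["org.apache.hadoop:hadoop-client:2.4.0"],
        "spec": {
            "dataSchema": { },
            "ioConfig": { },
            "tuningConfig": { }
        }
    }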