Jackson error while running example


Santhosh Swaminathan
Jun 19, 2015, 5:00:42 PM
to cubert...@googlegroups.com
While compiling wordcount.cmr, I get the following error:

Exception in thread "main" java.lang.NoSuchMethodError: org.codehaus.jackson.node.ObjectNode.has(Ljava/lang/String;)Z
        at com.linkedin.cubert.plan.physical.PhysicalParser$PhysicalListener.exitOutputCommand(PhysicalParser.java:448)
        at com.linkedin.cubert.antlr4.CubertPhysicalParser$OutputCommandContext.exitRule(CubertPhysicalParser.java:1666)
        at org.antlr.v4.runtime.tree.ParseTreeWalker.exitRule(ParseTreeWalker.java:71)
        at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:54)
        at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:52)
        at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:52)
        at org.antlr.v4.runtime.tree.ParseTreeWalker.walk(ParseTreeWalker.java:52)
        at com.linkedin.cubert.plan.physical.PhysicalParser.parsingTask(PhysicalParser.java:192)
        at com.linkedin.cubert.plan.physical.PhysicalParser.parseInputStream(PhysicalParser.java:161)
        at com.linkedin.cubert.plan.physical.PhysicalParser.parseProgram(PhysicalParser.java:156)
        at com.linkedin.cubert.ScriptExecutor.compile(ScriptExecutor.java:304)
        at com.linkedin.cubert.ScriptExecutor.main(ScriptExecutor.java:523)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:197)


Any help?

Thanks

Mani Parkhe
Jun 19, 2015, 7:11:37 PM
to cubert...@googlegroups.com, sant...@gmail.com
Hi Santhosh,

Which Hadoop version are you using? By default, the Cubert open-source project uses Hadoop 1.2.1. The Hadoop client pulls in this Jackson dependency as jackson-mapper-asl (version 1.8.8) at both compile and run time. You can verify this by running "./gradlew dependencies" at the root level.
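
For example, to see only the Jackson lines of that report (illustrative; the exact tree output varies by Gradle version):

    ./gradlew dependencies | grep -i jackson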

mani

Santhosh Swaminathan
Jun 22, 2015, 9:27:19 AM
to cubert...@googlegroups.com, sant...@gmail.com
Thanks Mani,
   
I am using 1.0.3 (MapR). Here is the (partial) dependency list:

org.apache.hadoop:hadoop-client:1.2.1
|    \--- org.apache.hadoop:hadoop-core:1.2.1
|         +--- xmlenc:xmlenc:0.52
|         +--- com.sun.jersey:jersey-core:1.8
|         +--- com.sun.jersey:jersey-json:1.8
|         |    +--- org.codehaus.jettison:jettison:1.1
|         |    |    \--- stax:stax-api:1.0.1
|         |    +--- com.sun.xml.bind:jaxb-impl:2.2.3-1
|         |    |    \--- javax.xml.bind:jaxb-api:2.2.2
|         |    |         +--- javax.xml.stream:stax-api:1.0-2
|         |    |         \--- javax.activation:activation:1.1
|         |    +--- org.codehaus.jackson:jackson-core-asl:1.7.1 -> 1.8.8
|         |    +--- org.codehaus.jackson:jackson-mapper-asl:1.7.1 -> 1.8.8
|         |    |    \--- org.codehaus.jackson:jackson-core-asl:1.8.8
|         |    +--- org.codehaus.jackson:jackson-jaxrs:1.7.1
|         |    |    +--- org.codehaus.jackson:jackson-core-asl:1.7.1 -> 1.8.8
|         |    |    \--- org.codehaus.jackson:jackson-mapper-asl:1.7.1 -> 1.8.8 (*)
|         |    +--- org.codehaus.jackson:jackson-xc:1.7.1
|         |    |    +--- org.codehaus.jackson:jackson-core-asl:1.7.1 -> 1.8.8
|         |    |    \--- org.codehaus.jackson:jackson-mapper-asl:1.7.1 -> 1.8.8 (*)
|         |    \--- com.sun.jersey:jersey-core:1.8
|         +--- com.sun.jersey:jersey-server:1.8


-Santhosh

Santhosh Swaminathan
Jun 22, 2015, 11:36:18 AM
to cubert...@googlegroups.com
I went ahead and downloaded Hadoop 1.2.1 (locally) and was able to compile wordcount.cmr. But when I submitted the job, I got the following error:

Analyzing job [count words]...
Executing jobs serially
Executing job [count words]....
Setting partitioner: com.linkedin.cubert.plan.physical.CubertPartitioner
15/06/22 11:31:10 INFO util.NativeCodeLoader: Loaded the native-hadoop library
15/06/22 11:31:10 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
15/06/22 11:31:10 INFO input.FileInputFormat: Total input paths to process : 1
15/06/22 11:31:10 INFO util.MapRedUtil: Total input paths to process : 1
15/06/22 11:31:10 WARN snappy.LoadSnappy: Snappy native library not loaded
15/06/22 11:31:10 WARN mapred.LocalJobRunner: LocalJobRunner does not support symlinking into current working dir.
15/06/22 11:31:11 INFO mapred.LocalJobRunner: Waiting for map tasks
15/06/22 11:31:11 INFO mapred.LocalJobRunner: Starting task: attempt_local517226951_0001_m_000000_0
Job: [count words], More information at: http://localhost:8080/
15/06/22 11:31:11 INFO util.ProcessTree: setsid exited with exit code 0
15/06/22 11:31:11 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@75c2b15c
15/06/22 11:31:11 INFO mapred.MapTask: Processing split: file:[path to release]/words.txt:0+27294 (class org.apache.hadoop.mapreduce.lib.input.FileSplit) [0]
15/06/22 11:31:11 INFO mapred.MapTask: io.sort.mb = 100
15/06/22 11:31:11 INFO mapred.MapTask: data buffer = 79691776/99614720
15/06/22 11:31:11 INFO mapred.MapTask: record buffer = 262144/327680
Mapper init  ----------------------------------
Executed operator chain for 1 block(s) in 254 ms
Mapper complete ----------------------------------
MemoryStats: #GC calls: 2 Total GC Time: 172 ms
15/06/22 11:31:11 INFO mapred.MapTask: Starting flush of map output
15/06/22 11:31:11 INFO mapred.MapTask: Finished spill 0
15/06/22 11:31:11 INFO mapred.Task: Task:attempt_local517226951_0001_m_000000_0 is done. And is in the process of commiting
15/06/22 11:31:11 INFO mapred.LocalJobRunner: 
15/06/22 11:31:11 INFO mapred.Task: Task 'attempt_local517226951_0001_m_000000_0' done.
15/06/22 11:31:11 INFO mapred.LocalJobRunner: Finishing task: attempt_local517226951_0001_m_000000_0
15/06/22 11:31:11 INFO mapred.LocalJobRunner: Map task executor complete.
15/06/22 11:31:11 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@7d5a3c12
15/06/22 11:31:11 INFO mapred.LocalJobRunner: 
15/06/22 11:31:11 INFO mapred.Merger: Merging 1 sorted segments
15/06/22 11:31:12 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 20639 bytes
15/06/22 11:31:12 INFO mapred.LocalJobRunner: 
Reducer init --------------------------------
15/06/22 11:31:12 WARN mapred.LocalJobRunner: job_local517226951_0001
java.lang.NoClassDefFoundError: com/google/common/base/Charsets
        at org.apache.pig.impl.util.StorageUtil.putField(StorageUtil.java:185)
        at org.apache.pig.impl.util.StorageUtil.putField(StorageUtil.java:116)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigTextOutputFormat$PigLineRecordWriter.write(PigTextOutputFormat.java:68)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigTextOutputFormat$PigLineRecordWriter.write(PigTextOutputFormat.java:44)
        at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:586)
        at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
        at com.linkedin.cubert.plan.physical.CubertReducer$ReduceContext.write(CubertReducer.java:144)
        at com.linkedin.cubert.io.text.TextBlockWriter.write(TextBlockWriter.java:49)
        at com.linkedin.cubert.plan.physical.CubertReducer.run(CubertReducer.java:84)
        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:649)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:418)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:398)
Caused by: java.lang.ClassNotFoundException: com.google.common.base.Charsets
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        ... 12 more
Exception in thread "main" java.lang.InterruptedException: Job count words failed!
        at com.linkedin.cubert.plan.physical.JobExecutor.run(JobExecutor.java:160)
        at com.linkedin.cubert.plan.physical.ExecutorService.executeJob(ExecutorService.java:253)
        at com.linkedin.cubert.plan.physical.ExecutorService.executeJobId(ExecutorService.java:219)
        at com.linkedin.cubert.plan.physical.ExecutorService.execute(ExecutorService.java:163)
        at com.linkedin.cubert.ScriptExecutor.execute(ScriptExecutor.java:385)
        at com.linkedin.cubert.ScriptExecutor.main(ScriptExecutor.java:575)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
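
The missing class, com.google.common.base.Charsets, comes from Google Guava, so no Guava jar is reaching the task classpath. One possible workaround when running under the LocalJobRunner (the path and version below are illustrative, not taken from the Cubert build) is to prepend a Guava jar to HADOOP_CLASSPATH before submitting:

    # illustrative path and version; use whichever Guava jar the build resolves
    export HADOOP_CLASSPATH=/path/to/guava-14.0.1.jar:$HADOOP_CLASSPATH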

Santhosh Swaminathan
Jun 22, 2015, 12:21:31 PM
to cubert...@googlegroups.com
My Pig version is 0.13.1-mapr-1410. This version is not available on Maven Central. Should I change the version in the Gradle build and also the repository (http://repository.pentaho.org/artifactory/repo/org/apache/pig/pig-withouthadoop/0.13.1-mapr-1410/)?
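
For reference, that kind of repository change in build.gradle might look like this (a sketch; the artifact coordinates are inferred from the repository path above, not confirmed):

    repositories {
        mavenCentral()
        maven { url 'http://repository.pentaho.org/artifactory/repo' }
    }

    dependencies {
        compile 'org.apache.pig:pig-withouthadoop:0.13.1-mapr-1410'
    }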

Santhosh Swaminathan
Jun 22, 2015, 4:43:59 PM
to cubert...@googlegroups.com
I changed the repository and I was able to run the wordcount job.

Santhosh