Getting Error While Running Scoobi Job on Hadoop 2


Prashant Bhardwaj

Dec 10, 2014, 5:00:32 AM12/10/14
to scoobi...@googlegroups.com
I'm getting the following error while running the Word Count example on Hadoop 2. The error appears after the MapReduce job has finished, and no output is written to the output directory.


[INFO] Sink - Output path: /user/dvasthimal/sample_output
[INFO] Source - Input path: /user/dvasthimal/sample.txt (658.76 KiB)
[INFO] HadoopMode - =======================================================================
[INFO] HadoopMode - ===== START OF SCOOBI JOB 'WordCount$-1210-024004--1071350455' ========
[INFO] HadoopMode - =======================================================================

[INFO] HadoopMode - ===== START OF MAP REDUCE JOB 1 of 1 (mscr id = 1) ======

14/12/10 02:40:16 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
[INFO] MapReduceJob - Total input size: 658.76 KiB
[INFO] MapReduceJob - Number of reducers: 1
[INFO] RMProxy - Connecting to ResourceManager at resourcemanager-364987.XXXXXX.XXXXX
[INFO] FileInputFormat - Total input paths to process : 1
[INFO] JobSubmitter - number of splits:1
[INFO] JobSubmitter - Submitting tokens for job: job_1414108019550_0024
[INFO] Job - The url to track the job: http://resourcemanager-364987.XXXXX.XXXXX
[INFO] MapReduceJob - MapReduce job 'job_1414108019550_0024' submitted. Please see http://resourcemanager-364987.XXXXX.XXXXX for more info.
[INFO] MapReduceJob - Map 100%    Reduce   0%
[INFO] MapReduceJob - Map 100%    Reduce 100%
[INFO] HadoopMode - Map reduce job sinks:  Vector(OrcSink: /user/dvasthimal/sample_output)
[INFO] HadoopMode - ===== END OF MAP REDUCE JOB 1 of 1 (mscr id = 1, Scoobi job = WordCount$-1210-024004--1071350455) ======

Exception in thread "main" java.lang.NoSuchMethodError: com.nicta.scoobi.impl.util.Compatibility$.getScheme(Lorg/apache/hadoop/fs/FileSystem;)Ljava/lang/String;
        at com.nicta.scoobi.impl.io.Files$$anonfun$moveTo$2.apply(Files.scala:56)
        at com.nicta.scoobi.impl.io.Files$$anonfun$moveTo$2.apply(Files.scala:48)
        at com.nicta.scoobi.impl.exec.HadoopMode$$anonfun$setJobSuccess$1.apply(HadoopMode.scala:113)
        at com.nicta.scoobi.impl.exec.HadoopMode$$anonfun$setJobSuccess$1.apply(HadoopMode.scala:111)
        at scala.collection.Iterator$class.foreach(Iterator.scala:743)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1195)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at com.nicta.scoobi.impl.exec.HadoopMode.setJobSuccess(HadoopMode.scala:111)
        at com.nicta.scoobi.impl.exec.HadoopMode.com$nicta$scoobi$impl$exec$HadoopMode$$executeMscrs(HadoopMode.scala:96)
        at com.nicta.scoobi.impl.exec.HadoopMode$$anonfun$executeNode$1.apply(HadoopMode.scala:74)
        at com.nicta.scoobi.impl.exec.HadoopMode$$anonfun$executeNode$1.apply(HadoopMode.scala:67)
        at org.kiama.attribution.AttributionCore$CachedAttribute.apply(AttributionCore.scala:63)
        at scalaz.syntax.IdOps.$bar$greater(IdOps.scala:28)
        at com.nicta.scoobi.impl.exec.HadoopMode.execute(HadoopMode.scala:54)
        at com.nicta.scoobi.impl.exec.HadoopMode.execute(HadoopMode.scala:50)
        at com.nicta.scoobi.impl.Persister.persist(Persister.scala:44)
        at com.nicta.scoobi.impl.ScoobiConfigurationImpl.persist(ScoobiConfigurationImpl.scala:390)
        at com.nicta.scoobi.application.Persist$class.persist(Persist.scala:33)
        at com.ebay.scoobi.example.WordCount$.persist(WordCount.scala:6)
        at com.nicta.scoobi.application.Persist$PersistableList.persist(Persist.scala:151)
        at com.ebay.scoobi.example.WordCount$.run(WordCount.scala:15)
        at com.nicta.scoobi.application.ScoobiApp$$anonfun$main$1.apply$mcV$sp(ScoobiApp.scala:81)
        at com.nicta.scoobi.application.ScoobiApp$$anonfun$main$1.apply(ScoobiApp.scala:76)
        at com.nicta.scoobi.application.ScoobiApp$$anonfun$main$1.apply(ScoobiApp.scala:76)
        at com.nicta.scoobi.application.Hadoop$class.runOnCluster(Hadoop.scala:115)
        at com.ebay.scoobi.example.WordCount$.runOnCluster(WordCount.scala:6)
        at com.nicta.scoobi.application.Hadoop$class.executeOnCluster(Hadoop.scala:69)
        at com.ebay.scoobi.example.WordCount$.executeOnCluster(WordCount.scala:6)
        at com.nicta.scoobi.application.Hadoop$$anonfun$onCluster$1.apply(Hadoop.scala:55)
        at com.nicta.scoobi.application.InMemoryHadoop$class.withTimer(InMemory.scala:71)
        at com.ebay.scoobi.example.WordCount$.withTimer(WordCount.scala:6)
        at com.nicta.scoobi.application.InMemoryHadoop$class.showTime(InMemory.scala:79)
        at com.ebay.scoobi.example.WordCount$.showTime(WordCount.scala:6)
        at com.nicta.scoobi.application.Hadoop$class.onCluster(Hadoop.scala:55)
        at com.ebay.scoobi.example.WordCount$.onCluster(WordCount.scala:6)
        at com.nicta.scoobi.application.Hadoop$class.onHadoop(Hadoop.scala:61)
        at com.ebay.scoobi.example.WordCount$.onHadoop(WordCount.scala:6)
        at com.nicta.scoobi.application.ScoobiApp$class.main(ScoobiApp.scala:76)
        at com.ebay.scoobi.example.WordCount$.main(WordCount.scala:6)
        at com.ebay.scoobi.example.WordCount.main(WordCount.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
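
A NoSuchMethodError thrown at runtime (rather than a compile error) usually means a binary mismatch: the Scoobi jar on the cluster classpath was compiled against a different Hadoop major version than the one it is running on, so Compatibility$.getScheme resolves to a method signature that doesn't exist there. For reference, this is roughly the shape of the sbt dependency involved (a sketch only — coordinates and version taken from the Scoobi docs, not copied from my actual build file):

```scala
// build.sbt (sketch): the scoobi artifact must match the cluster's Hadoop
// major version, or Compatibility's reflective shims fail with
// NoSuchMethodError at job-completion time.
libraryDependencies += "com.nicta" %% "scoobi" % "0.9.0"

resolvers += "sonatype releases" at
  "https://oss.sonatype.org/content/repositories/releases"
```

If anyone has run Scoobi against Hadoop 2 successfully, which scoobi version (and Hadoop client dependency, if any) did you pin?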



Thanks
- Prashant