Error when running Spark on Tachyon


江振兴

Feb 28, 2014, 4:52:19 AM
to tachyo...@googlegroups.com
I am following http://tachyon-project.org/Running-Spark-on-Tachyon.html:
  1. val s = sc.textFile("tachyon://localhost:19998/hosts")   // OK
  2. s.count()                                                // OK
  3. s.saveAsTextFile("tachyon://localhost:19998/Y")          // error
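For context, resolving tachyon:// URIs requires Hadoop to know Tachyon's FileSystem class. The guide configures this through a core-site.xml property; the shell equivalent would be roughly the following (a sketch of the same setting, not a line from the guide):

  // register Tachyon's Hadoop FileSystem so tachyon:// URIs resolve
  // (same effect as setting fs.tachyon.impl in core-site.xml)
  sc.hadoopConfiguration.set("fs.tachyon.impl", "tachyon.hadoop.TFS")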
Step 3 failed with the following errors:
java.lang.IncompatibleClassChangeError: Implementing class
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:171)
at org.apache.hadoop.mapred.SparkHadoopMapRedUtil$class.firstAvailableClass(SparkHadoopMapRedUtil.scala:48)
at org.apache.hadoop.mapred.SparkHadoopMapRedUtil$class.newJobContext(SparkHadoopMapRedUtil.scala:23)
at org.apache.hadoop.mapred.SparkHadoopWriter.newJobContext(SparkHadoopWriter.scala:40)
at org.apache.hadoop.mapred.SparkHadoopWriter.getJobContext(SparkHadoopWriter.scala:149)
at org.apache.hadoop.mapred.SparkHadoopWriter.preSetup(SparkHadoopWriter.scala:64)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:713)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:686)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:572)
at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:894)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:15)
at $iwC$$iwC$$iwC.<init>(<console>:20)
at $iwC$$iwC.<init>(<console>:22)
at $iwC.<init>(<console>:24)
at <init>(<console>:26)
at .<init>(<console>:30)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:772)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1040)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:609)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:640)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:604)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:788)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:833)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:745)
at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:593)
at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:600)
at org.apache.spark.repl.SparkILoop.loop(SparkILoop.scala:603)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:926)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:876)
at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:876)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:876)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:968)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)

江振兴

Feb 28, 2014, 4:53:44 AM
to tachyo...@googlegroups.com
BTW: my Spark version is spark-0.9.0-incubating-bin-cdh4.

On Friday, February 28, 2014 at 5:52:19 PM UTC+8, 江振兴 wrote:

Haoyuan Li

Mar 1, 2014, 2:46:16 PM
to 江振兴, tachyo...@googlegroups.com
Did you recompile Tachyon to use the same Hadoop version? By default, Tachyon compiles against Hadoop 1.0.4. An IncompatibleClassChangeError like this one usually means the Tachyon client was built against a different Hadoop major version than the one Spark is running: for example, JobContext is a class in Hadoop 1 but an interface in Hadoop 2, so code compiled against one cannot load against the other.

You can do this by running "mvn -Dhadoop.version=HadoopVersionYouRun clean package" to recompile the Tachyon code.
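For example, with spark-0.9.0-incubating-bin-cdh4, which bundles CDH4's MR1 Hadoop, the rebuild would be along the lines of "mvn -Dhadoop.version=2.0.0-mr1-cdh4.2.0 clean package" (2.0.0-mr1-cdh4.2.0 is only a guess at your CDH4 release; substitute the exact version your cluster runs). Afterwards, make sure the rebuilt Tachyon client jar is the one on Spark's classpath.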

Haoyuan

--
Haoyuan Li
Algorithms, Machines, People Lab, EECS, UC Berkeley

江振兴

Mar 3, 2014, 12:43:19 AM
to tachyo...@googlegroups.com, 江振兴
Thanks a lot. It works for me.


On Sunday, March 2, 2014 at 3:46:16 AM UTC+8, Haoyuan Li wrote:

Haoyuan Li

Mar 3, 2014, 3:52:26 PM
to 江振兴, tachyo...@googlegroups.com
Great! Thanks for letting us know.

Deep Pradhan

Mar 17, 2014, 2:13:34 AM
to tachyo...@googlegroups.com
Could anyone please tell me whether any supercomputer has Spark installed on it, or whether it is possible to install Spark on a supercomputer?