Hi,
I have a Spark standalone cluster on EC2 and a jar with code I'd like
to use in a REPL session. I log into the master, run spark-shell, and
then, since I've downloaded the jar locally onto the master, call:
scala> sc.addJar("file:///...jar")
13/01/09 22:05:33 INFO spark.SparkContext: Added JAR
file:///home/hadoop/....jar at
http://10.70.7.20:55974/jars/...jar with
timestamp 1357769133587
But now I cannot import the code from the jar:
scala> import com.foo._
<console>:10: error: object foo is not a member of package com
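My best guess is that addJar only distributes the jar to the workers
for use in tasks and never touches the shell's own classloader, which
would explain the failed import, though I may be misreading it. For
reference, the jar just contains ordinary Scala classes along these
lines (com.foo and Bar are stand-ins for the real names):

package com.foo

// Plain Scala with no Spark dependencies; Bar is a placeholder name.
class Bar(val n: Int) {
  def doubled: Int = n * 2
}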
Instead, I've put my application's jar in spark/lib_managed/jars. That
seems somewhat hacky, but then spark-shell can load it (transcript
below). Am I missing a better way?
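For what it's worth, after copying the jar there and restarting
spark-shell, a quick sanity check does work (again, com.foo and Bar
stand in for my real names):

scala> import com.foo._
import com.foo._

scala> new Bar(21).doubled
res0: Int = 42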
- Stephen