My YARN version is hadoop-2.4.0.x, Spark is spark-1.5.1-bin-hadoop2.4, and the connector is spark-cassandra-connector_2.10-1.5.0-M2. I executed the following command:
bin/spark-shell --driver-class-path $(echo lib/*.jar | sed 's/ /:/g') \
  --master yarn-client --deploy-mode client \
  --conf spark.cassandra.connection.host=192.21.0.209 \
  --conf spark.cassandra.auth.username=username \
  --conf spark.cassandra.auth.password=password \
  --conf spark.sql.dialect=sql \
  --jars lib/guava-16.0.jar,spark-cassandra-connector_2.10-1.5.0-M2.jar,lib/cassandra-driver-core-2.2.0-rc3.jar
After the shell started, I entered the following Scala at the prompt:
import org.apache.spark.sql.cassandra.CassandraSQLContext
import org.apache.spark.sql.{DataFrame, SaveMode}
import org.apache.spark.{Logging, SparkConf, SparkContext}
import org.joda.time.{DateTime, Days, LocalDate}
val cc = new CassandraSQLContext(sc)
val rdd: DataFrame = cc.sql("select user_id,tag_models,dmp_province," +
"zp_gender,zp_age,zp_edu,stg_stage,zp_income,type " +
"from user_center.users_test")
I got the classic error:
Caused by: java.lang.NoSuchMethodError: com.google.common.util.concurrent.Futures.withFallback(Lcom/google/common/util/concurrent/ListenableFuture;Lcom/google/common/util/concurrent/FutureFallback;Ljava/util/concurrent/Executor;)Lcom/google/common/util/concurrent/ListenableFuture;
After searching for this error on Google and Stack Overflow, I learned that it is caused by a conflict between Guava versions: Hadoop 2.4 ships guava-11.0.2, while spark-cassandra-connector_2.10-1.5.0-M2 needs guava-16.0.1 (the Futures.withFallback overload in the stack trace does not exist in Guava 11, so the error means Hadoop's older copy is winning on the classpath).
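A quick way to confirm which copy actually wins at runtime is to print, from the same shell, where the conflicting class was loaded from:

// Prints the jar that the Guava Futures class was actually loaded from,
// which tells you whether Hadoop's guava-11.0.2 or your guava-16.0.jar won.
import com.google.common.util.concurrent.Futures
println(classOf[Futures].getProtectionDomain.getCodeSource.getLocation)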
How can I resolve this kind of error? Any advice would be appreciated!
Can you share it with everybody, please?
I wrote a blog post describing the details: http://ben-tech.blogspot.com/2016/04/how-to-resolve-spark-cassandra.html
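For anyone who cannot reach the link: one commonly suggested fix for this Guava clash (a sketch, which may not match the blog's exact steps) is to force the newer Guava ahead of Hadoop's copy on both the driver and the executors, e.g.

bin/spark-shell --master yarn-client \
  --conf spark.driver.extraClassPath=lib/guava-16.0.jar \
  --conf spark.executor.extraClassPath=guava-16.0.jar \
  --jars lib/guava-16.0.jar,spark-cassandra-connector_2.10-1.5.0-M2.jar,lib/cassandra-driver-core-2.2.0-rc3.jar

Here spark.executor.extraClassPath uses a bare file name because jars shipped via --jars land in the YARN container's working directory; adjust the paths to your layout.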
Does anyone know how to handle this when using a Jupyter notebook instead of spark-submit?
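One possibility (a sketch assuming a Python kernel with PySpark; the host and jar paths simply mirror the ones above) is to set the submit arguments in the notebook before the SparkContext is created:

import os

# Must be set before the JVM is launched; the trailing "pyspark-shell" token is required.
os.environ["PYSPARK_SUBMIT_ARGS"] = (
    "--jars lib/guava-16.0.jar,spark-cassandra-connector_2.10-1.5.0-M2.jar,lib/cassandra-driver-core-2.2.0-rc3.jar "
    "--conf spark.cassandra.connection.host=192.21.0.209 "
    "pyspark-shell"
)

from pyspark import SparkContext
sc = SparkContext(master="yarn-client", appName="cassandra-guava-test")  # hypothetical app name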