I am experimenting with writing into a Cassandra table that has 25 columns, using spark-shell 2.0.1, SCC 2.0.0-M2-s_2.11, and Scala 2.11.8. My spark-shell gets stuck at saveToCassandra, and I do not see the job even being submitted to the cluster.
The last log line is:
{"level": "WARN ", "timestamp": "2017-02-02 17:46:26,987", "classname": "org.apache.spark.sql.SparkSession$Builder", "body": "Use an existing SparkSession, some configuration may not take effect."}
Below is the code:
import org.apache.spark.SparkConf
import org.apache.spark.sql.{Row, SparkSession}
import com.datastax.spark.connector.SomeColumns
val conf = new SparkConf(true)
val spark = SparkSession
.builder()
.appName("Spark SQL basic example")
.config("spark.some.config.option", "some-value")
.getOrCreate()
import spark.implicits._
val selectQuery = "select c1, c2... c25 from ams_uat.test_users_derived_1485959100"
val userDF = spark.sql(selectQuery)
case class UserAttr(c1: String, c2: String, ... c25: String)
import com.datastax.spark.connector._
val userDS = userDF.as[UserAttr]
userDS.rdd.saveToCassandra("ams", "users_attr_test")
Any help is highly appreciated!
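For comparison, the same write can also go through the connector's DataFrame/Dataset source rather than the RDD API. A minimal sketch, assuming the keyspace and table from the saveToCassandra call above and Cassandra column names matching c1..c25:

// Sketch only: write the Dataset via the Cassandra data source instead of the RDD API.
// Assumes the target table's columns are named like the Dataset columns, so mapping is by name.
userDS.write
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "ams", "table" -> "users_attr_test"))
  .mode("append")
  .save()

With this path the connector maps Dataset columns to Cassandra columns by name, so the intermediate case class is not strictly required; userDF.write would work the same way.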
Russell,
The problem is solved after running my code line by line. I am not sure what happened when running it as a block: spark-shell just got stuck, and the Spark UI did not show my job. Thanks for responding.
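In case it helps anyone hitting the same thing: one way to feed a multi-line block to spark-shell as a single unit is the Scala REPL's :paste mode, sketched below (this is a general REPL feature, not verified against this particular hang):

scala> :paste
// Entering paste mode (ctrl-D to finish)

...paste the whole block from the first message here...

// then press Ctrl-D:
// Exiting paste mode, now interpreting.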