|
I want to run a computation on each partition, and since the computation is independent across partitions I would like it to execute in parallel. So I am wondering whether foreachPartition or foreachPartitionAsync can execute across Spark worker machines in parallel. It also sounds like foreachPartition is blocking while foreachPartitionAsync is non-blocking; it would be great if someone could explain where in the Spark execution path this makes a difference, and why. javaFunctions(sc).cassandraTable().foreachPartition() Thanks a lot!
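For context, both calls run one task per Spark partition, in parallel across the executors; the difference is only on the driver side, where the async variant returns a future instead of blocking until the job finishes. A minimal sketch, assuming a live JavaSparkContext sc, the connector's Java API on the classpath, and a hypothetical test.hello table:

```java
import org.apache.spark.api.java.JavaFutureAction;
import com.datastax.spark.connector.japi.CassandraRow;
import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;

// Blocking: the driver submits one task per Spark partition; the tasks
// still run in parallel on the workers, but this call only returns once
// every partition has been processed.
javaFunctions(sc).cassandraTable("test", "hello")
    .foreachPartition(rows -> {
        while (rows.hasNext()) {
            CassandraRow row = rows.next();
            // per-partition work here
        }
    });

// Non-blocking: identical parallel execution on the workers, but the
// driver gets a JavaFutureAction back immediately and can do other work
// (e.g. submit another job) before waiting on it.
JavaFutureAction<Void> pending = javaFunctions(sc).cassandraTable("test", "hello")
    .foreachPartitionAsync(rows -> {
        while (rows.hasNext()) { rows.next(); }
    });
pending.get(); // block here only when completion actually matters
```

So either way the per-partition work is parallelized by the cluster; the async form just frees the driver thread in the meantime.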
|
javaFunctions(sc).cassandraTable().foreachPartition() — is this a Cassandra partition or an RDD partition? I read somewhere that multiple Cassandra partitions can be mapped to one RDD partition. Is that true?
Spark partition. Token ranges from Cassandra are each mapped to a single Spark partition.
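As a rough illustration of the mapping: each node's token range is normally split into several sub-ranges, so there are usually more Spark partitions than Cassandra nodes, and the granularity is tunable. A sketch, assuming a recent connector version where the knob is spark.cassandra.input.split.size_in_mb (check the docs for your connector version; the host and table names here are illustrative):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;

SparkConf conf = new SparkConf()
    .setAppName("split-count-check")
    .set("spark.cassandra.connection.host", "127.0.0.1")
    // Approximate amount of Cassandra data per Spark partition; smaller
    // values produce more (finer-grained) Spark partitions.
    .set("spark.cassandra.input.split.size_in_mb", "64");
JavaSparkContext sc = new JavaSparkContext(conf);

// One Spark task per split; this prints how many splits the connector made.
System.out.println(javaFunctions(sc).cassandraTable("test", "hello").getNumPartitions());
```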
--
You received this message because you are subscribed to the Google Groups "DataStax Spark Connector for Apache Cassandra" group.
To unsubscribe from this group and stop receiving emails from it, send an email to spark-connector-...@lists.datastax.com.
|
Hi Russell, Is the entire token range of one Cassandra node mapped to a single Spark partition? That way the number of Cassandra nodes would equal the number of Spark partitions. Or is it possible that the token range of a single Cassandra node is broken down into multiple sub-ranges, with each sub-range mapped to a single Spark partition? Thanks!
|
Hi Russell, Thanks for this video. It clarifies a lot of my questions, but how do I read all the Cassandra rows that belong to a particular Cassandra partition? And I do want to parallelize this process across Cassandra partitions. kant
|
Hi Russell, I tried spanBy, but there is a strange error that happens no matter which way I try, like the one described here for the Java solution:

java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD

JavaPairRDD<ByteBuffer, Iterable<CassandraRow>> cassandraRowsRDD =
    javaFunctions(sc).cassandraTable("test", "hello")
        .select("col1", "col2", "col3")
        .spanBy(new Function<CassandraRow, ByteBuffer>() {
            @Override
            public ByteBuffer call(CassandraRow v1) {
                return v1.getBytes("rowkey");
            }
        }, ByteBuffer.class);

And then this is where the problem occurs:

List<Tuple2<ByteBuffer, Iterable<CassandraRow>>> listOftuples =
    cassandraRowsRDD.collect(); // ERROR OCCURS HERE
Tuple2<ByteBuffer, Iterable<CassandraRow>> tuple = listOftuples.iterator().next();
ByteBuffer partitionKey = tuple._1();
for (CassandraRow cassandraRow : tuple._2()) {
    System.out.println(cassandraRow.getLong("col1"));
}

So I tried declaring the result as an Iterable instead, and got the same error:

Iterable<Tuple2<ByteBuffer, Iterable<CassandraRow>>> listOftuples =
    cassandraRowsRDD.collect(); // ERROR OCCURS HERE
Tuple2<ByteBuffer, Iterable<CassandraRow>> tuple = listOftuples.iterator().next();
ByteBuffer partitionKey = tuple._1();
for (CassandraRow cassandraRow : tuple._2()) {
    System.out.println(cassandraRow.getLong("col1"));
}

I have also tried cassandraRowsRDD.collect().forEach() and cassandraRowsRDD.stream().forEachPartition(), and the same exact error occurs. How can I fix this? Thanks, kant
|
I changed everything to byte[] arrays so there are no more ByteBuffers in the code, but the same exact error persists. Still trying to debug further.
|
Hi Russell, Thanks for the effort. I did look into these and I am still scratching my head. I am running everything locally in standalone mode, so my Spark cluster is just running on localhost.

Scala code runner version 2.11.8 (when I run scala -version, or even ./spark-shell)

compile group: 'org.apache.spark', name: 'spark-core_2.11', version: '2.0.0'
compile group: 'org.apache.spark', name: 'spark-streaming_2.11', version: '2.0.0'
compile group: 'org.apache.spark', name: 'spark-sql_2.11', version: '2.0.0'
compile group: 'com.datastax.spark', name: 'spark-cassandra-connector_2.11', version: '2.0.0-M3'

So I don't see anything wrong with these versions. 2) One of the links says you should mark dependencies "provided". I use Java and Gradle, so I am not sure how to do that. 3) I am bundling everything into one jar, and so far that has worked out well except for this error.
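On point 2: Gradle has no built-in "provided" scope; a common workaround is the compileOnly configuration (Gradle 2.12+), which compiles against Spark without bundling it into the fat jar, so the classes loaded at runtime come from the cluster's own Spark distribution and the serialization/classloader mismatch goes away. A sketch of what the dependency block could look like (versions copied from the post above; treat this as an illustration, not a verified build file):

```groovy
dependencies {
    // Supplied by the Spark installation at runtime; compile against it
    // but keep it out of the application jar.
    compileOnly group: 'org.apache.spark', name: 'spark-core_2.11', version: '2.0.0'
    compileOnly group: 'org.apache.spark', name: 'spark-streaming_2.11', version: '2.0.0'
    compileOnly group: 'org.apache.spark', name: 'spark-sql_2.11', version: '2.0.0'

    // The connector is usually NOT on the cluster, so it stays bundled.
    compile group: 'com.datastax.spark', name: 'spark-cassandra-connector_2.11', version: '2.0.0-M3'
}
```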
|
I am not yet using spark-submit because there are a bunch of other projects that follow the same pattern as below:

SparkConf sparkConf = config.buildSparkConfig();
sparkConf.setJars(JavaSparkContext.jarOfClass(SparkDriver.class));
JavaStreamingContext ssc = new JavaStreamingContext(sparkConf,
    new Duration(config.getSparkStremingBatchInterval()));
ssc.sparkContext().setLogLevel("ERROR");
Receiver receiver = new Receiver(config);
JavaReceiverInputDStream<String> jsonMessagesDStream = ssc.receiverStream(receiver);
jsonMessagesDStream.count();
ssc.start();
ssc.awaitTermination();

I run gradle clean build, which builds one jar, and then I just run that jar. For the spark-cassandra-connector project I followed the same pattern, and I was able to read a sample row and do a count, which returned 1M rows (which is accurate). So this approach is indeed working, except when I use spanBy and want to print only one Cassandra partition (which has about 40 rows), and that is where I get this error:
java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD |
If it isn't working for you with your code as is perhaps you should try it the supported way instead?
|
Sure, but I wouldn't really say it is not working, because it did indeed work in every case besides this one. Since I am new to Spark I don't really know how spark-submit works; there is some learning I have to do, which I am looking at right now. But the bigger question for me is not the approach, since it worked in almost every other case, but rather why it isn't working for this one case. Anyway, looking into spark-submit.
|
This is it, I think:

JavaPairRDD<byte[], Iterable<CassandraRow>> cassandraRowsRDD =
    javaFunctions(sc).cassandraTable("test", "hello")
        .select("rowkey", "col1", "col2", "col3")
        .spanBy(new Function<CassandraRow, byte[]>() {
            @Override
            public byte[] call(CassandraRow v1) {
                return v1.getBytes("rowkey");
            }
        }, byte[].class);

Iterable<Tuple2<byte[], Iterable<CassandraRow>>> listOftuples = cassandraRowsRDD.collect();
Tuple2<byte[], Iterable<CassandraRow>> tuple = listOftuples.iterator().next();
byte[] partitionKey = tuple._1();
for (CassandraRow cassandraRow : tuple._2()) {
    System.out.println("************START************");
    System.out.println(new String(partitionKey));
    System.out.println("************END************");
}

I thought I was doing select col1, col2, col3 from hello where rowkey = 'oxab', but I clearly wasn't. Now I get the following error. Am I not just printing one Cassandra partition with the code above? I have 1M rows in my Cassandra node.

16/10/09 22:12:07 ERROR TaskSchedulerImpl: Lost executor 0 on 192.168.1.182: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
16/10/09 22:12:14 ERROR TaskSchedulerImpl: Lost executor 1 on 192.168.1.182: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
16/10/09 22:12:23 ERROR TaskSchedulerImpl: Lost executor 2 on 192.168.1.182: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
16/10/09 22:12:31 ERROR TaskSchedulerImpl: Lost executor 3 on 192.168.1.182: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.
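For what it's worth, if the goal is only the rows of one known Cassandra partition, the connector can push the partition-key restriction down to Cassandra rather than collect()-ing and spanning the whole table on the driver. A sketch, assuming the where clause pushdown on the connector's Java RDD and the same hypothetical test.hello table with a text rowkey:

```java
import com.datastax.spark.connector.japi.CassandraRow;
import org.apache.spark.api.java.JavaRDD;
import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;

// Equivalent in spirit to: SELECT col1, col2, col3 FROM test.hello WHERE rowkey = 'oxab'
// Only the matching partition is read from Cassandra, so collect() stays small.
JavaRDD<CassandraRow> onePartition =
    javaFunctions(sc).cassandraTable("test", "hello")
        .select("col1", "col2", "col3")
        .where("rowkey = ?", "oxab");

for (CassandraRow row : onePartition.collect()) {
    System.out.println(row.getLong("col1"));
}
```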
|
Thanks a ton for that clarification! So I dropped my entire keyspace, which had 1M rows. I am now reading from only one table, and that table has 10 rows. Still, the error persists:

Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 23, 192.168.1.182): ExecutorLostFailure (executor 3 exited caused by one of the running tasks) Reason: Remote RPC client disassociated. Likely due to containers exceeding thresholds, or network issues. Check driver logs for WARN messages.

Checking the executor logs, they say the following:

ERROR CoarseGrainedExecutorBackend: Unable to create executor due to Can't assign requested address: Service 'org.apache.spark.network.netty.NettyBlockTransferService' failed after 16 retries! Consider explicitly setting the appropriate port for the service 'org.apache.spark.network.netty.NettyBlockTransferService' (for example spark.ui.port for SparkUI) to an available port or increasing spark.port.maxRetries.
java.net.BindException: Can't assign requested address: Service 'org.apache.spark.network.netty.NettyBlockTransferService' failed after 16 retries! Consider explicitly setting the appropriate port for the service 'org.apache.spark.network.netty.NettyBlockTransferService' (for example spark.ui.port for SparkUI) to an available port or increasing spark.port.maxRetries

Still debugging further.
|
Based on a suggestion found on Google for the exception in the executor logs:
java.net.BindException: Can't assign requested address: Service 'org.apache.spark.network.netty.NettyBlockTransferService' failed after 16 retries! Consider explicitly setting the appropriate port for the service 'org.apache.spark.network.netty.NettyBlockTransferService' (for example spark.ui.port for SparkUI) to an available port or increasing spark.port.maxRetries |
I went to spark_home/bin/load-spark-env.sh, added the line export SPARK_LOCAL_IP="127.0.0.1", and restarted the cluster. Now I am back to square one: I get the original error

Caused by: java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD

But my code is the same as below:
JavaPairRDD<byte[], Iterable<CassandraRow>> cassandraRowsRDD =
    javaFunctions(sc).cassandraTable("test", "hello")
        .select("rowkey", "col1", "col2", "col3")
        .spanBy(new Function<CassandraRow, byte[]>() {
            @Override
            public byte[] call(CassandraRow v1) {
                return v1.getBytes("rowkey");
            }
        }, byte[].class);

Iterable<Tuple2<byte[], Iterable<CassandraRow>>> listOftuples = cassandraRowsRDD.collect();
Tuple2<byte[], Iterable<CassandraRow>> tuple = listOftuples.iterator().next();
byte[] partitionKey = tuple._1();
for (CassandraRow cassandraRow : tuple._2()) {
    System.out.println("************START************");
    System.out.println(new String(partitionKey));
    System.out.println("************END************");
}
@Russell, you advised not to use M2, but I am using M3. Should I change it to the one below?

compile group: 'com.datastax.spark', name: 'spark-cassandra-connector_2.10', version: '1.6.2'

Thanks!
DO NOT CHANGE TO M2. Like I said, that is only if you are using the "Packages" repository:
https://spark-packages.org/package/datastax/spark-cassandra-connector
The maven artifact M3 is the correct one. There was an error publishing the M2 artifact to maven.
The Spark local IP is yet another thing spark-submit would be taking care of.
Have you tried my spark shell examples yet? Yes, I did this morning and it works fine. It works even if I change all my Gradle dependencies to compile, and it works when I follow the sample you gave me, although I am highly skeptical about whether my Gradle configuration is right, so I am attaching it here; please let me know which looks more correct. And finally, if I don't use spark-submit, the way I used to run things in the past, it doesn't work, and I don't understand where the problem is: it fails only for this one case, since even a simple count on Cassandra without the spark-submit approach worked fine.
Or a mismatch: https://issues.apache.org/jira/browse/SPARK-9219