local class incompatible. serialVersionUID different.


Jintao

May 28, 2015, 10:28:50 PM
to spark-conn...@lists.datastax.com
Hi All,

I am using Spark-1.2.1 and spark-cassandra-connector-1.2.1.

My code is simple.

JavaRDD cassandraRDD = javaFunctions(sc).cassandraTable("test", "hello");
cassandraRDD.saveAsTextFile("hdfs://localhost:9000/user/jintao.guan/pixel/cassandraRDD");


Now I can load data from Cassandra. After that I want to write the RDD to HDFS.

The code works fine in local mode (--master local[4]), but it does not work on the local Spark standalone cluster (--master spark://localhost:7077).

It fails with this error:

Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, 10.8.1.140): java.io.InvalidClassException: com.datastax.spark.connector.rdd.partitioner.CassandraPartition; local class incompatible: stream classdesc serialVersionUID = 147531139326522345, local class serialVersionUID = 7519445919350035664
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:617)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:182)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)


My pom.xml looks like this:

<dependencies>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.2.1</version>
  </dependency>

  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.10</artifactId>
    <version>1.2.1</version>
  </dependency>

  <dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector-java_2.10</artifactId>
    <version>1.2.1</version>
  </dependency>
</dependencies>


Can someone help me with this?

Thank you.

Russell Spitzer

May 28, 2015, 10:41:08 PM
to spark-conn...@lists.datastax.com
This error means that the version of the class on some of your machines does not match the version on the others. 99% of the time this is caused by a version mismatch between the dependencies on the driver classpath and those on the executor classpath.
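For example, a minimal sketch (assuming a spark-shell session on the driver, and using the class name from the stack trace above) to see which jar and which serialVersionUID the driver side actually has, so it can be compared with the "local class serialVersionUID" reported by the failing executor:

import java.io.ObjectStreamClass

// Where does the driver load CassandraPartition from, and what is its serialVersionUID?
val cls = Class.forName("com.datastax.spark.connector.rdd.partitioner.CassandraPartition")
val src = Option(cls.getProtectionDomain.getCodeSource)   // may be null for some classloaders
println("loaded from: " + src.map(_.getLocation.toString).getOrElse("unknown"))
println("serialVersionUID: " + ObjectStreamClass.lookup(cls).getSerialVersionUID)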


Jintao Guan

May 29, 2015, 12:22:41 AM
to spark-conn...@lists.datastax.com
Hi Russell,

Thank you for your advice.
Problem resolved.

Philip K. Adetiloye

Jun 10, 2015, 2:57:44 AM
to spark-conn...@lists.datastax.com
Hi Jintao,
How did you resolve this problem?
I'm having the exact same issue here... my POM file is practically identical to yours.

Thanks,

Optimiti 2015

Sep 6, 2016, 8:45:54 AM
to DataStax Spark Connector for Apache Cassandra, jinta...@bloomreach.com
Can you please share the pom that you used to resolve the issue?

Jitu @Bangalore

Sep 7, 2016, 2:43:42 AM
to DataStax Spark Connector for Apache Cassandra, jinta...@bloomreach.com
Hi,
I am facing the same issue as well. What was the change that fixed it?

Thanks,

Russell Spitzer

Sep 7, 2016, 5:41:55 PM
to DataStax Spark Connector for Apache Cassandra, jinta...@bloomreach.com
The issue is most commonly caused by a version mismatch between the driver classpath and the executor classpath.

Posting your pom and launch command would be the fastest way to find any errors.


Optimiti 2015

Sep 23, 2016, 1:34:50 AM
to DataStax Spark Connector for Apache Cassandra, jinta...@bloomreach.com
pom.xml

Optimiti 2015

Sep 23, 2016, 1:44:43 AM
to DataStax Spark Connector for Apache Cassandra, jinta...@bloomreach.com

I am also facing the same issue. I have already shared my pom.

Jitu @Bangalore

Sep 23, 2016, 5:59:38 AM
to DataStax Spark Connector for Apache Cassandra, jinta...@bloomreach.com
Hi Russell,
Please find the pom file attached and let me know what changes I need to make to get it working. Thanks in advance.
pom.xml

Russell Spitzer

Sep 23, 2016, 12:40:03 PM
to DataStax Spark Connector for Apache Cassandra, jinta...@bloomreach.com
What version is the Spark cluster you are running against? Also, switch the connector from the milestone release to the latest one for your Spark version (1.6.2).

On Fri, Sep 23, 2016 at 2:59 AM Jitu @Bangalore <abj...@gmail.com> wrote:
Hi Russell,
Please find the Pom file attached and let me know what changes i need to make to get it working. Thanks in advance.


Jitu @Bangalore

Sep 26, 2016, 5:34:49 AM
to DataStax Spark Connector for Apache Cassandra, jinta...@bloomreach.com
Hi Russell,
We are using Spark 1.6.1 on the cluster.

Optimiti 2015

Sep 29, 2016, 6:25:16 AM
to DataStax Spark Connector for Apache Cassandra, jinta...@bloomreach.com
On Thursday, September 8, 2016 at 3:11:55 AM UTC+5:30, Russell Spitzer wrote:

Hi Russell,

I have tried updating the version, but I still get errors:
16/09/28 10:26:51 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 3.0 (TID 71, cluster-1-w-3.c.optimiti-1096.internal): java.io.InvalidClassException: com.datastax.spark.connector.rdd.partitioner.CassandraPartition; local class incompatible: stream classdesc serialVersionUID = 7247106480529291035, local class serialVersionUID = 147531139326522345
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:616)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1630)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

16/09/28 10:26:51 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 3.0 failed 4 times; aborting job
16/09/28 10:26:51 ERROR org.apache.spark.streaming.scheduler.JobScheduler: Error running job streaming job 1475058408000 ms.0
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 (TID 74, cluster-1-w-1.c.optimiti-1096.internal): java.io.InvalidClassException: com.datastax.spark.connector.rdd.partitioner.CassandraPartition; local class incompatible: stream classdesc serialVersionUID = 7247106480529291035, local class serialVersionUID = 147531139326522345
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:616)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1630)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1328)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.take(RDD.scala:1302)
at org.apache.spark.streaming.dstream.DStream$$anonfun$print$2$$anonfun$foreachFunc$5$1.apply(DStream.scala:768)
at org.apache.spark.streaming.dstream.DStream$$anonfun$print$2$$anonfun$foreachFunc$5$1.apply(DStream.scala:767)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50)
at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49)
at scala.util.Try$.apply(Try.scala:161)
at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.InvalidClassException: com.datastax.spark.connector.rdd.partitioner.CassandraPartition; local class incompatible: stream classdesc serialVersionUID = 7247106480529291035, local class serialVersionUID = 147531139326522345
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:616)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1630)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:194)
... 3 more
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 (TID 74, cluster-1-w-1.c.optimiti-1096.internal): java.io.InvalidClassException: com.datastax.spark.connector.rdd.partitioner.CassandraPartition; local class incompatible: stream classdesc serialVersionUID = 7247106480529291035, local class serialVersionUID = 147531139326522345
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:616)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1630)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Russell Spitzer

Sep 29, 2016, 11:32:37 AM
to DataStax Spark Connector for Apache Cassandra, jinta...@bloomreach.com
Let me try to clarify. The error is caused by a class sent over the network not being the same as the class loaded on the remote machine. I.e. a CassandraPartition is sent from the driver to the executor; the executor tries to deserialize it and fails because the class it has loaded locally does not match the one that was serialized.

This means that somehow the spark-cassandra-connector is being loaded with one version on the executors and a different version on the driver. You need to figure out what version is running on your executors. Has someone placed jars in a lib directory? Manually added jars to the classpath? Are you using DSE, which ships with a version of the connector prebuilt? Do your Scala versions match?

Check the command that you are using to launch your application: are you specifying a package version that doesn't match your jar's version?
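One rough way to check, assuming a live SparkContext `sc` (this is only a sketch; it asks each worker that gets a task which jar it loads the connector's CassandraPartition class from, so the answer can be compared with the driver's):

val executorJars = sc.parallelize(1 to 100, 100).mapPartitions { _ =>
  // Resolve the connector class on the executor; this throws ClassNotFoundException
  // if the connector is not on the executor classpath at all.
  val cls = Class.forName("com.datastax.spark.connector.rdd.partitioner.CassandraPartition")
  val jar = Option(cls.getProtectionDomain.getCodeSource).map(_.getLocation.toString).getOrElse("unknown")
  Iterator((java.net.InetAddress.getLocalHost.getHostName, jar))
}.distinct().collect()

executorJars.foreach { case (host, jar) => println(host + " -> " + jar) }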

Charbel Kaed

Sep 30, 2016, 1:48:37 PM
to DataStax Spark Connector for Apache Cassandra, jinta...@bloomreach.com
Hello,

I am having a similar error with Spark SQL, depending on the number of partitions. I have a cluster with 2 slaves, and the error appears when the number of partitions is higher than 2.
I'm not sure whether the number of partitions depends on the number of slaves; I'm still trying to figure out the best tuning.

Map<String, String> options = new HashMap<>();
options.put("driver", MSSQL_JDBC_DRIVER);
options.put("url", getMSSQLConnectionURL());
//options.put("dbtable", "timeseries");
options.put("dbtable", "(" + Sql + ") as tble");
options.put("partitionColumn", Id);
options.put("lowerBound", "1");
options.put("upperBound", "499999");
options.put("numPartitions", "3");


Will Slade

Oct 20, 2016, 6:05:01 PM
to DataStax Spark Connector for Apache Cassandra, jinta...@bloomreach.com
We are getting this exact same error when submitting our job from a local Windows environment via Eclipse Scala IDE. I have attached the POM - you can see I have been trying some different versions of the connector.

The SparkConf is set as follows:

val myJar = getClass.getProtectionDomain.getCodeSource.getLocation.getPath +
  "spark-cassandra-connector_2.10-1.6.2.jar"

val conf = new SparkConf()
  .setMaster("spark://<redacted for google post>:7077")
  .setAppName("My App")
  .setJars(Array(myJar))

The connector .jar file was downloaded pre-compiled from the Maven repository.
We are attempting to run this in standalone mode on Spark 1.6.0, Scala 2.10.6, Cassandra 3.5.

A few noteworthy comments:
1. When compiling our code into an executable .jar, moving it to the server, and running via spark-submit with the connector in the extraClassPath, everything works.
2. When executing our code directly from the IDE against a local Spark 1.6.0 and not touching the cluster, everything works.
3. When adding the connector to the cluster server "classpath" (not extraClassPath - deprecated but functioning), and not pushing the connector with our code in the SparkConf, we still get the SerialVersionUID error.

Could the fact that we're using a pre-compiled version of the connector be an issue? Do we need to download the source and compile it ourselves to make sure everything is uniform with the submitting environment?

pom.xml

Russell Spitzer

Oct 20, 2016, 6:18:42 PM
to DataStax Spark Connector for Apache Cassandra, jinta...@bloomreach.com
I strongly recommend against using setJars; you should use spark-submit. Your key issues are coming from the executor classpaths not being set up correctly. In particular, your code above will not ship any of the connector's dependencies, since you are only adding the connector jar itself.

Use Spark Packages for your dependency:
http://spark-packages.org/package/datastax/spark-cassandra-connector

Then use spark-submit with the --packages arg from the above website. This is probably the safest way to get all of the classpaths set up correctly.
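For example, something along these lines (the package coordinate shown here is only illustrative and depends on your Spark and Scala versions; the class and jar names are placeholders):

spark-submit --master spark://<master-host>:7077 \
  --packages datastax:spark-cassandra-connector:1.6.2-s_2.10 \
  --class com.example.MyApp \
  my-app.jar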

Will Slade

Nov 10, 2016, 6:33:44 PM
to spark-conn...@lists.datastax.com
Thanks... we have this resolved, FYI. I know this is a bit of a stab in the dark and doesn't really pertain to Cassandra or DataStax, but I'm having an issue trying to write a DataFrame to Oracle. I can't seem to find information on this particular error... can anyone help?

Writing random forest prediction and probability to Oracle

Code:
var df = predictions.select("student_pidm", "assessment_code", "activity_date",
  "assess_pass_rate", "student_pass_rate", "otp_probability", "pre_assess_result",
  "assmnt_request_id", "prediction", "probability".toString())

val prop = new java.util.Properties
prop.setProperty("user", "redacted")
prop.setProperty("password", "redacted")

df.write.mode("Append").jdbc("jdbc:oracle:thin:redactedun/redactedpw@//redactedhost:1521/redactedservicename", "schema.tbl_assess_pr_processed", prop)

Error:

Exception in thread "main" java.lang.IllegalArgumentException: Can't get JDBC type for vector .......(on the write line)

Thanks!


Russell Spitzer

Nov 10, 2016, 6:56:36 PM
to spark-conn...@lists.datastax.com
The thing about Spark is that everything is lazy, so although it says the "write" is the issue, the problem can actually be in any of the transformations prior to the write as well.

One thing I see right off the bat: "probability".toString() is essentially a no-op; it just calls toString on the column name string. If you mean to convert this column to a string type, you need to use the conversion functions.

