Consistency Level Ignored with CassandraConnector?

Eric Meisel

Jan 5, 2016, 11:53:24 AM
to DataStax Spark Connector for Apache Cassandra
Hi there -

I recently updated my application to 1.5-M3 and noticed that LOCAL_QUORUM is now the default consistency level for writes, so I overrode it with LOCAL_ONE for single-node tests (as LOCAL_QUORUM will generally require 2+ nodes).
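Roughly what I'm setting, for reference (the conf key is the 1.5-era write-consistency option; the host is just my single test node):

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.cassandra.connection.host", "127.0.0.1")
  // Default in 1.5-M3 is LOCAL_QUORUM; LOCAL_ONE lets a single node satisfy writes.
  .set("spark.cassandra.output.consistency.level", "LOCAL_ONE")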

This works fine with standard saveToCassandra and DataFrame writes. Manual connections from the CassandraConnector seem to ignore this value, though...
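By "manual connections" I mean something like the following (sc is my SparkContext, the table is just an example); the execute here doesn't seem to pick up the output consistency level from the conf:

import com.datastax.spark.connector.cql.CassandraConnector

CassandraConnector(sc.getConf).withSessionDo { session =>
  // Runs with the driver's default consistency, not the spark.cassandra setting.
  session.execute("INSERT INTO test_ks.kv (key, value) VALUES (1, 'one')")
}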

Poking around in this file (and a few others) didn't turn up anywhere the consistency level is held, but I could be missing it:

https://github.com/datastax/spark-cassandra-connector/blob/master/spark-cassandra-connector/src/main/scala/com/datastax/spark/connector/cql/CassandraConnectorConf.scala

Is this intended/expected right now? What would be a good workaround here?

Russell Spitzer

Jan 5, 2016, 12:04:16 PM
to DataStax Spark Connector for Apache Cassandra
Manual connections use the Java Driver, which lets you set the consistency level on a per-statement (or prepared-statement) basis. Please see the API docs:

http://docs.datastax.com/en/drivers/java/2.1/com/datastax/driver/core/Statement.html#setConsistencyLevel-com.datastax.driver.core.ConsistencyLevel-
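Something along these lines (keyspace/table are just placeholders):

import com.datastax.driver.core.{ConsistencyLevel, SimpleStatement}
import com.datastax.spark.connector.cql.CassandraConnector

CassandraConnector(sc.getConf).withSessionDo { session =>
  val stmt = new SimpleStatement("INSERT INTO test_ks.kv (key, value) VALUES (1, 'one')")
  // Override the consistency level for this statement only.
  stmt.setConsistencyLevel(ConsistencyLevel.LOCAL_ONE)
  session.execute(stmt)
}

The same setConsistencyLevel call is available on prepared and bound statements.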


Eric Meisel

Jan 5, 2016, 12:07:24 PM
to DataStax Spark Connector for Apache Cassandra

Cool, thanks for the workaround.

Does this seem like a reasonable improvement item for JIRA (having CassandraConnector honor all of the spark.cassandra conf options)?

Russell Spitzer

Jan 5, 2016, 12:17:15 PM
to DataStax Spark Connector for Apache Cassandra
We could. The question is: do you want it to use the write level or the read level by default? :D It gets a little tricky.

Eric Meisel

Jan 5, 2016, 12:59:57 PM
to DataStax Spark Connector for Apache Cassandra
Hah, great question. Ideally we'd separate the two and apply whichever one fits the operation being executed. The same goes for any of the other read/write options set via the spark.cassandra conf.
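For what it's worth, the conf already keeps the two levels under separate keys, so in principle the connector could pick the right one per operation:

import org.apache.spark.SparkConf

// Existing 1.5-era keys; the values are just what I use for single-node tests.
val conf = new SparkConf()
  .set("spark.cassandra.input.consistency.level", "LOCAL_ONE")   // reads
  .set("spark.cassandra.output.consistency.level", "LOCAL_ONE")  // writes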

Eric Meisel

Jan 5, 2016, 2:58:27 PM
to DataStax Spark Connector for Apache Cassandra

I've created a JIRA for this item:

https://datastax-oss.atlassian.net/browse/SPARKC-310
