if I'm not using LOCAL_ONE consistency level, then why am I seeing it?


Mitch Gitman

Aug 17, 2016, 2:57:09 PM
to java-dri...@lists.datastax.com
I feel like I could just as well be asking this question on the cassandra-user list, but I'll ask it here. I'm using version 3.0.1 of the DataStax Java Driver pointing at Cassandra 2.1.12 (DSE 4.8.4).

I have a keyspace that's using a replication factor of 3.

I have some inserts that are timing out, which is fine:
com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency LOCAL_ONE (1 replica were required but only 0 acknowledged the write)
at com.datastax.driver.core.exceptions.WriteTimeoutException.copy(WriteTimeoutException.java:100) ~[tools-timeseries-migrator.jar:?]
at com.datastax.driver.core.Responses$Error.asException(Responses.java:122) ~[tools-timeseries-migrator.jar:?]
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:471) [tools-timeseries-migrator.jar:?]
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1013) [tools-timeseries-migrator.jar:?]
...
Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency LOCAL_ONE (1 replica were required but only 0 acknowledged the write)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:59) ~[tools-timeseries-migrator.jar:?]
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:37) ~[tools-timeseries-migrator.jar:?]
...

What's strange to me is that they're timing out at consistency level LOCAL_ONE when the queries themselves have been configured to use LOCAL_QUORUM.

When I saw this occurring, it reminded me of the behavior of DowngradingConsistencyRetryPolicy, as described in this DataStax Developer Blog post: http://www.datastax.com/dev/blog/cassandra-error-handling-done-right. So LOCAL_QUORUM would downgrade to LOCAL_ONE. The thing is, I'm not using DowngradingConsistencyRetryPolicy. I'm not specifying a RetryPolicy when I build my Cluster object, so it defaults to DefaultRetryPolicy, and, as you see from the API docs for DefaultRetryPolicy: "This retry policy is conservative in that it will never retry with a different consistency level than the one of the initial operation."
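To be concrete, this is roughly how I'm building the Cluster (the contact point below is a placeholder, not my real config), and downgrading would only kick in if the policy were opted into explicitly, which I'm not doing:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DowngradingConsistencyRetryPolicy;

// What I'm doing: no withRetryPolicy() call, so DefaultRetryPolicy applies
// and the consistency level of the original request is never changed on retry.
Cluster cluster = Cluster.builder()
        .addContactPoint("127.0.0.1")  // placeholder
        .build();

// Downgrading from LOCAL_QUORUM to LOCAL_ONE would require opting in like this:
Cluster downgradingCluster = Cluster.builder()
        .addContactPoint("127.0.0.1")  // placeholder
        .withRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE)
        .build();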

So I'm puzzled as to why LOCAL_ONE is showing up in the exception. Anyone have any idea?

My immediate practical plan of action is to keep the cluster from getting overloaded so the timeouts go away; after that, the fact that the timeouts were occurring at the wrong consistency level becomes academic. But I can't help thinking this is a sign of something deeper amiss, or of something I'm missing.

Andrew Tolbert

Aug 19, 2016, 4:40:43 PM
to DataStax Java Driver for Apache Cassandra User Mailing List
Hi Mitch,

The only thing that pops into my mind is that LOCAL_ONE is the default consistency level the Java Driver uses if no consistency level is configured explicitly. How are you specifying the consistency level? Is it at the Statement level or the QueryOptions level? Can you provide an example?
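For example, a rough sketch of the two places it can be set in driver 3.x (the keyspace/table, values, and contact point are just placeholders):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.QueryOptions;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

// Per statement:
Statement insert = new SimpleStatement(
        "INSERT INTO my_ks.my_table (id, val) VALUES (?, ?)", 1, "x")  // placeholder query
        .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);

// Or as the driver-wide default, via QueryOptions on the Cluster builder:
Cluster cluster = Cluster.builder()
        .addContactPoint("127.0.0.1")  // placeholder
        .withQueryOptions(new QueryOptions()
                .setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM))
        .build();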

Thanks,
Andy

Mitch Gitman

Aug 31, 2016, 11:58:57 AM
to java-dri...@lists.datastax.com
Andrew, thanks for getting back to me, and I regret taking so long to respond myself when I was the one who asked the question.

Your answer got me sniffing in the right direction, and I'm realizing now that, while I've been meticulous about instantiating INSERT and SELECT statements with the respective desired consistency levels, I was instantiating a BatchStatement without specifying the desired consistency level. Hence it was defaulting to LOCAL_ONE, as you suggest.

I could be using QueryOptions to keep all this from falling through the cracks, and I see I'm already calling QueryOptions.setConsistencyLevel() in another component. But for this particular component, where I'm employing batches, I've wanted to reserve the ability to configure different read and write consistency levels. That may be ill-advised in itself (why not stick with one consistency level for both reads and writes?), but it's something I can revisit soon enough.
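For the record, the fix boils down to something like this (the statement and session names are placeholders for my actual bound inserts and session):

import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.ConsistencyLevel;

// The batch needs its own consistency level; otherwise it silently falls back
// to the driver default of LOCAL_ONE, which is what I was seeing.
BatchStatement batch = new BatchStatement();
batch.add(boundInsert1);  // placeholder bound statements
batch.add(boundInsert2);
batch.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);  // my write-side level
session.execute(batch);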

Thanks!

