custom RetryPolicy: change consistency


Vincent de Lagabbe

Jun 22, 2016, 4:52:48 AM
to DataStax Node.js Driver for Apache Cassandra Mailing List
Hey there,

From the doc (http://docs.datastax.com/en/latest-nodejs-driver-api/module-policies_retry-RetryPolicy.html) or the code (https://github.com/datastax/nodejs-driver/blob/master/lib/policies/retry.js), I don't see a way to retry a failed request with a different consistency. Use case: 2 Cassandra datacenters; if LOCAL_ONE fails, then retry with QUORUM.

I can handle this at the app level (check the error and relaunch the query with a different consistency, for each query), but it does not seem possible to do so at the driver level?
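
For reference, the app-level fallback I mean looks roughly like this (a sketch only; `queryWithFallback` is just an illustrative name, and `client` is assumed to be an already-connected cassandra.Client):

const cassandra = require('cassandra-driver');
const consistencies = cassandra.types.consistencies;

// Try LOCAL_ONE first; on any error, relaunch the same query at QUORUM.
function queryWithFallback(client, query, params, callback) {
  client.execute(query, params, { consistency: consistencies.localOne }, function (err, result) {
    if (!err) {
      return callback(null, result);
    }
    client.execute(query, params, { consistency: consistencies.quorum }, callback);
  });
}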

This functionality apparently exists in the Java driver, e.g. in this custom retry policy: http://docs.datastax.com/en/drivers/java/3.0/com/datastax/driver/core/policies/DowngradingConsistencyRetryPolicy.html

Thank you for your help!

Jorge Bay Gondra

Jun 22, 2016, 5:02:11 AM
to nodejs-dr...@lists.datastax.com
Hi,

> if LOCAL_ONE fails, then retry with QUORUM

You mean the other way around, right?

There isn't a DowngradingConsistencyRetryPolicy as part of the driver *yet* :)

To implement one, you should inherit from RetryPolicy and implement 4 methods (onUnavailable(), onWriteTimeout(), onReadTimeout() and onRequestError()).
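
A bare skeleton would look like this (the parameter lists below mirror the default RetryPolicy; double-check them against the driver version you use, and note that each handler here just keeps the default rethrow behaviour):

const util = require('util');
const cassandra = require('cassandra-driver');
const RetryPolicy = cassandra.policies.retry.RetryPolicy;

// Skeleton only: a real DowngradingConsistencyRetryPolicy would decide
// per-case whether to retry at a lower consistency instead of rethrowing.
function DowngradingConsistencyRetryPolicy() {}
util.inherits(DowngradingConsistencyRetryPolicy, RetryPolicy);

DowngradingConsistencyRetryPolicy.prototype.onReadTimeout = function (requestInfo, consistency, received, blockFor, isDataPresent) {
  return this.rethrowResult();
};

DowngradingConsistencyRetryPolicy.prototype.onWriteTimeout = function (requestInfo, consistency, received, blockFor, writeType) {
  return this.rethrowResult();
};

DowngradingConsistencyRetryPolicy.prototype.onUnavailable = function (requestInfo, consistency, required, alive) {
  return this.rethrowResult();
};

DowngradingConsistencyRetryPolicy.prototype.onRequestError = function (requestInfo, consistency, err) {
  return this.rethrowResult();
};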

If you implement a generic DowngradingConsistencyRetryPolicy and send a pull request, it will likely get merged in.

Thanks,
Jorge



Vincent de Lagabbe

Jun 22, 2016, 5:46:35 AM
to DataStax Node.js Driver for Apache Cassandra Mailing List
Hey Jorge,

No, not the other way around :-)

I have RF 1 in a "read-only" datacenter, and if a node there is down I want to retry using the big DC (RF 3): so LOCAL_ONE first (or LOCAL_QUORUM, which is the same thing with RF 1), and in case of failure QUORUM: (3 + 1) / 2 + 1 = 3 nodes must be up in total, so I can have 1 node down in the read-only DC.

I missed this in the code: https://github.com/datastax/nodejs-driver/blob/862056517ae773a2342eb8013f2a71f173ed1538/lib/request-handler.js#L324 and my custom retry policy was based on a version of the default retryPolicy earlier than the April 25th one :-)

Thanks a lot!

Jorge Bay Gondra

Jun 22, 2016, 6:03:47 AM
to nodejs-dr...@lists.datastax.com
Ok, if you want that behaviour, a generic retry policy won't work. You should build a custom one, something like:


const util = require('util');
const cassandra = require('cassandra-driver');
// Named `consistencies` so the `consistency` parameter below doesn't shadow it
const consistencies = cassandra.types.consistencies;

function YourCustomRetryPolicy() {}
util.inherits(YourCustomRetryPolicy, cassandra.policies.retry.RetryPolicy);

YourCustomRetryPolicy.prototype.onUnavailable = function (requestInfo, consistency, required, alive) {
  if (requestInfo.nbRetry > 0) {
    // Already retried once: give up and surface the error
    return this.rethrowResult();
  }
  if (consistency === consistencies.localOne) {
    // Retry once at LOCAL_QUORUM on the next host
    return this.retryResult(consistencies.localQuorum, false);
  }
  return this.retryResult(undefined, false);
};

Disclaimer: this is a code sample, not intended for production use.

That way, it will be retried with LOCAL_QUORUM the second time, on the next host. Which host is the "next host" depends on the load-balancing policy (lbp): you should use an lbp that selects remote replicas after the local replicas could not be reached, something like:

const cassandra = require('cassandra-driver');
const DCAwareRoundRobinPolicy = cassandra.policies.loadBalancing.DCAwareRoundRobinPolicy;
const TokenAwarePolicy = cassandra.policies.loadBalancing.TokenAwarePolicy;

// Use up to 6 hosts per remote DC for failover
const lbp = new TokenAwarePolicy(new DCAwareRoundRobinPolicy('my-local-dc', 6));
const client = new cassandra.Client({
  policies: { loadBalancing: lbp },
  contactPoints: yourContactPoints // placeholder for your own contact points
});


Hope it helps,
Jorge


Vincent de Lagabbe

Jun 22, 2016, 6:16:51 AM
to DataStax Node.js Driver for Apache Cassandra Mailing List
Yes, that's what I did :-)

Thank you anyway for the code snippet!

Vincent de Lagabbe

Jun 22, 2016, 8:54:25 AM
to DataStax Node.js Driver for Apache Cassandra Mailing List
FYI: this just doesn't work; no idea why.

A working solution for my topology, though, is to use a consistency of ONE (reads will be performed in the local "read-only" datacenter first and fall back to the "big" read/write DC if a local node is down).
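
For completeness, that workaround is just setting the default consistency on the client (contact points below are placeholders):

const cassandra = require('cassandra-driver');
const consistencies = cassandra.types.consistencies;

// With consistency ONE, a single replica in any DC can serve the read,
// so the big DC takes over when the local read-only node is down.
const client = new cassandra.Client({
  contactPoints: ['10.0.0.1', '10.0.0.2'], // placeholders
  queryOptions: { consistency: consistencies.one }
});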