Reconnection timeouts after restarting one cassandra node

Tuukka Mustonen

Oct 7, 2014, 7:34:53 AM
to python-dr...@lists.datastax.com
Hi,

A somewhat bloated mail due to included logs. Apologies.

I have a 3-node Cassandra cluster. When I restart one of the nodes (sudo service cassandra restart), the connections to that node get dropped on the clients (as expected). I have specified an ExponentialReconnectionPolicy, which kicks in after the connections drop. However, the reconnection attempts fail with a timeout. If I restart the whole client process, connections are made successfully.

For the tests, I ran the application with 1 uwsgi process and without threads, for simplicity. I've configured the IPs of all the Cassandra nodes in the address list given to Cluster.
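
For reference, the client setup is along these lines (a minimal sketch; the IPs, credentials and backoff values here are placeholders rather than my exact configuration):

# Minimal sketch of the client setup; contact points, credentials and
# backoff values are placeholders, not the exact production configuration.
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
from cassandra.policies import ExponentialReconnectionPolicy

cluster = Cluster(
    contact_points=['10.0.10.16', '10.0.10.17', '10.0.10.18'],
    auth_provider=PlainTextAuthProvider(username='app', password='secret'),
    # Retry a downed host after 2 s, doubling the delay up to a 30 s cap.
    reconnection_policy=ExponentialReconnectionPolicy(base_delay=2.0, max_delay=30.0),
)
session = cluster.connect('perf')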

When I run sudo service cassandra restart on one of the nodes, after a couple of seconds I get:

2014-10-07 10:23:32,501 - cassandra.connection - DEBUG - Message pushed from server: <EventMessage(stream_id=-1, event_type=u'STATUS_CHANGE', event_args={'change_type': u'DOWN', 'address': ('10.0.10.17', 9042)}, trace_id=None)>
2014-10-07 10:23:32,501 - cassandra.cluster - WARNING - Host 10.0.10.17 has been marked down
2014-10-07 10:23:32,501 - cassandra.cluster - DEBUG - Removed connection pool for <Host: 10.0.10.17 DC1>
2014-10-07 10:23:32,502 - cassandra.cluster - DEBUG - Starting reconnector for host 10.0.10.17
2014-10-07 10:23:32,502 - cassandra.io.asyncorereactor - DEBUG - Closing connection (31103376) to 10.0.10.17
2014-10-07 10:23:32,502 - cassandra.io.asyncorereactor - DEBUG - Closed socket to 10.0.10.17
2014-10-07 10:23:32,503 - cassandra.io.asyncorereactor - DEBUG - Closing connection (31764752) to 10.0.10.17
2014-10-07 10:23:32,503 - cassandra.io.asyncorereactor - DEBUG - Closed socket to 10.0.10.17

Any idea why the connection/socket is closed twice (or at least reported twice)?

After 10 seconds, it reports connection refused, as you would expect, but I noticed that it doesn't actually log the reconnection attempt itself (only the result).

2014-10-07 10:23:42,500 - cassandra.pool - WARNING - Error attempting to reconnect to 10.0.10.17, scheduling retry in 4.0 seconds: [Errno 111] Tried connecting to [('10.0.10.17', 9042)]. Last error: Connection refused
2014-10-07 10:23:42,501 - cassandra.pool - DEBUG - Reconnection error details
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/cassandra/pool.py", line 175, in run
    conn = self.try_reconnect()
  File "/usr/local/lib/python2.7/dist-packages/cassandra/pool.py", line 246, in try_reconnect
    return self.connection_factory()
  File "/usr/local/lib/python2.7/dist-packages/cassandra/io/asyncorereactor.py", line 162, in factory
    conn = cls(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/cassandra/io/asyncorereactor.py", line 195, in __init__
    raise socket.error(sockerr.errno, "Tried connecting to %s. Last error: %s" % ([a[4] for a in addresses], sockerr.strerror))
error: [Errno 111] Tried connecting to [('10.0.10.17', 9042)]. Last error: Connection refused

After this, I get similar connection-refused reconnection attempts (as you would expect), but the timestamps are odd:

10:23:52 (scheduling retry in 8.0 seconds)
10:24:02 (scheduling retry in 16.0 seconds)
10:24:22 (scheduling retry in 30.0 seconds)

The timestamps don't quite follow the reconnection schedule (even if you add the 10-second timeouts). Why is that?

When the service finally comes back up, I see:

2014-10-07 10:24:42,500 - cassandra.connection - DEBUG - Message pushed from server: <EventMessage(stream_id=-1, event_type=u'STATUS_CHANGE', event_args={'change_type': u'DOWN', 'address': ('10.0.10.17', 9042)}, trace_id=None)>
2014-10-07 10:24:42,500 - cassandra.connection - DEBUG - Message pushed from server: <EventMessage(stream_id=-1, event_type=u'STATUS_CHANGE', event_args={'change_type': u'UP', 'address': ('10.0.10.17', 9042)}, trace_id=None)>
2014-10-07 10:24:42,500 - cassandra.connection - DEBUG - Message pushed from server: <EventMessage(stream_id=-1, event_type=u'STATUS_CHANGE', event_args={'change_type': u'UP', 'address': ('10.0.10.17', 9042)}, trace_id=None)>
2014-10-07 10:24:42,501 - cassandra.connection - DEBUG - Message pushed from server: <EventMessage(stream_id=-1, event_type=u'STATUS_CHANGE', event_args={'change_type': u'UP', 'address': ('10.0.10.17', 9042)}, trace_id=None)>
2014-10-07 10:24:42,501 - cassandra.connection - DEBUG - Message pushed from server: <EventMessage(stream_id=-1, event_type=u'STATUS_CHANGE', event_args={'change_type': u'UP', 'address': ('10.0.10.17', 9042)}, trace_id=None)>
2014-10-07 10:24:42,501 - cassandra.connection - DEBUG - Message pushed from server: <EventMessage(stream_id=-1, event_type=u'STATUS_CHANGE', event_args={'change_type': u'UP', 'address': ('10.0.10.17', 9042)}, trace_id=None)>
2014-10-07 10:24:42,502 - cassandra.connection - DEBUG - Message pushed from server: <EventMessage(stream_id=-1, event_type=u'STATUS_CHANGE', event_args={'change_type': u'UP', 'address': ('10.0.10.17', 9042)}, trace_id=None)>

Any idea why it is announced multiple times?

Anyway, 10 seconds after receiving that announcement (why 10 seconds?), a new connection attempt begins:

2014-10-07 10:24:52,500 - cassandra.cluster - DEBUG - Waiting to acquire lock for handling up status of node 10.0.10.17
2014-10-07 10:24:52,500 - cassandra.cluster - DEBUG - Starting to handle up status of node 10.0.10.17
2014-10-07 10:24:52,500 - cassandra.cluster - INFO - Host 10.0.10.17 may be up; will prepare queries and open connection pool
2014-10-07 10:24:52,501 - cassandra.cluster - DEBUG - Now that host 10.0.10.17 is up, cancelling the reconnection handler
2014-10-07 10:24:52,501 - cassandra.cluster - DEBUG - Done preparing all queries for host 10.0.10.17, 
2014-10-07 10:24:52,501 - cassandra.cluster - DEBUG - Signalling to load balancing policy that host 10.0.10.17 is up
2014-10-07 10:24:52,501 - cassandra.cluster - DEBUG - Signalling to control connection that host 10.0.10.17 is up
2014-10-07 10:24:52,501 - cassandra.cluster - DEBUG - Attempting to open new connection pools for host 10.0.10.17
2014-10-07 10:24:52,502 - cassandra.cluster - DEBUG - Waiting to acquire lock for handling up status of node 10.0.10.17
2014-10-07 10:24:52,500 - cassandra.cluster - DEBUG - Waiting to acquire lock for handling up status of node 10.0.10.17
2014-10-07 10:24:52,502 - cassandra.cluster - DEBUG - Another thread is already handling up status of node 10.0.10.17
2014-10-07 10:24:52,502 - cassandra.cluster - DEBUG - Waiting to acquire lock for handling up status of node 10.0.10.17
2014-10-07 10:24:52,502 - cassandra.cluster - DEBUG - Another thread is already handling up status of node 10.0.10.17
2014-10-07 10:24:52,502 - cassandra.cluster - DEBUG - Waiting to acquire lock for handling up status of node 10.0.10.17
2014-10-07 10:24:52,502 - cassandra.cluster - DEBUG - Another thread is already handling up status of node 10.0.10.17
2014-10-07 10:24:52,503 - cassandra.cluster - DEBUG - Waiting to acquire lock for handling up status of node 10.0.10.17
2014-10-07 10:24:52,503 - cassandra.cluster - DEBUG - Another thread is already handling up status of node 10.0.10.17

Any idea why it attempts to grab the lock so rapidly? Also, as I'm running only a single process (without threads enabled in uwsgi), I don't know what else could be holding the lock...

Anyway, the connection attempt now actually starts immediately (it did not report this before, when it tried to reconnect through my ReconnectionPolicy):

2014-10-07 10:24:52,503 - cassandra.pool - DEBUG - Initializing new connection pool for host 10.0.10.17
2014-10-07 10:24:52,503 - cassandra.cluster - DEBUG - Another thread is already handling up status of node 10.0.10.17
2014-10-07 10:24:52,504 - cassandra.connection - DEBUG - Not sending options message for new connection(32560016) to 10.0.10.17 because compression is disabled and a cql version was not specified
2014-10-07 10:24:52,504 - cassandra.connection - DEBUG - Sending StartupMessage on <AsyncoreConnection(32560016) 10.0.10.17:9042>
2014-10-07 10:24:52,504 - cassandra.connection - DEBUG - Sent StartupMessage on <AsyncoreConnection(32560016) 10.0.10.17:9042>
2014-10-07 10:24:52,505 - cassandra.connection - DEBUG - Got AuthenticateMessage on new connection (32560016) from 10.0.10.17: org.apache.cassandra.auth.PasswordAuthenticator
2014-10-07 10:24:52,506 - cassandra.connection - DEBUG - Sending SASL-based auth response on <AsyncoreConnection(32560016) 10.0.10.17:9042>

But after 10 seconds, the connection is closed due to a timeout:

2014-10-07 10:25:02,500 - cassandra.io.asyncorereactor - DEBUG - Closing connection (32560016) to 10.0.10.17
2014-10-07 10:25:02,501 - cassandra.io.asyncorereactor - DEBUG - Closed socket to 10.0.10.17
2014-10-07 10:25:02,501 - cassandra.connection - DEBUG - Connection to 10.0.10.17 was closed during the authentication process
2014-10-07 10:25:02,501 - cassandra.cluster - WARNING - Failed to create connection pool for new host 10.0.10.17:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/cassandra/cluster.py", line 1493, in run_add_or_renew_pool
    new_pool = HostConnectionPool(host, distance, self)
  File "/usr/local/lib/python2.7/dist-packages/cassandra/pool.py", line 397, in __init__
    for i in range(core_conns)]
  File "/usr/local/lib/python2.7/dist-packages/cassandra/cluster.py", line 632, in connection_factory
    return self.connection_class.factory(address, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/cassandra/io/asyncorereactor.py", line 168, in factory
    raise OperationTimedOut("Timed out creating connection")
OperationTimedOut: errors=Timed out creating connection, last_host=None
2014-10-07 10:25:02,502 - cassandra.cluster - WARNING - Host 10.0.10.17 has been marked down
2014-10-07 10:25:02,502 - cassandra.cluster - DEBUG - Starting reconnector for host 10.0.10.17
2014-10-07 10:25:02,502 - cassandra.cluster - DEBUG - Connection pool could not be created, not marking node 10.0.10.17 up
2014-10-07 10:25:02,503 - cassandra.cluster - DEBUG - Old host reconnector found for 10.0.10.17, cancelling
2014-10-07 10:25:02,503 - cassandra.cluster - DEBUG - Starting reconnector for host 10.0.10.17

So even though the node was announced as available, I got a timeout.

10 seconds later (why 10 seconds - is it just hardcoded?), it actually starts to reconnect again:

2014-10-07 10:25:12,501 - cassandra.connection - DEBUG - Not sending options message for new connection(30358288) to 10.0.10.17 because compression is disabled and a cql version was not specified
2014-10-07 10:25:12,501 - cassandra.connection - DEBUG - Sending StartupMessage on <AsyncoreConnection(30358288) 10.0.10.17:9042>
2014-10-07 10:25:12,501 - cassandra.connection - DEBUG - Sent StartupMessage on <AsyncoreConnection(30358288) 10.0.10.17:9042>
2014-10-07 10:25:12,503 - cassandra.connection - DEBUG - Got AuthenticateMessage on new connection (30358288) from 10.0.10.17: org.apache.cassandra.auth.PasswordAuthenticator
2014-10-07 10:25:12,503 - cassandra.connection - DEBUG - Sending SASL-based auth response on <AsyncoreConnection(30358288) 10.0.10.17:9042>

And after 10 seconds, the connection attempt is again closed due to a timeout ("Timed out creating connection, last_host=None").

This loops until I get:

2014-10-07 11:06:32,499 - cassandra.pool - WARNING - Will not continue to retry reconnection attempts due to an exhausted retry schedule

As said, if I restart the process (at any point after the node is back up), connections are made successfully, without timeouts. All in all, the connection itself works just fine (verified also with cqlsh and even telnet).

Any pointers?

Tuukka

Tuukka Mustonen

Oct 7, 2014, 10:21:44 AM
to python-dr...@lists.datastax.com
Well, stupid me: the duplicate messages can be explained by my app having *2 DAOs*, each creating its own cluster and session. I didn't remember that I had added that for tests at some point and never disabled it for remote runs... well, it's disabled for now.

That still shouldn't explain the fight over the thread lock, as there's still only one process/thread.

And the actual issue: the WriteTimeouts shouldn't be explained by the 2 clusters/sessions either. So I'm still out of ideas about what might be causing them.

Tuukka

Adam Holmberg

Oct 7, 2014, 10:45:23 AM
to python-dr...@lists.datastax.com
There is a lot at play here, so I'll try to answer a few questions and ask a few of you. 


> Any idea why the connection/socket is closed twice (or at least reported twice)?

For protocol versions < v3, each pool tries to maintain at least DEFAULT_MIN_CONNECTIONS_PER_LOCAL_HOST connections to each host, in this case 2.
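
(If needed, that per-host minimum can be inspected or tuned on the Cluster; a quick sketch, where the value shown is just the v1/v2 default:)

from cassandra.cluster import Cluster
from cassandra.policies import HostDistance

cluster = Cluster(['10.0.10.17'])
# With protocol v1/v2 the pool keeps a minimum number of connections per
# host (2 for LOCAL hosts by default), which is why two sockets show up
# as closed when a host goes down.
cluster.set_core_connections_per_host(HostDistance.LOCAL, 2)
print(cluster.get_core_connections_per_host(HostDistance.LOCAL))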

> The timestamps don't quite follow the reconnection schedule (even if you add the 10-second timeouts). Why is that?

The reconnects would be starting at the scheduled time, and failing after some other time, either due to a connect timeout or other connection failure. The mode of failure could introduce some variance. 

> Any idea why it is announced multiple times?

Protocol events can be sent from the server multiple times. This should be harmless, although I'm a little surprised to see that many over a single control connection.

> Any idea why it attempts to grab the lock so rapidly? Also, as I'm running only a single process (without threads enabled in uwsgi), I don't know what else could be holding the lock...

Those are all of the earlier UP events being processed. It looks like each one runs through without contention (the log message is emitted whether or not the lock is held).
 
Regarding the reconnect after the host is back up, to me the interesting thing is that it's hanging during the authentication exchange. I've seen that before when the client is using a protocol version not supported by the server. I don't think you mentioned what your Cassandra and driver versions are; if you could provide those, it might help someone else try this out. In the meantime you might also want to review your auth settings and driver configuration (although it would be strange that it works during the initial connect but not after).

If you can reliably reproduce this with the latest driver and Cassandra major version of your choice, we would appreciate an issue filed:

Regards,
Adam Holmberg

Tuukka Mustonen

Oct 7, 2014, 2:13:56 PM
to python-dr...@lists.datastax.com
Thanks for prompt reply, Adam.

I'm using Cassandra 2.1.0 with driver version 2.1.1. I have not specified protocol version 3, so it's using the driver's default of 2.
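
(For completeness, the installed driver version can be confirmed from the package itself; a trivial check:)

import cassandra
# Prints the installed python-driver version; expected to show 2.1.1 here.
print(cassandra.__version__)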

> For protocol versions < v3, each pool tries to maintain at least DEFAULT_MIN_CONNECTIONS_PER_LOCAL_HOST connections to each host, in this case 2.
 
That's interesting. I thought the reason for that was my 2 DAOs (explained in my previous mail), but if that's not it, I would expect the connection-closed message to appear 4 times... then again, I did not receive all messages twice, so I assume the driver is doing some optimization and actually sharing the same connections behind the multiple Clusters and Sessions(?). I'll check tomorrow.

I will also try bumping to protocol version 3. I hadn't actually noticed that I have to instruct it to use version 3. Makes sense, of course.
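
(Concretely, I'll try something like this - just passing the protocol version explicitly when constructing the Cluster; the contact point here is a placeholder:)

from cassandra.cluster import Cluster

# Explicitly request native protocol v3 instead of the driver's default
# (Cassandra 2.1 supports v3).
cluster = Cluster(['10.0.10.17'], protocol_version=3)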

> The reconnects would be starting at the scheduled time, and failing after some other time, either due to a connect timeout or other connection failure. The mode of failure could introduce some variance.

I'm not sure... they all failed with "Connection refused". And now that I think about it, there shouldn't be any timeouts involved: if the port is not listening, the connection fails immediately. So I don't think there should be any variance. I'll try to reproduce it tomorrow.
 
>> Any idea why it is announced multiple times?

> Protocol events can be sent from the server multiple times. This should be harmless, although I'm a little surprised to see that many over a single control connection.

Ok.
 
>> Any idea why it attempts to grab the lock so rapidly? Also, as I'm running only a single process (without threads enabled in uwsgi), I don't know what else could be holding the lock...

> Those are all of the earlier UP events being processed. It looks like each one runs through without contention (the log message is emitted whether or not the lock is held).

But it does report "Another thread is already handling up status of node 10.0.10.17". Doesn't that mean it's fighting for the lock?

I actually need to refresh my memory about background processes and scheduling here. I doubt I really understand what's going on, so let's not focus on this.
 
> Regarding the reconnect after the host is back up, to me the interesting thing is that it's hanging during the authentication exchange. I've seen that before when the client is using a protocol version not supported by the server. I don't think you mentioned what your Cassandra and driver versions are; if you could provide those, it might help someone else try this out. In the meantime you might also want to review your auth settings and driver configuration (although it would be strange that it works during the initial connect but not after).

Indeed, the versions should be compatible here. And the auth settings just can't be incorrect, as nothing changes and the connection really does work after a restart. So I tend to believe something else is in play here. But I'll check whether I find anything in the server-side logs.
 
> If you can reliably reproduce this with the latest driver and Cassandra major version of your choice, we would appreciate an issue filed:

Will do, once I'm sure it's actually a bug in the driver and not in my head.

Btw, I accidentally used the word "WriteTimeout" in my last mail. I should have just written "timeout" (I am also experiencing WriteTimeouts from CAS operations, but that's a completely different issue).

Finally, I'm not too experienced with Cassandra, so this is out of pure curiosity: is the CQL connection negotiation/handling dramatically different from the Thrift implementation, or do they share some code or principles?

Tuukka

Tuukka Mustonen

Oct 8, 2014, 3:56:17 AM
to python-dr...@lists.datastax.com
Well, this is interesting:

I tried bumping the protocol version to 3 - no effect.
I tried disabling the second DAO - no (real) effect (not even fewer log lines, if my eyes are not lying).
I tried looking at the Cassandra logs - nothing interesting there.

While doing something else, I increased the replication factor to 3 - and the reconnection logic started working!

First:

cqlsh> DESC KEYSPACE perf;

CREATE KEYSPACE perf WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '1'}  AND durable_writes = true;
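
The change itself is just an ALTER of the keyspace replication. Through the driver it would look roughly like this (connection details are placeholders; the same statement can be run from cqlsh):

from cassandra.cluster import Cluster

# Sketch of bumping the replication factor for DC1 to 3 on the 'perf'
# keyspace; contact point and auth are placeholders.
cluster = Cluster(['10.0.10.17'])
session = cluster.connect()
session.execute(
    "ALTER KEYSPACE perf WITH replication = "
    "{'class': 'NetworkTopologyStrategy', 'DC1': '3'}"
)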

If I bump DC1 to 2, there's no effect. But when I bump it to 3, it even fixes everything on the fly. In the application logs (the application is looping through the timeouts when I make the change):

2014-10-08 07:51:43,496 - cassandra.connection - DEBUG - Message pushed from server: <EventMessage(stream_id=-1, event_type=u'SCHEMA_CHANGE', event_args={'keyspace': u'perf', 'change_type': u'UPDATED'}, trace_id=None)>
2014-10-08 07:51:43,496 - cassandra.cluster - DEBUG - [control connection] Waiting for schema agreement
2014-10-08 07:51:43,512 - cassandra.cluster - DEBUG - [control connection] Schemas match
2014-10-08 07:51:43,523 - cassandra.cluster - DEBUG - [control connection] Fetched keyspace info for perf, rebuilding metadata

And when the reconnector kicks in again:

2014-10-08 07:52:03,656 - cassandra.connection - DEBUG - Not sending options message for new connection(41557776) to 10.0.10.17 because compression is disabled and a cql version was not specified
2014-10-08 07:52:03,657 - cassandra.connection - DEBUG - Sending StartupMessage on <AsyncoreConnection(41557776) 10.0.10.17:9042>
2014-10-08 07:52:03,657 - cassandra.connection - DEBUG - Sent StartupMessage on <AsyncoreConnection(41557776) 10.0.10.17:9042>
2014-10-08 07:52:03,659 - cassandra.connection - DEBUG - Got AuthenticateMessage on new connection (41557776) from 10.0.10.17: org.apache.cassandra.auth.PasswordAuthenticator
2014-10-08 07:52:03,659 - cassandra.connection - DEBUG - Sending SASL-based auth response on <AsyncoreConnection(41557776) 10.0.10.17:9042>
2014-10-08 07:52:03,933 - cassandra.connection - DEBUG - Connection <AsyncoreConnection(41557776) 10.0.10.17:9042> successfully authenticated
2014-10-08 07:52:13,493 - cassandra.pool - INFO - Successful reconnection to 10.0.10.17, marking node up if it isn't already
2014-10-08 07:52:13,494 - cassandra.cluster - DEBUG - Waiting to acquire lock for handling up status of node 10.0.10.17
2014-10-08 07:52:13,494 - cassandra.cluster - DEBUG - Starting to handle up status of node 10.0.10.17
2014-10-08 07:52:13,494 - cassandra.cluster - INFO - Host 10.0.10.17 may be up; will prepare queries and open connection pool
2014-10-08 07:52:13,494 - cassandra.cluster - DEBUG - Now that host 10.0.10.17 is up, cancelling the reconnection handler
2014-10-08 07:52:13,494 - cassandra.cluster - DEBUG - Done preparing all queries for host 10.0.10.17, 
2014-10-08 07:52:13,495 - cassandra.cluster - DEBUG - Signalling to load balancing policy that host 10.0.10.17 is up
2014-10-08 07:52:13,495 - cassandra.cluster - DEBUG - Signalling to control connection that host 10.0.10.17 is up
2014-10-08 07:52:13,495 - cassandra.cluster - DEBUG - Attempting to open new connection pools for host 10.0.10.17
2014-10-08 07:52:13,495 - cassandra.pool - DEBUG - Initializing connection for host 10.0.10.17
2014-10-08 07:52:13,496 - cassandra.io.asyncorereactor - DEBUG - Closing connection (41557776) to 10.0.10.17
2014-10-08 07:52:13,496 - cassandra.io.asyncorereactor - DEBUG - Closed socket to 10.0.10.17
2014-10-08 07:52:13,496 - cassandra.connection - DEBUG - Not sending options message for new connection(40211408) to 10.0.10.17 because compression is disabled and a cql version was not specified
2014-10-08 07:52:13,497 - cassandra.connection - DEBUG - Sending StartupMessage on <AsyncoreConnection(40211408) 10.0.10.17:9042>
2014-10-08 07:52:13,497 - cassandra.connection - DEBUG - Sent StartupMessage on <AsyncoreConnection(40211408) 10.0.10.17:9042>
2014-10-08 07:52:13,499 - cassandra.connection - DEBUG - Got AuthenticateMessage on new connection (40211408) from 10.0.10.17: org.apache.cassandra.auth.PasswordAuthenticator
2014-10-08 07:52:13,499 - cassandra.connection - DEBUG - Sending SASL-based auth response on <AsyncoreConnection(40211408) 10.0.10.17:9042>
2014-10-08 07:52:13,933 - cassandra.connection - DEBUG - Connection <AsyncoreConnection(40211408) 10.0.10.17:9042> successfully authenticated
2014-10-08 07:52:23,495 - cassandra.pool - DEBUG - Finished initializing connection for host 10.0.10.17
2014-10-08 07:52:23,495 - cassandra.cluster - DEBUG - Added pool for host 10.0.10.17 to session
2014-10-08 07:52:23,495 - cassandra.pool - DEBUG - Host 10.0.10.17 is now marked up

As observed before, with a replication factor of 1 or 2 the connection seems to hang at this line:

2014-10-08 07:32:28,582 - cassandra.connection - DEBUG - Sending SASL-based auth response on <AsyncoreConnection(22471184) 10.0.10.17:9042>

And that then leads to the timeout.

I'm puzzled. Adam, do you have insight on this?

Tuukka

Adam Holmberg

Oct 8, 2014, 9:58:22 AM
to python-dr...@lists.datastax.com
Tuukka,

Thanks for your continuing input. 

Without having dug in further, I'm equally perplexed. The replication factor on a user keyspace should have no effect on the connection protocol.

If you can reproduce this in isolation we would appreciate a JIRA issue with details. It could be a day or so before I really get a chance to dig in.

Thanks,
Adam

Tuukka Mustonen

Oct 9, 2014, 8:22:46 AM
to python-dr...@lists.datastax.com
Sure, I will try to reproduce it on a clean keyspace using just a Python console. It should be quick.
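
Roughly along these lines (not the exact code; clean keyspace, placeholder IPs and credentials):

# Rough repro sketch: keep querying through a single Session while
# "sudo service cassandra restart" is run on one node, and watch whether
# the pool ever reconnects. IPs and credentials are placeholders.
import logging
import time

from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
from cassandra.policies import ExponentialReconnectionPolicy

logging.basicConfig(level=logging.DEBUG)

cluster = Cluster(
    ['10.0.10.16', '10.0.10.17', '10.0.10.18'],
    auth_provider=PlainTextAuthProvider(username='app', password='secret'),
    reconnection_policy=ExponentialReconnectionPolicy(2.0, 30.0),
)
session = cluster.connect()

while True:
    try:
        session.execute("SELECT release_version FROM system.local")
    except Exception as exc:
        print("query failed: %r" % (exc,))
    time.sleep(1)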

BUT, as you'd guess, I can't do it right now (we finally got our systems running stably, so I need to tackle other things). If I don't have time tomorrow, I'll try to run more tests on Monday.

Tuukka
