In the DataStax 4 driver, the entire Cassandra node list is refreshed via MetadataManager#refreshNodeList. This happens in two cases: (1) when a session is initialized, and (2) when the driver reconnects to a node that was temporarily unavailable.
Note that a refresh does not happen when a node goes down because, for example, it was terminated ungracefully. In that case the driver holds on to the stale node and tries to reconnect to it periodically, producing WARN-level log messages like this:
2022-03-24 16:07:23.846 WARN 11077 --- [Thread-1] c.d.o.d.i.c.p.ChannelPool : [sessionname|/1.2.3.4:1234] Error while opening new channel (ConnectionInitException: [sessionname|connecting...] Protocol initialization request, step 1 (STARTUP {CQL_VERSION=3.0.0, DRIVER_NAME=DataStax Java driver for Apache Cassandra(R), DRIVER_VERSION=4.13.0, CLIENT_ID=16f1615a-ef11-4111-a111-c9b01112f453}): failed to send request (java.nio.channels.NotYetConnectedException))
The DataStax 3 driver was more aggressive when it came to refreshing the node list; for instance, it would refresh the list when connecting to a new node [1] [2]. This means that when a replacement node came up (with a different IP) for a node that was terminated ungracefully, the entire node list would be refreshed and the stale node would be removed.
My question here is: is it worth revisiting the node list refresh frequency in the DataStax 4 driver? If not, I can implement a workaround on our end, e.g. by implementing a NodeStateListener (that should be straightforward, but any advice there would be appreciated too!).
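For context, here is roughly what I have in mind: a minimal sketch of the listener workaround, assuming the driver 4.x NodeStateListener interface (onAdd/onUp/onDown/onRemove). It only tracks nodes that stay down past a threshold; the threshold value and what to actually do with a stale node (alerting, recreating the session, etc.) are placeholders.

import com.datastax.oss.driver.api.core.metadata.Node;
import com.datastax.oss.driver.api.core.metadata.NodeStateListener;
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Tracks nodes that the driver reports as down and surfaces the ones that
 * stay down longer than a threshold (e.g. replaced nodes whose old IP the
 * driver keeps retrying). How to react to a stale node is left to the caller.
 */
public class StaleNodeTracker implements NodeStateListener {

  // Placeholder threshold; tune to however long a reconnect loop is tolerable.
  private static final Duration STALE_THRESHOLD = Duration.ofMinutes(10);

  private final Map<Node, Instant> downSince = new ConcurrentHashMap<>();

  @Override
  public void onDown(Node node) {
    // Remember when we first saw this node go down.
    downSince.putIfAbsent(node, Instant.now());
  }

  @Override
  public void onUp(Node node) {
    // Node recovered; no longer a stale-node candidate.
    downSince.remove(node);
  }

  @Override
  public void onRemove(Node node) {
    // The driver evicted the node itself (e.g. after a node list refresh).
    downSince.remove(node);
  }

  @Override
  public void onAdd(Node node) {
    // No-op: newly added nodes are not down.
  }

  @Override
  public void close() {
    downSince.clear();
  }

  /** Nodes down longer than the threshold; poll this from a scheduled task. */
  public Map<Node, Instant> staleNodes() {
    Instant cutoff = Instant.now().minus(STALE_THRESHOLD);
    Map<Node, Instant> stale = new ConcurrentHashMap<>();
    downSince.forEach(
        (node, since) -> {
          if (since.isBefore(cutoff)) {
            stale.put(node, since);
          }
        });
    return stale;
  }
}

The listener would then be registered when building the session, e.g. CqlSession.builder().withNodeStateListener(new StaleNodeTracker()).build().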