The main problem the current H2 clustering mechanism is trying to
solve is high availability: you start two database servers, and
even if one fails, the other can still be used. The idea is not
that you manually start and stop one server and then the other;
you let both servers run.
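For context, the two copies are initialized with H2's CreateCluster tool, which copies the database from one server to the other and registers both as cluster nodes. A minimal invocation might look like the following; the jar name, database path, and ports are placeholders, not taken from the posts above:

```
# Copy the database from the first server to the second and
# register both nodes as a cluster (ports are examples):
java -cp h2.jar org.h2.tools.CreateCluster \
    -urlSource jdbc:h2:tcp://localhost:9101/~/test \
    -urlTarget jdbc:h2:tcp://localhost:9102/~/test \
    -user sa \
    -serverList localhost:9101,localhost:9102

# Applications then connect listing both servers:
#   jdbc:h2:tcp://localhost:9101,localhost:9102/~/test
```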
I know there are many use cases that the H2 cluster mechanism doesn't
solve at all, like "scalable writes" or "synchronizing changes between
cluster nodes". The H2 cluster mechanism is very limited. I don't
currently plan to add new features; it would probably make more
sense to write a new kind of cluster mechanism that solves many
more problems than the current one.
Regards,
Thomas
I'm assuming the expectation is that the failed node will be re-added manually using the CreateCluster tool. If that's the expectation, I think it would be prudent to make eviction from the cluster permanent until an administrator intervenes: once SessionRemote sets CLUSTER='', an administrator should be forced to run the CreateCluster tool to resynchronize the failed node.
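The suggestion above could be sketched as a small state machine: once a node is evicted, automatic rejoin is refused until an explicit administrative resync. This is an illustrative sketch only; `ClusterState` and its methods are hypothetical names, not part of H2.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch of permanent eviction; not H2 API.
class ClusterState {
    private final Set<String> evicted = new LinkedHashSet<>();

    // Called when a node stops responding (the point where
    // SessionRemote would set CLUSTER='').
    void evict(String node) {
        evicted.add(node);
    }

    // Automatic rejoin is refused: an evicted node stays out.
    boolean mayRejoin(String node) {
        return !evicted.contains(node);
    }

    // Only an explicit administrative resynchronization (e.g.
    // running the CreateCluster tool) clears the eviction.
    void adminResync(String node) {
        evicted.remove(node);
    }
}
```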
If you end up implementing a new clustering mechanism, I would really like to see some type of fail-fast mode in addition to high availability. To clarify what I mean: if even one node in the cluster can't be reached, the whole cluster is taken offline. I think this would be useful for some types of small-business applications where budgetary constraints increase the likelihood of hardware or network failure (networks that are poorly designed by local admins and end up being prone to partitioning). In most of those cases the redundant hardware is minimal and the applications are fairly low capacity. For those situations, providing data consistency at the expense of availability is easy to justify, because inconsistent or lost data ends up being more detrimental than taking the application offline temporarily while the root cause of the failure is identified and fixed.
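The fail-fast policy described above can be sketched in a few lines: the cluster accepts work only while every node answers, so a single unreachable node takes the service offline. `FailFastCluster` is a hypothetical illustration of the policy, not H2 code.

```java
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch of a fail-fast cluster policy: trade
// availability for consistency. Not H2 API.
class FailFastCluster {
    private final List<String> nodes;
    private final Predicate<String> reachable;

    FailFastCluster(List<String> nodes, Predicate<String> reachable) {
        this.nodes = nodes;
        this.reachable = reachable;
    }

    // Online only while every node is reachable; one failure
    // takes the whole cluster offline until an administrator
    // fixes the root cause and brings it back.
    boolean isOnline() {
        return nodes.stream().allMatch(reachable);
    }
}
```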
Let me know if you'd like me to clarify anything. Hopefully I haven't misinterpreted what SessionRemote is doing.
Ryan