The end-goal is the ability to instantiate an "embedded" h2 that uses "dumb" (NoSQL) cloud storage to persist the data and index B-Trees, while retaining the full RDBMS SQL capability of the upper layers.
a) mount the cloud storage via SSHFS and access the normal DB file locally
b) or start the H2 server in the cloud and access it via TCP
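For concreteness, the two options above differ only in the JDBC URL (paths and host names here are placeholders):

```
# a) embedded mode: H2 opens the database file directly;
#    the file could live on an SSHFS mount of the cloud storage
jdbc:h2:/mnt/cloud/mydb

# b) client/server mode: H2 runs as a server in the cloud,
#    clients connect over TCP
jdbc:h2:tcp://dbhost:9092/mydb
```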
What are the pros of your ideas, and what are the cons of the "traditional" approach (even though its description contains no buzzwords)?
- But I don't like database servers.
- So my idea is to move to embedded H2 with a cloud backing store and get rid of the database servers, while keeping SQL, JDBC, JdbcTemplate, and Hibernate.
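From the application's point of view, that move would ideally be invisible: only the datasource URL changes. A hypothetical Spring Boot configuration (the path is a placeholder) might look like:

```
# move from server-mode H2 to embedded H2 with a cloud-backed file;
# SQL, JDBC, JdbcTemplate, and Hibernate code stay untouched
spring.datasource.url=jdbc:h2:/mnt/cloud/appdb
spring.datasource.driver-class-name=org.h2.Driver
```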
This configuration would still retain a characteristic of a traditional RDBMS that I find undesirable: one node (the one running H2) acts as a bottleneck for all data access. Here I also have an unstated requirement: I want some consumers to be able to "backdoor" the database and read the data in the cloud backing store directly through native APIs, without going through H2.
Pardon my ignorance, but to me these look like contradicting requirements. Either you use "embedded" mode without sharing, in which case you can use any file (whether local or mounted over a network does not matter); or you use "shared" mode and rely on a kind of service dispatcher, which is usually the H2 SQL server over a TCP connection.
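Worth noting in this context: H2 also offers a "mixed mode" that softens this dichotomy. The first process opens the database file embedded and automatically starts a server so that further processes can connect; only the connection URL changes (the path is a placeholder):

```
# H2 mixed mode: embedded for the first process,
# auto-started server for any additional processes
jdbc:h2:/mnt/cloud/mydb;AUTO_SERVER=TRUE
```

This does not remove the single-node bottleneck, but it does remove the need to run and administer a separate database server process.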
I've come down to a decision point with three choices... 1) reimplement the MVStore abstraction, 2) reimplement the filesystem abstraction used by MVStore, or 3) implement pluggable tables, one layer up.
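To make option 1/2 concrete: MVStore persists its B-tree pages as opaque byte chunks, so the "dumb" cloud storage underneath only needs to support get/put of blobs keyed by an id. The sketch below (class and method names are my own invention, not H2's API) shows the minimal contract such a backing store would have to fulfil, with an in-memory map standing in for the object store:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the "dumb" storage contract under option 1/2:
// the store only needs blob get/put keyed by a chunk id, because the
// B-tree structure itself lives inside the serialized pages.
public class ChunkStore {
    private final Map<Long, byte[]> blobs = new HashMap<>();

    // In a real backend this would be a PUT to the cloud object store.
    public void write(long chunkId, byte[] data) {
        blobs.put(chunkId, data.clone());
    }

    // In a real backend this would be a GET from the cloud object store.
    public byte[] read(long chunkId) {
        byte[] d = blobs.get(chunkId);
        return d == null ? null : d.clone();
    }

    public static void main(String[] args) {
        ChunkStore store = new ChunkStore();
        store.write(1L, "page bytes".getBytes());
        System.out.println(new String(store.read(1L)));
    }
}
```

A "backdoor" consumer could read these blobs through the store's native API, but it would then have to decode the page format itself, which is why option 3 (pluggable tables, where rows rather than pages are the unit of storage) may fit the backdoor requirement better.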