I have an H2 database where, after loading the data, the database grows to 24GB.
If I dump & recreate the database, it shrinks back to 1.5GB, but obviously that takes time to do.
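
For reference, this is roughly the dump & recreate step I use today (a minimal JDBC sketch; the URLs, credentials and file names are just placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DumpAndRecreate {
    public static void main(String[] args) throws Exception {
        // Export the bloated (~24GB) database to a plain SQL script.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./data/olddb", "sa", "");
             Statement st = conn.createStatement()) {
            st.execute("SCRIPT TO 'dump.sql'");
        }
        // Replay the script into a fresh database; the new file comes out around 1.5GB.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./data/newdb", "sa", "");
             Statement st = conn.createStatement()) {
            st.execute("RUNSCRIPT FROM 'dump.sql'");
        }
    }
}
```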
I'd like to reduce the size of the database somewhat. I'm not necessarily looking for compression, but I assume much of this ~16x overhead is checkpoints, unreclaimed space, etc.
I also know that by default H2 spends ~500ms compacting the database when it is closed, but I'd rather not leave all of that work until the end.
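
The only knobs I've found so far apply at close time rather than continuously; a rough sketch of what I mean (URL and values are just examples, and I may be misreading the docs):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CompactOnClose {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./data/mydb", "sa", "");
             Statement st = conn.createStatement()) {
            // Raise the per-close compaction budget to 10 seconds
            // (my understanding is the default is only a few hundred ms).
            st.execute("SET MAX_COMPACT_TIME 10000");
            // Or force a full compaction once, at the cost of a slow shutdown.
            st.execute("SHUTDOWN COMPACT");
        }
    }
}
```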
Is it possible to run this compaction task in the background on an ongoing basis, like other databases do?
Thanks in advance.