One of our H2 databases (MVStore cached tables, h2-1.4.200, embedded mode, JDK 13, dedicated Linux server with 8 GB RAM and 4 CPUs) is about 1.8 GB in size. The database is both queried and updated quite frequently, by many application users at the same time. It does grow, but very slowly, at perhaps 20% per year.
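For context, the embedded setup is roughly equivalent to the sketch below; the connection URL, credentials and table are placeholders, not the actual application code.

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class OpenDb {
    public static void main(String[] args) throws Exception {
        // Embedded mode: the application opens the database file directly,
        // no separate H2 server process is involved.
        String url = "jdbc:h2:/data/app/mydb";
        try (Connection conn = DriverManager.getConnection(url, "sa", "")) {
            // CACHED (disk-based) is the default table type for persistent
            // databases; spelled out here only to make the setup explicit.
            conn.createStatement().execute(
                "CREATE CACHED TABLE IF NOT EXISTS example("
                + "id INT PRIMARY KEY, name VARCHAR(255))");
        }
    }
}
```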
Over the last couple of months the performance of the application has decreased significantly. During high load, CPU usage would climb to ~400%, with load averages (as reported by top) of up to 20. But even under almost no load (at night, when nobody is using the application), CPU usage sat at around 200%.
I was able to rule out all application background processes as the cause, so I decided to do a complete SQL dump and recreate of the database. The result was stunning: CPU usage during high load now averages around 100%, and it drops to 0% when things quiet down.
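By "SQL dump/recreate" I mean something along the lines of the following simplified sketch using H2's SCRIPT / RUNSCRIPT statements; the file paths are placeholders and this is not my exact procedure.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DumpAndRecreate {
    public static void main(String[] args) throws Exception {
        // 1. Export schema + data from the existing database into a SQL script.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:/data/app/mydb", "sa", "");
             Statement st = conn.createStatement()) {
            st.execute("SCRIPT TO '/data/app/dump.sql'");
        }
        // 2. Replay the script into a brand-new database file.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:/data/app/mydb_new", "sa", "");
             Statement st = conn.createStatement()) {
            st.execute("RUNSCRIPT FROM '/data/app/dump.sql'");
        }
        // 3. The new file (mydb_new.mv.db) then replaces the old one
        //    while the application is stopped.
    }
}
```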
Can you explain this? Is it a general rule that database performance degrades over time regardless of how much the database grows, and that it is advisable to dump and restore periodically? If so, are there any metrics that can identify if and when such a step is called for?
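To make that last question concrete: is something like the check below meaningful, i.e. comparing the physical file size with what H2's DISK_SPACE_USED function reports per table? The path and the comparison itself are just my guess at what such a metric might look like.

```java
import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class FragmentationCheck {
    public static void main(String[] args) throws Exception {
        // Physical size of the MVStore file on disk.
        long fileBytes = new File("/data/app/mydb.mv.db").length();

        long tableBytes = 0;
        try (Connection conn = DriverManager.getConnection("jdbc:h2:/data/app/mydb", "sa", "")) {
            // Collect the user table names first, then query them one by one.
            List<String> tables = new ArrayList<>();
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'PUBLIC'")) {
                while (rs.next()) {
                    tables.add(rs.getString(1));
                }
            }
            // DISK_SPACE_USED is H2's built-in estimate of the space a table occupies.
            for (String table : tables) {
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("CALL DISK_SPACE_USED('" + table + "')")) {
                    if (rs.next()) {
                        tableBytes += rs.getLong(1);
                    }
                }
            }
        }
        System.out.printf("file size: %d bytes, estimated table data: %d bytes%n",
                fileBytes, tableBytes);
    }
}
```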
Thanks for any insights.