H2 as read-only cache, strategy for refresh


Leonel Gayard

Jun 26, 2020, 3:17:37 PM
to H2 Database
Hi,

I want to use H2 in-memory as a read-only cache in front of a larger database.

This cache will hold about 20 tables, 1 GB in total, and should be refreshed every 15 minutes. The fresh data will be available as a CSV file, so H2's table creation from CSV is attractive: http://h2database.com/html/tutorial.html#csv
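For reference, the CSV import from that tutorial looks like the sketch below. The table name, column name, and file path are placeholders, not anything from this thread:

```sql
-- Build the cache table directly from the CSV snapshot using H2's
-- built-in CSVREAD table function (see the tutorial link above).
CREATE TABLE prices AS SELECT * FROM CSVREAD('/data/snapshot/prices.csv');

-- CSV files carry no index information, so create indexes after loading.
CREATE INDEX idx_prices_sku ON prices(sku);
```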

What is a good strategy for refreshing the data?

I was thinking of dropping the table and re-creating it from the fresh data, but how do I cope with requests that arrive in between?

Thanks
--
Leonel



Noel Grandin

Jun 26, 2020, 3:46:59 PM
to H2 Database

create new table from CSV

drop old table

rename old table to new table

Noel Grandin

Jun 26, 2020, 3:47:27 PM
to H2 Database
last line should have been

"rename new table to proper name"

Leonel Gayard

Jun 26, 2020, 3:53:15 PM
to h2-da...@googlegroups.com
Looks like a good strategy.

For people coming from a search engine, here's the command to rename a table: http://h2database.com/html/commands.html#alter_table_rename



Ashkrit Personal

Jun 26, 2020, 10:59:34 PM
to h2-da...@googlegroups.com
I have used H2 as a cache for analytics queries, and it works very well. Things to consider:
- Manage the connection pool and make sure you keep at least one connection open.
- When applying changes, use a copy-on-write pattern to avoid slowing down read queries. Disk is very cheap!
- Disable (drop) all indexes before a large insert and re-create them afterwards.
- Choose your indexes based on the read access pattern.
- Compact the database after a large insert.
- Shard the database based on performance requirements.

Using these strategies I am able to manage close to a billion records with millisecond response times.
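As a sketch of the bulk-load advice: H2 has no index "disable" switch as far as I know, so dropping and re-creating indexes is the usual equivalent. Table, index, and path names below are illustrative:

```sql
-- Drop indexes so the bulk insert doesn't maintain them row by row.
DROP INDEX IF EXISTS idx_prices_sku;

-- Perform the large load.
INSERT INTO prices SELECT * FROM CSVREAD('/data/snapshot/prices.csv');

-- Re-create indexes in a single pass after loading.
CREATE INDEX idx_prices_sku ON prices(sku);

-- Compact the database file when closing (file-based databases only;
-- not applicable to a purely in-memory database).
SHUTDOWN COMPACT;
```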

Regards
Ashkrit Sharma
