regarding data partitioning in h2db


Karun

Jan 4, 2012, 5:39:38 AM
to H2 Database
Hi,

Hope you are doing well. Need your help on h2db.

I am using H2 version 1.3.162 for my project. I would like to know the
following about H2's in-memory mode:

1) Does H2 support data partitioning?
2) If yes, can you please tell me how it can be achieved?


Note: I am planning to store 160 million records per day in H2 in-memory
mode.

Could you please help me with these questions.

Thanks & Regards,
Karun

Noel Grandin

Jan 4, 2012, 8:34:47 AM
to h2-da...@googlegroups.com, Karun
H2 doesn't have any automatic data partitioning support.

If you're storing that much data and it's purely in-memory, why do you need a database?
It would be less overhead to store it in an ArrayList or HashMap.

Also, at that scale, you are going to have significant GC pauses.
You might want to check out off-heap storage solutions like
http://terracotta.org/products/bigmemory
and
memcached
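
Noel's suggestion can be sketched in a few lines. This is a minimal illustration only; the `Trade` record and its fields are hypothetical, since Karun's actual schema isn't given in the thread.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of keeping purely in-memory data in a plain Map instead of a
// database: no SQL parsing, no driver, no server round-trips.
public class PlainMapStore {
    // Hypothetical record type standing in for one row of Karun's data.
    record Trade(long id, String symbol, double price) {}

    public static void main(String[] args) {
        Map<Long, Trade> store = new HashMap<>();
        store.put(1L, new Trade(1L, "ABC", 10.5));
        store.put(2L, new Trade(2L, "XYZ", 20.0));
        System.out.println(store.get(2L).symbol()); // direct keyed lookup
    }
}
```

The trade-off, as Noel notes, is that at 160 million objects per day the heap itself becomes the problem, which is what motivates the off-heap suggestions.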

vrota...@gmail.com

Jan 4, 2012, 12:42:42 PM
to h2-da...@googlegroups.com
There was a suggestion some time ago about using a RAM disk as db storage as an alternative to an in-memory database. An advantage is that you need not worry about GC, but it will still be slower than a plain in-memory db. Maybe the nio stuff in H2 can alleviate that.

Otherwise.. the problem with hard questions is that they have no obvious answers.

--
   Vasile Rotaru

Christoph Läubrich

Jan 4, 2012, 1:17:51 PM
to h2-da...@googlegroups.com
If you are running on Linux you can "store" the db in the special
/dev/shm device, which is in fact a (shared) memory disk. Access to this
device is really fast and you don't even waste any Java heap space. One
nice thing is, when you run out of memory it automatically uses the
swap space, if I remember right.


Karun

Jan 5, 2012, 7:32:47 AM
to H2 Database
Hi

Thanks for the quick reply.

Regards,
Mohanty


Karun

Jan 9, 2012, 1:48:49 AM
to H2 Database
Hi Christoph,

Thanks for your suggestion.

I tried to use /dev/shm on a Linux machine to populate more data using the
in-memory database.

But I was able to load only 1 million records; after that I get a
JDBC exception: "OutOfMemoryError: GC overhead limit exceeded".

I tried these steps:
1) installed the db under /dev/shm
2) started the server from this directory in one process
3) loaded data (read from a .csv file) from the same machine in another
process

Can you please help me out if there is anything I am missing when I
use the /dev/shm directory?
Or kindly tell me whether an in-memory db can hold more data (approx. 160
million records).

Thanks in advance.

Regards,
Karun

Christoph Läubrich

Jan 9, 2012, 2:05:36 AM
to h2-da...@googlegroups.com
Hi Karun,

what was your connect string? Do you try to insert all data in one
transaction or do you flush the data from time to time? I remember
it was mentioned somewhere else that H2 itself holds some data in memory
while a transaction is running and might exceed the GC limit when inserting
very large datasets.
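
Christoph's flush-from-time-to-time advice amounts to committing every N rows instead of holding one huge transaction open. A minimal sketch of that cadence is below; the flush is simulated with a plain list so the example is self-contained, but in real JDBC code the flush step would be `stmt.executeBatch(); conn.commit();` after `conn.setAutoCommit(false)`. The batch size of 10,000 is an assumption, not a recommendation from the thread.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of periodic-commit loading: accumulate rows into a batch and
// flush every COMMIT_EVERY rows so the pending transaction stays small.
public class BatchedInsert {
    static final int COMMIT_EVERY = 10_000; // assumed batch size

    // Returns how many flushes (commits) a load of totalRows would take.
    static int load(int totalRows) {
        int commits = 0;
        List<Integer> batch = new ArrayList<>();
        for (int row = 0; row < totalRows; row++) {
            batch.add(row);                  // stands in for stmt.addBatch()
            if (batch.size() == COMMIT_EVERY) {
                batch.clear();               // stands in for executeBatch() + commit()
                commits++;
            }
        }
        if (!batch.isEmpty()) {              // flush the final partial batch
            batch.clear();
            commits++;
        }
        return commits;
    }

    public static void main(String[] args) {
        System.out.println(load(25_000)); // 10k + 10k + 5k -> 3 commits
    }
}
```

Keeping each transaction to a bounded number of rows caps the memory H2 must hold for undo/redo data, which is exactly the failure mode Christoph describes.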


Karun

Jan 9, 2012, 2:14:35 AM
to H2 Database
Hi Christoph,

The connection string is: jdbc:h2:tcp://<ipaddress>:8001/mem:testh2memdb
I tried to load only 2 million records.

I created two .csv files of 1 million records each, then inserted the data
by reading these two files.
Between the two insertions there is a gap of 5 minutes.

Christoph Läubrich

Jan 9, 2012, 2:26:19 AM
to h2-da...@googlegroups.com
I see, you must not put just the server in the shm, but the database file
itself. A mem: URL keeps the data in the server JVM's heap, which is why
you hit the GC limit. For example:
jdbc:h2:tcp://<ipaddress>:8001/dev/shm/

Of course you can create your own folder for the db; just use the shm like
a regular hard disk, but keep in mind the content is lost whenever you
reboot or power off your PC.
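
The difference between the two URL styles can be made concrete with a small helper. The host, port, and database name below are placeholders carried over from the thread, not values the thread confirms; only the `jdbc:h2:tcp://.../<path>` shape is what Christoph describes.

```java
// Contrasts the two connect strings from the thread: a mem: database lives
// in the H2 server JVM's heap, while a path under /dev/shm stores the
// database file on the Linux RAM disk instead.
public class ShmUrl {
    // Builds a TCP connect string for a database file under /dev/shm.
    // dbName is a placeholder; the thread's example URL leaves it off.
    static String shmUrl(String host, int port, String dbName) {
        return "jdbc:h2:tcp://" + host + ":" + port + "/dev/shm/" + dbName;
    }

    public static void main(String[] args) {
        // Heap-backed (what Karun used): jdbc:h2:tcp://<ipaddress>:8001/mem:testh2memdb
        // RAM-disk-backed (the fix):
        System.out.println(shmUrl("localhost", 8001, "testh2memdb"));
    }
}
```

With the RAM-disk form, table data no longer competes with the application for Java heap, which is the GC-pressure point raised earlier in the thread.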

Karun

Jan 9, 2012, 2:43:11 AM
to H2 Database
Thanks for your suggestion. Will try it out and get back to you soon.