Very large set of data


Solido

Aug 23, 2010, 9:26:36 AM
to hawtdb
Hi,

I'm still testing HawtDB for future projects that will use a lot of entities.
For the basic one, let's say each entity contains a long as the id and 4
BigDecimals. That's about 2.5 million entities.

I've tested it with a JVM tuned with -Xmx2048m -Xms2048m, but it freezes at
about 2,300,000 entities.

My questions

Is HawtDB suitable for such a project, as a replacement for Cassandra or
Redis?

I insert all the entities and then call tx.commit(); is this the best way?

Thanks for helping me better understand HawtDB :)

Hiram Chirino

Aug 23, 2010, 11:13:58 AM
to haw...@googlegroups.com
It's best to insert in batches and commit, where a batch has a couple of
thousand entries. Make sure you're running on a 64-bit JVM. It uses mmap
extensively and you may run out of address space on 32-bit.
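
For reference, the batched-commit pattern being suggested looks roughly like
the sketch below. The HawtDB class and method names used here
(TxPageFileFactory, tx(), BTreeIndexFactory, openOrCreate(), put(), commit())
are recalled from the 1.x examples and from this thread and may not match the
actual release exactly, and the batch size and entity shape are placeholders,
so treat it as an illustration of committing every few thousand inserts rather
than verified API usage.

    import java.io.File;
    import java.math.BigDecimal;

    import org.fusesource.hawtdb.api.BTreeIndexFactory;
    import org.fusesource.hawtdb.api.SortedIndex;
    import org.fusesource.hawtdb.api.Transaction;
    import org.fusesource.hawtdb.api.TxPageFile;
    import org.fusesource.hawtdb.api.TxPageFileFactory;

    // Sketch only: API names are assumed, not verified against a HawtDB release.
    public class BatchedInsert {

        static final int BATCH_SIZE = 2000;   // "a couple of thousand" per commit
        static final long TOTAL = 2500000L;   // ~2.5 million entities

        public static void main(String[] args) throws Exception {
            TxPageFileFactory factory = new TxPageFileFactory();
            factory.setFile(new File("entities.db"));
            factory.open();
            TxPageFile pageFile = factory.getTxPageFile();

            BTreeIndexFactory<Long, BigDecimal[]> indexFactory =
                    new BTreeIndexFactory<Long, BigDecimal[]>();

            Transaction tx = pageFile.tx();
            SortedIndex<Long, BigDecimal[]> index = indexFactory.openOrCreate(tx);

            for (long id = 0; id < TOTAL; id++) {
                // placeholder entity: a long id mapped to 4 BigDecimal fields
                index.put(id, new BigDecimal[] {
                        BigDecimal.ONE, BigDecimal.ONE, BigDecimal.ONE, BigDecimal.ONE });

                // commit every BATCH_SIZE inserts instead of one huge commit at
                // the end, so the uncommitted working set stays small
                if ((id + 1) % BATCH_SIZE == 0) {
                    tx.commit();
                    tx = pageFile.tx();
                    index = indexFactory.openOrCreate(tx);
                }
            }
            tx.commit();    // flush the last partial batch
            factory.close();
        }
    }

The point of the pattern is that each transaction only ever holds a few
thousand uncommitted entries, which is what keeps the heap requirement down
compared to a single commit at the very end.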

But no, HawtDB is not really an apples-to-apples replacement for Redis or
Cassandra; both of those are partitioned data stores and therefore scale
horizontally. HawtDB just provides simple low-level Java-based indexes.

--
Regards,
Hiram

Blog: http://hiramchirino.com

Open Source SOA
http://fusesource.com/

Solido

Aug 24, 2010, 7:49:02 AM
to hawtdb
I'm using the 64-bit version, but I was waiting until the end to do one full
commit instead of regular commits of a thousand. Now even -Xmx1024m is enough;
it just slows down at the end of the process. The index is about 2 GB.

Thank you Hiram.