Using Kona For Large Datasets?


Chip Kent

Jun 3, 2013, 2:48:51 PM
to kona...@googlegroups.com

About four years ago, I did a kdb evaluation.  It appeared to be an excellent product, but we ended up not purchasing it for a few reasons unrelated to the product itself.

I happened across Kona recently and was curious how it handles large datasets.  Comparing Kona to kdb, it appears that Kona has no native support for partitioned or splayed tables.  Instead, it looks like you would implement a database by loading a dataset into a map, writing the dataset out to disk as a binary file, and then reading the binary file back in as a memory-mapped file.  From my quick look, this differs from kdb's native support for splayed and partitioned tables.
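To make the pattern I'm describing concrete, here is a rough sketch of the write-binary-then-mmap idea in Python (not Kona's actual syntax; the file name and layout are made up, and I'm only illustrating the general technique of mapping a raw column file so the OS pages data in on demand instead of loading everything into RAM):

```python
import mmap
import os
import struct
import tempfile

# Hypothetical single-column "table": write a column of 8-byte
# little-endian floats to disk as a raw binary file.
path = os.path.join(tempfile.mkdtemp(), "prices.bin")
column = [101.5, 102.25, 99.75, 100.0]
with open(path, "wb") as f:
    f.write(struct.pack(f"<{len(column)}d", *column))

# Memory-map the file read-only; only the pages you touch are
# actually read from disk, which is what makes this workable for
# datasets much larger than memory.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    n = len(mm) // 8                      # number of float64 values
    # Random access to one element without reading the whole file:
    third = struct.unpack_from("<d", mm, 2 * 8)[0]
    print(n, third)  # → 4 99.75
    mm.close()
```

My (possibly wrong) understanding is that Kona's file primitives would play the role of the write/mmap calls here, with the map structure standing in for the table.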

1) Is my view on how to implement a DB in Kona correct?  When I did the eval, I was proficient at q.  Since then, my proficiency has decayed away.
2) How large of a dataset can reasonably be handled using this method in Kona?
3) Is there any ongoing effort to implement tables in a kdb/q-like way?

I have a 300GB dataset I'd like to look at.  Before throwing down the non-negligible kdb costs, I wanted to give Kona and J/JDB a look.  I'd appreciate any thoughts on the suitability of Kona for this project before I invest the time to relearn K.  Also, if you have tried J/JDB, do you think it is more or less suited to this task?



