Last I checked, most of the CouchDB tests were passing. :)
K.
---
http://blitz.io
@pcapr
var db = {}
// Custom code here
Remember that javascript objects have some pretty small limits when
doing these kinds of things. The GC gets very angry when there are
millions of objects or millions of properties in an object (including
array keys).
You can pack all sorts of data into node buffers though. I once made
a database for a tile-based game in node. The world was split into
1024x1024 tile sections of map. Each square could be one of 256
different tiles. I kept each of these map tiles in a node buffer 1MB
in size.
These larger tiles were kept in a JavaScript two-dimensional sparse
array. I would allocate a new 1MB tile every time a value was set in
an area that had never been used before. The basic idea is something
like:
function Chunk() {
  this.tiles = {}; // sparse 2-D map: tiles[ty][tx] is a 1MB node buffer
}
Chunk.prototype.get = function (x, y) {
  var tx = x >> 10;  // which 1024x1024 section
  var ty = y >> 10;
  var ix = x % 1024; // position inside that section
  var iy = y % 1024;
  return numToTile(this.tiles[ty][tx][iy * 1024 + ix]);
};
The important part is that I balance my objects so that there are both
a minimal number of objects and a minimal number of properties on each
object. I store the bulk of the data in tightly packed node buffers
(which live outside the V8 heap).
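As a rough illustration of that balance, here is a hypothetical set()
in the same style (a sketch, not the game's actual code): the only
JavaScript objects are the sparse rows, and each 1024x1024 section gets
one buffer, allocated lazily on first write.

// Hypothetical companion to get() above: allocate a 1MB buffer the
// first time anything is written inside a 1024x1024 section.
Chunk.prototype.set = function (x, y, tileNum) {
  var tx = x >> 10;
  var ty = y >> 10;
  var row = this.tiles[ty] || (this.tiles[ty] = {});
  var buf = row[tx];
  if (!buf) {
    buf = row[tx] = new Buffer(1024 * 1024); // Buffer.alloc(1 << 20) on modern Node
    buf.fill(0);
  }
  buf[(y % 1024) * 1024 + (x % 1024)] = tileNum;
};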
I wrote a script to stress test this system and it happily used 20GB
of RAM before I got tired of running my laptop on pure swap space.
On the other hand, nStore used to keep an in-memory index of the file
offsets of all the fields in its key-value database. The actual
values were stored on disk in JSON format within a single file. When
I stress tested this at a million documents, the GC in V8 fell apart
and brought my process to a crawl. I was measuring thousands of ms
for each and every new property I added to my index object.
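For context, a rough sketch of that design (not nStore's actual code):
the values live as lines of JSON in one file, and an in-memory object
maps each key to a byte offset, so every stored document adds another
property to the index object.

// Not nStore's real implementation, just the shape of the problem.
var fs = require('fs');

var index = {};        // key -> byte offset into data.db
var offset = 0;
var fd = fs.openSync('data.db', 'a');

function put(key, value) {
  var line = new Buffer(JSON.stringify({ key: key, value: value }) + '\n');
  fs.writeSync(fd, line, 0, line.length, null);
  index[key] = offset;   // one more property per document; this is
  offset += line.length; // what overwhelmed the GC around a million keys
}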
And if you're not using Redis, you can also define MySQL tables to be in-memory (the MEMORY storage engine). MySQL Cluster tables/indexes are in memory as well.
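A minimal sketch of what that looks like from Node, assuming the
commonly used mysql module (the table name and columns are made up):

// Hypothetical example: create an in-memory MySQL table from Node.
var mysql = require('mysql');
var conn = mysql.createConnection({ host: 'localhost', user: 'app', database: 'app' });

conn.query(
  'CREATE TABLE session_cache (id VARCHAR(64) PRIMARY KEY, data VARCHAR(1024)) ENGINE=MEMORY',
  function (err) {
    if (err) throw err;
    conn.end();
  }
);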
I'm working on a Node.js implementation of a b-tree.
Indexes - It is an index.
Persistence and fault tolerance - Leaf page writes are always appends.
Branch and leaf page rewrites at split and merge create replacement
files that are linked into place (sketched below).
Deployment - Written for Node.js in CoffeeScript. Deploy along with the
rest of your Node.js modules.
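To make the persistence story concrete, here is a generic sketch of the
append-and-relink idea (my illustration, not Strata's actual API; the
file layout and helper names are made up):

// Hypothetical illustration of append-only leaf writes plus atomic
// replacement of rewritten pages; not code from the strata repo.
var fs = require('fs');

// Normal leaf updates just append a record to the leaf file.
function appendToLeaf(leafPath, record, callback) {
  fs.appendFile(leafPath, JSON.stringify(record) + '\n', callback);
}

// A split or merge writes a whole replacement file, then links it
// into place; rename() is atomic on POSIX filesystems.
function replaceLeaf(leafPath, records, callback) {
  var tmp = leafPath + '.replacement';
  var body = records.map(function (r) {
    return JSON.stringify(r);
  }).join('\n') + '\n';
  fs.writeFile(tmp, body, function (err) {
    if (err) return callback(err);
    fs.rename(tmp, leafPath, callback);
  });
}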
It's not an in-memory implementation, because it's not that much more
difficult to do the paging. App-specific database architecture is the
key premise.
My desire is to implement a networked consensus algorithm, and then
offer the community the database primitives necessary to experiment with
different database designs, b-tree index, write-ahead log, consensus.
This b-tree can be used as both an index and a write-ahead log.
Not done. Working on balancing the tree. You're welcome to watch the
repo if it interests you.
http://bigeasy.github.com/strata/
https://github.com/bigeasy/strata
--
Alan Gutierrez - http://twitter.com/bigeasy - http://github.com/bigeasy
Would be interested if it wasn't in CoffeeScript.
The Node.js hospital check-in kiosk that was posted here a few days ago uses Caché as the database. Globals is a 'stripped down' version of Caché. Michael can detail it better than I can, I'm sure, but basically Globals is Caché stripped down to the database engine, with APIs only, instead of the native scripting language.
Caché, Globals, and GT.M are what the NoSQL databases will be when they grow up... (Putting on flame-resistant suit :-) )
Mark