proposed design for MemoryMappedFile


Justin

Aug 2, 2010, 2:47:47 PM
to hawtdb
I think what is needed is a less memory-intensive abstraction:
- a fixed pool of MappedByteBuffer instances
- instances enter and leave the pool (via map/unmap) as needed, depending on concurrent requests to read and write

Basically you want to map only the active regions of the file, using
an LRU map to determine which segments to unmap when you run into your
limit:

// Something like the following, but one that is actually thread safe
// and does not leak buffer references.
// Probably we would need consumer threads to explicitly
// acquire/release buffers when they are done.
TreeMap<Long, LRUEntry<MappedByteBuffer>> lruMap;

MappedByteBuffer loadBuffer(long index) {
    LRUEntry<MappedByteBuffer> entry = lruMap.get(index);
    if (entry != null) {
        return entry.value();
    }
    if (lruMap.size() >= maxSize) {
        lruMap.evictOldest();
    }
    return lruMap.allocate(index);
}
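To make the idea above concrete, here is a minimal runnable sketch of such a segment pool. All names (SegmentPool, SEGMENT_SIZE, maxSegments) are hypothetical, and it leans on LinkedHashMap's access-order mode for the LRU policy rather than a TreeMap of LRU entries; as the comment above warns, it is not safe for concurrent readers that hold evicted buffers.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the LRU segment pool described above.
public class SegmentPool {
    private static final int SEGMENT_SIZE = 4096; // assumed segment size
    private final FileChannel channel;
    private final int maxSegments;
    // access-order LinkedHashMap gives us LRU eviction via removeEldestEntry
    private final LinkedHashMap<Long, MappedByteBuffer> segments;

    public SegmentPool(FileChannel channel, int maxSegments) {
        this.channel = channel;
        this.maxSegments = maxSegments;
        this.segments = new LinkedHashMap<Long, MappedByteBuffer>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Long, MappedByteBuffer> eldest) {
                if (size() > SegmentPool.this.maxSegments) {
                    // sync the evicted segment before dropping our reference;
                    // the mapping itself is only reclaimed once it is garbage collected
                    eldest.getValue().force();
                    return true;
                }
                return false;
            }
        };
    }

    // NOTE: a real implementation would need acquire/release reference counting
    // so that evicted buffers are not still in use by other threads.
    public synchronized MappedByteBuffer loadBuffer(long index) throws IOException {
        MappedByteBuffer buffer = segments.get(index);
        if (buffer == null) {
            buffer = channel.map(FileChannel.MapMode.READ_WRITE,
                    index * SEGMENT_SIZE, SEGMENT_SIZE);
            segments.put(index, buffer);
        }
        return buffer;
    }

    public synchronized int mappedSegments() {
        return segments.size();
    }
}
```

Loading a third segment into a two-segment pool evicts the least recently used one; re-requesting the evicted segment simply remaps it from the file.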
Un-mapping seems like it would be expensive, since you would need to
sync the segment's dirty pages to disk before un-mapping it.
I'm not sure how this would affect transaction boundaries.
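For context, the sync in question is MappedByteBuffer.force(), which blocks until the mapping's dirty pages reach the device; note also that Java has no supported explicit unmap, so a pool can only drop its reference and let GC reclaim the mapping. A minimal sketch (file name and helper method are made up for illustration):

```java
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ForceDemo {
    // Maps a small temp file, writes a value, forces it to disk,
    // and returns the value read back from the file afterwards.
    static long writeAndForce() throws Exception {
        Path file = Files.createTempFile("demo", ".dat");
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 8);
            buf.putLong(0, 42L);
            // force() blocks until the dirty pages reach the device --
            // this is the expensive step that would precede each un-map
            buf.force();
        }
        long value = ByteBuffer.wrap(Files.readAllBytes(file)).getLong();
        Files.delete(file);
        return value;
    }
}
```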