I think what is needed is a less memory-intensive abstraction:
- a fixed pool of MappedByteBuffer instances
- instances enter and leave the pool (via map/unmap) as needed,
depending on concurrent requests to read and write
Basically you want to map only the active regions of the file, using
an LRU map to decide which segments to unmap when you run into your
limit:
// something like the following, but one that is actually thread safe
// and does not leak buffer references
// probably we would need consumer threads to explicitly acquire/
// release buffers when they are done
TreeMap<Long, LRUEntry<MappedByteBuffer>> lruMap;

MappedByteBuffer loadBuffer(long index) {
    LRUEntry<MappedByteBuffer> entry = lruMap.get(index);
    if (entry != null) {
        return entry.value();
    }
    if (lruMap.size() >= maxSize) {
        // evict (sync + unmap) the least recently used segment first
        lruMap.evictOldest();
    }
    return lruMap.allocate(index); // map the requested segment
}
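For a slightly more concrete sketch, java.util.LinkedHashMap in access
order gives you the LRU policy for free via removeEldestEntry. The 64MB
segment size, the SegmentPool name, and the READ_WRITE mapping below are
just my assumptions, and it is still single-threaded:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.LinkedHashMap;
import java.util.Map;

public class SegmentPool {
    private static final long SEGMENT_SIZE = 64L * 1024 * 1024; // assumed
    private final FileChannel channel;
    private final LinkedHashMap<Long, MappedByteBuffer> lruMap;

    public SegmentPool(RandomAccessFile file, final int maxSegments) {
        this.channel = file.getChannel();
        // access-order LinkedHashMap: get() moves an entry to the tail,
        // and the eldest entry is evicted once we exceed maxSegments
        this.lruMap = new LinkedHashMap<Long, MappedByteBuffer>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Long, MappedByteBuffer> eldest) {
                if (size() > maxSegments) {
                    eldest.getValue().force(); // sync before dropping the mapping
                    return true;
                }
                return false;
            }
        };
    }

    public MappedByteBuffer loadBuffer(long index) throws IOException {
        MappedByteBuffer buffer = lruMap.get(index);
        if (buffer == null) {
            buffer = channel.map(FileChannel.MapMode.READ_WRITE,
                                 index * SEGMENT_SIZE, SEGMENT_SIZE);
            lruMap.put(index, buffer);
        }
        return buffer;
    }
}

One catch: evicting the entry only drops our reference; the JVM releases
the actual OS mapping when the buffer gets garbage collected, so the
limit is soft.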
Un-mapping seems like it would be expensive, since you would need to
sync the buffer to disk before un-mapping it.
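Concretely, the sync would be MappedByteBuffer.force(), which blocks
until the dirty pages reach the file; something like this hypothetical
release path:

void releaseSegment(MappedByteBuffer buffer) {
    buffer.force(); // blocks until dirty pages are written to the file
    // the JDK has no supported unmap call; the OS mapping is only
    // released once the buffer is garbage collected
}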
I'm not sure how this would affect transaction boundaries.