Hi Steve,
I have seen a similar issue before, although it was not a JVM crash, just an OOM exception. It was caused by not enough free space on an EXT4 filesystem: I had deleted files, but they were still held open (and not released) by another running process.
MapDB maps storage files in 1MB chunks, so for 64GB it opens 64K mmap pointers. Prior to 1.0.5 there was also a leak which did not release the pointers correctly, causing some problems on Windows.
Memory-mapped files are a sort of black magic for me. I have encountered a handful of JVM bugs which I had to work around. I think allocating far too many pointers might be triggering yet another bug, so I am reworking this part for 2.0.
I hope the data loss is not critical. ".commitFileSyncDisable()" pretty much eliminates WAL durability. But if the store was closed correctly before reopening, then the data should have been flushed correctly, and in that case there could be a problem in MapDB.
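For context, a typical MapDB 1.x setup along these lines might look roughly like the sketch below. This is only an illustration under my assumptions (the file name "store.db" and the class name are placeholders, and the exact set of builder options is just an example); it simply shows where ".commitFileSyncDisable()" and memory mapping fit in:

import java.io.File;
import org.mapdb.DB;
import org.mapdb.DBMaker;

public class StoreSetup {
    public static void main(String[] args) {
        DB db = DBMaker.newFileDB(new File("store.db"))  // placeholder file name
                .mmapFileEnable()           // memory-mapped storage, as discussed above
                .commitFileSyncDisable()    // faster commits, but WAL durability is gone
                .closeOnJvmShutdown()
                .make();

        // ... create maps and do work here ...

        db.commit();
        db.close();   // a clean close flushes data to disk
    }
}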
There are some assertions and checksums in MapDB to prevent it from loading incorrect data. In theory we could disable those and attempt to rescue some of the data from the HTreeMap.
Jan
How are you getting around this long-standing limitation:
http://bugs.java.com/view_bug.do?bug_id=6893654
I was not aware of this. I just assumed that UUID was thread safe.
An inconsistent hash could in theory explain the JVM crash. If MapDB reads the wrong file offset, we could end up with a very high file pointer, and that could crash the JVM.
There is a workaround: ignore the UUID.hashCode and UUID.equals methods and use a custom hasher:
Map<UUID, Object> hashMap = db.createHashMap("uuidMap")   // "uuidMap" is a placeholder name
        .hasher(new UUIDHasher())
        .keySerializer(Serializer.UUID)
        .makeOrGet();
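For reference, such a UUIDHasher could look roughly like this. This is a minimal sketch under the assumption that the MapDB 1.x Hasher interface declares hashCode(key) and equals(a, b); it derives the hash directly from the UUID's two long fields instead of the lazily cached UUID.hashCode():

import java.io.Serializable;
import java.util.UUID;
import org.mapdb.Hasher;

public class UUIDHasher implements Hasher<UUID>, Serializable {

    @Override
    public int hashCode(UUID k) {
        // compute the hash from the raw bits, bypassing UUID.hashCode()
        long h = k.getMostSignificantBits() ^ k.getLeastSignificantBits();
        return (int) (h ^ (h >>> 32));
    }

    @Override
    public boolean equals(UUID a, UUID b) {
        // compare the raw bits rather than calling UUID.equals()
        return a.getMostSignificantBits() == b.getMostSignificantBits()
                && a.getLeastSignificantBits() == b.getLeastSignificantBits();
    }
}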
The original link does not work; here is a link to the Google cache:
I raised a new issue to investigate this for 2.0:
https://github.com/jankotek/MapDB/issues/387
Jan
This seems to be a problem only with Java 6; the code was rewritten in Java 7.
Jan
Ah, OK. I was actually referring to the problem of not being able to reliably unmap files. There was a workaround on that page showing how to call the cleaner method reflectively.
MapDB already uses reflection to unmap files when they are closed.
I think that is a problem only on Windows (files are locked until unmapped).
MapDB does not shrink (unmap) files unless compaction is involved.
Perhaps it could cause a problem if you open and close the storage frequently.
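For illustration, the reflective unmap workaround on pre-Java 9 JVMs usually looks roughly like the sketch below. This is the commonly cited trick from that bug report, not MapDB's actual code, and it relies on non-public internals (sun.nio.ch.DirectBuffer and sun.misc.Cleaner):

import java.lang.reflect.Method;
import java.nio.MappedByteBuffer;

public class Unmapper {

    // Best-effort unmap; if reflection fails, the mapping is released later by GC.
    static void unmap(MappedByteBuffer buffer) {
        try {
            Method cleanerMethod = buffer.getClass().getMethod("cleaner");
            cleanerMethod.setAccessible(true);
            Object cleaner = cleanerMethod.invoke(buffer);   // sun.misc.Cleaner
            Method cleanMethod = cleaner.getClass().getMethod("clean");
            cleanMethod.setAccessible(true);
            cleanMethod.invoke(cleaner);                     // releases the mapping now
        } catch (Exception e) {
            // different JVM or newer Java version; fall back to waiting for GC
        }
    }
}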
Jan
But is there any way MapDB handles this if I don't want to turn off memory mapping for my maps?