I've been using MapDB in production for around 5 months. It's wonderful.
Occasionally a deadlock occurs and the executing thread hangs forever. The only way to release the thread is to restart Tomcat. I've pasted the stack trace below. Is there a known workaround or fix for this?
Caused by: java.lang.RuntimeException: java.lang.InterruptedException
at org.mapdb.AsyncWriteEngine.close(AsyncWriteEngine.java:410)
at org.mapdb.EngineWrapper.close(EngineWrapper.java:72)
at org.mapdb.EngineWrapper.close(EngineWrapper.java:72)
at org.mapdb.CacheHashTable.close(CacheHashTable.java:169)
at org.mapdb.HTreeMap.close(HTreeMap.java:1233)
at com.mediaiq.csvprocessor.component.filter.AggregationFilter.run(AggregationFilter.java:94)
at com.mediaiq.csvprocessor.component.filter.BaseFilter.run(BaseFilter.java:46)
at com.mediaiq.csvprocessor.component.filter.RowFilter.run(RowFilter.java:85)
at com.mediaiq.miqweb.core.filter.DateFilter.run(DateFilter.java:174)
at com.mediaiq.csvprocessor.FilterChain.call(FilterChain.java:51)
at com.mediaiq.csvprocessor.Runner.run(Runner.java:46)
... 15 more
Caused by: java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2052)
at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
at org.mapdb.AsyncWriteEngine.close(AsyncWriteEngine.java:395)
This appears to be the relevant put() in AsyncWriteEngine; the TODO there already flags the risk:

    @Override
    public <A> long put(A value, Serializer<A> serializer) {
        if (commitLock != null) commitLock.readLock().lock();
        try {
            try {
                Long recid = newRecids.take(); // TODO possible deadlock while closing
                update(recid, value, serializer);
                return recid;
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        } finally {
            if (commitLock != null) commitLock.readLock().unlock();
        }
    } // ...
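Until there is a fix in MapDB itself, one defensive option on the application side is to never call close() directly on the request thread, but to run it on a helper thread with a deadline and interrupt it if it hangs. The sketch below is only an illustration of that pattern, not MapDB code: hangingClose() is a hypothetical stand-in that simulates the stuck take() on an empty queue, the way AsyncWriteEngine.close() blocks in the trace above.

```java
import java.util.concurrent.*;

public class BoundedClose {

    // Hypothetical stand-in for db.close(): simulates the hang seen in the
    // stack trace by blocking on take() from an empty queue until interrupted.
    static void hangingClose() throws InterruptedException {
        new ArrayBlockingQueue<Long>(1).take();
    }

    // Run the close on a separate thread with a deadline; if it exceeds the
    // deadline, interrupt it and return false so the caller can log and move on
    // instead of hanging the request thread forever.
    static boolean closeWithTimeout(long timeoutMs) throws Exception {
        ExecutorService ex = Executors.newSingleThreadExecutor();
        Future<?> f = ex.submit(() -> {
            try {
                hangingClose(); // replace with the real db.close() call
            } catch (InterruptedException ignored) {
                // interrupted by cancel(true) below; nothing left to do
            }
        });
        try {
            f.get(timeoutMs, TimeUnit.MILLISECONDS);
            return true;  // close finished within the deadline
        } catch (TimeoutException e) {
            f.cancel(true); // interrupt the stuck close thread
            return false;   // close hung; caller decides how to recover
        } finally {
            ex.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // The simulated close always hangs, so this reports false.
        System.out.println(closeWithTimeout(200));
    }
}
```

This doesn't remove the underlying deadlock (the async writer thread may still be wedged, and the store may need a clean reopen), but it keeps a stuck close() from taking the whole Tomcat worker down with it.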