Deadlock while closing


Nisha Sowdri NM

Jul 31, 2013, 5:43:12 AM
to ma...@googlegroups.com
Hi,

I've been using MapDB in production for around five months. It's wonderful.

Occasionally a deadlock occurs and the executing thread hangs forever. The only way to release the thread is to restart Tomcat. I've included the stack trace below. Kindly let me know if there is a workaround or fix for this.

Caused by: java.lang.RuntimeException: java.lang.InterruptedException
        at org.mapdb.AsyncWriteEngine.close(AsyncWriteEngine.java:410)
        at org.mapdb.EngineWrapper.close(EngineWrapper.java:72)
        at org.mapdb.EngineWrapper.close(EngineWrapper.java:72)
        at org.mapdb.CacheHashTable.close(CacheHashTable.java:169)
        at org.mapdb.HTreeMap.close(HTreeMap.java:1233)
        at com.mediaiq.csvprocessor.component.filter.AggregationFilter.run(AggregationFilter.java:94)
        at com.mediaiq.csvprocessor.component.filter.BaseFilter.run(BaseFilter.java:46)
        at com.mediaiq.csvprocessor.component.filter.RowFilter.run(RowFilter.java:85)
        at com.mediaiq.miqweb.core.filter.DateFilter.run(DateFilter.java:174)
        at com.mediaiq.csvprocessor.FilterChain.call(FilterChain.java:51)
        at com.mediaiq.csvprocessor.Runner.run(Runner.java:46)
        ... 15 more
Caused by: java.lang.InterruptedException
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2052)
        at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:374)
        at org.mapdb.AsyncWriteEngine.close(AsyncWriteEngine.java:395)

While googling for 'deadlock + mapdb', I found the piece of code below in the MapDB source. Does it have anything to do with the deadlock?

public class AsyncWriteEngine extends EngineWrapper implements Engine {

    // ...

    @Override
    public <A> long put(A value, Serializer<A> serializer) {
        if (commitLock != null) commitLock.readLock().lock();
        try {
            // Blocks until a preallocated record id is available; if the
            // background writer has stopped refilling the queue, this call
            // never returns on its own.
            Long recid = newRecids.take(); // TODO possible deadlock while closing
            update(recid, value, serializer);
            return recid;
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        } finally {
            if (commitLock != null) commitLock.readLock().unlock();
        }
    }

    // ...
}
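The suspected failure mode can be reproduced with plain JDK classes, without MapDB at all: a thread parked in `ArrayBlockingQueue.take()` waits forever once nothing will ever refill the queue, and only an interrupt (which the code above rethrows as a `RuntimeException`, matching the posted stack trace) gets it out. A minimal sketch of that pattern:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class TakeDeadlockDemo {

    /** Returns true if the waiter blocked indefinitely and had to be interrupted. */
    static boolean waiterNeedsInterrupt() throws InterruptedException {
        // Empty queue stands in for newRecids after the thread that
        // preallocates record ids has stopped.
        BlockingQueue<Long> newRecids = new ArrayBlockingQueue<>(128);
        final boolean[] interrupted = {false};

        Thread waiter = new Thread(() -> {
            try {
                newRecids.take(); // blocks: no producer will ever offer()
            } catch (InterruptedException e) {
                interrupted[0] = true; // the original code rethrows this as a RuntimeException
            }
        });
        waiter.start();

        Thread.sleep(200);          // give the thread time to park inside take()
        boolean stuck = waiter.getState() == Thread.State.WAITING;
        waiter.interrupt();         // the only way out of the wait
        waiter.join(1000);
        return stuck && interrupted[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("deadlock pattern reproduced: " + waiterNeedsInterrupt());
    }
}
```

This is a generic illustration of the `take()`-while-closing hazard, not MapDB's actual shutdown sequence; the class and method names here are made up for the demo.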


Thanks, 

Jan Kotek

Jul 31, 2013, 4:46:26 PM
to ma...@googlegroups.com, Nisha Sowdri NM
Hi Nisha,

There have been several problems with AsyncWriteEngine over the last few months, and the class has been rewritten a few times. The version you are using appears to be obsolete; the line numbers do not match the current version.

That TODO has already been solved in the rewrite.

So please update to a recent version and send a new stack trace if it happens again. As a workaround, you can disable the AsyncWriteEngine; it is not that slow.
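For anyone following along, disabling async writes means building the DB through DBMaker explicitly rather than via a shortcut like newTempHashMap(). The sketch below uses the 0.9.x-era method names (asyncWriteDisable(), getHashMap()); these are recalled from that API generation, not confirmed in this thread, and later MapDB releases changed the defaults so that async writes became opt-in instead. The map name "temp" is arbitrary.

```java
import org.mapdb.DB;
import org.mapdb.DBMaker;

import java.util.Map;

public class NoAsyncWriteSketch {
    public static void main(String[] args) {
        // Build the DB explicitly so the async-write setting can be turned off.
        DB db = DBMaker.newMemoryDB()
                .asyncWriteDisable()   // 0.9.x name; may differ in other versions
                .make();

        Map<String, Long> map = db.getHashMap("temp"); // arbitrary map name
        map.put("hello", 1L);

        // Without AsyncWriteEngine there is no background writer queue
        // for close() to block on.
        db.close();
    }
}
```

Verify the exact builder method against the DBMaker javadoc for the release you are on before relying on this.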

Jan
--
Jan Kotek


Nisha Sowdri NM

Aug 1, 2013, 12:42:29 AM
to ma...@googlegroups.com, Nisha Sowdri NM, j...@kotek.net
Hi Jan,

Thank you for your reply.

I'm using the latest stable release, 0.9.3. I'll update to the snapshot version and check (and will post if it happens again).

By the way, I'm using DBMaker.newTempHashMap() to create a temporary hash map. How do I disable async writes in that case? I couldn't find this in the docs.

Regards, 
-sowdri-