contextDataMap.put(storageKey, objectMapper.writeValueAsBytes(json));
contextDataDb.commit();
</snippet>
When the system is under load, the whole thing comes crashing down with a java.nio ClosedChannelException:
java.nio.channels.ClosedChannelException
at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:99)
at sun.nio.ch.FileChannelImpl.truncate(FileChannelImpl.java:319)
at org.mapdb.Volume$FileChannelVol.tryAvailable(Volume.java:683)
at org.mapdb.Volume.ensureAvailable(Volume.java:58)
at org.mapdb.StoreWAL.logAllocate(StoreWAL.java:310)
at org.mapdb.StoreWAL.put(StoreWAL.java:225)
at org.mapdb.EngineWrapper.put(EngineWrapper.java:54)
at org.mapdb.Caches$LRU.put(Caches.java:46)
at org.mapdb.HTreeMap.putInner(HTreeMap.java:559)
at org.mapdb.HTreeMap.put(HTreeMap.java:458)
at com.levelsbeyond.plugin.workflow.context.WorkflowContextDataServiceImpl.updateContextDataValue(WorkflowContextDataServiceImpl.java:208)
at com.levelsbeyond.plugin.workflow.context.WorkflowContextDataServiceImpl.updateContextDataValue(WorkflowContextDataServiceImpl.java:187)
at com.levelsbeyond.plugin.workflow.step.VideoConversionStepExecution.monitorMediaConversion(VideoConversionStepExecution.java:398)
at com.levelsbeyond.plugin.workflow.step.VideoConversionStepExecution.execute(VideoConversionStepExecution.java:112)
at com.levelsbeyond.plugin.workflow.execution.WorkflowStepExecutor$StepCallable.call(WorkflowStepExecutor.java:237)
at com.levelsbeyond.plugin.workflow.execution.WorkflowStepExecutor$StepCallable.call(WorkflowStepExecutor.java:185)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
The MapDB instance is only used by the WorkflowContextDataServiceImpl class, but that class is heavily multi-threaded: 50+ threads can potentially be writing simultaneously. We definitely only see this issue under load.
I'm frankly unsure what to do to correct/avoid this issue. Can someone provide any hints or info on things I can try?
Thanks,
Dave Lamy
Hi,
I have no idea why it is throwing this exception.
It looks like a race condition between the log update and the commit, which should not happen in theory.
I will have to do a code audit around the stack trace you sent.
For a start, you could work around it by using a memory-mapped file:
DBMaker...
.mmapFileEnable()
Async write could also work around this problem:
DBMaker...
.asyncWriteEnable()
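Putting the two suggested options together, a minimal sketch against the MapDB 0.9.x DBMaker API might look like the following. The file path, class name, and map name are illustrative placeholders, not taken from the original code; either option can also be applied on its own.

```java
import java.io.File;
import org.mapdb.DB;
import org.mapdb.DBMaker;
import org.mapdb.HTreeMap;

public class ContextDataDbFactory {

    // Sketch of the two suggested workarounds on the MapDB 0.9.x builder.
    // "contextData.db" and "contextData" are hypothetical names.
    static DB open() {
        return DBMaker
                .newFileDB(new File("contextData.db"))
                .mmapFileEnable()     // workaround 1: memory-mapped file volume
                .asyncWriteEnable()   // workaround 2: writes queued on a background thread
                .closeOnJvmShutdown()
                .make();
    }

    public static void main(String[] args) {
        DB db = open();
        HTreeMap<String, byte[]> contextDataMap = db.getHashMap("contextData");
        // ... put values into the map as before, then commit and close
        db.commit();
        db.close();
    }
}
```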
Hope this helps,
Jan
--
You received this message because you are subscribed to the Google Groups "MapDB" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mapdb+un...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
I believe this is fixed in 0.9.11, which will come out today.
Jan
Hi,
The allocation logic has changed a lot, so there is a very good chance this is fixed. I have not investigated it too deeply yet.
Jan