Encountering deadlock when "GC overhead limit exceeded" is thrown


michelle arianne Dingcon

Jun 17, 2016, 5:53:39 AM
to MapDB
Hello!

We encountered a deadlock somewhere in our program, and it seems to happen while the GC is trying to free up memory.

The only running thread is the one below, and it appears to hold the lock.
Any help is appreciated.  Thank you!

"Indexer-0-20160612_125959_bfs-hs22-13s7-vm1.bfs.openwave.com_http_sorted_7.c1259.csv.gz" prio=10 tid=0x00007fb384ef0800 nid=0x2fda runnable [0x00007fb2cef8c000]
   java.lang.Thread.State: TIMED_WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:349)
        at org.mapdb.BTreeMap.lock(BTreeMap.java:2882)
        at org.mapdb.BTreeMap.put2(BTreeMap.java:686)
        at org.mapdb.BTreeMap.put(BTreeMap.java:643)
        at com.openwave.sst.server.mapdb.Partition.put(Partition.java:94)
        at com.openwave.sst.server.mapdb.MapDBImpl.addSubscriberPreCommit(MapDBImpl.java:94)
        at com.openwave.sst.server.index.IndexerJob.insertToMapPrecommit(IndexerJob.java:101)
        at com.openwave.sst.server.index.IndexerJob.run(IndexerJob.java:340)
        at com.openwave.sst.server.index.FileConsumer.run(FileConsumer.java:64)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)




Thanks!

-Mitch

Jan Kotek

Jun 20, 2016, 11:56:23 AM
to ma...@googlegroups.com
Hi,

It seems that another thread quit when an exception was thrown, without releasing the lock. That should be fixed in newer versions; there is a try/finally block to prevent that.
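The fix follows the standard release-on-exception idiom, roughly like this (a simplified sketch, not the actual MapDB internals):

import java.util.concurrent.locks.ReentrantLock;

public class LockReleaseSketch {
    private final ReentrantLock nodeLock = new ReentrantLock();

    void update(Runnable work) {
        nodeLock.lock();
        try {
            work.run(); // may throw, e.g. an OutOfMemoryError under GC pressure
        } finally {
            nodeLock.unlock(); // runs even when work throws, so the lock is never leaked
        }
    }
}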

What version are you using?

Jan

michelle arianne Dingcon

Jun 20, 2016, 10:36:26 PM
to MapDB
Hello!

We are currently using version 1.0.7.
Our workaround is to reopen the db with the cache disabled during data cleanup.

The cleanup operation scans all entries in the db, which probably leads to heavy caching. The idea is to open the MapDB file with caching disabled while performing cleanup, then reopen it with caching enabled for indexing; a sketch follows.
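A minimal sketch of that reopen cycle against the 1.0.x DBMaker API (the file path and map name are illustrative):

import java.io.File;
import org.mapdb.DB;
import org.mapdb.DBMaker;

public class CleanupReopenSketch {
    public static void main(String[] args) {
        File dbFile = new File("partition.db");

        // Cleanup phase: open without the instance cache so a full scan
        // does not pin every deserialized BTree node in memory.
        DB cleanupDb = DBMaker.newFileDB(dbFile)
                .cacheDisable()
                .make();
        // ... iterate cleanupDb.getTreeMap("subscribers") and delete stale entries ...
        cleanupDb.commit();
        cleanupDb.close();

        // Indexing phase: reopen with the default cache for fast lookups.
        DB indexDb = DBMaker.newFileDB(dbFile).make();
        // ... resume indexing, then indexDb.close() on shutdown ...
    }
}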

But then another problem comes in: cleanup takes too much time. Cleaning one HTTP index partition takes 10-20 minutes, and after two days of indexing we have ~50 partitions, so cleanup would run in the background the whole day, affecting the performance of both the indexer and search.

mi_yane

Jun 21, 2016, 3:09:02 AM
to MapDB
Will DB.compact() solve the problem?
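For reference, this is roughly how we would call it (a sketch against the 1.0.x API; the file name is illustrative):

import java.io.File;
import org.mapdb.DB;
import org.mapdb.DBMaker;

public class CompactSketch {
    public static void main(String[] args) {
        DB db = DBMaker.newFileDB(new File("partition.db")).make();
        // ... delete stale entries here ...
        db.commit();   // compaction works on the committed state
        db.compact();  // rewrites the store, reclaiming space left by deleted records
        db.close();
    }
}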

mi_yane

Jun 21, 2016, 5:19:37 AM
to MapDB
May I ask how we can optimize deletion from a BTreeMap?
What we do is iterate through the map to find which items should be deleted, and delete them one by one.
I think we have an overhead in deletion.

We disabled caching during deletion because it takes up resources needed for indexing and searching, which we think causes the OOM.
But now the deletion is very slow.
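For example, here is a simplified sketch of our current loop, plus a range-based alternative we are considering (the map name, key type, and isStale predicate are illustrative):

import java.io.File;
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentNavigableMap;
import org.mapdb.DB;
import org.mapdb.DBMaker;

public class DeleteSketch {
    public static void main(String[] args) {
        DB db = DBMaker.newFileDB(new File("partition.db")).make();
        ConcurrentNavigableMap<Long, String> map = db.getTreeMap("subscribers");

        // Current approach: scan every entry and remove matches one by one.
        Iterator<Map.Entry<Long, String>> it = map.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<Long, String> e = it.next();
            if (isStale(e.getValue())) {
                map.remove(e.getKey());
            }
        }

        // Alternative when stale keys are contiguous: clear a subMap view
        // instead of issuing one remove per key.
        map.subMap(0L, 1000L).clear(); // illustrative key range

        db.commit();
        db.close();
    }

    private static boolean isStale(String value) {
        return value == null || value.isEmpty(); // placeholder predicate
    }
}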

Thank you!

mi_yane

Jun 22, 2016, 12:24:36 AM
to MapDB
I'm trying to use the 2.0-beta13 release, but I'm encountering this exception when opening a file db that was created with 1.0.7.

ERROR   2016-06-22 04:58:00,082 [SSTQuartz_Worker-1] - Error occurred while performing index cleanup. org.mapdb.DBException$WrongConfig: This is not MapDB file
        at org.mapdb.StoreDirect.initOpen(StoreDirect.java:175)
        at org.mapdb.StoreWAL.initOpenPost(StoreWAL.java:237)
        at org.mapdb.StoreWAL.initOpen(StoreWAL.java:224)
        at org.mapdb.StoreDirect.init(StoreDirect.java:131)
        at org.mapdb.DBMaker$Maker.makeEngine(DBMaker.java:1510)
        at org.mapdb.DBMaker$Maker.make(DBMaker.java:1290)
        at com.openwave.sst.server.mapdb.TenantSet.<init>(TenantSet.java:69)
        at com.openwave.sst.server.mapdb.TenantSet.getInstance(TenantSet.java:42)
        at com.openwave.sst.server.index.IndexCleanupJob.doCleanup(IndexCleanupJob.java:52)
        at com.openwave.sst.server.index.IndexCleanupJob.execute(IndexCleanupJob.java:40)
        at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557)

Dmitriy Shabanov

Jun 22, 2016, 3:13:13 AM
to ma...@googlegroups.com
You have to export the data from 1.0.7 and import it into a 2.x or 3.x version of MapDB.

The renamed package simplifies that procedure:

<dependency>
    <groupId>org.mapdb</groupId>
    <artifactId>mapdb-renamed</artifactId>
    <version>1.0.8</version>
</dependency>
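A rough sketch of such a migration; the org.mapdb10 package for the renamed 1.0.x classes, the 2.x method names (fileDB, treeMap), and the map name are assumptions to verify against the actual artifacts:

import java.io.File;
import java.util.concurrent.ConcurrentNavigableMap;

public class MigrationSketch {
    public static void main(String[] args) {
        // Old store, opened read-only through the renamed 1.0.x classes.
        org.mapdb10.DB oldDb = org.mapdb10.DBMaker
                .newFileDB(new File("old-partition.db"))
                .readOnly()
                .make();

        // New store, created with the current MapDB version on the classpath.
        org.mapdb.DB newDb = org.mapdb.DBMaker
                .fileDB(new File("new-partition.db"))
                .make();

        ConcurrentNavigableMap<Object, Object> oldMap = oldDb.getTreeMap("subscribers");
        ConcurrentNavigableMap<Object, Object> newMap = newDb.treeMap("subscribers");
        newMap.putAll(oldMap); // copy every entry across storage formats

        newDb.commit();
        newDb.close();
        oldDb.close();
    }
}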

--
Dmitriy Shabanov

mi_yane

Jun 22, 2016, 3:22:25 AM
to MapDB
Hi!

Thanks for your reply!
If I keep using 1.0.7, is there any way to optimize the deletion of entries from a BTreeMap?

Jan Kotek

Jun 22, 2016, 3:57:06 AM
to ma...@googlegroups.com
The storage format has changed between versions 1, 2 and 3.

mi_yane

Jun 22, 2016, 4:02:54 AM
to MapDB
Hi Jan!

I may stick with version 1.0.7.
Any assistance on this issue would be very helpful; I'm stuck on how to work around it.

Dmitriy Shabanov

Jun 22, 2016, 7:15:45 AM
to ma...@googlegroups.com
On Wed, Jun 22, 2016 at 1:02 PM, mi_yane <madi...@gmail.com> wrote:
Any assistance on this issue would be very helpful.

Can you create a simple test that reproduces the issue?

--
Dmitriy Shabanov

mi_yane

Jun 22, 2016, 7:48:35 AM
to MapDB
Hi Dmitriy!

It's rather hard to reproduce; it takes around 9 GB of data for this to happen.
I'm trying a fix that closes the db after the map entries are deleted and then opens it again.
I believe this will release the unused memory held by the BTreeMap.
Thanks!

Scott Carey

Jun 24, 2016, 6:51:12 PM
to MapDB
Does it happen if you disable caching, or change it to weak reference caching?
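On the 1.0.x DBMaker that would look roughly like this (a sketch; the file name is illustrative):

import java.io.File;
import org.mapdb.DB;
import org.mapdb.DBMaker;

public class WeakCacheSketch {
    public static void main(String[] args) {
        // Weak-reference cache: cached nodes are dropped as soon as the GC
        // needs memory, trading extra rereads for OOM safety.
        DB db = DBMaker.newFileDB(new File("partition.db"))
                .cacheWeakRefEnable()
                .make();
        // ... run the cleanup workload ...
        db.close();
    }
}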