Frequent full GC (FGC) when we use Vert.x


infrapol...@gmail.com

Mar 5, 2015, 12:23:20 AM
to ve...@googlegroups.com
Hi, guys.
I have run into some problems while Vert.x is running.


I use Hazelcast 3.2.3, which is bundled with Vert.x 2.1.5. After about three months of running Vert.x, we see memory overflow or heavy server load caused by frequent full GC (FGC).

And I think I found the cause: it appears to be entries accumulating in org.vertx.java.spi.cluster.impl.hazelcast.HazelcastAsyncMultiMap.

So I tried to verify this by repeatedly calling vertx.eventBus().registerHandler and vertx.eventBus().unregisterHandler for a single address/handler pair.

The result of the repetition test is that the memory held by HazelcastAsyncMultiMap grows steadily. But I'm not sure whether this result means a memory leak. Is this a memory leak?
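Roughly, the repetition test looks like the following (a minimal sketch against the Vert.x 2.x API; the class name, the address "test.address", and the loop count are arbitrary):

    import org.vertx.java.core.Handler;
    import org.vertx.java.core.eventbus.EventBus;
    import org.vertx.java.core.eventbus.Message;
    import org.vertx.java.platform.Verticle;

    public class RegisterUnregisterTest extends Verticle {
        @Override
        public void start() {
            EventBus eb = vertx.eventBus();
            Handler<Message<String>> handler = new Handler<Message<String>>() {
                @Override
                public void handle(Message<String> msg) {
                    // no-op; only registration/unregistration matters here
                }
            };
            // Register and unregister the same address/handler pair over and over.
            // In clustered mode, each registration should add an entry to the
            // cluster-wide "subs" map and each unregistration should remove it.
            for (int i = 0; i < 100000; i++) {
                eb.registerHandler("test.address", handler);
                eb.unregisterHandler("test.address", handler);
            }
        }
    }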


And there is another question.

Now I am trying to use an eviction config by setting the map name field in cluster.xml to "subs", i.e. <map name="subs">.

But it doesn't work, even though I've set it like the following:

    <map name="subs">
      <eviction-policy>LRU</eviction-policy>
      <max-size policy="USED_HEAP_SIZE">100</max-size>
      <!-- or: <max-size policy="USED_HEAP_PERCENTAGE">50</max-size> -->
      <eviction-percentage>40</eviction-percentage>
    </map>

What do we have to do to make eviction work? Is there any other configuration I have to set?


In summary, I have two questions: one about the memory leak and one about eviction. I'd appreciate answers to both.

infrapol...@gmail.com

Mar 5, 2015, 1:28:15 AM
to ve...@googlegroups.com
There is one more thing you should know:
the memory leak problem doesn't happen if I don't use clustering.

Jordan Halterman

Mar 5, 2015, 2:01:52 AM
to ve...@googlegroups.com
Not to turn you away, but just a suggestion: you might get better answers regarding eviction policies on the Hazelcast mailing list, since those features are Hazelcast-specific.

Personally, I have seen some crazy memory consumption behavior in Hazelcast in the past. I once ran a test that caused OOM just by repeatedly setting the *same* key on a single Hazelcast map. That sounds like the side effect of an in-memory commit log that is never compacted, but I would be surprised if Hazelcast had anything like that.
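For illustration, a test along those lines would be something like this minimal sketch (assuming the standard Hazelcast 3.x Java API; the class name and the map name "test" are made up):

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;

    public class SameKeyPutTest {
        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            IMap<String, String> map = hz.getMap("test");
            // Repeatedly overwrite the same key: the logical map size
            // stays at one entry, so heap usage should stay flat.
            for (long i = 0; ; i++) {
                map.put("key", "value-" + i);
            }
        }
    }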

Anyways, what you're describing sounds like potentially similar behavior. When registering and unregistering an event bus handler Vert.x should respectively create and destroy an entry in a distributed map IIRC. 

I never spent much time digging into my own issue and instead used ZooKeeper and forgot about the whole mess. Maybe someone will come along with a better answer, but it may be useful to search the Hazelcast groups for questions regarding such issues. I can't imagine you and I are the only people that have seen this.

bytor99999

Mar 5, 2015, 11:00:01 AM
to ve...@googlegroups.com
I think we have seen this too, to a slight degree. As time goes by in our system, each new user logging in creates about 5 new handlers, which are also unregistered when they log out, and gradually our memory consumption goes up. We have caught and fixed a number of Hazelcast memory leaks in the separate data we store in Hazelcast, but we still see a slight increase and haven't found where it comes from; this could very well be our last memory leak.

I also recall someone else, Brian Lalor, who I think found the same thing.

I will have someone on my team who is a Hazelcast expert look at it, and I'll let you guys know whether or not he can stop it.

Mark