Map Eviction and data insertion in Hazelcast 3.5


Vikas Sharma

Aug 12, 2015, 2:58:31 AM
to Hazelcast
I am using hazelcast-spring-3.1.xsd. My Hazelcast map configuration is given below.

    <hz:map name="defaultMap"
            backup-count="1"
            max-size="5000"
            max-size-policy="PER_NODE"
            eviction-percentage="50"
            read-backup-data="true"
            eviction-policy="LRU"
            merge-policy="com.hazelcast.map.merge.LatestUpdateMapMergePolicy"
            time-to-live-seconds="0"/>
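
For reference, the same settings can be expressed through the Hazelcast 3.x programmatic API. This is only a sketch mirroring the XML attributes above, not tested against this exact setup:

```java
// Sketch: the XML map configuration above, written with the Hazelcast 3.x Java API.
Config config = new Config();
config.getMapConfig("defaultMap")
      .setBackupCount(1)
      .setMaxSizeConfig(new MaxSizeConfig(5000, MaxSizeConfig.MaxSizePolicy.PER_NODE))
      .setEvictionPercentage(50)
      .setReadBackupData(true)
      .setEvictionPolicy(EvictionPolicy.LRU)
      .setMergePolicy("com.hazelcast.map.merge.LatestUpdateMapMergePolicy")
      .setTimeToLiveSeconds(0);
```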

I have set max-size="5000", but I have 12,000 records, and defaultMap loads all 12,000 of them when I insert. I am confused: how can it hold more data than its configured maximum of 5,000? I get no exception while inserting the 12,000 records.
One more thing: the eviction configuration is not working either. I want the cache to evict 50 percent of its entries when it reaches the max-size limit.

Can anybody help me? It is quite urgent. What am I missing?

Enes Akar

Aug 12, 2015, 3:26:15 AM
to Hazelcast
How many nodes do you have there?

--
You received this message because you are subscribed to the Google Groups "Hazelcast" group.
To unsubscribe from this group and stop receiving emails from it, send an email to hazelcast+...@googlegroups.com.
To post to this group, send email to haze...@googlegroups.com.
Visit this group at http://groups.google.com/group/hazelcast.
To view this discussion on the web visit https://groups.google.com/d/msgid/hazelcast/10bf32b2-7116-42f1-95d9-0546f7cbda56%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Vikas Sharma

Aug 12, 2015, 3:29:41 AM
to Hazelcast
I have 2 nodes.

Enes Akar

Aug 12, 2015, 3:33:52 AM
to Hazelcast
Your configuration allows 5K entries per node.
So eviction starts once the total reaches 10K entries (2 nodes). You may still see a count above 10K, and no exception is thrown, but eviction does start and you should see the size decrease.
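
The arithmetic behind that answer can be sketched as follows. This is a minimal, self-contained illustration; the class and method names are ours, not part of the Hazelcast API:

```java
// Sketch: with max-size-policy PER_NODE, each member enforces the limit
// locally, so the effective cluster capacity scales with the member count.
public class ClusterCapacity {
    static int clusterMax(int maxSizePerNode, int nodes) {
        // Eviction only begins once each member is near its own local limit.
        return maxSizePerNode * nodes;
    }

    public static void main(String[] args) {
        // max-size="5000" on 2 nodes: eviction starts near 10,000 entries,
        // which is why 12,000 records fit on a larger cluster without error.
        System.out.println(clusterMax(5000, 2)); // 10000
    }
}
```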


Vikas Sharma

Aug 12, 2015, 5:00:07 AM
to Hazelcast
Hi Enes

Thanks for your reply.

We load our data (12,000 records) into the IMap on server startup; its max-size is 5,000. In production we have two nodes, but I am testing on a local server, so I am using one node. Right now my IMap holds all 12,000 records, with the id as the key and the object as the value. You can see the code snippet below.
When getDefaultMapRows(ids) is called with ids that are not in defaultMapcache, repopulateDefaultMapCache() is called, which loads the data for the new ids into defaultMapcache.

The issue we are facing is that when we then try to get those new records, we do not get our newly added data back. We actually want data to be evicted from the cache when the max-size value is reached, which is not happening in our case.

IMap<Long, Object> defaultMapcache;

void populateDefaultMapCache() {
    List<Object[]> kvpairs = DAO.getData();
    defaultMapcache = hazelcast.getMap("defaultMap");
    Map<Long, Object> tempMap = new HashMap<Long, Object>();
    for (Object[] kv : kvpairs) {
        Object row = mapCacheRowQueryResults(kv);
        if (row != null) {
            tempMap.put(row.getOfferContentId(), row);
        }
    }
    defaultMapcache.putAll(tempMap);
}

Map<Long, Object> getDefaultMapRows(List<Long> ids) {
    // Fetch once instead of calling getAll() three times.
    Map<Long, Object> cached = defaultMapcache.getAll(new HashSet<Long>(ids));
    if (cached == null || cached.size() != ids.size()) {
        repopulateDefaultMapCache(ids);
    }
    return defaultMapcache.getAll(new HashSet<Long>(ids));
}

void repopulateDefaultMapCache(List<Long> ids) {
    List<Object[]> kvpairs = DAO.getData();
    Map<Long, Object> tempMap = new HashMap<Long, Object>();
    for (Object[] kv : kvpairs) {
        Object row = mapCacheRowQueryResults(kv);
        if (row != null) {
            tempMap.put(row.getOfferContentId(), row);
        }
    }
    defaultMapcache.putAll(tempMap);
}
Vikas Sharma

Aug 12, 2015, 5:42:15 AM
to Hazelcast

Hi Enes

It is evicting data, but why is it not evicting 50 percent of my cache, given that my eviction-percentage is 50?

Enes Akar

Aug 12, 2015, 5:57:08 AM
to Hazelcast
It tries to evict 50% of the entries by evicting 50% of the entries in each partition. So the result may not be exactly 50%, but it will be close to it.
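
The per-partition rounding behind that "close to 50%" can be sketched numerically. This is an illustration only (class and method names are ours); it distributes entries as evenly as possible over the default 271 partitions and evicts half of each:

```java
// Sketch: evicting 50% per partition is only approximately 50% overall,
// because each partition rounds its own "half" down to a whole entry count.
public class EvictionMath {
    // Spread `total` entries as evenly as possible over `partitions`,
    // evict floor(size / 2) from each, and return the total evicted.
    static int evicted(int total, int partitions) {
        int base  = total / partitions;  // entries in the smaller partitions
        int extra = total % partitions;  // partitions holding one entry more
        int fromLarge = extra * ((base + 1) / 2);
        int fromSmall = (partitions - extra) * (base / 2);
        return fromLarge + fromSmall;
    }

    public static void main(String[] args) {
        // 10,000 entries over 271 partitions: 4,878 evicted (~48.8%), not 5,000.
        System.out.println(evicted(10_000, 271)); // 4878
    }
}
```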

Vikas Sharma

Aug 12, 2015, 6:45:04 AM
to Hazelcast
Thank you very much for your help Enes. 

Vikas Sharma

Aug 12, 2015, 7:16:46 AM
to Hazelcast
Hi Enes

What will happen if I don't define max-size and eviction-policy in my configuration? I know this will disable eviction, but what happens in that case? What is the limit on how much data an IMap can store? If there is no limit (that is, the limit is the system's available memory), could you please tell me whether this has any side effects on my application?

Thanks.

Enes Akar

Aug 12, 2015, 7:19:55 AM
to Hazelcast
Hi Vikas;

There is no limit on the Hazelcast side, but of course the limit is your hardware (memory). So you will eventually get an out-of-memory error.

Vikas Sharma

Aug 12, 2015, 7:42:31 AM
to Hazelcast

OK, so it is better to evict the data. There was a problem in my code: whenever I try to get data from the IMap cache, I check whether the id exists in the cache. If it does not, I insert the new data from the database into the cache and then read it back from the cache in the next statement. During this process, some of my newly added data was evicted as well, so I got null whenever I performed the getDataFromCache operation. So I need to change my code. Thanks for your help.
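
The code change Vikas describes can be sketched as a fetch-through pattern: return the freshly loaded rows to the caller directly instead of reading them back from the cache, so an eviction between the put and the get can no longer produce nulls. This is an illustration only, with a ConcurrentHashMap standing in for the IMap and a stubbed database lookup; all names are hypothetical:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a fetch-through cache: freshly loaded values are merged into the
// result directly, so a concurrent eviction cannot turn them into nulls.
public class FetchThroughCache {
    // Stand-in for the Hazelcast IMap; eviction may remove entries at any time.
    static final Map<Long, Object> cache = new ConcurrentHashMap<>();

    // Stub for the DAO lookup (the real code would query the database).
    static Map<Long, Object> loadFromDb(Collection<Long> ids) {
        Map<Long, Object> rows = new HashMap<>();
        for (Long id : ids) {
            rows.put(id, "row-" + id);
        }
        return rows;
    }

    static Map<Long, Object> getRows(List<Long> ids) {
        Map<Long, Object> result = new HashMap<>();
        List<Long> misses = new ArrayList<>();
        for (Long id : ids) {
            Object row = cache.get(id);
            if (row != null) {
                result.put(id, row);
            } else {
                misses.add(id);
            }
        }
        if (!misses.isEmpty()) {
            Map<Long, Object> loaded = loadFromDb(misses);
            cache.putAll(loaded);  // best effort: eviction may drop these again
            result.putAll(loaded); // but the caller still gets the loaded rows
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(getRows(java.util.Arrays.asList(1L, 2L, 3L)).size()); // 3
    }
}
```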

Vikas Sharma

Aug 17, 2015, 5:46:43 AM
to Hazelcast
Hi Enes

I am using eviction-policy="LRU", which means the least recently used entries should be evicted from the cache once the max-size limit has been reached. But my newly added entries are getting evicted instead of the old ones. What could be the issue?

Thanks 