Hi Enes
Thanks for your reply.
We load our data (12,000 records) into an IMap on server startup, and the map's max-size is 5000. On production we have two nodes, but I am testing on a local server with a single node. Right now the IMap holds all 12,000 records, with the id as the key and the object as the value. You can see the code snippet below.
When getDefaultMapRows(ids) is called with ids that are not yet in defaultMapcache, repopulateDefaultMapCache() is invoked to load the data for those new ids into defaultMapcache.
The issue we are facing is that when we then try to read those new records, we do not get the newly added data back. We expect entries to be evicted from the cache once max-size is reached, but no eviction is happening in our case.
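For reference, Hazelcast only enforces a max-size limit when an eviction policy is configured on the map; with the default policy (NONE) the map keeps growing and no eviction happens. A minimal programmatic sketch of such a config, assuming Hazelcast 3.x and the map name "defaultMap" used below:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.EvictionPolicy;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.MaxSizeConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

Config config = new Config();
MapConfig mapConfig = config.getMapConfig("defaultMap");

// Evict least-recently-used entries once the size limit is hit;
// without a policy, max-size has no effect.
mapConfig.setEvictionPolicy(EvictionPolicy.LRU);
mapConfig.setMaxSizeConfig(
        new MaxSizeConfig(5000, MaxSizeConfig.MaxSizePolicy.PER_NODE));

HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
```

Note that with PER_NODE and two production nodes the cluster as a whole could hold around 10,000 entries before eviction kicks in.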
IMap<Long, Object> defaultMapcache;

void populateDefaultMapCache() {
    List<Object[]> kvpairs = DAO.getData();
    defaultMapcache = hazelcast.getMap("defaultMap");
    // Build the entries locally first so they can be pushed in one putAll().
    Map<Long, Object> tempMap = new HashMap<Long, Object>();
    for (Object[] kv : kvpairs) {
        // In our real code row is a concrete domain type exposing
        // getOfferContentId(); it is shown as Object here for brevity.
        Object row = mapCacheRowQueryResults(kv);
        if (row != null) {
            tempMap.put(row.getOfferContentId(), row);
        }
    }
    defaultMapcache.putAll(tempMap);
}
Map<Long, Object> getDefaultMapRows(List<Long> ids) {
    // getAll() never returns null; it returns only the entries it found,
    // so a size mismatch means some ids are missing from the cache.
    Set<Long> keys = new HashSet<Long>(ids);
    Map<Long, Object> found = defaultMapcache.getAll(keys);
    if (found.size() != ids.size()) {
        repopulateDefaultMapCache(ids);
        found = defaultMapcache.getAll(keys);
    }
    return found;
}
void repopulateDefaultMapCache(List<Long> ids) {
    // Note: DAO.getData() is called without the ids here, exactly as in our
    // code, so it reloads the full data set rather than just the missing ids.
    List<Object[]> kvpairs = DAO.getData();
    Map<Long, Object> tempMap = new HashMap<Long, Object>();
    for (Object[] kv : kvpairs) {
        Object row = mapCacheRowQueryResults(kv);
        if (row != null) {
            tempMap.put(row.getOfferContentId(), row);
        }
    }
    defaultMapcache.putAll(tempMap);
}
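The miss check in getDefaultMapRows above boils down to set arithmetic, because getAll returns only the keys it could find. A self-contained plain-Java sketch of that logic (using a HashMap as a stand-in for the IMap; the types and values are made up for illustration):

```java
import java.util.*;

public class MissCheckDemo {
    // Stand-in for defaultMapcache.getAll(keys): returns only the entries found.
    static Map<Long, String> getAll(Map<Long, String> cache, Set<Long> keys) {
        Map<Long, String> found = new HashMap<Long, String>();
        for (Long k : keys) {
            String v = cache.get(k);
            if (v != null) {
                found.put(k, v);
            }
        }
        return found;
    }

    public static void main(String[] args) {
        Map<Long, String> cache = new HashMap<Long, String>();
        cache.put(1L, "a");
        cache.put(2L, "b");

        Set<Long> wanted = new HashSet<Long>(Arrays.asList(1L, 2L, 3L));
        Map<Long, String> found = getAll(cache, wanted);

        // The ids that still need to be loaded from the DAO:
        Set<Long> missing = new HashSet<Long>(wanted);
        missing.removeAll(found.keySet());
        System.out.println(missing); // prints [3]
    }
}
```

Computing the missing set like this would also let repopulateDefaultMapCache fetch only the absent ids instead of reloading everything.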