Actually this isn't a bug in Hazelcast; it's one of those cases where the right thing happens but looks wrong without the detail. We'll get the documentation updated.
The main design here is that eviction works per partition: each partition gets an equal share of the memory allowance.
This is necessary because, for instance, partitions can move from member to member on a rebalance, so each partition has to police its own share independently.
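To get a feel for the numbers, here's a rough sketch of the arithmetic (271 is Hazelcast's default partition count; the 100 MB limit is an assumed figure purely for illustration):

```java
int partitionCount = 271;  // Hazelcast's default partition count
int allowanceMb = 100;     // assumed map-wide memory limit, for illustration
double perPartitionMb = (double) allowanceMb / partitionCount;
// Each partition's share of the allowance is small:
System.out.printf("~%.2f MB per partition%n", perPartitionMb); // ~0.37 MB
```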
The example posted inserts 4 keys with large values.
For the keys chosen, each entry lands in a different partition.
The keys are integers 0, 1, 2 and 3, allocated to partitions 11, 31, 5 and 227.
("hazelcastInstance.getPartitionService().getPartition(i).getPartitionId()" will show this)
What happens is that when "put(3, v)" adds key 3 to partition 227, that is the point at which the memory threshold is breached,
and partition 227 needs to find entries to evict to keep its share of memory in check. Partition 227 only has one entry, the key 3 just
inserted, so that's the only one it can evict.
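Here's a rough reproduction sketch. The map name, value size and the 16 MB limit are assumptions rather than the original example's exact figures, and the eviction config shown is the Hazelcast 4/5 API style:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.EvictionPolicy;
import com.hazelcast.config.MaxSizePolicy;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class EvictionRepro {
    public static void main(String[] args) {
        Config config = new Config();
        config.getMapConfig("test").getEvictionConfig()
              .setEvictionPolicy(EvictionPolicy.LRU)          // assumed policy
              .setMaxSizePolicy(MaxSizePolicy.USED_HEAP_SIZE) // assumed limit type
              .setSize(16);                                   // assumed 16 MB map limit
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

        IMap<Integer, byte[]> map = hz.getMap("test");
        for (int i = 0; i < 4; i++) {
            map.put(i, new byte[8 * 1024 * 1024]); // assumed 8 MB values
        }

        // The put that breaches the threshold can only evict from its own
        // partition; as described above, that can be the just-inserted key 3.
        System.out.println(map.keySet()); // e.g. [0, 1, 2]
        hz.shutdown();
    }
}
```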
The behaviour is a consequence of the entry sizes being large relative to each partition's share of the allowed memory.
It's almost always a bad idea to reduce the partition count, but for *a test* you could repeat the example with "hazelcast.partition.count" set to "1".
All entries then go to the same partition, partition 0, and when "put(3, v)" happens there are other entries in partition 0, so key 0 can be evicted as the oldest.
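A minimal sketch of that test setup:

```java
// Test only: force every key into partition 0 so eviction
// sees all four entries together.
Config config = new Config();
config.setProperty("hazelcast.partition.count", "1");
HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
```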
For production, a better idea is to increase the allowed memory.
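With the USED_HEAP_SIZE style of limit assumed in the sketch above, that just means a larger size (256 MB is again an assumed figure; adjust to whichever max-size policy the map actually uses):

```java
// Raise the map's allowance so each partition's share
// comfortably exceeds the entry size.
config.getMapConfig("test").getEvictionConfig()
      .setMaxSizePolicy(MaxSizePolicy.USED_HEAP_SIZE)
      .setSize(256);
```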
Finally, on a real cluster with several nodes, remember that other behaviour that looks wrong at first glance can arise.
For example, in a 2 node cluster, it might happen that keys 0 and 2 are on the first node, and keys 1 and 3 are on the second node.
If the second node is low on memory, it can only choose from its own keys to evict, so either key 1 or key 3 will have to go; let's say key 1.
This means the oldest key, key 0, remains.
Key 0 is on the first node, and evicting it wouldn't help the memory pressure on the second node.
But if you listed the keys you'd now see keys 0, 2 and 3, and again this could cause confusion.