I have a use case where I need a couple of million keys, with values around 1 MB each. There would be another million keys whose values would be around 2.5 MB. All my keys would have an infinite expiration time.
Does the slab memory allocation implementation keep adding memory to accommodate new key/value pairs, or is there a limiting factor that requires most key/value pairs to be of a similar size? I have plenty of memory at my disposal, so running memcached with 1 MB, 2.5 MB, or even 1 GB slab sizes wouldn't be an issue.
How does the chunk size inside a slab get decided? And if all my slab classes show free_chunks as 0, will another slab page be allocated?
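For context on what I mean by chunk sizing: my understanding is that memcached builds a series of slab classes whose chunk sizes grow geometrically from a minimum (the -n option) by a growth factor (the -f option, default 1.25), capped by the maximum item size (-I, default 1 MB), and an item goes into the smallest class whose chunk fits it. A rough sketch of that logic (the exact numbers and rounding are assumptions on my part, not taken from the source):

```python
def slab_class_sizes(min_chunk=96, growth_factor=1.25, max_item=1024 * 1024):
    """Approximate memcached's slab-class chunk-size table (a sketch,
    not the real implementation)."""
    sizes = []
    size = min_chunk
    while size < max_item:
        # chunk sizes are rounded up to an 8-byte boundary
        if size % 8:
            size += 8 - (size % 8)
        sizes.append(size)
        size = int(size * growth_factor)
    # the final class holds the largest permitted items
    if sizes[-1] != max_item:
        sizes.append(max_item)
    return sizes

def class_for(item_size, classes):
    """Pick the smallest slab class whose chunk can hold the item."""
    for chunk in classes:
        if item_size <= chunk:
            return chunk
    return None  # item larger than max_item: not storable

classes = slab_class_sizes()
```

If this is roughly right, it would explain why values of very different sizes end up in different slab classes, each competing for its own pool of pages.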
I ask because a colleague ran into a problem where all of his key/value pairs were roughly the same size, and he started seeing a lot of evictions.
Regards,
Shubham