I forgot to describe the strategies:
* non-blocking LRU: an LRU cache with eviction, built over ETS. Eviction is done asynchronously, garbage-collection style: a pool of keys among the least used is evicted in one pass. This avoids evicting keys too often and reduces latency. The algorithm is similar to the one used in Redis [1]
* volatile: a pure in-memory cache maintained in a gen process. Keys to be evicted are kept in a list
* tiered file: part of the cache is kept in memory, part is kept on disk (written in an append-only manner) for persistence. This allows the cache to be persistent across upgrades/restarts.
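To make the batch-eviction idea in the LRU strategy concrete, here is a minimal sketch in Python (the library itself is Erlang over ETS, and evicts asynchronously; this synchronous sketch only shows the policy of evicting a whole pool of least-used keys at once, and all names are illustrative):

```python
class PoolEvictingLRU:
    """Sketch: when full, evict a pool of least-recently-used keys in
    one pass, so eviction runs rarely instead of on every insert."""

    def __init__(self, capacity, pool_fraction=0.1):
        self.capacity = capacity
        self.pool = max(1, int(capacity * pool_fraction))
        self.data = {}       # key -> value
        self.last_used = {}  # key -> logical access time
        self.clock = 0

    def _touch(self, key):
        self.clock += 1
        self.last_used[key] = self.clock

    def get(self, key, default=None):
        if key in self.data:
            self._touch(key)
            return self.data[key]
        return default

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            self._evict_pool()
        self.data[key] = value
        self._touch(key)

    def _evict_pool(self):
        # Drop the `pool` least-recently-used keys in a single sweep.
        victims = sorted(self.data, key=self.last_used.__getitem__)[:self.pool]
        for k in victims:
            del self.data[k]
            del self.last_used[k]
```

Because a batch of keys is reclaimed at once, inserts hit the eviction path only occasionally, which is what keeps latency low under a steady write load.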
Note: you can pass your own cache module if you need to. You can also pass a hook to be executed on eviction, on all cache backends.
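As a sketch of what such an eviction hook looks like (again in Python rather than Erlang; `on_evict` and `put` are illustrative names, not the library's actual API):

```python
from collections import OrderedDict

class HookedCache:
    """Tiny LRU cache that fires a user-supplied hook on each eviction."""

    def __init__(self, capacity, on_evict=None):
        self.capacity = capacity
        self.on_evict = on_evict
        self.data = OrderedDict()

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)  # refresh recency
        self.data[key] = value
        while len(self.data) > self.capacity:
            old_key, old_value = self.data.popitem(last=False)  # oldest first
            if self.on_evict is not None:
                self.on_evict(old_key, old_value)  # eviction hook fires here
```

For example, `HookedCache(2, on_evict=lambda k, v: evicted.append(k))` collects evicted keys; after putting `a`, `b`, `c`, the hook has seen `a`. A hook like this is handy for write-back to a slower store or for metrics.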
Distribution is done by maintaining a group of peers. For now it uses pg2 and Erlang distribution; this is similar to groupcache. The distribution backend itself is pluggable: for some customers a true P2P distribution is used, for example.
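The peer-group idea can be sketched like this (a groupcache-style sketch in Python; the real library builds the group with pg2 over Erlang distribution, and all names here are illustrative):

```python
import hashlib

class PeerGroup:
    """Nodes join a group; each key has exactly one owning peer,
    chosen by a stable hash so every node agrees on the owner."""

    def __init__(self):
        self.peers = []

    def join(self, peer):
        if peer not in self.peers:
            self.peers.append(peer)
            self.peers.sort()  # keep ordering identical on every node

    def leave(self, peer):
        self.peers.remove(peer)

    def owner(self, key):
        # Stable hash: the same key maps to the same peer on all nodes.
        digest = hashlib.sha1(key.encode()).hexdigest()
        return self.peers[int(digest, 16) % len(self.peers)]
```

A node receiving a request for a key it does not own forwards it to `owner(key)`, so each key is cached on one peer instead of everywhere. Swapping the owner-selection logic is where a pluggable backend (e.g. true P2P routing) slots in.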
Anyway, I may release the stable version if anyone is interested in it.
Benoît