Hello!
On Sun, Nov 15, 2015 at 10:19 PM, RJoshi wrote:
> Thanks @agentzh.
> Do you see performance impact if number of keys increased in resty.lrucache given implementation is based on queue?
>
It depends on the key access patterns and your use cases. Also, keep in
mind that there is a limit on the GC-managed memory in each LuaJIT VM
(and there is one VM per nginx worker). The limit is 4 GB on i386 and
1 GB on x86_64 (with the luajit-mm plugin, the limit can be raised to
2 GB on x86_64).
> I have ~50GB RAM available which can be utilized towards caching API request Uri and its responses. What would be your recommendation?
Dedicate the big memory to shm-based stores (ngx.shared.DICT) and keep
only a moderate number of keys (around 1000 or so) in VM-level cache
stores like lua-resty-lrucache, so as not to put pressure on the LuaJIT
GC.
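A minimal sketch of that two-level layout (assumptions: the shm zone
name "api_cache" and its size are hypothetical and must be declared in
nginx.conf via `lua_shared_dict api_cache 1024m;`; the 1000-slot cap and
60s TTL are illustrative, not recommendations for your exact workload):

```lua
local lrucache = require "resty.lrucache"

-- keep the per-worker L1 cache small (~1000 keys) to limit GC pressure
local c, err = lrucache.new(1000)
if not c then
    error("failed to create the lrucache: " .. (err or "unknown"))
end

local _M = {}

function _M.get_response(uri)
    -- 1. check the per-worker LRU cache first (fastest, no locking)
    local resp = c:get(uri)
    if resp then
        return resp
    end

    -- 2. fall back to the shm dict (shared across all workers)
    resp = ngx.shared.api_cache:get(uri)
    if resp then
        c:set(uri, resp, 60)  -- repopulate the L1 cache with a 60s TTL
    end
    return resp
end

return _M
```

On a miss at both levels you would fetch the upstream response and write
it into both caches; the shm dict holds the bulk of the data while the
lrucache only shadows the hottest keys in each worker.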
> How much should be allocated for resty.lrucache and how much for ngx.shared.DICT ?
>
See above.
> Do you recommend creating multiple shared.dict vs one big may be based on hashing the request uri?
>
Multiple shm dicts and manual sharding can be more efficient.
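One way to sketch that manual sharding (assumptions: the dict names
`cache_0` .. `cache_3`, the shard count, and the sizes are all
hypothetical; each shard must be declared in nginx.conf, e.g.
`lua_shared_dict cache_0 12g;` and so on):

```lua
local NSHARDS = 4

-- look up each shm zone once at module load time
local shards = {}
for i = 0, NSHARDS - 1 do
    shards[i] = ngx.shared["cache_" .. i]
end

-- pick a shard by a cheap, stable hash of the key
local function shard_for(key)
    return shards[ngx.crc32_short(key) % NSHARDS]
end

local _M = {}

function _M.set(key, value, ttl)
    return shard_for(key):set(key, value, ttl)
end

function _M.get(key)
    return shard_for(key):get(key)
end

return _M
```

Since every shm dict is protected by its own lock, spreading hot keys
across several dicts reduces lock contention among the nginx workers.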
Regards,
-agentzh