Max size of lrucache and ngx.shared.DICT cache


RJoshi

Nov 13, 2015, 12:40:33 PM
to openresty-en
1. What is the maximum number of keys that can be stored in a resty.lrucache cache for efficient lookup, given that the implementation is queue based?
2. What is the maximum size I can allocate for ngx.shared.DICT?
3. I am planning to cache API responses. Can ngx.shared.DICT be utilized, or is a disk-based cache preferred?

Yichun Zhang (agentzh)

Nov 15, 2015, 9:02:39 AM
to openresty-en
Hello!

On Sat, Nov 14, 2015 at 1:40 AM, RJoshi <rohit....@gmail.com> wrote:
> 1. What is the maximum number of keys that can be stored in a resty.lrucache
> cache for efficient lookup, given that the implementation is queue based?

You specify the maximum yourself in the new() method call:

https://github.com/openresty/lua-resty-lrucache#new
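
For instance, a minimal sketch following that README (the cache size 200 and the keys below are just illustrative; this runs inside an OpenResty Lua context):

```lua
-- typically done in a module loaded at init time, so the
-- cache is created once per worker
local lrucache = require "resty.lrucache"

-- the first argument to new() is the hard limit on the number
-- of cached items; once full, the least recently used item is evicted
local c, err = lrucache.new(200)
if not c then
    error("failed to create the cache: " .. (err or "unknown"))
end

c:set("dog", 32)          -- cache a value with no expiration
c:set("cat", 56, 0.5)     -- cache with a 0.5-second TTL
local v = c:get("dog")    -- returns 32 until "dog" expires or is evicted
```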

> 2. How much maximum size can I allocate for ngx.shared.DICT?

No limit really, as long as you have enough memory.
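
The size of each dict is declared in nginx.conf with the lua_shared_dict directive; for example (the zone name and size here are assumptions, not a recommendation):

```nginx
# nginx.conf: each shared dict is a fixed-size shm zone allocated
# at startup and shared by all worker processes
http {
    lua_shared_dict api_cache 4096m;   # a 4GB zone, as an example
}
```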

> 3. I am planning to cache API response. Can ngx.shared.DICT be utilized or
> disk-based cache is preferred?
>

It depends. Sometimes you need both in series. As a rule of thumb, a
memory-based cache is usually much faster than a disk-based one (even
when the page cache hit rate is extremely high, since one still has to
pay the price of syscalls and page-cache lookups in the OS kernel).

Regards,
-agentzh

RJoshi

Nov 15, 2015, 9:19:52 AM
to openresty-en
Thanks @agentzh.
Do you see a performance impact if the number of keys in resty.lrucache increases, given that the implementation is based on a queue?

I have ~50GB RAM available which can be utilized for caching API request URIs and their responses. What would be your recommendation?
How much should be allocated for resty.lrucache and how much for ngx.shared.DICT?

Do you recommend creating multiple shared dicts versus one big one, perhaps based on hashing the request URI?

Yichun Zhang (agentzh)

Nov 15, 2015, 9:37:36 AM
to openresty-en
Hello!

On Sun, Nov 15, 2015 at 10:19 PM, RJoshi wrote:
> Thanks @agentzh.
> Do you see performance impact if number of keys increased in resty.lrucache given implementation is based on queue?
>

It depends on the key patterns and your use cases. Also, mind you,
there is a limit on the GC-managed memory in each LuaJIT VM (which
is per worker). The limit is 4GB on i386 and 1GB on x86_64 (with the
luajit-mm plugin, the limit can be 2GB on x86_64).

> I have ~50GB RAM available which can be utilized towards caching API request Uri and its responses. What would be your recommendation?

Use big memory for shm-based stores and a moderate number of keys
(like 1000-ish?) in VM-level cache stores (like lua-resty-lrucache).
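
A common way to combine the two levels looks roughly like this (a hedged sketch; the module layout, the 60-second TTL, and the shm zone name "api_cache" are assumptions):

```lua
-- a small per-worker lrucache in front of a server-wide shm dict;
-- "api_cache" must be declared with lua_shared_dict in nginx.conf
local lrucache = require "resty.lrucache"
local lru = lrucache.new(1000)          -- moderate number of VM-level keys

local function get_cached(key)
    -- level 1: per-worker LuaJIT VM cache (fastest, no locking)
    local v = lru:get(key)
    if v then
        return v
    end

    -- level 2: shared memory dict (shared by all workers)
    v = ngx.shared.api_cache:get(key)
    if v then
        lru:set(key, v, 60)             -- repopulate the VM-level cache
    end
    return v
end
```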

> How much should be allocated for resty.lrucache and how much for ngx.shared.DICT ?
>

See above.

> Do you recommend creating multiple shared.dict vs one big may be based on hashing the request uri?
>

Multiple shm dicts with manual sharding can be more efficient.
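
For example, a manual-sharding sketch (the dict names cache_0 .. cache_3 and the shard count are assumptions; they would need matching lua_shared_dict directives in nginx.conf):

```lua
-- route each key to one of N shm dicts by hashing it, so lock
-- contention is spread across several smaller zones
local NSHARDS = 4

local function shard_for(key)
    -- ngx.crc32_short is the ngx_lua built-in CRC-32 hash
    local n = ngx.crc32_short(key) % NSHARDS
    return ngx.shared["cache_" .. n]
end

local uri = ngx.var.uri
shard_for(uri):set(uri, "some cached response", 300)
```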

Regards,
-agentzh

Rohit Joshi

Nov 15, 2015, 10:36:08 AM
to openre...@googlegroups.com
Thanks. I assume the 1GB limit is for the Lua-based cache (resty.lrucache) and not for ngx.shared.DICT.

So essentially, I can allocate 4GB to each of 10 ngx.shared.dict caches, and maybe 1000 keys for lrucache per worker.

Yichun Zhang (agentzh)

Nov 15, 2015, 10:39:23 AM
to openresty-en
Hello!

On Sun, Nov 15, 2015 at 11:36 PM, Rohit Joshi wrote:
> Thanks. I assume 1GB limit is for Lua based cache (resty.lrucache) and not for ngx.shared.DICT.
>

Right.

> So essentially, I can allocate 4GB of 10 ngx.shared.dict caches and may be 1000 keys for lrucache per worker.
>

Maybe :) The actual numbers depend on the size of your
individual key-value pairs, though.

Regards,
-agentzh