ngx.shared.dict vs module local variable to store cache


RJoshi

May 19, 2016, 4:57:31 PM5/19/16
to openresty-en
Hello,
  If I wanted a persistent (non-expiring) cache shared across all worker processes, occasionally flushed/reloaded on demand via an API request (particularly when traffic is low), would it be better to use a module-level local variable?  Can I use resty.lock to lock the variable during the flush/reload of the cache?

Thx

Lord Nynex

May 19, 2016, 9:21:50 PM5/19/16
to openre...@googlegroups.com
Hello,

The Lua VM in openresty is not one single VM. In fact, there is one per worker. In the situation you've described, you can create a cache at the module level, but you would not be able to do any sort of IPC with this data. If the data is truly a cache, this may not be a big deal for you, but it does increase the memory requirements of your application. As an aside, I've seen over the last few years that module-level caches for dynamic data are a common source of memory leaks. Sometimes developers, or difficult-to-find bugs, prevent GC of these types of objects, and the memory usage of nginx will grow constantly as a result.
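To make the per-worker point concrete, here is a minimal sketch of such a module-level cache (the module name and functions are illustrative, not from the thread). Each worker that `require`s this module gets its own copy of the table, so a flush in one worker does not affect the others:

```lua
-- mycache.lua: a hypothetical module-level cache.
-- This table lives in ONE worker's Lua VM only; there is no IPC.
local _M = {}

local cache = {}

function _M.get(key)
    return cache[key]
end

function _M.set(key, value)
    cache[key] = value
end

function _M.flush()
    -- The old table becomes garbage; it is reclaimed only when this
    -- worker's Lua GC gets around to it, and only if nothing else
    -- still holds a reference to it (a common source of "leaks").
    cache = {}
end

return _M
```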

There is obviously SOME computational overhead to using ngx.shared.dict, but in most cases it is imperceptible. Based on the requirements you've described, ngx.shared.dict is the only 'out of the box' option for you. An alternative is using a centralized store like redis.
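For reference, the 'out of the box' option looks roughly like this (the zone name `my_cache` is illustrative; the dict must be declared in nginx.conf with `lua_shared_dict my_cache 10m;` in the http block):

```lua
-- Runs inside any request handler, in any worker: the dict lives in
-- shared memory, so every worker sees the same data.
local dict = ngx.shared.my_cache

local ok, err = dict:set("greeting", "hello", 0)  -- exptime 0 = never expire
if not ok then
    ngx.log(ngx.ERR, "set failed: ", err)
end

local val = dict:get("greeting")  -- visible from all workers
```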

You can of course use resty.lock on any shared dict, as that's what it was intended for. If you're asking whether holding locks on keys while recursively deleting everything in the shared dict will 'protect' your keys, the answer is pretty much no, because that's not really the way locks work. You must think of it as a synchronization primitive (like a mutex) and not as some kind of exclusion filter when you're performing a destructive operation. It is up to you, the developer, to decide how your shared dict is managed. resty.lock will not make assumptions for you.
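As an illustration of the synchronization-primitive point, a reload serialized with lua-resty-lock might look roughly like this (a sketch only; the zone names and the `load_fn` loader are hypothetical, and nginx.conf is assumed to declare `lua_shared_dict my_cache 10m;` and `lua_shared_dict my_locks 1m;`):

```lua
local resty_lock = require "resty.lock"

local function reload_cache(load_fn)
    local lock, err = resty_lock:new("my_locks")
    if not lock then
        return nil, "failed to create lock: " .. err
    end

    local elapsed, lerr = lock:lock("cache_reload")
    if not elapsed then
        return nil, "failed to acquire lock: " .. lerr
    end

    -- Only one caller of reload_cache() runs this section at a time.
    -- Note the lock does NOT stop other code paths from reading or
    -- writing the dict concurrently; it only serializes cooperating
    -- callers who take the same lock key.
    local dict = ngx.shared.my_cache
    dict:flush_all()                 -- mark all existing entries expired
    for k, v in pairs(load_fn()) do  -- load_fn: hypothetical data loader
        dict:set(k, v)
    end

    local ok, uerr = lock:unlock()
    if not ok then
        return nil, "failed to unlock: " .. uerr
    end
    return true
end
```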

-Brandon


RJoshi

May 20, 2016, 10:03:06 AM5/20/16
to openresty-en
Thanks Brandon for the detailed explanation. I did see a memory leak and this might be the reason.  I will stick to ngx.shared.dict for sharing across workers and pull a few hot entries into lrucache where that is possible.

Regarding ngx.shared.dict and lrucache, do I need to explicitly flush expired records?  I see memory growing and I'm not sure if this is the cause.

Is there any setting where I can force purging/GC of these expired records immediately?

Yichun Zhang (agentzh)

May 27, 2016, 6:10:01 PM5/27/16
to openresty-en
Hello!

On Fri, May 20, 2016 at 7:03 AM, RJoshi wrote:
> Regarding ngx.shared.dict and lrucache, do I need to explicitly flush
> expired records?

No, don't do that. Both of them have some kind of GC themselves.

> I see memory growing and I'm not sure if this is the cause.
>

As long as it's not growing forever, then it's fine. Almost every GC
has some latency.

> Is there any setting where I can force purging/GC of these expired records
> immediately?
>

You can use shdict:flush_expired() and collectgarbage("collect") to
force a full GC cycle for the shdict and lrucache, respectively. But
these operations are expensive and blocking, and they are usually not
necessary at all. So be careful.

Regards,
-agentzh

RJoshi

May 27, 2016, 6:13:30 PM5/27/16
to openresty-en
Thanks.