Clarification on shared dict eviction or expiration


Aleš Bregar

Jan 19, 2017, 11:25:18 AM
to openresty-en
Hi,

I was looking through the available documentation on GitHub and the OpenResty site, but couldn't fully work out how a shared dict is cleaned up.
According to the docs:
"When it fails to allocate memory for the current key-value item, then set will try removing existing items in the storage according to the Least-Recently Used (LRU) algorithm. Note that, LRU takes priority over expiration time here. If up to tens of existing items have been removed and the storage left is still insufficient (either due to the total capacity limit specified by lua_shared_dict or memory segmentation), then the err return value will be no memory and success will be false."

Does LRU eviction distinguish between values where an exptime was set and non-expirable items, or does it just drop the oldest items regardless?

The other question is: is there any trigger that periodically (or once some threshold is crossed) automatically flushes expired items?

Thanks in advance for the clarifications

Robert Paprocki

Jan 19, 2017, 11:59:24 AM
to openre...@googlegroups.com
Hi,

On Thu, Jan 19, 2017 at 11:25 AM, Aleš Bregar <ales....@gmail.com> wrote:
Hi,

I was looking through the available documentation on GitHub and the OpenResty site, but couldn't fully work out how a shared dict is cleaned up.
According to the docs:
"When it fails to allocate memory for the current key-value item, then set will try removing existing items in the storage according to the Least-Recently Used (LRU) algorithm. Note that, LRU takes priority over expiration time here. If up to tens of existing items have been removed and the storage left is still insufficient (either due to the total capacity limit specified by lua_shared_dict or memory segmentation), then the err return value will be no memory and success will be false." 

Does LRU eviction distinguish between values where an exptime was set and non-expirable items, or does it just drop the oldest items regardless?

From the source (https://github.com/openresty/lua-nginx-module/blob/master/src/ngx_http_lua_shdict.c#L1188), when memory cannot be allocated, `ngx_http_lua_shdict_expire` is called up to 30 times. Each call is passed the shdict ctx pointer and an integer 0. Comments for this int flag indicate the following:

/*
 * n == 1 deletes one or two expired entries
 * n == 0 deletes oldest entry by force
 *        and one or two zero rate entries
 */

So in this case, the last element (derived from ngx_queue_last) is forcibly removed. The function then tries to remove the next node; if that node is not expired, or was never assigned an expiry time, ngx_http_lua_shdict_expire returns the number of entries removed. So to answer your first question: yes, it does distinguish to a certain extent. The first entry is removed forcefully regardless of expiry time, and then the function attempts to remove one or two more items, but only if they have expired. This process repeats up to thirty times when memory for a new node cannot be allocated from the slab.
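
If you want to watch this from the Lua side, here is a quick-and-dirty sketch (the dict name tiny_cache and its 64k size are arbitrary, purely for the experiment; run it from a content_by_lua* handler):

-- assumes: lua_shared_dict tiny_cache 64k;
local dict = ngx.shared.tiny_cache

-- fill the zone with keys that never expire (no exptime argument),
-- stopping once a set had to forcibly evict something, i.e. the zone is full
for i = 1, 1000 do
    local ok, err, forcible = dict:set("filler:" .. i, string.rep("x", 1024))
    if forcible or not ok then
        ngx.say("zone full after ", i, " items (err=", tostring(err), ")")
        break
    end
end

-- every further set now succeeds only by forcibly dropping the LRU tail,
-- even though none of the filler keys has an expiry time
local ok, err, forcible = dict:set("one_more", "value")
ngx.say("ok=", tostring(ok), " forcible=", tostring(forcible))
-- expected: ok=true forcible=true

The third return value of set() (forcible) is what tells you that valid, unexpired items had to be thrown out to make room.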

By the way, ngx_http_lua_shdict_expire is called sporadically throughout the various set/insert functions with a flag of '1', so it's always trying to remove a few expired entries when adding to or updating the dictionary.

Just musing, an interesting consequence of this is that two or more elements with very long expiry times could potentially pin memory in the dictionary after they have expired, if flush_expired is never called.

 
The other question is: is there any trigger that periodically (or once some threshold is crossed) automatically flushes expired items?

ngx.timer.at seems like a good fit for this, but be cautious: flush_expired will lock the dictionary for the whole operation, so on a very large dictionary this could cause a noticeable access block. Caveat emptor ;)
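
Something like this would be the usual pattern (just a sketch; the 60-second interval, the 200-item batch and the dict name my_cache are all arbitrary):

-- meant to live inside init_worker_by_lua_block { ... } in nginx.conf
local delay      = 60    -- seconds between sweeps
local batch_size = 200   -- max items flushed per sweep, to keep each lock short

local sweep
sweep = function(premature)
    if premature then
        return  -- the worker is shutting down
    end

    -- flush_expired(n) removes at most n expired items and returns how many it removed
    local flushed = ngx.shared.my_cache:flush_expired(batch_size)
    if flushed > 0 then
        ngx.log(ngx.INFO, "flushed ", flushed, " expired shdict items")
    end

    local ok, err = ngx.timer.at(delay, sweep)
    if not ok then
        ngx.log(ngx.ERR, "failed to reschedule shdict sweep: ", err)
    end
end

local ok, err = ngx.timer.at(delay, sweep)
if not ok then
    ngx.log(ngx.ERR, "failed to create shdict sweep timer: ", err)
end

Flushing in small batches keeps each lock short, and if you only want a single worker doing the sweep you can guard the timer creation with ngx.worker.id() == 0.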

Aleš Bregar

Jan 20, 2017, 3:50:19 AM
to openresty-en, rob...@cryptobells.com
Hi,
Thank you for the prompt and excellent answer; I think it is clear now.

May I ask a further question? I'm wondering about fragmentation (mentioned in https://github.com/openresty/lua-nginx-module/issues/175); the entries I intend to store won't go above 4k in size.
The issue is closed and there is an nginx patch, but I couldn't find a clear statement of how severe the problem is or whether using the patch is actually necessary.

Thanks again and best regards

Robert Paprocki

Jan 20, 2017, 9:24:21 AM
to Aleš Bregar, openresty-en
Hi,

This patch was merged into core several years ago:


So you don't have anything to worry about :) Cheers!

Aleš Bregar

Jan 23, 2017, 6:41:50 PM
to openresty-en, rob...@cryptobells.com
That's great. Thank you for the valuable answer!