is this Shared Table thread Safe and atomic?


Hadi Abbasi
Apr 23, 2019, 9:44:38 AM
to openresty-en
hey friends...

This shared table seems to work and the logs look correct, but I don't know whether it's thread-safe and atomic. In other words, is my human table a concurrent table that can safely be shared between all users?

ngx.shared.human = {}
ngx.shared.human["007"] = {}
ngx.shared.human["007"].name = "james"
ngx.shared.human["007"].family = "bond"
ngx.shared.human["007"].age = "40"
if ngx.shared.human["007"] ~= nil then
    ngx.log(ngx.DEBUG, "+++++> " .. tostring(ngx.shared.human["007"].name or "Empty")) -- +++++> james
else
    ngx.log(ngx.DEBUG, "-----> Empty")
end

Please help... unfortunately I can't find any tutorial or documentation about this way of making a shared table.
Thanks a lot.
Best,
Hadi

Vinicius Mignot
Apr 23, 2019, 3:58:38 PM
to openre...@googlegroups.com
Hey, Hadi

You can find ngx.shared.DICT docs here: https://github.com/openresty/lua-nginx-module#ngxshareddict



--
Vinicius Mignot

Robert Paprocki
Apr 23, 2019, 4:40:59 PM
to openre...@googlegroups.com
To clarify a few points:

- All access to shared dictionaries, including reads, writes, and deletes, is protected by a mutex. Under the hood, shared dictionaries are just Nginx shared memory zones, which are implemented as an mmap'd space created by the master Nginx process (incidentally, this is why worker-process/OpenResty Lua code cannot create arbitrary shared dictionaries). Since multiple distinct processes may want to access this shared space, the mutex associated with each dictionary protects access to the memory zone in question. It's not quite accurate to say it's "thread safe", since Nginx (ignoring cases like aio threads) does not use process threads in its execution model.
- Shared dictionaries need to be modified via the ngx.shared.DICT API, as Vinicius noted above - simply assigning to the table value as in your source example will not have the expected behavior.
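For reference, a minimal sketch of the API-based approach (this assumes a `lua_shared_dict human 1m;` directive in nginx.conf; the dict name and keys are just examples):

```lua
-- nginx.conf must declare:  lua_shared_dict human 1m;
local human = ngx.shared.human

-- set() returns ok, err, forcible
local ok, err = human:set("007_name", "james")
if not ok then
    ngx.log(ngx.ERR, "failed to set key: ", err)
end

-- get() returns the stored value (nil if absent)
local name = human:get("007_name")
ngx.log(ngx.DEBUG, "+++++> ", name or "Empty")
```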


Hadi Abbasi
Apr 24, 2019, 2:26:53 AM
to openresty-en
Thanks a lot... I know the shared dictionary is thread-safe, but the flat key/value style is not ideal for my requirements.
I want to use a shared object with a hierarchical structure (properties of different types, like int or string). Building that kind of shared data type on top of a shared dictionary is difficult!
Unless I use the shared dictionary value as an index into an array that contains my info objects... am I right?


Igor Clark
Apr 24, 2019, 3:04:25 AM
to openre...@googlegroups.com
Depending on the content of the objects, I tend to JSON-encode them and store the string in the shared dict. That works well for me, as often I just want to output JSON directly anyway. It may not be good for you if you need to make frequent deep changes to the objects.
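A sketch of that approach using OpenResty's bundled cjson library (the dict name `human` and key `"007"` are illustrative):

```lua
local cjson = require "cjson.safe"

local person = { name = "james", family = "bond", age = 40 }

-- serialize the whole table and store it under one key
ngx.shared.human:set("007", cjson.encode(person))

-- later, possibly in another worker/request: fetch and decode
local raw = ngx.shared.human:get("007")
local decoded = raw and cjson.decode(raw)
if decoded then
    ngx.log(ngx.DEBUG, decoded.name, " ", decoded.family)
end
```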

Hadi Abbasi
Apr 24, 2019, 10:06:53 AM
to openresty-en
Thanks a lot... so I think I have to use a shared dictionary instead of a shared object (table), and define human 007 like:

ngx.shared.human:set("007_name", "james")
ngx.shared.human:set("007_family", "bond")
ngx.shared.human:set("007_age", 40)

But I'm not sure if it's a good idea in terms of performance!

Hadi Abbasi
Apr 24, 2019, 10:18:49 AM
to openresty-en
Thank you...
But I need to access the human information on every user request, so parsing the human JSON item on each request seems ineffective in terms of performance!
So I think the best way is:

ngx.shared.human:set("007_name", "james")
ngx.shared.human:set("007_family", "bond")
ngx.shared.human:set("007_age", 40)

I think I have no other choice!

gspoosi
Apr 24, 2019, 11:24:40 AM
to openre...@googlegroups.com

This way you lose the atomicity, because you do multiple lookups. Listen to Igor. He knows what's up.
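The hazard, sketched with the same example keys: each individual get/set is atomic, but the pair of lookups is not, so a reader can observe a half-updated record.

```lua
-- Worker A reads two related keys:
local name   = ngx.shared.human:get("007_name")
-- <-- Worker B may run human:set("007_name", ...) and
--     human:set("007_family", ...) right here
local family = ngx.shared.human:get("007_family")
-- name and family may now come from two different versions
-- of the record: each call is atomic, the pair is not.
```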


Hadi Abbasi
Apr 27, 2019, 2:41:19 AM
to openresty-en
Thanks a lot...
But as I said in my last posts, I need to access this information frequently (on each user request), so parsing the same information for a host on every request (the same information for a page that gets many requests) doesn't seem like a great idea...

Igor Clark
Apr 27, 2019, 5:08:50 AM
to openre...@googlegroups.com
Hi Hadi,

Of course it depends on your app, but if you can structure it so that you store the table as a json string, fetch it at the beginning of the request, do whatever manipulations you need in memory, then json encode it and store it when you’re done, you might well find cjson is fast enough not to cause any problems. It’s pretty fast. If so then you have the benefit of the atomic shared dict and can carry on with the nice simple programming model it provides. Personally I would try it rather than assume it’ll be too slow - it might be fine, and if it’s not, then at least you’ll know for sure.
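The cycle described above can be sketched like this with cjson (the key and field names are illustrative):

```lua
local cjson = require "cjson.safe"
local dict = ngx.shared.human

-- 1. fetch and decode at the start of the request
local person = cjson.decode(dict:get("007") or "{}") or {}

-- 2. manipulate the plain Lua table in memory
person.age = (person.age or 0) + 1
person.last_seen = ngx.now()

-- 3. encode and store when done
local ok, err = dict:set("007", cjson.encode(person))
if not ok then
    ngx.log(ngx.ERR, "failed to store person: ", err)
end
```

Note the fetch-modify-store sequence as a whole is not atomic across workers; if that matters, the update can be guarded with something like lua-resty-lock.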

To paraphrase Joe Armstrong - first make it work, then make it elegant, and only then try to make it faster if you really need to. In this case, given that you want to take advantage of the shared dict and its atomicity (which are pretty darn good), I’d say that means: choose the appropriate data model for the environment, code it as simply and cleanly as possible, and if you *then* find you’re having performance problems with the encoding and decoding, try out different json libraries, try tweaking the structure of the data, or the access pattern to speed things up. If that still doesn’t work, maybe the shared dict isn’t the right way, maybe you should be using redis or something like that. Either way, making a simple test case and whacking it with a load test tool like siege or httperf (or even ab, though remember it lies) will show you pretty quickly whether it’ll do what you need it to do. It’ll have been worth it either way.

I often find myself trying to second-guess which route will be better, so I can avoid spending the time having to come back and do something again if route #1 doesn’t work out. In one way that makes a lot of sense - but just as often it turns out a lot better to try out the simplest realistic test of both routes and measure, because then you know, so you’re not relying on intuition and the net time and effort gain is usually pretty good. (Obviously over time intuition improves based on all the testing, so in the long run I find it also helps me not having to test *quite* so much :-))

Cheers
Igor

Hadi Abbasi
Apr 28, 2019, 2:07:01 AM
to openresty-en
Thank you Igor...
Your message helps me. Yes, I will test them, but I think Redis is slower than the shared dict because it requires a network round-trip...
JSON format is good enough, but breaking the info object into compound keys can also help, like:

ngx.shared.human:set("007_name", "james")
ngx.shared.human:set("007_family", "bond")
ngx.shared.human:set("007_age", 40)

This way there is no serialization/deserialization, so I think it's faster!
Good luck...
All the best,
Hadi

Igor Clark
Apr 28, 2019, 4:44:03 AM
to openre...@googlegroups.com
OK Hadi, good luck to you too. It sounds to me like you’re falling into the trap of premature optimisation; you’re worrying about serialisation or network overhead before you’ve even got the data model straight. Remember what Donald Knuth said, a guy who knows his stuff more than most [1] -

"We should forget about small efficiencies, say about 97% of the time: premature optimisation is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."

Until you’ve done some testing, it’s safer to assume you’re in the 97%, and quite likely a waste of time to do optimisations for the 3% without having found out for sure. Purely from what you’ve written I think you might be focusing on the wrong thing. But of course it’s your call, you might have other requirements or factors we don’t know about. Best of luck.

Cheers
Igor


Hadi Abbasi
Apr 28, 2019, 5:57:24 AM
to openresty-en
Thanks a lot Igor...
But I'm not at the start of my project's development; actually I've already run into poor performance of my API under heavy traffic!
At first I tried increasing worker_processes above 1 and raising worker_connections, but I think keeping the info structures inside Nginx can also help optimize performance (that model layer used to live in Node.js)!
I think handing some tasks to the shared dictionary instead of Redis or Node.js (as the model layer) can give the best performance, because the shared dictionary is fast and atomic, and there is no socket connection involved!
So I'm going to test all of these options and then choose the best way!
Thank you...
Best,
Hadi

Thibault Charbonnier
Apr 29, 2019, 6:05:34 PM
to openre...@googlegroups.com
Hi,

It sounds like you are in need of something like:

https://github.com/thibaultcha/lua-resty-mlcache

The ability to deserialize shm content into Lua-land (what mlcache refers to as the L2 and L1 caches, respectively) avoids heavy serialization/deserialization of shm values on every request, and helps reduce lock contention between workers on shm zones as well.
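A hedged sketch of how mlcache is typically wired up (the shm name, key, and loader function are illustrative; the shm must be declared with lua_shared_dict in nginx.conf):

```lua
local mlcache = require "resty.mlcache"

-- nginx.conf must declare:  lua_shared_dict cache_shm 10m;
local cache, err = mlcache.new("my_cache", "cache_shm", {
    lru_size = 500,   -- size of the worker-local L1 (LRU) cache
    ttl      = 3600,  -- cache hits for one hour
})

local function load_human(id)
    -- runs only on a miss in both L1 and L2,
    -- e.g. a database or upstream lookup
    return { name = "james", family = "bond", age = 40 }
end

-- get() checks the worker-local L1 LRU first, then the shm
-- (L2, with deserialization), and only then calls the loader
local human, err = cache:get("007", nil, load_human, "007")
```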

Best of luck,
Thibault

Hadi Abbasi
May 5, 2019, 6:31:52 AM
to openresty-en
Thanks a lot Thibault Charbonnier...
I will check it out...
Good luck...
Best,
Hadi