light thread limitations/recommendations


Nelson, Erik - 2

Jan 8, 2017, 5:14:41 PM1/8/17
to openresty-en
I'm using light threads as described at

https://github.com/openresty/lua-nginx-module#ngxthreadspawn

in order to retrieve results from some upstreams. I'm curious about the limits of this approach: how well will it scale? Is hundreds/thousands of upstreams a reasonable thing to do, or should I use another design?

Thanks

Erik


rpaprocki

Jan 19, 2017, 11:32:52 AM1/19/17
to openresty-en, erik.l...@bankofamerica.com
Hi,


On Sunday, January 8, 2017 at 5:14:41 PM UTC-5, Nelson, Erik - 2 wrote:
I'm using light threads as described at

https://github.com/openresty/lua-nginx-module#ngxthreadspawn

in order to retrieve results from some upstreams.  I'm curious about the limits of this approach: how well will it scale?  Is hundreds/thousands of upstreams a reasonable thing to do, or should I use another design?

Light threads are not scheduled pre-emptively; a cooperative scheduling model is implemented. So if you are launching hundreds or thousands of light threads, you should ensure that they either yield control regularly, or perform some action that automatically results in a yield (such as cosocket I/O), so that other light threads (as well as other events in the worker process) get their turn.
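A minimal sketch of this fan-out pattern, assuming plain TCP upstreams (the hosts, ports, and the `fetch` helper here are hypothetical; adapt to your protocol). Each cosocket call (`connect`, `send`, `receive`) yields automatically, so the worker stays responsive while threads wait on I/O:

```lua
-- Fetch the first response line from one upstream (illustrative only).
local function fetch(host, port)
    local sock = ngx.socket.tcp()
    sock:settimeout(1000)  -- milliseconds

    local ok, err = sock:connect(host, port)
    if not ok then
        return nil, err
    end

    local bytes, err = sock:send("GET / HTTP/1.0\r\n\r\n")
    if not bytes then
        sock:close()
        return nil, err
    end

    local line, err = sock:receive("*l")  -- yields until data arrives
    sock:close()
    return line, err
end

-- Example upstream list; in practice this might hold hundreds of entries.
local upstreams = { { "10.0.0.1", 8080 }, { "10.0.0.2", 8080 } }

-- Spawn one light thread per upstream, then wait for all of them.
local threads = {}
for i, up in ipairs(upstreams) do
    threads[i] = ngx.thread.spawn(fetch, up[1], up[2])
end

for i = 1, #threads do
    local ok, res, err = ngx.thread.wait(threads[i])
    if not ok or not res then
        ngx.log(ngx.ERR, "upstream ", i, " failed: ", err or res)
    end
end
```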

My guess is that the spin-up time to send out several hundred or thousand upstream socket calls could be considerable, but the definition of an 'acceptable' delay would likely vary by use case. Assuming that the endpoint which launches this many threads is hit regularly, memory consumption from spawning new threads may be a concern as well. Benchmarking your particular use case would give the most concrete answer here.

And I would strongly recommend that if you are using these threads to initiate calls to remote servers, you store connections to those servers in a socket keepalive pool (assuming TCP). The overhead of repeating a three-way handshake several hundred or thousand times would be a nontrivial waste of wait time and packet round trips. Ensure that whatever library you're using supports cosocket keepalive, or if you're rolling your own socket calls, have a look at https://github.com/openresty/lua-nginx-module#tcpsocksetkeepalive.
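For raw cosockets, the change is small: return the connection to the per-worker pool instead of closing it. A sketch, with a hypothetical address and pool sizing you'd tune for your own traffic:

```lua
local sock = ngx.socket.tcp()

local ok, err = sock:connect("10.0.0.1", 8080)  -- example upstream
if not ok then
    ngx.log(ngx.ERR, "connect failed: ", err)
    return
end

-- ... send the request and read the response here ...

-- Instead of sock:close(), park the connection in the keepalive pool:
-- idle timeout of 60s, at most 100 pooled connections per worker.
-- The next connect() to the same host:port reuses it, skipping the
-- TCP three-way handshake entirely.
local ok, err = sock:setkeepalive(60000, 100)
if not ok then
    ngx.log(ngx.ERR, "failed to set keepalive: ", err)
end
```

Note that the pool is per nginx worker process, so total pooled connections scale with your worker count.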

Nelson, Erik - 2

Jan 19, 2017, 12:06:33 PM1/19/17
to openre...@googlegroups.com


rpaprocki wrote on Thursday, January 19, 2017 11:33 AM
Thanks for the design advice! I'll keep that in mind when making a first implementation and see how it goes for my use case. I do have the advantage that the upstreams will all be on a fast local LAN.