TCP servers in OpenResty


Markus Walther

May 27, 2023, 6:59:20 AM
to openresty-en
Hi,

I need feedback on running TCP (stream) servers on the *same instance* of
OpenResty alongside 'normal' HTTP servers.

I have a bunch of them, looking like this one:

stream {
    server {
        listen unix:/tmp/service.sock;

        error_log logs/service.log debug;

        lua_code_cache on;

        content_by_lua_file 'lua/service.lua';
    }
}
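
For reference, the handler lua/service.lua behind such a stream server might be sketched as follows. This is only a sketch assuming a hypothetical newline-delimited request/reply protocol, not the actual service code:

```lua
-- lua/service.lua (sketch): read one request line from the downstream
-- connection and write back one reply line.
local sock = assert(ngx.req.socket(true))  -- raw downstream cosocket

local line, err = sock:receive("*l")
if not line then
    ngx.log(ngx.ERR, "failed to read request: ", err)
    return ngx.exit(ngx.ERROR)
end

-- Hypothetical subcomputation; replace with the real work.
local reply = "result-for:" .. line

local bytes, err = sock:send(reply .. "\n")
if not bytes then
    ngx.log(ngx.ERR, "failed to send reply: ", err)
end
```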

Normal ones, too:

location /api/v1/map {

    error_log logs/map.log debug;

    lua_code_cache on;

    access_by_lua_file 'lua/auth.lua';

    content_by_lua_file 'lua/map.lua';

}

To preserve fully non-blocking behaviour, normal Lua-powered endpoints
like lua/map.lua may offload some subcomputation to TCP servers like
lua/service.lua that run *on the same OpenResty instance*.
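
On the HTTP side, such an offload would look roughly like the following cosocket sketch (assuming the same hypothetical newline-delimited protocol; the socket path matches the stream server above):

```lua
-- lua/map.lua (sketch): offload a subcomputation to the local stream
-- server over a unix-domain cosocket. Cosocket I/O is non-blocking.
local sock = ngx.socket.tcp()
sock:settimeout(1000)  -- 1 s; tune for your workload

local ok, err = sock:connect("unix:/tmp/service.sock")
if not ok then
    ngx.log(ngx.ERR, "failed to connect: ", err)
    return ngx.exit(500)
end

local bytes, err = sock:send("compute:some-input\n")
if not bytes then
    ngx.log(ngx.ERR, "failed to send: ", err)
    return ngx.exit(500)
end

local line, err = sock:receive("*l")  -- read one reply line
if not line then
    ngx.log(ngx.ERR, "failed to receive: ", err)
    return ngx.exit(500)
end

sock:setkeepalive(10000, 32)  -- pool the connection for reuse
ngx.say(line)
```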

Questions:
1. Is this OK from a performance point of view, or is it an anti-pattern?
2. Any caveats?

Many thanks for feedback, Markus

Junlong li

May 28, 2023, 10:03:25 AM
to openresty-en
The stream server runs in the same nginx worker event loop.
So if the subcomputation takes a lot of CPU time, it will also block the nginx event loop.
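
(If the subcomputation really is CPU-heavy, one commonly suggested escape hatch in recent OpenResty releases is ngx.run_worker_thread, which moves the work to an nginx thread pool instead of the event loop. This is a sketch under stated assumptions: it requires a `thread_pool default threads=8;` directive in the main nginx config, and a hypothetical Lua module "heavy" on the package path exposing a pure function `crunch`:

```lua
-- Sketch: run CPU-bound work in a thread pool so the event loop
-- is not blocked. Module and function names are hypothetical.
local ok, res = ngx.run_worker_thread("default", "heavy", "crunch", input)
if not ok then
    ngx.log(ngx.ERR, "worker thread failed: ", res)
    return ngx.exit(500)
end
ngx.say(res)
```
)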

dr.ma...@gmail.com

Jun 30, 2023, 11:14:32 AM
to openresty-en
I see! Would it then be correct to say, though, that if the stream-server computation is I/O-bound and doesn't use much CPU, the pattern is OK?

Junlong li

Jun 30, 2023, 8:38:25 PM
to openresty-en
If it is disk-I/O-bound, it works even worse,
because the worker process blocks while performing disk I/O. (Network I/O via cosockets is non-blocking, but file I/O is not.)
