Markus Walther
May 27, 2023, 6:59:20 AM
to openresty-en
Hi,
I need feedback on running TCP servers on the *same instance* of
OpenResty alongside 'normal' HTTP servers.
I have a bunch of them, looking like this one:
stream {
    server {
        listen unix:/tmp/service.sock;
        error_log logs/service.log debug;
        lua_code_cache on;
        content_by_lua_file 'lua/service.lua';
    }
}
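For concreteness, each of these services is roughly shaped like the following (an illustrative sketch only, not the actual lua/service.lua; it assumes a simple newline-terminated request/response protocol):

-- lua/service.lua (sketch): read one request line, reply with one result line
local sock = assert(ngx.req.socket(true))  -- raw full-duplex downstream socket

while true do
    local line, err = sock:receive("*l")
    if not line then
        if err ~= "closed" then
            ngx.log(ngx.ERR, "receive failed: ", err)
        end
        return
    end

    -- placeholder for the real subcomputation
    local bytes, serr = sock:send("result:" .. line .. "\n")
    if not bytes then
        ngx.log(ngx.ERR, "send failed: ", serr)
        return
    end
end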
Normal ones, too:
location /api/v1/map {
    error_log logs/map.log debug;
    lua_code_cache on;
    access_by_lua_file 'lua/auth.lua';
    content_by_lua_file 'lua/map.lua';
}
To preserve 100% non-blocking behaviour, normal Lua-powered endpoints
like lua/map.lua may offload some subcomputation to TCP servers like
lua/service.lua that run on the *same OpenResty instance*.
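Concretely, the offload from lua/map.lua would look roughly like this (a simplified sketch, not the actual file; the request/response framing and keepalive pool settings are made up for illustration):

-- inside lua/map.lua (sketch): talk to the local stream server via cosocket
local sock = ngx.socket.tcp()
local ok, err = sock:connect("unix:/tmp/service.sock")
if not ok then
    ngx.log(ngx.ERR, "failed to connect to service: ", err)
    return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end

-- send one request line, read one result line (assumed line-based protocol)
local bytes, serr = sock:send("compute\n")
if not bytes then
    ngx.log(ngx.ERR, "send failed: ", serr)
    return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end

local line, rerr = sock:receive("*l")
if not line then
    ngx.log(ngx.ERR, "receive failed: ", rerr)
    return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end

sock:setkeepalive(10000, 100)  -- keep the connection pooled for reuse
ngx.say(line)

The cosocket connect/send/receive calls yield to the Nginx event loop while waiting, so the worker itself is never blocked on the local service.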
Questions:
1. Is this OK from a performance point of view, or is it an anti-pattern?
2. Any caveats?
Many thanks for feedback, Markus