How to return an informational text response to the client when no upstream server is found in balancer_by_lua?


Peng Liu

Mar 13, 2017, 4:37:27 AM
to openresty-en
Hi, openresty guys,

I've written some custom load-balancing rules in my balancer_by_lua file. If no upstream server is found, I want to send some text back to the client. How can I achieve that? I know that ngx.say can't be used in balancer_by_lua.
Besides, I don't want to call ngx.exit(xxx) in header_filter_by_lua because that causes the worker process to exit with a core dump.

What would be a graceful way to handle this?

Thanks
Liu Peng

Robert Paprocki

Mar 13, 2017, 12:56:01 PM
to openre...@googlegroups.com
Hi,

On Mon, Mar 13, 2017 at 1:37 AM, Peng Liu <liupe...@gmail.com> wrote:
Hi, openresty guys,

I've written some custom load-balancing rules in my balancer_by_lua file. If no upstream server is found, I want to send some text back to the client. How can I achieve that? I know that ngx.say can't be used in balancer_by_lua.
Besides, I don't want to call ngx.exit(xxx) in header_filter_by_lua because that causes the worker process to exit with a core dump.

You could call ngx.exit(...) in your balancer_by_lua handler (perhaps with a non-standard status code), and then use nginx's error_page directive (http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page) to handle that error code as you wish.
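A minimal sketch of that pattern, assuming a hypothetical `pick_server()` selection function and illustrative names and ports:

```nginx
upstream backend {
    server 0.0.0.1;   # placeholder; the real peer is set by the balancer below

    balancer_by_lua_block {
        local balancer = require "ngx.balancer"

        local server = pick_server()  -- your own selection logic (assumed)
        if not server then
            -- bail out with a non-standard code; error_page catches it below
            return ngx.exit(506)
        end

        local ok, err = balancer.set_current_peer(server.ip, server.port)
        if not ok then
            ngx.log(ngx.ERR, "failed to set current peer: ", err)
            return ngx.exit(500)
        end
    }
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        error_page 506 = /no_upstream;
    }

    location = /no_upstream {
        internal;
        content_by_lua_block {
            ngx.status = 503
            ngx.say("no upstream server available")
        }
    }
}
```

The 506 here is only a private signal between the balancer and the error_page directive; the client sees whatever status the fallback location sets.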

Peng Liu

Mar 13, 2017, 9:59:42 PM
to openresty-en, rob...@cryptobells.com
Hi, rpaprocki,

Is it possible to avoid using ngx.exit(xxx)? I get this error message in the log:

worker process 1653 exited on signal 11 (core dumped).

I don't want the worker process to exit, because that would impact the other connections handled by the same worker process.

Thanks
Liu Peng

Robert Paprocki

Mar 13, 2017, 11:37:38 PM
to openre...@googlegroups.com, rob...@cryptobells.com
Hi,

> On Mar 13, 2017, at 18:59, Peng Liu <liupe...@gmail.com> wrote:
>
> Hi, rpaprocki,
>
> Is it possible to avoid using ngx.exit(xxx)? I get this error message in the log:
>
> worker process 1653 exited on signal 11 (core dumped).
>
> I don't want the worker process to exit, because that would impact the other connections handled by the same worker process.

ngx.exit inside balancer by Lua shouldn't cause a segfault. Can you post a full, standalone, minimal config that reproduces this behavior so it can be duplicated? Can you also post the output of nginx -V, as well as a backtrace of the core dump?

hamza t

Mar 14, 2017, 4:08:48 AM
to openresty-en
Hi :)
You can use an internal upstream that only serves the page you want to show when no upstream server is found :)
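One way to sketch that idea (the port and the response text are illustrative): dedicate a local server block to the error page, and have the balancer point at it when no real peer is available.

```nginx
# fallback server that only serves the "no upstream" page
server {
    listen 127.0.0.1:8181;

    location / {
        return 503 "no upstream server available\n";
    }
}
```

Then, in balancer_by_lua, when the peer list is empty, call `balancer.set_current_peer("127.0.0.1", 8181)` instead of exiting.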

Peng Liu

Mar 14, 2017, 5:00:30 AM
to openresty-en
So, how can I gracefully handle the scenario where no upstream server is found in balancer_by_lua?

Peng Liu

Mar 14, 2017, 5:25:10 AM
to openresty-en, rob...@cryptobells.com

Since nginx runs in a Docker container, I checked the container and found no core dump file. Possibly the worker process lacks root privileges.

Output of nginx -V is:
nginx version: nginx/1.11.1
built by gcc 4.9.2 (Debian 4.9.2-10)
built with OpenSSL 1.0.1t  3 May 2016
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_sub_module --with-http_v2_module --with-http_spdy_module --with-stream --with-stream_ssl_module --with-threads --with-file-aio --without-mail_pop3_module --without-mail_smtp_module --without-mail_imap_module --without-http_uwsgi_module --without-http_scgi_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --add-module=/tmp/build/ngx_devel_kit-0.3.0 --add-module=/tmp/build/set-misc-nginx-module-0.30 --add-module=/tmp/build/nginx-module-vts-0.1.9 --add-module=/tmp/build/lua-nginx-module-0.10.5 --add-module=/tmp/build/headers-more-nginx-module-0.30 --add-module=/tmp/build/nginx-goodies-nginx-sticky-module-ng-c78b7dd79d0d --add-module=/tmp/build/nginx-http-auth-digest-f85f5d6fdcc06002ff879f5cbce930999c287011 --add-module=/tmp/build/ngx_http_substitutions_filter_module-bc58cb11844bc42735bbaef7085ea86ace46d05b --add-module=/tmp/build/lua-upstream-nginx-module-0.05

The code block in balancer_by_lua is as follows:
    if (server == nil) then
       ngx.log(ngx.ERR, "All the servers are full load!")
       #ngx.exit(506)
       ngx.status = 506
       return
    end


The nginx log is as follows:
2017/03/14 09:19:54 [error] 31#31: [lua] init_by_lua.lua:19: The upstream name is liupeng-sm-webtier-svc-8080
2017/03/14 09:19:54 [error] 31#31: [lua] init_by_lua.lua:19: The upstream name is upstream-default-backend
2017/03/14 09:21:03 [error] 1921#1921: *473 [lua] rewrite_by_lua.lua:48: rewrite block: separated ip is 172.77.69.10 port is 13080, client: 172.77.69.22, server: _, request: "POST /SM/ui HTTP/1.1", host: "sm-ingress-svc.liupeng.svc.cluster.local"
2017/03/14 09:21:06 [error] 1921#1921: *473 connect() failed (113: No route to host), client: 172.77.69.22, server: _, request: "POST /SM/ui HTTP/1.1", host: "sm-ingress-svc.liupeng.svc.cluster.local"
2017/03/14 09:21:06 [error] 1921#1921: *473 [lua] rewrite_by_lua.lua:15: getUpstreamNodeAttribute(): http failure: no route to host for 172.77.69.10:13080, client: 172.77.69.22, server: _, request: "POST /SM/ui HTTP/1.1", host: "sm-ingress-svc.liupeng.svc.cluster.local"
2017/03/14 09:21:06 [error] 1921#1921: *473 [lua] rewrite_by_lua.lua:51: getUpstreamNodeAttribute Error: no route to host, client: 172.77.69.22, server: _, request: "POST /SM/ui HTTP/1.1", host: "sm-ingress-svc.liupeng.svc.cluster.local"
2017/03/14 09:21:06 [error] 1921#1921: *473 [lua] rewrite_by_lua.lua:48: rewrite block: separated ip is 127.0.0.1 port is 8181, client: 172.77.69.22, server: _, request: "POST /SM/ui HTTP/1.1", host: "sm-ingress-svc.liupeng.svc.cluster.local"
2017/03/14 09:21:07 [error] 1921#1921: *473 [lua] rewrite_by_lua.lua:28: getUpstreamNodeAttribute(): Query returned a non-200 response: 503, client: 172.77.69.22, server: _, request: "POST /SM/ui HTTP/1.1", host: "sm-ingress-svc.liupeng.svc.cluster.local"
2017/03/14 09:21:07 [error] 1921#1921: *473 [lua] rewrite_by_lua.lua:51: getUpstreamNodeAttribute Error: Query return non-200 response: 503, client: 172.77.69.22, server: _, request: "POST /SM/ui HTTP/1.1", host: "sm-ingress-svc.liupeng.svc.cluster.local"
2017/03/14 09:21:07 [error] 1921#1921: *473 [lua] rewrite_by_lua.lua:48: rewrite block: separated ip is 172.77.69.13 port is 13080, client: 172.77.69.22, server: _, request: "POST /SM/ui HTTP/1.1", host: "sm-ingress-svc.liupeng.svc.cluster.local"
2017/03/14 09:21:07 [error] 1921#1921: *473 [lua] rewrite_by_lua.lua:28: getUpstreamNodeAttribute(): Query returned a non-200 response: 404, client: 172.77.69.22, server: _, request: "POST /SM/ui HTTP/1.1", host: "sm-ingress-svc.liupeng.svc.cluster.local"
2017/03/14 09:21:07 [error] 1921#1921: *473 [lua] rewrite_by_lua.lua:51: getUpstreamNodeAttribute Error: Query return non-200 response: 404, client: 172.77.69.22, server: _, request: "POST /SM/ui HTTP/1.1", host: "sm-ingress-svc.liupeng.svc.cluster.local"
2017/03/14 09:21:07 [error] 1921#1921: *473 [lua] rewrite_by_lua.lua:48: rewrite block: separated ip is 172.77.69.21 port is 13080, client: 172.77.69.22, server: _, request: "POST /SM/ui HTTP/1.1", host: "sm-ingress-svc.liupeng.svc.cluster.local"
2017/03/14 09:21:07 [error] 1921#1921: *473 [lua] rewrite_by_lua.lua:28: getUpstreamNodeAttribute(): Query returned a non-200 response: 404, client: 172.77.69.22, server: _, request: "POST /SM/ui HTTP/1.1", host: "sm-ingress-svc.liupeng.svc.cluster.local"
2017/03/14 09:21:07 [error] 1921#1921: *473 [lua] rewrite_by_lua.lua:51: getUpstreamNodeAttribute Error: Query return non-200 response: 404, client: 172.77.69.22, server: _, request: "POST /SM/ui HTTP/1.1", host: "sm-ingress-svc.liupeng.svc.cluster.local"
2017/03/14 09:21:07 [error] 1921#1921: *473 failed to load external Lua file "/etc/nginx/lua/balancer_by_lua.lua": /etc/nginx/lua/balancer_by_lua.lua:42: unexpected symbol near '#' while connecting to upstream, client: 172.77.69.22, server: _, request: "POST /SM/ui HTTP/1.1", host: "sm-ingress-svc.liupeng.svc.cluster.local"
127.0.0.1 - [127.0.0.1] - - [14/Mar/2017:09:21:06 +0000] "GET /mbeanclient-9.53/Action?getattribute=AcceptNewConnection HTTP/1.1" 503 213 "-" "lua-resty-http/0.07 (Lua) ngx_lua/10005" 144 0.000 - - - -
172.77.69.22 - [172.77.69.22] - - [14/Mar/2017:09:21:07 +0000] "POST /SM/ui HTTP/1.1" 500 193 "-" "Java/1.8.0_111" 3352 3.200  0 - -
2017/03/14 09:21:25 [alert] 31#31: worker process 1921 exited on signal 11 (core dumped)

Peng Liu

Mar 14, 2017, 5:36:53 AM
to openresty-en, rob...@cryptobells.com
Adding the relevant part of nginx.conf. I changed the status code to 506 in balancer_by_lua:

location /SM/ui {
            proxy_set_header Host                   $host;

            # Pass Real IP
            proxy_set_header X-Real-IP              $remote_addr;

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;

            proxy_set_header                        Connection        "";


            proxy_set_header X-Forwarded-For        $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host       $host;
            proxy_set_header X-Forwarded-Port       $server_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_redirect                          off;

            proxy_buffering                         off;

            proxy_http_version                      1.1;


            proxy_pass http://liupeng-sm-rte-svc-13080;

            rewrite_by_lua_file /etc/nginx/lua/rewrite_by_lua.lua;

            header_filter_by_lua_file /etc/nginx/lua/header_filter_by_lua.lua;

            error_page 506 = /somewhere_else;

        }
       
location /somewhere_else {
            content_by_lua_block {
                ngx.status = 500
                ngx.say("506 error")
            }
        }
Robert Paprocki

Mar 14, 2017, 11:35:29 PM
to Peng Liu, openresty-en
Hi,

Your email is not overly helpful: it doesn't contain a full config we can use to reproduce the issue, nor does it contain the backtrace of the core dump.

Nevertheless, in one example you've commented out ngx.exit(506) in balancer by lua and replaced it with ngx.status, which is most certainly not appropriate in a balancer context: https://github.com/openresty/lua-nginx-module#ngxstatus

Even according to the docs, ngx.exit(500) is the appropriate solution: https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/balancer.md
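For reference, a corrected sketch of the snippet posted earlier. Lua comments use `--`, not `#`, which is what the `unexpected symbol near '#'` loader error in the posted log is complaining about; and ngx.status cannot be set from the balancer context. (Here `server` stands in for whatever your own selection logic returns.)

```lua
-- inside balancer_by_lua (sketch)
if server == nil then
    ngx.log(ngx.ERR, "all upstream servers are at full load")
    -- exit with the chosen code; an error_page directive in nginx.conf
    -- can then map 506 to a fallback location
    return ngx.exit(506)
end
```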

It also seems like you're using a number of third-party modules, so you may want to try eliminating those to see whether one of them doesn't play nicely with OpenResty. In particular, I'd guess that nginx-goodies-nginx-sticky-module-ng probably conflicts with balancer by lua in some manner (but that's just a guess; you should talk to the module developer about that).