Modify ngx.status on the basis of redis:get (using lua-resty-redis)

Sudip Datta

Oct 11, 2018, 1:55:15 AM
to openresty-en
Hi,

I am trying to solve the following problem:

I am trying to modify the response body based on a Redis call, but under certain circumstances I also want to return `ngx.status = ngx.HTTP_SERVICE_UNAVAILABLE`.

I have tried the following approaches:

1. I tried using body_filter_by_lua to capture the HTTP response and use a redis:get result to modify the response body. However, as I mentioned in the problem statement, in certain cases I also need to modify ngx.status based on the Redis response. But, as expected, I get `attempt to set ngx.status after sending out response headers`, since the headers are already on the wire. As an aside, I do know that this might cause significant performance degradation in nginx, but I am not aware of a workaround.

2. I tried moving the redis:get to header_filter_by_lua, using its result to decide ngx.status and storing it in `ngx.ctx` for consumption in body_filter_by_lua, but lua-resty-redis has the limitation that it can't be used in header_filter_by_lua (nor in init_by_lua), since the cosocket API is unavailable in those phases.

The only other option I foresee is to run the redis:get from `init_worker_by_lua`, as discussed in https://github.com/openresty/lua-resty-redis/issues/42.

Would that be the right approach, or are there cleaner ways of doing it?
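For reference, the init_worker_by_lua route I am considering would look roughly like this: a recurring timer polls Redis into a lua_shared_dict, which the filter phases can then read synchronously. This is only a sketch; the dict name, key, address, and poll interval are illustrative:

```nginx
# sketch only: dict name, key, address, and poll interval are illustrative
lua_shared_dict redis_cache 10m;

init_worker_by_lua_block {
    local function poll(premature)
        if premature then return end  -- worker is shutting down

        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(1000)  -- 1s

        local ok, err = red:connect("127.0.0.1", 6379)
        if ok then
            local val, err = red:get("some_key")
            if val and val ~= ngx.null then
                -- shared dicts *are* readable from header/body filters
                ngx.shared.redis_cache:set("some_key", val)
            end
            red:set_keepalive(10000, 100)
        end

        ngx.timer.at(1, poll)  -- poll again in 1 second
    end

    ngx.timer.at(0, poll)
}
```

header_filter_by_lua/body_filter_by_lua could then call ngx.shared.redis_cache:get("some_key") without needing a cosocket, at the cost of the value being only as fresh as the last poll.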

Thanks,
Sudip

Jim Robinson

Oct 11, 2018, 3:56:51 AM
to openresty-en
Would you clarify for me what you mean by:

"... capture the http response and use a redis:get response..."

Are you making calls to both a backend http service and to redis, and do you want to evaluate both the backend http response and a separate redis call's response before you decide on the final status code you want to return?

If that's the scenario, could you perhaps use a content_by_lua_block and fetch the backend http response via the lua-resty-http library (https://github.com/pintsized/lua-resty-http)? That would let you fetch from both backend services (http and redis) before you start writing the final response.
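Roughly like this (just a sketch; the backend address and the Redis key are placeholders for whatever your setup uses):

```nginx
location / {
    content_by_lua_block {
        local http = require "resty.http"
        local httpc = http.new()

        -- fetch the backend response first (address is a placeholder)
        local res, err = httpc:request_uri(
            "http://127.0.0.1:8080" .. ngx.var.request_uri)
        if not res then
            return ngx.exit(ngx.HTTP_BAD_GATEWAY)
        end

        -- then consult Redis before anything is sent to the client
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(1000)
        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end
        local flag = red:get("some_key")  -- key is a placeholder
        red:set_keepalive(10000, 100)

        -- headers haven't gone out yet, so the status is still ours to set
        if flag == "deny" then
            return ngx.exit(ngx.HTTP_SERVICE_UNAVAILABLE)
        end

        ngx.status = res.status
        ngx.print(res.body)
    }
}
```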

Bogdan Irimia

Oct 11, 2018, 4:32:27 AM
to openre...@googlegroups.com
If I understood right, the issue "attempt to set ngx.status after sending out response headers" happens because the first thing you return to the client is not the status. So check whether you call "ngx.print" or something similar before setting the status of the response.
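For example, this minimal handler triggers it, because the first byte of output forces the status line and headers out:

```nginx
content_by_lua_block {
    ngx.print("hello")  -- first output: status line + headers are sent now
    -- logs "attempt to set ngx.status after sending out response headers"
    ngx.status = ngx.HTTP_SERVICE_UNAVAILABLE
}
```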

Sudip Datta wrote:
--
You received this message because you are subscribed to the Google Groups "openresty-en" group.
To unsubscribe from this group and stop receiving emails from it, send an email to openresty-en...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Jim Robinson

Oct 12, 2018, 4:25:29 PM
to openresty-en
The original poster specifically mentions

body_filter_by_lua

Wouldn't an output body filter imply the response headers have already been sent?

That was my assumption, which is why I was suggesting a content_by_lua_block, so that control would be retained regarding when the response code and headers started to get produced.

Thibault Charbonnier

Oct 12, 2018, 4:35:29 PM
to openre...@googlegroups.com
Hi,

On 10/12/18 1:25 PM, Jim Robinson wrote:
> The original poster specifically mentions
>
> body_filter_by_lua
>
> wouldn't an output-body-filter imply the response headers have already
> been sent?

Yes, that is correct, Jim. I believe your assumption about the mistake
here is accurate.

> That was my assumption, which is why I was suggesting a
> content_by_lua_block, so that control would be retained regarding when
> the response code and headers started to get produced.

The main issue with re-implementing proxying within the content handler
is the added memory pressure on the Lua VM. When done this way, request
and response payloads must be buffered in the (limited) memory of the
LuaJIT VM. There are other limitations, such as missing out on
upstream{} blocks and load-balancing features, or good old performance
concerns with mimicking Nginx C modules' behavior in Lua.

--
Thibault

Sudip Datta

Oct 14, 2018, 10:32:16 PM
to openre...@googlegroups.com
Hi,

Thanks for the responses and sorry for the delay in my reply.

I guess I missed a critical point in my question. Before saying "capture the http response", I should have stated that I need to proxy_pass the request to another server, and the response ('R') from there needs to be captured. Hence my attempt with body_filter_by_lua.

Further, the response contains some information 'R.A'. 'R.A' needs to be looked up in a separate Redis instance, and the Redis response determines the final R (including a possible ngx.HTTP_SERVICE_UNAVAILABLE) that is sent to the client.

I believe the use of proxy_pass rules out content_by_lua.

Any other alternatives?

Thanks again,
Sudip 


Jim Robinson

Oct 16, 2018, 2:04:38 AM
to openresty-en
You can still use proxy_pass within content_by_lua, with a little indirection. That's what I was doing before I rewrote my code to use the lua-resty-http library I recommended you consider. But it does have downsides, as Thibault indicated. What I had been doing previously was setting up a 'proxy' location:

location /proxy {
  rewrite ^/proxy/+(.*) /$1 break;
  ...
  proxy_pass http://$proxy_host;
}

And then calling it via ngx.location.capture from within the content_by_lua block:

location / {
    content_by_lua_block {
        ...
        -- nginx variables are reached via ngx.var from Lua, and the
        -- method option takes an ngx.HTTP_* constant; the body must be
        -- read before it can be forwarded
        ngx.req.read_body()
        local response = ngx.location.capture("/proxy" .. ngx.var.request_uri, {
            method = ngx["HTTP_" .. ngx.req.get_method()],
            body = ngx.req.get_body_data(),
        })
        ...
        for k, v in pairs(response.header) do
            ngx.header[k] = v
        end

        ngx.header["date"] = ngx.http_time(ngx.time())

        ngx.status = response.status

        if response.body then
            ngx.print(response.body)
        end
    }
}

The downside is that the entire proxied response is buffered before 'response' is returned by ngx.location.capture, so if it's large and slow then you are forced to wait for it. NGINX will even do things like automatically stream to a local tempfile if the response is very large (or very slow, I think) so as to not overwhelm memory.

This is the reason I ended up switching to lua-resty-http and implementing streaming response logic with it. While the technique above worked fine for proxying requests to services that returned small payloads quickly, the delay before things like large PDFs started to be served was noticeable. The total time to serve via /proxy wasn't actually bad, but a 3-second delay before the response started appearing to the client was.
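The streaming version looks roughly like this (a sketch; the backend address and chunk size are illustrative):

```nginx
content_by_lua_block {
    local http = require "resty.http"
    local httpc = http.new()
    httpc:set_timeout(2000)
    local ok, err = httpc:connect("127.0.0.1", 8080)  -- backend placeholder
    if not ok then
        return ngx.exit(ngx.HTTP_BAD_GATEWAY)
    end

    local res, err = httpc:request({
        method = ngx.req.get_method(),
        path = ngx.var.request_uri,
        headers = ngx.req.get_headers(),
    })
    if not res then
        return ngx.exit(ngx.HTTP_BAD_GATEWAY)
    end

    ngx.status = res.status
    -- (response headers from res.headers could be copied to ngx.header
    -- here, minus hop-by-hop ones like Transfer-Encoding)

    -- copy the body through chunk by chunk instead of buffering it all
    local reader = res.body_reader
    repeat
        local chunk, err = reader(8192)
        if chunk then
            ngx.print(chunk)
            ngx.flush(true)  -- push it to the client as it arrives
        end
    until not chunk

    httpc:set_keepalive()
}
```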