Get cached data from nginx in case invalid content (identified by content length below a threshold) is returned from upstream


Fanendra Nath Tripathi

26.09.2017, 02:00:04
to openresty-en
Hi Guys,

I have a requirement. You may feel this is weird, but this is how it has to be implemented.

We have a set of upstream servers which, of course, are not managed by us. There are scenarios in which these servers behave abnormally and we get blank data along with an HTTP 200 response code (blank meaning the response is not entirely empty, but it only has a few XML container tags while the ones containing the actual data are missing). We can easily identify the bad responses on the basis of content length.

Now the requirement is to serve cached content in such cases, given that we have cached data for these URIs, and this should continue until we start to get valid responses again. I know this could easily be handled on the upstream servers, but since we do not manage them, it is almost impossible to get this fixed by the people who do.

Can you guys please help me understand whether this can be achieved? I tried using a body filter (checking the content length and returning an error response code in case of an invalid response), but it didn't work.

Jim Robinson

26.09.2017, 18:05:04
to openresty-en
This seems perfectly reasonable to me.

You could execute the request to the backend using lua-resty-http
(or ngx.location.capture if necessary), and then test the Content-Length
header and/or the actual response body length in Lua code.

If the length indicated it was a malformed response, you'd use the
memc-nginx-module to see if you could get it out of the cache instead.  On
the other hand, if the backend response was good, save the body
using memc-nginx-module.
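The validity test itself can be plain Lua with no nginx dependency. A minimal sketch (the `looks_valid` helper and the 1024-byte threshold are assumptions for illustration; tune the threshold from the observed size of the bad payloads):

```lua
-- Treat any response shorter than THRESHOLD bytes as one of the
-- "blank" upstream responses described above.
local THRESHOLD = 1024

local function looks_valid(content_length, body)
    -- prefer the Content-Length header; fall back to the actual body size
    local len = tonumber(content_length) or (body and #body) or 0
    return len >= THRESHOLD
end

-- usage against a lua-resty-http response object:
--   if looks_valid(res.headers["Content-Length"], res.body) then ... end
```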

Jim

Fanendra Nath Tripathi

27.09.2017, 02:08:54
to openresty-en
Thanks, Jim, for your response. It does mean that the solution can't be done purely via the nginx cache. I was assuming that the Lua layer sits in between the cache and the upstream layer

client --> nginx-cache --> lua --> upstream

which is why I was trying to achieve the desired result from the nginx cache. Anyway, I'll try to implement the approach you mentioned. I need one more bit of help from you: the responses can be up to 500 KB in size. Would memcache be able to handle this size efficiently?
[This message has been deleted.]

Jim Robinson

28.09.2017, 12:06:51
to openresty-en
[2nd attempt, the first one ate some of my formatting]

Hrm, I can't be positive the stock nginx cache modules won't work for you, but my suspicion is that they would not.  My reasoning is that a typical reverse-proxy cache will check its cache and try to serve anything possible out of it first, rather than first checking whether or not the backend is OK.  That didn't sound like what you need to fulfill your business requirements. Please let me know if I've misunderstood.

As to payload sizes, memcache has a built-in limit of 1 MiB, so if your payloads are only up to 500 KiB you shouldn't have a problem there.
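(For reference, the 1 MiB cap is only memcached's default. If your payloads ever grow past it, memcached 1.4.2 and later can raise the per-item limit at startup; the values below are just examples, not a recommendation:)

```shell
# raise memcached's max item size from the default 1 MiB to 2 MiB
# (-m sets the total cache memory in MB)
memcached -m 256 -I 2m
```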

You might want to look into the openresty srcache module (srcache-nginx-module).  It lets you define the GET/PUT routines used to populate the cache, and you could implement the GET in such a way that the first thing it does is check the backend.

So for example if you had nginx set up to use srcache:

location /memc {
    internal;
    lua_code_cache on;
    lua_need_request_body on;
    content_by_lua_file /usr/local/openresty/nginx/conf/memcache.lua;
}

location / {
    set $key $uri$args;
    srcache_fetch GET /memc $key;
    srcache_store PUT /memc $key;
    srcache_store_statuses 200 301 302;
}

and your memcache.lua would hold the logic for the GET/PUT operations.  The following is just sketched out; I haven't even tried to run it, but I think it's enough to give the idea.

--
-- memcache.lua - GET/PUT subrequest handler to get or set memcache data
--
local libstr = require "resty.string"
local libmd5 = require "resty.md5"
local memcached = require "resty.memcached"
local http = require "resty.http"

--
-- read the subrequest sent by srcache
--
local method = ngx.req.get_method()
local path = ngx.var.query_string
local body = ngx.req.get_body_data()

local md5 = libmd5:new()
if not md5 then
    ngx.log(ngx.ERR, "unable to load md5")
    ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end

--
-- build memcache key based on the path
--
local ok = md5:update(path)
if not ok then
    ngx.log(ngx.ERR, "unable to add data to md5")
    ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end

local memc_key = libstr.to_hex(md5:final())

--
-- set up the memcached connection
--
local memc = memcached:new()

-- NB: probably should do something ketama-like here
local ok, err = memc:connect("127.0.0.1", 11211)
if not ok then
    ngx.log(ngx.ERR, err)
    ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end

-- memcache get or set
if method == "GET" then
    -- check backend first
    local con = http.new()
    con:set_timeout(500)
    local ok, err = con:connect("backend.example.com", 80)
    -- TODO: handle errors

    local res, err = con:request({
        path = path,
        headers = {
            ["Host"] = "backend.example.com",
        },
    })
    -- TODO: handle errors

    -- check res.headers content-length or res.body length

    local ok, err = con:set_keepalive()
    -- TODO: handle errors

    -- if the length is ok, send the response from the backend
    -- using ngx.header and ngx.print, and return;
    -- otherwise fall through to check the cache for a valid response
    -- (the following logic could of course be moved into
    -- its own function to make this code cleaner).

    -- check our cache next
    local res, flags, err = memc:get(memc_key)
    if err then
        ngx.log(ngx.ERR, err)
        ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
    end

    -- send response status and body
    if not res then
        local ok, err = memc:set_keepalive()
        if not ok then
            ngx.log(ngx.WARN, "unable to memc:set_keepalive: " .. err)
        end
        ngx.exit(ngx.HTTP_NOT_FOUND)
    else
        local ok, err = memc:set_keepalive()
        if not ok then
            ngx.log(ngx.WARN, "unable to memc:set_keepalive: " .. err)
        end

        ngx.print(res)
    end
elseif method == "PUT" then
    local ok, err = memc:set(memc_key, body, 0)
    if not ok then
        ngx.log(ngx.ERR, err)
        ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
    end

    local ok, err = memc:set_keepalive()
    if not ok then
        ngx.log(ngx.WARN, "unable to memc:set_keepalive: " .. err)
    end
    ngx.exit(ngx.HTTP_CREATED)
end
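One way the "if the length is ok, send the response" step in the sketch could be fleshed out. The `relay_if_valid` helper below is my own illustration, not part of the snippet above; the `emit` function is injected so the decision logic stays testable outside nginx (in memcache.lua you would pass `ngx.print`, after copying the status and relevant headers with `ngx.status` and `ngx.header`):

```lua
-- Send the backend body to the client when it passes the length check;
-- return false so the caller can fall through to the memcached lookup.
local function relay_if_valid(res, threshold, emit)
    local len = tonumber(res.headers["Content-Length"])
        or (res.body and #res.body) or 0
    if len < threshold then
        return false
    end
    emit(res.body)
    return true
end
```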