Lua cache tracking in Redis


Andrei

Sep 14, 2020, 12:24:49 PM
to openre...@googlegroups.com
Hello,

Would it be possible to have a subrequest or something similar only add an entry in Redis when a new cache file is created? In short, I'm wondering if it's possible to basically track cache data using Redis.

Thanks!

ecc256

Sep 14, 2020, 1:55:46 PM
to openresty-en

Andrei,

Does this answer your question?

Andrei

Sep 14, 2020, 4:33:43 PM
to openre...@googlegroups.com
Hello,

Unfortunately, it doesn't. I'm basically trying to accomplish something along these lines:

1. Check if $upstream_cache_status = MISS/STALE/EXPIRED (any status that can result in a cache file write)
2. If the request resulted in writing a cache file, fire off a subrequest to Redis
3. The subrequest to Redis would store the cache key and other metadata from headers using the same expire time as the nginx cache manager for that particular cache file

My goal is to have granular info and control over what's actively cached, without having to search through the filesystem or use inotify + parsing.
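The steps above could be sketched roughly like this (untested; assumes lua-resty-redis is installed, and note that cosockets are not available in the log phase, so the Redis write has to go through ngx.timer.at; the key, TTL, host and port are placeholders):

```nginx
location / {
    proxy_cache my_cache;
    proxy_pass http://origin;

    log_by_lua_block {
        local status = ngx.var.upstream_cache_status
        if status == "MISS" or status == "EXPIRED" then
            -- a cache file was (re)written; record it asynchronously,
            -- since cosockets are not allowed in the log phase
            local key = ngx.var.uri  -- whatever your proxy_cache_key expands to
            local ttl = 600          -- should match your proxy_cache_valid setting
            ngx.timer.at(0, function(premature)
                if premature then return end
                local redis = require "resty.redis"
                local red = redis:new()
                red:set_timeouts(1000, 1000, 1000)
                local ok, err = red:connect("127.0.0.1", 6379)
                if not ok then
                    ngx.log(ngx.ERR, "redis connect failed: ", err)
                    return
                end
                red:set("cache:" .. key, status)
                red:expire("cache:" .. key, ttl)
                red:set_keepalive(10000, 100)
            end)
        end
    }
}
```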




--
You received this message because you are subscribed to the Google Groups "openresty-en" group.
To unsubscribe from this group and stop receiving emails from it, send an email to openresty-en...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/openresty-en/2258c4a4-6f5e-48d4-822e-0028d00952aen%40googlegroups.com.

ecc256

Sep 14, 2020, 4:50:09 PM
to openresty-en

That’s quite a project you’re taking on.

Do you run OpenResty standalone, or as multiple pods on k8s (for redundancy)?

Might it be easier to replace the OS-filesystem-based caching with a Redis-backed one?

No idea whether the caching subsystem is pluggable, though…

Good luck in any case!

ecc256

Sep 14, 2020, 4:55:58 PM
to openresty-en

On second thought, you can check the headers sent to the client and update Redis, right?

You'll know the cache hit/miss status and the timestamp.

You can look up other metadata through the filesystem API, if needed?

Andrei

Sep 14, 2020, 6:56:54 PM
to openre...@googlegroups.com
Hello,

Standalone for now, but once I figure it out I'll be able to scale it and probably toss it on GitHub. I can likely figure out the Redis subrequest part; I'm just not sure how to determine in Lua whether the request resulted in a newly written cache file. I'm hoping there's a function I can hook into and trigger the Redis subrequest from there.

ecc256

Sep 14, 2020, 7:18:12 PM
to openresty-en

Take a look at this page. I would use header_filter_by_lua; you should be able to get both the cache hit/miss status and some metadata there.
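A minimal sketch of that idea (untested; note that cosockets can't be used in the header filter phase, so this only captures the status there and acts on it later):

```nginx
header_filter_by_lua_block {
    -- $upstream_cache_status is HIT/MISS/EXPIRED/STALE/etc.,
    -- or empty when the request never touched the cache
    local status = ngx.var.upstream_cache_status
    if status and status ~= "" then
        ngx.ctx.cache_status = status  -- stash for the log phase
    end
}

log_by_lua_block {
    if ngx.ctx.cache_status then
        ngx.log(ngx.INFO, "cache status: ", ngx.ctx.cache_status)
    end
}
```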

Jason Godsey

Sep 14, 2020, 7:21:18 PM
to openre...@googlegroups.com
Yes, this should be easy. Use proxy_cache as normal, but route the request through another port on your nginx before it goes upstream. In that location block, log the request being fetched; it was a miss by definition. No need to track headers unless you're trying to report on upstream cache availability.


Jason Godsey

Sep 14, 2020, 7:31:42 PM
to openre...@googlegroups.com
Sorry for the double email; here is what I was thinking. Not complete, but it shows the layout:

server {
    listen 80;

    location / {
        # cache here
        proxy_cache cache;
        proxy_pass http://127.0.0.1:8080;
    }
}

server {
    listen 8080;

    location / {
        access_by_lua_block {
            local redis = require "resty.redis"
            local red = redis:new()
            red:set_timeouts(1000, 1000, 1000) -- 1 sec each
            local ok, err = red:connect("127.0.0.1", 6379)
            if not ok then
                ngx.log(ngx.ERR, "failed to connect: ", err)
                return
            end
            -- anything reaching this port is a cache miss by definition;
            -- record whatever key/value you want to track
            ok, err = red:set(ngx.var.uri, ngx.now())
            if not ok then
                ngx.log(ngx.ERR, "failed to set: ", err)
                return
            end
            red:set_keepalive(10000, 100)
        }

        proxy_pass http://origin;
        # don't cache here
    }
}

Andrei

Sep 14, 2020, 7:32:06 PM
to openre...@googlegroups.com
Hello,

Thanks for the tips. Would it be possible to use srcache, similar to https://github.com/openresty/srcache-nginx-module#caching-with-redis, but keep the cache data in nginx rather than Redis, and only leverage the srcache PUT to store the cache key and a few arbitrary headers?
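For reference, the Redis-backed setup in that README looks roughly like this (adapted from the module's docs, untested here; note it stores the full response body in Redis via srcache_store, so trimming it down to key-and-headers only would mean pointing srcache_store at a custom location that discards the body):

```nginx
upstream redis {
    server 127.0.0.1:6379;
    keepalive 512;
}

location = /redis {
    internal;
    set_md5 $redis_key $args;
    redis_pass redis;
}

location = /redis2 {
    internal;
    set_unescape_uri $exptime $arg_exptime;
    set_unescape_uri $key $arg_key;
    redis2_query set $key $echo_request_body;
    redis2_query expire $key $exptime;
    redis2_pass redis;
}

location /api {
    set $key $uri$args;
    set_escape_uri $escaped_key $key;
    srcache_fetch GET /redis $key;
    srcache_store PUT /redis2 key=$escaped_key&exptime=120;
    proxy_pass http://origin;
}
```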

Andrei

Sep 14, 2020, 7:32:46 PM
to openre...@googlegroups.com
Gotcha, thanks for the skeleton, something to fiddle with :)

ecc256

Sep 14, 2020, 7:32:48 PM
to openresty-en

Jason,

>Yes this should be easy. Use proxy_cache as normal but send it through another port also on your nginx before going upstream.

Yep, should work too.

Tho, wouldn’t it result in cached content double buffering (sort of)?

ecc256

Sep 14, 2020, 7:33:33 PM
to openresty-en

Andrei,

Keep us posted on your progress, please?

Jason Godsey

Sep 14, 2020, 7:44:55 PM
to openre...@googlegroups.com
On Mon, Sep 14, 2020 at 5:32 PM 'ecc256' via openresty-en <openre...@googlegroups.com> wrote:

Jason,

>Yes this should be easy. Use proxy_cache as normal but send it through another port also on your nginx before going upstream.

Yep, should work too.

Tho, wouldn’t it result in cached content double buffering (sort of)?


Not double buffering, but it would cause nginx to do an additional copy from :8080 to :80 to populate the cache on any cache miss. If it's a high-volume site, maybe use a unix socket instead of a TCP port for :8080. The other way would be putting the access_by_lua in the :80 server, hashing the URL, and doing an fstat manually, maybe? I think that would be more expensive. I don't know of a built-in hook to ask nginx whether the cache key is present.
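On the "hashing the URL" idea: nginx names each cache file after the hex MD5 of the expanded proxy_cache_key, and the levels parameter of proxy_cache_path determines the subdirectories, taken from the tail of the digest. A sketch of that mapping (in Python for illustration; the cache dir and key are placeholders, and levels=1:2 is assumed):

```python
import hashlib
import os.path

def nginx_cache_path(cache_dir, cache_key, levels=(1, 2)):
    """Reconstruct the on-disk path nginx uses for a cache entry.

    The file name is the hex MD5 of the expanded proxy_cache_key;
    each level directory is taken from the end of that digest,
    working backwards (e.g. levels=1:2 on ...65029c gives c/29/).
    """
    digest = hashlib.md5(cache_key.encode()).hexdigest()
    parts, pos = [], len(digest)
    for width in levels:
        parts.append(digest[pos - width:pos])
        pos -= width
    return os.path.join(cache_dir, *parts, digest)

# e.g. with proxy_cache_key "$scheme$proxy_host$uri$is_args$args"
print(nginx_cache_path("/var/cache/nginx", "httporigin/index.html"))
```

An access_by_lua handler could compute the same path (resty.md5 does the hashing) and fstat it, at the cost of a filesystem hit per request.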

 