Reuse Redis connection and get better performance

Louie Kwan

Jan 15, 2016, 4:54:45 PM
to openresty-en
I do the following to get the value for a key.

Every time, nginx runs the same script again, and I am wondering how it can be optimized, since the connection details never change. I would like some insight into how I can reuse the same Redis connection. Any help is much appreciated.

-- Redis connection
local redis = require "resty.redis"
local red = redis:new()
red:set_timeout(100) -- 100 ms

local redis_host = os.getenv("CSP_REDIS_HOST") or "127.0.0.1"
local redis_port = tonumber(os.getenv("CSP_REDIS_PORT")) or 6379

local res, err = red:connect(redis_host, redis_port)

... The above code is the same every time; only the key to look up differs.

  res, err = red:hget(key, "proxyPass")

Chris Tanner

Jan 16, 2016, 3:21:30 PM
to openresty-en
If you're reusing the same connection within the same request, you can use the set_keepalive() method. To reduce the os.getenv() calls, you could put them in a module and then just require that module once.
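Roughly something like this (untested sketch; the module name and the env directives are just placeholders):

-- conf.lua: read the environment once, when the module is first require()'d,
-- and cache the values in the module table.
-- Note: nginx only passes environment variables it is told to keep, e.g. with
-- `env CSP_REDIS_HOST;` and `env CSP_REDIS_PORT;` in nginx.conf.
local _M = {}

_M.redis_host = os.getenv("CSP_REDIS_HOST") or "127.0.0.1"
_M.redis_port = tonumber(os.getenv("CSP_REDIS_PORT")) or 6379

return _M

Then in your access_by_lua code you only need local conf = require "conf" and red:connect(conf.redis_host, conf.redis_port); the os.getenv() calls happen once per worker instead of once per request.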

Louie Kwan

Jan 18, 2016, 1:33:04 PM
to openresty-en
My code is based on the following:


The issue is that I get a new connection every time, and that could be expensive, even if I put in the set_keepalive...

How could we pass the same connection around?

worker_processes  2;
error_log logs/error.log info;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;

        location / {
            resolver 8.8.4.4;  # use Google's open DNS server

            set $target '';
            access_by_lua '
                local key = ngx.var.http_user_agent
                if not key then
                    ngx.log(ngx.ERR, "no user-agent found")
                    return ngx.exit(400)
                end

                local redis = require "resty.redis"
                local red = redis:new()

                red:set_timeout(1000) -- 1 second

                local ok, err = red:connect("127.0.0.1", 6379)
                if not ok then
                    ngx.log(ngx.ERR, "failed to connect to redis: ", err)
                    return ngx.exit(500)
                end

                local host, err = red:get(key)
                if not host then
                    ngx.log(ngx.ERR, "failed to get redis key: ", err)
                    return ngx.exit(500)
                end

                if host == ngx.null then
                    ngx.log(ngx.ERR, "no host found for key ", key)
                    return ngx.exit(400)
                end

                ngx.var.target = host
            ';

            proxy_pass http://$target;
        }
    }
}

Thibault Charbonnier

Jan 18, 2016, 3:46:46 PM
to openresty-en
Since lua-resty-redis leverages the ngx_lua cosocket API, you can, as Chris said, use the set_keepalive() method.

Basically, each Nginx worker can maintain a connection pool of sockets already connected to their upstream peers, and re-use them on subsequent connect() calls instead of opening new ones. The only thing you have to do to use it is to call set_keepalive() where you would usually call close().

For example:

local redis = require "resty.redis"
local red = redis:new()

red:set_timeout(1000) -- 1 second

local ok, err = red:connect("127.0.0.1", 6379)
-- do things

local host, err = red:get(key)
-- do things

-- Put the underlying socket in the worker's connection pool.
local ok, err = red:set_keepalive()
-- handle error


From then on, when you call connect() with a given host and port, lua-resty-redis will re-use a socket from the pool (if any is available) that is already connected to that peer, which is very cheap compared with opening a new connection. You can optionally pass arguments to set_keepalive(), such as the max idle time (before closing) and the size of the connection pool, or rely on your lua_socket_keepalive_timeout and lua_socket_pool_size directives.
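For instance (the numbers here are picked arbitrarily):

-- keep the socket for up to 10 seconds of idle time,
-- in a per-worker pool of up to 100 connections
local ok, err = red:set_keepalive(10000, 100)
if not ok then
    ngx.log(ngx.ERR, "failed to set keepalive: ", err)
end

Called with no arguments, it falls back to the lua_socket_keepalive_timeout and lua_socket_pool_size settings (60s and 30 by default).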

Secondly, as Chris pointed out, it would probably be better to cache your os.getenv() calls in a module and require that module from your Lua logic: when the module is first required (when Nginx starts), your ENV variables will be cached in the module, and all your requests will then hit your Redis logic using the cached values instead of calling os.getenv() on such a hot code path.

Docs:
- More info on the underlying setkeepalive() function: https://github.com/openresty/lua-nginx-module#tcpsocksetkeepalive-
- Availability of cosocket API in various contexts of ngx_lua: https://github.com/openresty/lua-nginx-module#cosockets-not-available-everywhere
- lua_socket_pool_size/lua_socket_keepalive_timeout directives: https://github.com/openresty/lua-nginx-module#lua_socket_pool_size

Louie Kwan

Jan 18, 2016, 10:16:56 PM
to openresty-en
Thanks for the further clarification.

Can I assume that the same principle applies to pintsized/lua-resty-http, an HTTP client driver for OpenResty / ngx_lua?

Thibault Charbonnier

Jan 18, 2016, 11:06:42 PM
to openresty-en
> Thanks for the further clarification.
>
> Can I assume that the same principle applies to pintsized/lua-resty-http, an HTTP client driver for OpenResty / ngx_lua?

Yes, you can assume the same of any library that claims to leverage the cosocket API.

James Hurst

Jan 19, 2016, 5:49:57 AM
to openre...@googlegroups.com
Yes, although in the case of HTTP there are situations where the server determines that keeping the connection alive is not possible, and in such cases httpc:set_keepalive() will return `2, err`. So it's best to check that return value if you want to be sure you're reusing connections.


Thibault Charbonnier

Jan 19, 2016, 2:49:37 PM
to openresty-en
Yes indeed. It is also worth nothing that when using the cosocket API with HTTP, you should previously check the value of the Connection header and not call set_keepalive() when the connection is not considered persistent ("Connection: close") .

Louie Kwan

Jan 19, 2016, 2:55:13 PM
to openresty-en
Yes, etcd may not have keepalive support yet. I just tried...

ngx.say(res.headers["Connection"]) is returning nil,

and httpc:set_keepalive() is returning nil, err.

Maybe CoreOS etcd thinks it is too heavy to keep a persistent connection.

Brian Akins

Jan 19, 2016, 6:32:00 PM
to openre...@googlegroups.com
etcd does keepalive. 
HTTP/1.1 is keep alive by default - no need for a connection header.
set_keepalive() is for setting the connection pool status, not http.

What headers are you sending? Do you have a small example that exhibits the behavior?

Sorry if this has been covered, as I am coming into this thread late. I have a good bit of experience with etcd in other languages, but never in OpenResty.



Thibault Charbonnier

Jan 19, 2016, 7:44:37 PM
to openresty-en
> HTTP/1.1 is keep alive by default - no need for a connection header.
> set_keepalive() is for setting the connection pool status, not http.

Which is why you want to avoid putting a socket in the connection pool when the connection is actually closed, which can be checked with the Connection header.

James Hurst

Jan 20, 2016, 4:58:29 AM
to openre...@googlegroups.com
On 20 January 2016 at 00:44, Thibault Charbonnier <thib...@mashape.com> wrote:
>> HTTP/1.1 is keep alive by default - no need for a connection header.
>> set_keepalive() is for setting the connection pool status, not http.
>
> Which is why you want to avoid putting a socket in the connection pool when the connection is actually closed, which can be checked with the Connection header.

Yep exactly, and lua-resty-http keeps track of this for you. So you can safely call set_keepalive(), which will either close or put the connection in the pool, and optionally you can check the return code if you need to know what happened:
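Something like this, for instance (untested sketch):

local ok, err = httpc:set_keepalive()
if ok == 2 then
    -- the connection could not be kept alive (e.g. "Connection: close"),
    -- so it was closed rather than put in the pool
    ngx.log(ngx.INFO, "connection not reused: ", err)
elseif not ok then
    ngx.log(ngx.ERR, "set_keepalive failed: ", err)
end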

 

Thibault Charbonnier

Jan 20, 2016, 2:21:22 PM
to openresty-en
On Wednesday, January 20, 2016 at 1:58:29 AM UTC-8, James Hurst wrote:
> On 20 January 2016 at 00:44, Thibault Charbonnier <thib...@mashape.com> wrote:
>>> HTTP/1.1 is keep alive by default - no need for a connection header.
>>> set_keepalive() is for setting the connection pool status, not http.
>>
>> Which is why you want to avoid putting a socket in the connection pool when the connection is actually closed, which can be checked with the Connection header.
>
> Yep exactly, and lua-resty-http keeps track of this for you. So you can safely call set_keepalive(), which will either close or put the connection in the pool, and optionally you can check the return code if you need to know what happened:

That's neat! Granted it was the original question from OP, but I'd say it's still important to clarify this in the context of the cosocket API with HTTP. Anyways!