Load balancing Redis using Nginx


Davide Perini

Feb 20, 2017, 1:04:32 PM
to Redis DB

Hi, 

I would like to use this plugin to balance my Redis instances.

https://github.com/openresty/redis2-nginx-module#redis2_pass


I’m having a hard time balancing Redis and I would really appreciate any help you could give me.

 

I installed the module and I have added this to my nginx configuration:

upstream redis_cluster {
     server 127.0.0.1:6379;
     server 127.0.0.1:6380;
}

server {
     listen 6370;
     server_name 0.0.0.0;

     location = /redis {
         redis2_next_upstream error timeout invalid_response;
         redis2_query get foo;
         redis2_pass redis_cluster;
     }
}

 

As you can see, I have two instances of Redis, one on 6379 (master) and one on 6380 (slave); they are configured for replication.

 

As far as I understood, this configuration should open port 6370, and nginx will balance requests to that port across the upstream.

6379 should serve every request to Redis, and 6380 should serve requests only in case of master timeout, error, etc.

 

Unfortunately, if I try to ping port 6370 using redis-cli I get this error:

root@as301ouc:/app/nginx/sites-enabled# redis-cli -p 6370

127.0.0.1:6370> ping

Error: Protocol error, got "H" as reply type byte

 

Any help will be really appreciated.

Thanks

Jan-Erik Rediger

Feb 20, 2017, 1:23:09 PM
to redi...@googlegroups.com
The nginx plugin you linked to exposes Redis through an HTTP interface.
It awaits HTTP requests on port 6370, sends a query to the configured
Redis and returns the result inside an HTTP response.

redis-cli on the other hand only speaks the bare Redis protocol, not HTTP.
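
For example (a sketch, assuming the location = /redis block from your config and that foo has been set to "bar"), that endpoint would be queried over HTTP, with the raw Redis protocol reply coming back in the response body:

$ curl http://127.0.0.1:6370/redis
$3
bar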

If you actually want to load balance in front of Redis, use a TCP proxy
such as haproxy[1] or a Redis-specific proxy such as Twemproxy[2].

[1]: http://www.haproxy.org/
[2]: https://github.com/twitter/twemproxy
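
For the haproxy route, a minimal sketch (ports, names and timeouts here are placeholders, not a tuned config) that forwards raw TCP to your two instances would look something like:

defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend redis_in
    bind 127.0.0.1:6370
    default_backend redis_pool

backend redis_pool
    server redis1 127.0.0.1:6379 check
    server redis2 127.0.0.1:6380 check

Unlike the nginx HTTP module, this just forwards the Redis protocol unchanged, so redis-cli -p 6370 works against it.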

X

Dec 1, 2017, 10:12:01 AM
to Redis DB
Start 2 Redis servers: a master and a slave

$ redis-server --port 6379
$ redis-server --port 6380 --slaveof 127.0.0.1 6379

$ redis-cli set foo bar
$ redis-cli -p 6380 get foo
"bar"


Set up NGINX

stream {
    upstream redis {
        server 127.0.0.1:6379;
        server 127.0.0.1:6380;
    }

    server {
        listen 127.0.0.1:6370;
        proxy_pass redis;
    }
}


Done. This setup works only for read operations.

$ redis-cli -p 6370 get foo
"bar"

hva...@gmail.com

Dec 1, 2017, 11:16:41 AM
to Redis DB


On Friday, December 1, 2017 at 7:12:01 AM UTC-8, X wrote:

 This setup works only for read operations.

So it requires a client that is able to use different connections for write commands than for read commands.  The writes must go through connections to the master, and reads go through connections to the nginx load balancer.

This isn't impossible to do, but it's an important part of using this kind of load balancer scheme, and should be explained clearly for the benefit of novice Redis users.
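
A minimal sketch of that split, assuming the stream config posted above (writes pinned to the master on 6379, reads going through the nginx listener on 6370):

# writes: connect directly to the master
$ redis-cli -p 6379 set counter 1
OK

# reads: go through the nginx stream balancer
$ redis-cli -p 6370 get counter
"1"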

simon mitnick

Mar 14, 2023, 3:22:30 PM
to Redis DB
Hello,

I was reading this thread and I wanted to ask whether HAProxy is useful if you have a Redis Sentinel setup (1 master + 2 slaves)?

thanks.

simon

Greg Andrews

Mar 16, 2023, 1:25:25 PM
to Redis DB
If haproxy has a plugin/module that talks to the Sentinel instances to learn the IP+port of the master and slaves, then haproxy can be useful.

Sentinel monitors your Redis instances and when the master crashes, Sentinel can reconfigure the instances to have a new master (and connect the remaining slaves to replicate from the new master).
Sentinel also provides an API for clients to learn the IP+port of the master and the slaves, so the clients can connect to the proper places after a crash and reconfiguration.

If haproxy has the ability to use this Sentinel API and follow reconfigurations, then it will be useful.  If haproxy does not have the ability to adapt to a Redis crash and reconfiguration, then it can send clients to the wrong Redis instance.  Without haproxy, clients would talk to Sentinel and use the API info to connect directly to the appropriate Redis instances.
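
For what it's worth, haproxy doesn't speak the Sentinel API itself. A common workaround (a sketch only; server names and addresses are placeholders) is to have haproxy health-check the Redis instances directly and route traffic only to whichever one currently reports role:master, so it follows a Sentinel-driven failover without querying Sentinel:

backend redis_master
    mode tcp
    option tcp-check
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server redis1 192.168.0.1:6379 check inter 1s
    server redis2 192.168.0.2:6379 check inter 1s
    server redis3 192.168.0.3:6379 check inter 1s

The caveat, as noted above, is that haproxy only reacts to what the instances report, not to what Sentinel has decided, so there can be a brief window during failover where no server passes the check.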