Poor Redis performance

Chris O'Brien

Sep 5, 2013, 5:56:37 PM
to redi...@googlegroups.com
When my Redis instance starts to hit >50 connections and >15,000 queries per second, transactions frequently begin to take longer than 1 second. I see this behavior connecting through bonded 10G interfaces and through the loopback interface. I do not see this behavior connecting through the Redis socket locally. 

The workload is a near 50/50 split of GETs and SETs.

I am running a single instance of Redis on a Xeon E7520 @ 1.87GHz running Linux 3.4.60. 256GB memory.

The following counters from 'netstat -s' are incrementing rapidly:

    4441223 times the listen queue of a socket overflowed
    4441223 SYNs to LISTEN sockets ignored

I do see that the CPU core running Redis is 100% utilized.

Currently, all RDB and AOF saving is turned off while I work on this issue. 

I'm looking for feedback on how I might improve throughput, reduce CPU usage, and/or tune the TCP stack.

Thanks for your help!

Benchmarks:

numactl -C 1 ./redis-benchmark -q -n 100000 -d 256
PING_INLINE: 73475.39 requests per second
PING_BULK: 73313.78 requests per second
SET: 74794.31 requests per second
GET: 71073.21 requests per second
INCR: 76161.46 requests per second
LPUSH: 76628.36 requests per second
LPOP: 71736.01 requests per second
SADD: 74794.31 requests per second
SPOP: 74682.60 requests per second
LPUSH (needed to benchmark LRANGE): 76628.36 requests per second
LRANGE_100 (first 100 elements): 16087.52 requests per second
LRANGE_300 (first 300 elements): 5724.10 requests per second
LRANGE_500 (first 450 elements): 3466.44 requests per second
LRANGE_600 (first 600 elements): 2439.26 requests per second
MSET (10 keys): 45126.35 requests per second

Config:

daemonize yes
pidfile /var/run/redis.pid
port 6379
unixsocket /tmp/redis.sock
unixsocketperm 755
timeout 1
tcp-keepalive 0
loglevel notice
logfile redis.log
databases 16
#save 900 1
#save 300 10
#save 60 10000
stop-writes-on-bgsave-error no
rdbcompression no
rdbchecksum no
dbfilename dump.rdb
dir ./
slave-serve-stale-data yes
slave-read-only no
repl-disable-tcp-nodelay no
slave-priority 100
maxmemory 215gb
maxmemory-policy allkeys-lru
appendonly no
appendfilename /2/appendonly.aof
appendfsync no
no-appendfsync-on-rewrite yes
auto-aof-rewrite-percentage 0
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
activerehashing no
hz 100
aof-rewrite-incremental-fsync no

Didier Spezia

Sep 6, 2013, 6:11:05 AM
to redi...@googlegroups.com
Your hardware seems unbalanced to me: there is no need for bonded 10G interfaces with slow CPUs.
A Xeon E7520 is an example of a workhorse CPU (many cores, but each core is rather slow).
Redis works better with racehorse CPUs (i.e. fewer cores, but fast ones), since each instance runs its commands on a single thread.

Now, the netstat statistics imply you have a lot of connections/disconnections.
Does your application constantly connect to and disconnect from Redis?
If so, you need to fix that.

You may also want to check whether the NIC interrupts are processed on
a different core from the one running the Redis instance (htop is a good
tool for checking this).
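As a rough sketch of that check, the per-core interrupt counts can be read from /proc/interrupts; if the busiest interrupt core is also the one running Redis, pinning Redis to a different core (e.g. with taskset or numactl -C, as in the benchmark command above) should help. The interface name "eth0" below is an assumption; substitute your bonded interface names.

```python
# Minimal sketch: parse /proc/interrupts to see which CPU cores are
# servicing the NIC interrupts. A core with a rapidly growing count is
# handling network IRQs and is a poor choice for pinning Redis.

def nic_irq_counts(path="/proc/interrupts", nic="eth0"):
    """Return {irq: {cpu_label: count}} for lines mentioning the NIC."""
    counts = {}
    with open(path) as f:
        cpus = f.readline().split()              # header row: CPU0 CPU1 ...
        for line in f:
            if nic not in line:
                continue
            fields = line.split()
            irq = fields[0].rstrip(":")          # IRQ number, e.g. "42:"
            counts[irq] = dict(zip(cpus, fields[1:len(cpus) + 1]))
    return counts

if __name__ == "__main__":
    try:
        for irq, per_cpu in nic_irq_counts().items():
            print(irq, per_cpu)
    except OSError:
        pass                                     # /proc/interrupts is Linux-only
```

Running this a few seconds apart and diffing the counts shows which cores are actually doing the interrupt work.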

Regards,
Didier.

Greg Andrews

Sep 6, 2013, 11:45:22 AM
to redi...@googlegroups.com

On Thu, Sep 5, 2013 at 2:56 PM, Chris O'Brien <chris.ob...@gmail.com> wrote:
You can see the following values from 'netstat' incrementing rapidly:

    4441223 times the listen queue of a socket overflowed
    4441223 SYNs to LISTEN sockets ignored


The listen queue mentioned in those stats is the queue that holds new incoming TCP connections.  Overflowing it means your clients are making an enormous number of connections to (and disconnections from) Redis.  They are probably opening a connection, sending a command and reading the response, then disconnecting.  This connect/disconnect approach does indeed kill performance dead: the overhead of a full TCP handshake is added to every command/reply exchange.  Don't let your client programs do this.

Instead, they should open a connection to Redis and keep it open: send a command and read the reply, then another command and its reply, and so on, all through the same connection.  Clients can even open several connections to Redis if the rate of commands is too great for a single TCP stream.
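The difference can be sketched in Python; this is a hypothetical example in which the FakeRedis class stands in for a real client (e.g. redis-py's redis.Redis) so it runs without a server, counting one "TCP handshake" per client object created.

```python
# Two connection strategies. Anti-pattern: a new TCP connection per
# command, which floods the server's listen queue. Recommended: one
# long-lived connection reused for every command.

class FakeRedis:
    """In-memory stand-in for a Redis client; counts connections."""
    connections_opened = 0

    def __init__(self):
        FakeRedis.connections_opened += 1   # each instance = one TCP handshake
        self._data = {}

    def set(self, key, value):
        self._data[key] = value
        return True

    def get(self, key):
        return self._data.get(key)


def per_command_connections(n):
    # Anti-pattern: connect, run one command, disconnect, repeat.
    for i in range(n):
        client = FakeRedis()
        client.set("key:%d" % i, i)


def persistent_connection(n):
    # Recommended: open once, reuse the connection for every command.
    client = FakeRedis()
    for i in range(n):
        client.set("key:%d" % i, i)
    return client


FakeRedis.connections_opened = 0
per_command_connections(1000)
print(FakeRedis.connections_opened)   # 1000 handshakes for 1000 commands

FakeRedis.connections_opened = 0
persistent_connection(1000)
print(FakeRedis.connections_opened)   # 1 handshake for 1000 commands
```

With a real client library, the same effect is usually achieved by creating the client (or a connection pool) once at startup and sharing it across requests.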

On top of persistent connections, you can add the command pipelining described at http://redis.io/topics/pipelining.  But your main performance limit right now comes from your clients disconnecting and reconnecting so much.
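Pipelining, as described at that link, buffers commands client-side and flushes them in one round trip, then reads all the replies at once. A minimal sketch (hypothetical; FakePipeline stands in for a real pipeline object such as redis-py's client.pipeline(), so it runs without a server):

```python
# Sketch of pipelining: N commands are sent in batches, so the number
# of network round trips drops from N to roughly N / batch_size.

class FakePipeline:
    """In-memory stand-in for a Redis pipeline; counts round trips."""
    round_trips = 0

    def __init__(self, store):
        self._store = store
        self._queued = []

    def set(self, key, value):
        self._queued.append((key, value))
        return self                      # pipelines are typically chainable

    def execute(self):
        if not self._queued:
            return []
        # One network round trip flushes every queued command.
        FakePipeline.round_trips += 1
        replies = []
        for key, value in self._queued:
            self._store[key] = value
            replies.append(True)
        self._queued = []
        return replies


store = {}
pipe = FakePipeline(store)
for i in range(10000):
    pipe.set("key:%d" % i, i)
    if (i + 1) % 1000 == 0:              # flush in batches of 1000
        pipe.execute()
pipe.execute()                           # flush any remainder

print(FakePipeline.round_trips)          # 10 round trips instead of 10000
```

Batch size is a trade-off: larger batches mean fewer round trips but more memory buffered on both ends, so batches of a few hundred to a few thousand commands are a common choice.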

  -Greg
