Strange performance issue


Andrei Lukovenko

Jun 20, 2014, 1:07:51 PM
to redi...@googlegroups.com

Hi,

While migrating from memcache to Redis I noticed a major slowdown in single-connection mode. While investigating, it became obvious that something strange is going on. The test was performed under the following conditions:
1) local connection through socket
2) save=""
3) empty db with one string key
4) 30000 consecutive GETs to this key from one connection

I got about 15000 rps from Redis in this test. Previously, I had about 30000 with memcache. It seemed strange, so I ran redis-benchmark and got the same result with the -c 1 parameter. With the default 5 connections I get about 60000, which is great, but my results in single-connection mode are discouraging.

Is this normal, or is it an issue with my server?

Josiah Carlson

Jun 20, 2014, 2:43:08 PM
to redi...@googlegroups.com
On a simple get/set request basis, Memcached can be faster than Redis (especially without pipelining). This is known. There are several reasons for this, and none of them are surprising if you understand what goes on under the covers in Memcached and Redis. In your specific case, the performance you are experiencing seems likely due to a low-powered server/VM. I'd expect that level of performance if I was using an AWS EC2 m1.small instance, or similar from another provider.

In my case, I see 100k-120k requests/second with get/set tests in "redis-benchmark -s /tmp/redis.sock -c 1 -q -n 500000" on a modern-ish Intel i7. With "redis-benchmark -s /tmp/redis.sock -c 5 -q -n 500000", I see 360k-375k requests/second.

If I were to make a recommendation, it would be to use a faster machine with better IO characteristics.

 - Josiah




Andrei Lukovenko

Jun 20, 2014, 3:25:03 PM
to redi...@googlegroups.com

Hi,

Actually, I do not care about absolute values. What really bothers me is the relative performance. I didn't expect Redis to be two times slower than memcache under the same conditions, and I don't see that as "normal".

Is that normal?

Josiah Carlson

Jun 20, 2014, 4:28:56 PM
to redi...@googlegroups.com
I've never actually run Memcached, so I don't fully know. A few years ago someone did some benchmarks and noticed a 25% advantage for Memcached, which resulted in some work being done to improve the situation for Redis. Further optimizations have been done on the Redis side of things since, so except in some cases it should be within 10-20% of Memcached.

But regardless of whether you are running the two in an identical setup, unless you are running them in situations comparable to how you are going to run "in production", benchmarked comparisons are more or less worthless. What do I mean? Say that a unix domain socket gets you X QPS in one and Y QPS in the other. If you switch to 'localhost' TCP, that performance will drop. Switching to an external IP for remote access will drop the performance even further, and may significantly narrow the gap between the two (socket communication is inherently slow by nature). Also, if Memcached is being accessed via UDP, that offers certain performance advantages compared to the TCP that Redis supports.
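
For example, one quick way to see the transport effect from a Perl client (a sketch only; Redis::Fast, the socket path and the port are assumptions here and need to match however your Redis is actually configured):

---------------------------------------------------------------
#!/usr/bin/perl

use strict;
use warnings;

use Benchmark;
use Redis::Fast;

use constant COUNT => 100_000;

# Same client, same command; only the transport differs, so any gap
# between the two runs is connection overhead.
my %transports = (
    'unix socket'  => Redis::Fast->new(sock   => '/tmp/redis.sock'),
    'tcp loopback' => Redis::Fast->new(server => '127.0.0.1:6379'),
);

for my $label (sort keys %transports) {
    my $redis = $transports{$label};
    $redis->set('foo', 'bar');
    print "GET via $label\n";
    Benchmark::timethis( COUNT, sub { $redis->get('foo') } );
}
---------------------------------------------------------------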

Ultimately though, the question of whether to use Redis vs. Memcached usually comes down to features. Want master/slave replication? Want cluster support? Want data structures? Want pubsub? Want persistence? Want operations on data structures? Want Lua scripting? Want something more than just a cache? That means Redis. But if you just need something with get/set cache functionality, and Memcached is already working for you, then there might not be any advantages to Redis for you. But the moment you have a real need for a Redis-only feature, you might find that the drop in performance is well worth the additional functionality.

 - Josiah

Yiftach Shoolman

Jun 20, 2014, 4:50:29 PM
to redi...@googlegroups.com, redi...@googlegroups.com
Can you please send your Redis config file?

Sent from my iPhone

Andrei Lukovenko

Jun 21, 2014, 7:43:51 AM
to redi...@googlegroups.com
Hi, Josiah,

Currently, I use unix domain sockets with memcached in production, and I expect to do the same with Redis, so my benchmarking actually does make sense. In my case it is an apples-to-apples comparison. And yes, I do need to process lots of GET requests, and yes, it seems like a bottleneck.

I really like Redis and I was going to replace all of my memcached instances that currently handle all of the heavy caching load. I expected 10-15% performance degradation at most, and a 50% degradation seems like a major issue.

Now I really need to find a way to speed things up.

Anyway, thanks for your help!
Best regards, Andrei

Andrei Lukovenko

Jun 21, 2014, 8:10:20 AM
to redi...@googlegroups.com
Hi Yiftach,

Here is my config:

daemonize yes                                                                                                                            
pidfile /var/run/redis/redis.pid
port 6379
bind 127.0.0.1
unixsocket /tmp/redis.sock
unixsocketperm 777
timeout 0
tcp-keepalive 0
loglevel notice
logfile /var/log/redis.log
databases 16
save ""
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename redis.rdb
dir /var/lib/redis/
slave-serve-stale-data no
slave-read-only yes
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

And here are my benchmarks:

$ redis-benchmark -s /tmp/redis.sock -n 100000 -t GET -c 1
====== GET ======
  100000 requests completed in 5.48 seconds
  1 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 1 milliseconds
100.00% <= 1 milliseconds
18241.52 requests per second

$ redis-benchmark -s /tmp/redis.sock -n 100000 -t GET -c 5
====== GET ======
  100000 requests completed in 1.90 seconds
  5 parallel clients
  3 bytes payload
  keep alive: 1

100.00% <= 0 milliseconds
52548.61 requests per second


Best regards, Andrei

Yiftach Shoolman

Jun 21, 2014, 9:03:41 AM
to redi...@googlegroups.com
I don't see any issue with the config file. From my experience, a single-threaded Memcached should run about equally to Redis, and if you run everything over a single connection the performance should be the same even with a multi-threaded Memcached. Redis also supports pipelining, which AFAIR only exists with binary Memcached. You can just run a pipeline over this single connection and probably get a 5x improvement.

I wonder which tool you used to test Memcached performance? To compare apples to apples I really suggest that you run everything using the same load generator tool. We built memtier_benchmark to do exactly that; if you can share your results with it, that would be great.

Yiftach Shoolman
+972-54-7634621

Andrei Lukovenko

Jun 21, 2014, 3:59:35 PM
to redi...@googlegroups.com
Hi!

Spectacular tool! I really like it, but I am really concerned about the results. Actually, they are somewhat contradictory. After running memtier I decided to sketch a synthetic test to reproduce my previous results, and it seems that memtier somehow underperforms while testing memcache:

$ memtier_benchmark -S /tmp/redis.sock -n 100000 -c 1 -t 1 --key-minimum=1 --key-maximum=1 -P redis
[RUN #1] Preparing benchmark client...
[RUN #1] Launching threads now...
[RUN #1,   5 secs]  0 threads:      100000 ops,   20000 ops/sec, 1.29MB/sec,  0.05msec latency

1         Threads
1         Connections per thread
100000    Requests per thread
Type        Ops/sec     Hits/sec   Misses/sec      Latency       KB/sec
------------------------------------------------------------------------
Sets        1818.20          ---          ---      0.05500       127.00
Gets       18181.80     18181.80         0.00      0.05400      1189.00
Totals     20000.00     18181.80         0.00      0.05400      1317.00


Request Latency Distribution
Type        <= msec      Percent
------------------------------------------------------------------------
SET               0       100.00
---
GET               0       100.00

$ memtier_benchmark -S /tmp/memcached.sock -n 100000 -c 1 -t 1 --key-minimum=1 --key-maximum=1 -P memcache_binary
[RUN #1] Preparing benchmark client...
[RUN #1] Launching threads now...
[RUN #1,   5 secs]  0 threads:      100000 ops,   20000 ops/sec, 715.55KB/sec,  0.05msec latency

1         Threads
1         Connections per thread
100000    Requests per thread
Type        Ops/sec     Hits/sec   Misses/sec      Latency       KB/sec
------------------------------------------------------------------------
Sets        1818.20          ---          ---      0.05600       129.00
Gets       18181.80     18181.80         0.00      0.05300       585.00
Totals     20000.00     18181.80         0.00      0.05300       715.00


Request Latency Distribution
Type        <= msec      Percent
------------------------------------------------------------------------
SET               0       100.00
---
GET               0       100.00

$ memtier_benchmark -S /tmp/memcached.sock -n 100000 -c 1 -t 1 --key-minimum=1 --key-maximum=1 -P memcache_text
[RUN #1] Preparing benchmark client...
[RUN #1] Launching threads now...
[RUN #1,   6 secs]  0 threads:      100000 ops,   16666 ops/sec, 1.19MB/sec,  0.06msec latency

1         Threads
1         Connections per thread
100000    Requests per thread
Type        Ops/sec     Hits/sec   Misses/sec      Latency       KB/sec
------------------------------------------------------------------------
Sets        1515.17          ---          ---      0.06200        94.00
Gets       15151.50     15151.50         0.00      0.06000      1124.00
Totals     16666.67     15151.50         0.00      0.06000      1219.00


Request Latency Distribution
Type        <= msec      Percent
------------------------------------------------------------------------
SET               0        99.99
SET               1       100.00
---
GET               0       100.00
GET               1       100.00

So, we have a perfect tie. 18K in both cases. But let's run another test. Here is my Perl script:

---------------------------------------------------------------
#!/usr/bin/perl

use utf8;
use 5.014;
use warnings;

use Benchmark;
use Redis::Fast;
use Cache::Memcached::Fast;

use constant COUNT => 100_000;

say 'Testing Redis';
my $redis = Redis::Fast->new(sock => '/tmp/redis.sock');
$redis->set('foo', 'bar');
Benchmark::timethis( COUNT, sub { $redis->get('foo'); } );

say 'Testing memcached';
my $memcache = Cache::Memcached::Fast->new({
            servers => [ '/tmp/memcached.sock' ],
            namespace => 'test',
            utf8 => 1,
            max_size => 512 * 1024,
        });

$memcache->set('foo', 'bar');
Benchmark::timethis( COUNT, sub { $memcache->get('foo'); } );
---------------------------------------------------------------

And here are my results:

Testing Redis
timethis 100000:  7 wallclock secs ( 2.67 usr +  2.75 sys =  5.42 CPU) @ 18450.18/s (n=100000)
Testing memcached
timethis 100000:  4 wallclock secs ( 0.81 usr +  2.48 sys =  3.29 CPU) @ 30395.14/s (n=100000)

Now we have the same 18K for Redis, but over 30K for memcached.

I am very confused. Could you please run my script and try to reproduce my results? It needs several Perl libraries; you can install them by executing:

cpan install Benchmark Redis::Fast Cache::Memcached::Fast

Matt Stancliff

Jun 22, 2014, 1:56:03 AM
to redi...@googlegroups.com

On Jun 21, 2014, at 3:59 PM, Andrei Lukovenko <al...@cordeo.ru> wrote:

But let's run another test. Here is my Perl script:

  Excellent!  I added additional tests and we have more interesting results now.

I added:
  - Sending and receiving raw Redis protocol (not going through a library) both with Inline and Bulk syntax
  - Sending and receiving raw Memcached protocol with text syntax.
  - Testing against large values (2+ MB); it requires starting memcached with an extra option since memcached only supports maximum values of 1 MB by default.
  - Testing against two other Perl Redis libraries


Results are below.  The “Large” results are from reading the contents of /usr/share/dict/words from a key (2.4 MB).  The “Small” results are from setting and retrieving 3-byte key/values of ‘foo’ => ‘bar’.

Test labels:
  - Redis = Redis::Fast (hiredis wrapper)
  - RedisMod = Redis (native perl)
  - RedisDB = RedisDB
  - Memcache = Cache::Memcached::Fast
  - *-DNE = lookup a non-existing key
  - *-Raw = talk over the socket directly without using a library interface; read the reply, but don’t parse it.

Results:
Large Benchmarks
Benchmark: timing 4000 iterations of Memcache, Memcache-Raw, Redis, Redis-Raw-Bulk, Redis-Raw-Inline, RedisDB, RedisMod...
  Memcache: 11.0503 wallclock secs ( 0.45 usr +  7.00 sys =  7.45 CPU) @ 536.91/s (n=4000)
Memcache-Raw: 9.768 wallclock secs ( 1.18 usr +  4.48 sys =  5.66 CPU) @ 706.71/s (n=4000)
     Redis: 13.6461 wallclock secs ( 3.95 usr +  8.48 sys = 12.43 CPU) @ 321.80/s (n=4000)
Redis-Raw-Bulk: 10.0401 wallclock secs ( 1.16 usr +  4.84 sys =  6.00 CPU) @ 666.67/s (n=4000)
Redis-Raw-Inline: 10.0243 wallclock secs ( 1.15 usr +  4.85 sys =  6.00 CPU) @ 666.67/s (n=4000)
   RedisDB: 9.76441 wallclock secs ( 4.23 usr +  4.05 sys =  8.28 CPU) @ 483.09/s (n=4000)
  RedisMod: 12.7869 wallclock secs ( 5.49 usr +  4.71 sys = 10.20 CPU) @ 392.16/s (n=4000)
Results:
                  Rate Redis RedisMod RedisDB Memcache Redis-Raw-Inline Redis-Raw-Bulk Memcache-Raw
Redis            322/s    --     -18%    -33%     -40%             -52%           -52%         -54%
RedisMod         392/s   22%       --    -19%     -27%             -41%           -41%         -45%
RedisDB          483/s   50%      23%      --     -10%             -28%           -28%         -32%
Memcache         537/s   67%      37%     11%       --             -19%           -19%         -24%
Redis-Raw-Inline 667/s  107%      70%     38%      24%               --            -0%          -6%
Redis-Raw-Bulk   667/s  107%      70%     38%      24%               0%             --          -6%
Memcache-Raw     707/s  120%      80%     46%      32%               6%             6%           --


Small Benchmarks
Benchmark: timing 1000000 iterations of Memcache-Raw, Memcached, Memcached-DNE, Redis, Redis-DNE, Redis-Raw-Bulk, Redis-Raw-Inline, RedisDB, RedisMod...
Memcache-Raw: 10.4522 wallclock secs ( 2.02 usr +  3.60 sys =  5.62 CPU) @ 177935.94/s (n=1000000)
 Memcached: 12.2543 wallclock secs ( 1.71 usr +  6.09 sys =  7.80 CPU) @ 128205.13/s (n=1000000)
Memcached-DNE: 11.1033 wallclock secs ( 1.03 usr +  5.93 sys =  6.96 CPU) @ 143678.16/s (n=1000000)
     Redis: 17.8883 wallclock secs ( 4.98 usr +  6.31 sys = 11.29 CPU) @ 88573.96/s (n=1000000)
 Redis-DNE: 17.3093 wallclock secs ( 4.56 usr +  6.28 sys = 10.84 CPU) @ 92250.92/s (n=1000000)
Redis-Raw-Bulk: 12.5372 wallclock secs ( 2.18 usr +  3.90 sys =  6.08 CPU) @ 164473.68/s (n=1000000)
Redis-Raw-Inline: 13.4731 wallclock secs ( 2.19 usr +  3.67 sys =  5.86 CPU) @ 170648.46/s (n=1000000)
   RedisDB: 24.237 wallclock secs (13.20 usr +  6.58 sys = 19.78 CPU) @ 50556.12/s (n=1000000)
  RedisMod: 30.9423 wallclock secs (23.02 usr +  5.25 sys = 28.27 CPU) @ 35373.19/s (n=1000000)
Result:
                     Rate RedisMod RedisDB Redis Redis-DNE Memcached Memcached-DNE Redis-Raw-Bulk Redis-Raw-Inline Memcache-Raw
RedisMod          35373/s       --    -30%  -60%      -62%      -72%          -75%           -78%             -79%         -80%
RedisDB           50556/s      43%      --  -43%      -45%      -61%          -65%           -69%             -70%         -72%
Redis             88574/s     150%     75%    --       -4%      -31%          -38%           -46%             -48%         -50%
Redis-DNE         92251/s     161%     82%    4%        --      -28%          -36%           -44%             -46%         -48%
Memcached        128205/s     262%    154%   45%       39%        --          -11%           -22%             -25%         -28%
Memcached-DNE    143678/s     306%    184%   62%       56%       12%            --           -13%             -16%         -19%
Redis-Raw-Bulk   164474/s     365%    225%   86%       78%       28%           14%             --              -4%          -8%
Redis-Raw-Inline 170648/s     382%    238%   93%       85%       33%           19%             4%               --          -4%
Memcache-Raw     177936/s     403%    252%  101%       93%       39%           24%             8%               4%           --
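
Matt's raw-protocol harness itself is not shown in the thread; a minimal sketch of what the Redis-Raw-Bulk case could look like is below (IO::Socket::UNIX, the 'foo'/'bar' key and the read-buffer size are assumptions, not his actual code):

---------------------------------------------------------------
#!/usr/bin/perl

use strict;
use warnings;

use Benchmark;
use IO::Socket::UNIX;

use constant COUNT => 1_000_000;

# Speak the Redis protocol over the unix socket directly: write the
# multi-bulk ("bulk") form of the command, read the reply, but don't
# parse it. A single sysread is assumed to be enough for these tiny
# replies on a local socket.
my $sock = IO::Socket::UNIX->new(Peer => '/tmp/redis.sock')
    or die "cannot connect: $!";

my $set = "*3\r\n\$3\r\nSET\r\n\$3\r\nfoo\r\n\$3\r\nbar\r\n";
my $get = "*2\r\n\$3\r\nGET\r\n\$3\r\nfoo\r\n";

print {$sock} $set;
sysread($sock, my $ok, 4096);          # expect "+OK\r\n"

Benchmark::timethis( COUNT, sub {
    print {$sock} $get;
    sysread($sock, my $reply, 4096);   # "$3\r\nbar\r\n", left unparsed
});
---------------------------------------------------------------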

Andrei Lukovenko

Jun 22, 2014, 4:18:41 AM
to redi...@googlegroups.com
Hi Matt,

Interesting results, and that makes me wonder even more about what I've got from memtier-benchmark. Could you please run memtier-benchmark with the same command-line options as in my example on your host?


--
Best regards, Andrei

Matt Stancliff

Jun 22, 2014, 8:43:54 AM
to redi...@googlegroups.com

On Jun 22, 2014, at 4:18 AM, Andrei Lukovenko <al...@cordeo.ru> wrote:

> Interesting results, and that makes me wonder even more about what I've got from memtier-benchmark. Could you please run memtier-benchmark with the same command-line options as in my example on your host?

The memtier thing looks broken when using the -n option; you can use --test-time instead. Their numbers are certainly suspect, since the results are lower than redis-benchmark's. It turns out the strange “repeated number” problem is because it runs exactly 10 gets for every 1 set by default, and the performance of both is the same here, so the operation count ends up multiplied by 10.

Here’s the run for 20 seconds, reduced to a get/set ratio of 1:1 instead of 1:10. It’s interesting to see the memcache_text protocol test almost the same as Redis (also a text protocol), but it still feels like we’re mostly hitting internal architecture limits of the benchmark tool itself.

matt@ununoctium:~/repos/memtier_benchmark% memtier_benchmark -S /tmp/redis.sock --test-time 20 -c 1 -t 1 --key-minimum=1 --key-maximum=1 --ratio 10:10 -P memcache_text
[RUN #1] Preparing benchmark client...
[RUN #1] Launching threads now...
error: response parsing failed.
^CUN #1, 1 secs] 1 threads: 0 ops, 0 ops/sec, 0.00KB/sec, nanmsec latency
matt@ununoctium:~/repos/memtier_benchmark% memtier_benchmark -S /tmp/memcached.sock --test-time 20 -c 1 -t 1 --key-minimum=1 --key-maximum=1 --ratio 1:1 -P memcache_text
[RUN #1] Preparing benchmark client...
[RUN #1] Launching threads now...
[RUN #1, 20 secs] 0 threads: 1233629 ops, 61681 ops/sec, 4.12MB/sec, 0.02msec latency

1         Threads
1         Connections per thread
20        Seconds
Type        Ops/sec     Hits/sec   Misses/sec      Latency       KB/sec
------------------------------------------------------------------------
Sets       30840.75          ---          ---      0.01500      1927.00
Gets       30840.70     30840.70         0.00      0.01500      2288.00
Totals     61681.45     30840.70         0.00      0.01500      4216.00


Request Latency Distribution
Type        <= msec      Percent
------------------------------------------------------------------------
SET               0       100.00
---
GET               0       100.00


matt@ununoctium:~/repos/memtier_benchmark% memtier_benchmark -S /tmp/memcached.sock --test-time 20 -c 1 -t 1 --key-minimum=1 --key-maximum=1 --ratio 1:1 -P memcache_binary
[RUN #1] Preparing benchmark client...
[RUN #1] Launching threads now...
[RUN #1, 20 secs] 0 threads: 1398520 ops, 69926 ops/sec, 3.53MB/sec, 0.01msec latency

1         Threads
1         Connections per thread
20        Seconds
Type        Ops/sec     Hits/sec   Misses/sec      Latency       KB/sec
------------------------------------------------------------------------
Sets       34963.00          ---          ---      0.01400      2492.00
Gets       34963.00     34963.00         0.00      0.01300      1126.00
Totals     69926.00     34963.00         0.00      0.01300      3619.00


Request Latency Distribution
Type        <= msec      Percent
------------------------------------------------------------------------
SET               0       100.00
SET               1       100.00
---
GET               0       100.00
GET               1       100.00
GET               2       100.00


matt@ununoctium:~/repos/memtier_benchmark% memtier_benchmark -S /tmp/redis.sock --test-time 20 -c 1 -t 1 --key-minimum=1 --key-maximum=1 --ratio 1:1 -P redis
[RUN #1] Preparing benchmark client...
[RUN #1] Launching threads now...
[RUN #1, 20 secs] 0 threads: 1168624 ops, 58431 ops/sec, 3.87MB/sec, 0.02msec latency

1         Threads
1         Connections per thread
20        Seconds
Type        Ops/sec     Hits/sec   Misses/sec      Latency       KB/sec
------------------------------------------------------------------------
Sets       29215.60          ---          ---      0.01600      2054.00
Gets       29215.60     29215.60         0.00      0.01600      1911.00
Totals     58431.20     29215.60         0.00      0.01600      3965.00


Request Latency Distribution
Type        <= msec      Percent
------------------------------------------------------------------------
SET               0       100.00
---
GET               0       100.00
GET               1       100.00



-Matt

Thomas Love

Jun 22, 2014, 11:51:13 AM
to redi...@googlegroups.com

Is this normal, or is it an issue with my server?


It's an issue with your client. You are nowhere near stressing either server, so you are effectively benchmarking only your client libraries. And it looks like Redis::Fast is not particularly fast. Your CPU vs wall time numbers suggest as much, and so do Matt's comparative benchmarks. 

To stress Redis with simple commands like GET over a single connection, you will need to use a) an efficient client and b) pipelining. redis-benchmark with -P 5 will demonstrate the effect. It looks like you can get similar behaviour from this Perl library by passing a callback in to get(). 
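
For instance, something along these lines (a sketch only; it assumes Redis::Fast supports the same callback-plus-wait_all_responses pipelining interface as the plain Redis module, and it reuses the key from the earlier script):

---------------------------------------------------------------
#!/usr/bin/perl

use strict;
use warnings;

use Benchmark;
use Redis::Fast;

use constant COUNT    => 100_000;
use constant PIPELINE => 5;       # depth, mirroring redis-benchmark -P 5

my $redis = Redis::Fast->new(sock => '/tmp/redis.sock');
$redis->set('foo', 'bar');

# Passing a callback makes get() queue the command instead of waiting
# for its reply; wait_all_responses() then drains the whole pipeline.
# Note: the reported rate is batches per second, so multiply by
# PIPELINE to compare with the unpipelined numbers above.
Benchmark::timethis( COUNT / PIPELINE, sub {
    $redis->get('foo', sub { my ($reply, $error) = @_ }) for 1 .. PIPELINE;
    $redis->wait_all_responses;
});
---------------------------------------------------------------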

Thomas

Yiftach Shoolman

Jun 22, 2014, 12:01:33 PM
to redi...@googlegroups.com
A few things:

1. Matt - can you please be more specific and tell us what was broken with the '-n' option? We are using memtier_benchmark on a daily basis and have never experienced an issue with it.
2. IMO it is always better to do these tests where the benchmark tool and Redis/Memcached are running on different instances/machines. Yes, it adds network latency, but on the other hand:
  • It simulates a live environment better
  • There is no resource contention between the benchmark tool and Redis/Memcached


--

Yiftach Shoolman
+972-54-7634621

Andrei Lukovenko

Jun 22, 2014, 12:03:32 PM
to redi...@googlegroups.com
If my client library is the issue, then why do I get exactly the same results with redis-benchmark and memtier-benchmark in single-connection mode with Redis?

Unfortunately, pipelining is not a simple option, as it will require a major architecture rework. I was going to use Redis for smart things like job queues (and it is doing great), and also to replace memcached for caching. It turns out that I'll have to use both for a while.


--
Best regards, Andrei

Andrei Lukovenko

Jun 22, 2014, 12:10:30 PM
to redi...@googlegroups.com
Well, in my case there is no point in running the test over the network, as I am using unix domain socket connections in my production environment. And the original performance issue that made me start this topic was connected with exactly this type of connection.

I believe that running over the network wouldn't make much of a difference.

At the same time, it seems that Redis needs some kind of optimization for handling local connections. I was hoping that it was somehow connected with my Redis build, my config or my system, but it looks like normal behavior now.

Maybe Salvatore could take a moment to glance at this part of the code?
Best regards, Andrei

Thomas Love

Jun 22, 2014, 3:21:07 PM
to redi...@googlegroups.com
On 22 June 2014 18:03, Andrei Lukovenko <al...@cordeo.ru> wrote:
If my client library is the issue, then why do I get exactly the same results with redis-benchmark and memtier-benchmark in single-connection mode with Redis?


I think you should be questioning why your Perl memcached client is significantly faster than memtier-benchmark's memcached client, never mind the Redis clients.  

There may be something that one client is doing differently from the others. It might, for example, be using a different event library which suits this particular test better.

Matt Stancliff

Jun 22, 2014, 3:53:35 PM
to redi...@googlegroups.com

On Jun 22, 2014, at 12:01 PM, Yiftach Shoolman <yiftach....@gmail.com> wrote:

> 1. Matt - can you please be more specific and tell us what was broken with the '-n' option ? we are using memtier_benchmark on a a daily basis and have never experienced an issue with it

Double check Andrei’s test output again. He requested 100,000 total queries for the test. Across both servers, it reported the *exact same* queries per second, even down to two decimal places, which is highly unlikely (and we’ve seen the servers don’t perform identically, so we can say the output is just wrong at this point).

Also, as Thomas mentioned, memtier returned results between 60k and 70k requests per second across memcache and Redis. The quick Perl direct-socket test shows the servers can return small values at up to 170k requests per second (on the system where the tests were run). Any additional throughput loss is due to parsing replies or caused by overhead in how the tests are designed.


> 2. IMO it is always better to do these tests where the benchmark tool and the Redis/Memcached are running on different instances/machines. Yes it adds network latency but on the other hand:
> • It simulates live environment better
> • There is no resources contention between the benchmark tool and the Redis/Memcached

As Andrei kindly pointed out: these tests have no regular network latency penalty because it’s direct unix sockets on the same host. We’re all running with enough cores for contention not to be a _huge_ issue. Full disclaimer: I ran the tests on my laptop while watching a movie, browsing some web sites, writing some emails, and editing things in terminal windows.


-Matt

Andrei Lukovenko

Jun 22, 2014, 4:09:13 PM
to redi...@googlegroups.com

Actually, I am wondering why memtier-benchmark is slower, not the other way around.

And I see no point in blaming an event library for a single client connection.

On 22.06.2014 at 23:21, "Thomas Love" <tom...@gmail.com> wrote:

Yiftach Shoolman

Jun 22, 2014, 4:14:05 PM
to redi...@googlegroups.com
I don't think there is a real accuracy issue here, so let's move on.

That said, it is true that memtier_benchmark does a lot of processing to analyze responses and present them in a friendly manner. If you only run with one connection, one thread and no pipelining, this small overhead can result in lower throughput.
--

Yiftach Shoolman
+972-54-7634621

Thomas Love

Jun 22, 2014, 5:35:11 PM
to redi...@googlegroups.com

On Sunday, 22 June 2014 22:09:13 UTC+2, Andrei Lukovenko wrote:

Actually, I am wondering why memtier-benchmark is slower, not the other way around. 

I understand that your Perl client seems like a natural baseline, but memtier-benchmark looks to me like the better-controlled environment for an all-else-equal comparison. 

And I see no point in blaming an event library for a single client connection.

I'm not blaming it in particular. But the evidence in this thread is that server differences are responsible for no more than a small fraction of the disparity you're seeing.

The Baldguy

Jun 23, 2014, 12:59:46 AM
to redi...@googlegroups.com
According to the Cache::Memcached::Fast docs, it is a custom implementation of the protocol designed to minimize system calls and the cost of interacting with Perl. The author also attempted to minimize copying data around. I don't see where the Redis module makes similar claims or efforts; indeed, it is described by the author as simply a wrapper around hiredis. This alone will significantly skew the results of any comparison between the two.

This is one of the difficulties in benchmarking anything through a client connection. Well-written code with speed and efficiency as goals is likely to get better results than code which is not written with those goals and is simply a straightforward means to an interop end. To put it another way, the fastest car in the world can get beaten if its driver can't perform as well as the driver of a car nearly or just as fast. Your drivers are different, so performance variance is not unexpected, particularly when you have, in keeping with the metaphor, a professional driver and an amateur one.

Thus, you are not comparing apples with apples. You are not benchmarking Redis and memcached, but your client libraries. For you this may mean that, if the difference is important, you should stick with memcached or find a better Redis client. Just be aware that it isn't Redis or memcached, but the client implementation, that is your bottleneck.

As for a better Perl Redis client, I do not travel in that circle so I don't have any useful suggestions other than to look for something more than a simple wrapper.

Cheers,
Bill

Andrei Lukovenko

Jun 23, 2014, 3:45:08 AM
to redi...@googlegroups.com
Hi Bill,

In any benchmark or comparison, a client library is an integral part of the system that is hard to take out of the equation. Although Redis::Fast is not advertised as the fastest Redis client, we have yet to see higher results from any other client. Actually, both redis-benchmark and memtier consistently give the same results as my Redis::Fast benchmark. If you have a benchmark that will show better results with Redis, I am very eager to try it out (with a hidden intent of using its internals to speed up my current client library).

As long as there is no such benchmark (and an appropriate client library), we still have two numbers: the maximum result using Redis and the maximum result using memcache. And under the given conditions (single client, single connection, non-pipelined GETs), we have yet to see a test where Redis performs on the same level as memcache. Actually, all of my tests show that Redis performs about 30% worse.

So I stand by my point: there is definitely something wrong going on. In retrospect, this is not the first thread where we have stumbled upon Redis underperforming under some conditions. Don't get me wrong, I really love Redis. It is a great product, and I would really like to use it as my only cache. The problem is, I am as yet unable to do so because of this issue.




--
Best regards, Andrei
