Replication speed benchmark


Aníbal Rojas

Oct 26, 2009, 1:47:45 PM
to Redis DB
Hello,

I conducted a series of replication tests using an EC2 instance
(the same in ) with 10MM keys (a 237MB dump file and 1.5GB of RAM).

$ uname -a
Linux domU-12-31-39-06-18-41 2.6.31-300-ec2 #3-Ubuntu SMP Sat Sep 26
10:31:17 UTC 2009 x86_64 GNU/Linux

$ cat /proc/cpuinfo reports: 8x Intel(R) Xeon(R) CPU E5410 @
2.33GHz

$ cat /proc/meminfo | head
MemTotal: 7358404 kB
MemFree: 700008 kB
Buffers: 17240 kB
Cached: 1062636 kB
SwapCached: 0 kB
Active: 5933172 kB
Inactive: 392892 kB

26 Oct 16:14:39 - Connecting to MASTER...
26 Oct 16:14:47 - Receiving 247777792 bytes data dump from MASTER
26 Oct 16:15:05 - MASTER <-> SLAVE sync succeeded

Both instances of Redis are on the same box, with the default SAVE
configuration. The only thing that differs is the port.
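For reference, a same-box setup like this only needs the second instance to listen on a different port and point at the first; a minimal slave-side config sketch (the port numbers here are illustrative, not taken from the thread):

```
# slave's redis.conf sketch -- master assumed on the default port 6379
port 6380
slaveof 127.0.0.1 6379
```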

From the MASTER's point of view the replication process took 12 seconds:

26 Oct 16:14:38 . 0 clients connected (0 slaves), 1652000048 bytes in use, 0 shared objects
26 Oct 16:14:39 . Accepted 127.0.0.1:48564
26 Oct 16:14:39 - Slave ask for synchronization
26 Oct 16:14:39 - Starting BGSAVE for SYNC
26 Oct 16:14:39 - Background saving started by pid 25958
26 Oct 16:14:43 . DB 0: 10000000 keys (0 volatile) in 16777216 slots HT.
26 Oct 16:14:43 . 0 clients connected (1 slaves), 1652000401 bytes in use, 0 shared objects
26 Oct 16:14:46 - DB saved on disk
26 Oct 16:14:47 - Background saving terminated with success
26 Oct 16:14:48 . DB 0: 10000000 keys (0 volatile) in 16777216 slots HT.
26 Oct 16:14:48 . 0 clients connected (1 slaves), 1652000449 bytes in use, 0 shared objects
26 Oct 16:14:51 - Synchronization with slave succeeded
26 Oct 16:14:53 . DB 0: 10000000 keys (0 volatile) in 16777216 slots HT.
26 Oct 16:14:53 . 0 clients connected (1 slaves), 1652000401 bytes in use, 0 shared objects

The SLAVE took a little longer: 26 seconds from connecting to sync success.

26 Oct 16:14:39 . 0 clients connected (0 slaves), 1652000034 bytes in use, 0 shared objects
26 Oct 16:14:39 - Connecting to MASTER...
26 Oct 16:14:47 - Receiving 247777792 bytes data dump from MASTER
26 Oct 16:15:05 - MASTER <-> SLAVE sync succeeded
26 Oct 16:15:10 . DB 0: 10000000 keys (0 volatile) in 16777216 slots HT.

Impressive, even if no real network was involved.
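A quick back-of-the-envelope from the slave timestamps above (the 247777792-byte dump was received between 16:14:47 and 16:15:05, roughly 18 seconds):

```python
# Back-of-the-envelope from the slave log: the 247777792-byte dump
# was received between 16:14:47 and 16:15:05, i.e. over ~18 seconds.
dump_bytes = 247777792
transfer_seconds = 18
mb_per_sec = dump_bytes / transfer_seconds / (1024 * 1024)
print(round(mb_per_sec, 1))  # ~13.1 MB/s, over the loopback
```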

I monitored the process using top and liked very much that the
MASTER instance never showed up on the radar; the SLAVE was pegged
while the initial sync was in progress, which is completely fine.

I am conducting a few more tests and will publish the results
later.

--
Aníbal Rojas
Ruby on Rails Web Developer
http://www.google.com/profiles/anibalrojas

Salvatore Sanfilippo

Oct 26, 2009, 1:56:32 PM
to redi...@googlegroups.com
2009/10/26 Aníbal Rojas <aniba...@gmail.com>:

>
> Hello,
>
>    I conducted a series of replication tests using an EC2 instance
> (the same in ) with 10MM keys (a 237MB dump file and 1.5GB of RAM).

Thank you Aníbal! That's cool. Redis replication is not a very
well-explored area, or at least that's my feeling.

Ah, and by the way: ZADD, ZREM, ZRANGE and ZREVRANGE are now in
Redis git, and this data type is now saved on disk like any other.
This is early code, but I think I fixed all the obvious issues, and I
was able to run a few tests without stability problems or memory
leaks. Tomorrow I'll write the tests and docs for the new data type
and test the stability a lot more, but zsets are already hackable, I
guess.
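For anyone who hasn't tried them yet, here is a rough Python sketch of the semantics those four commands expose (this only models the observable behavior; it is not how Redis implements sorted sets internally):

```python
# Rough model of ZADD, ZREM, ZRANGE, ZREVRANGE semantics using a plain
# dict of member -> score. Redis itself uses a different internal
# representation; this just illustrates what the commands do.
class ZSet:
    def __init__(self):
        self.scores = {}  # member -> score

    def zadd(self, score, member):
        self.scores[member] = score  # re-adding updates the score

    def zrem(self, member):
        self.scores.pop(member, None)

    def _sorted(self):
        # order by score, breaking ties by member name
        return sorted(self.scores, key=lambda m: (self.scores[m], m))

    def zrange(self, start, stop):
        members = self._sorted()
        stop = len(members) + stop if stop < 0 else stop  # inclusive stop
        return members[start:stop + 1]

    def zrevrange(self, start, stop):
        members = self._sorted()[::-1]
        stop = len(members) + stop if stop < 0 else stop
        return members[start:stop + 1]

z = ZSet()
z.zadd(3, "c"); z.zadd(1, "a"); z.zadd(2, "b")
print(z.zrange(0, -1))    # ['a', 'b', 'c']
print(z.zrevrange(0, 0))  # ['c']
z.zrem("b")
print(z.zrange(0, -1))    # ['a', 'c']
```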

Cheers,
Salvatore


--
Salvatore 'antirez' Sanfilippo
http://invece.org

"Once you have something that grows faster than education grows,
you’re always going to get a pop culture.", Alan Kay

Aníbal Rojas

Oct 26, 2009, 7:58:24 PM
to Redis DB

Salvatore,

> Thank you Aníbal! That's cool. Redis replication is not a very
> well-explored area, or at least that's my feeling.

Replication and sharding are very important when using Redis:
since it is single-threaded, they are the only way to exploit the
availability of more cores or servers.
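To illustrate the sharding side, a minimal client-side sharding sketch (the node addresses and the CRC32-based hash here are my own assumptions, purely for illustration):

```python
# Client-side sharding sketch: hash each key to pick which of several
# single-threaded Redis instances owns it. Node list and hash choice
# are illustrative assumptions, not something from this thread.
import zlib

NODES = ["127.0.0.1:6379", "127.0.0.1:6380", "127.0.0.1:6381"]

def node_for(key):
    # CRC32 is stable across runs, so a key always maps to the same node
    return NODES[zlib.crc32(key.encode("utf-8")) % len(NODES)]

for k in ("user:1", "user:2", "queue:emails"):
    print(k, "->", node_for(k))
```

Note that with plain modulo hashing, adding or removing a node remaps most keys; consistent hashing reduces that churn.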

> Ah, and by the way: ZADD, ZREM, ZRANGE and ZREVRANGE are now in
> Redis git, and this data type is now saved on disk like any other.
> This is early code, but

Cool, tomorrow we will be doing serious hacking on our project to
check whether our calculations are right, whether we can fit the core
of the data into Redis, and whether we can age it out as fast as we
think is possible.

We will also be using Tokyo Cabinet for some referential data that
is impossible to fit within an EC2 instance's maximum RAM.

> I think I fixed all the obvious issues, and I was able to run a few
> tests without stability problems or memory leaks. Tomorrow I'll write
> the tests and docs for the new data type and test the stability a lot
> more, but zsets are already hackable, I guess.

If our initial tests work as expected, we will try to submit a
patch to the Ruby driver adding zset support.

Thanks a lot for your support. If you want some help adding
documentation to the Redis site, please just let me know; I will be
more than happy to help. (By the way, did you see my comment about the
LMOVE command?)

Best regards,

--
Aníbal

> Cheers,
> Salvatore
>
> --
> Salvatore 'antirez' Sanfilippo
> http://invece.org