Persistence-only slave instance with low memory requirements


Aníbal Rojas

unread,
Jun 4, 2010, 12:57:29 AM6/4/10
to redi...@googlegroups.com
I hate going to tape, sorry, I meant disk, but it has to happen... So...

I am thinking of something like a save-only slave: no reads, no
writes, only dumping data to disk as fast as possible. I also want
this save-only replica to use as little memory as possible.

I tried setting up a 0-VM slave for this purpose, but the slave ended
up reporting more memory in use than the master. I know only values go
to disk under the VM config, and only when there is enough contiguous
space in the swap file.

Would it be possible to have a setting for a "0" RAM Redis server? "0"
understood as "the minimum amount of RAM required as a buffer to take
data down to disk".

--
Aníbal Rojas
Ruby on Rails Web Developer
http://www.google.com/profiles/anibalrojas

Pieter Noordhuis

unread,
Jun 4, 2010, 3:30:30 AM6/4/10
to redi...@googlegroups.com
I believe you mean a Redis slave that simply dumps everything to disk.
What you propose is a VM-only Redis, which can easily be done by
setting "vm-max-memory" to 0. This will result in all values being
written to the swap file directly. BUT, you have to take into account
that if persistence is your goal, you should rather use something
else. In order for Redis to make a dump file (RDB) of the dataset
loaded in an instance, it needs to read the values back from swap in
order to write them to the dump file. This means that a VM-enabled,
0-memory, save-only slave will indeed use less memory, but will be A
LOT slower. Instead, I would recommend holding all values in memory
(if your machine configuration allows it) and enabling the AOF. This
dumps all *commands* to disk as fast as it can, instead of the values
themselves. The AOF can later be used to rebuild the entire dataset.

Good luck!
Pieter

2010/6/4 Aníbal Rojas <aniba...@gmail.com>:


Salvatore Sanfilippo

unread,
Jun 4, 2010, 3:53:36 AM6/4/10
to redi...@googlegroups.com
On Fri, Jun 4, 2010 at 9:30 AM, Pieter Noordhuis <pcnoo...@gmail.com> wrote:

> instance, it needs to read the values from swap in order to write them
> to a dumpfile. This means that a VM-enabled, 0-memory, save-only
> slave, will indeed use less memory, but will be A LOT slower. Rather,

Exactly. An alternative is writing a script that connects to the
master from another computer (simulating a slave) with the only effect
of dumping all the traffic into a file. But you have to handle the
"bulk" part that is emitted in the first stage by SYNC, and an AOF
without log compaction is hard to handle.

Is your goal separation in case of disaster recovery, or letting the
master be as free as possible to enhance performance?

Cheers,
Salvatore

--
Salvatore 'antirez' Sanfilippo
http://invece.org

"Once you have something that grows faster than education grows,
you’re always going to get a pop culture.", Alan Kay

Josiah Carlson

unread,
Jun 4, 2010, 12:37:41 PM6/4/10
to redi...@googlegroups.com
I've been thinking about this as well. I suspect that Anibal is a bit
like me; I personally want my master to be as fast as possible,
handling all write traffic with as little delay as possible (even the
occasional 10-100 ms delay can be critical), while also being as safe
as reasonably possible.

A write slave whose only purpose is to write commands to disk lets the
master be fast, while the slave can use the AOF and fsync everything
to disk that has happened since the last fsync (one thread to read
commands from the master, one to write + fsync). If the slave isn't
going to process user commands, it doesn't really need to use any
memory, except for the occasional compaction (which could be handled
by a secondary process).
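The two-thread split described above could be sketched roughly like this (a hypothetical illustration, not real Redis code; the master connection is simulated here by any readable IO, such as a pipe, so the sketch is self-contained):

```ruby
# Sketch of a save-only slave: one thread reads the raw command stream
# from the master connection, a second thread appends it to an AOF file
# and fsyncs, so the master is never blocked by disk latency.
# `input_io` stands in for the master socket (an assumption for this sketch).
def run_save_only_slave(input_io, aof_path)
  queue = Queue.new

  reader = Thread.new do
    # Pull raw bytes off the master connection and hand them to the writer.
    while (chunk = input_io.read(4096))
      queue << chunk
    end
    queue << :eof
  end

  writer = Thread.new do
    File.open(aof_path, "ab") do |aof|
      loop do
        chunk = queue.pop
        break if chunk == :eof
        aof.write(chunk)
        aof.fsync # make the command durable before taking the next chunk
      end
    end
  end

  [reader, writer].each(&:join)
end
```

A real implementation would point `input_io` at a TCP socket that has issued SYNC, and would rotate the file periodically so a separate process can compact old segments.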

Regards,
- Josiah

Aníbal Rojas

unread,
Jun 10, 2010, 12:10:55 PM6/10/10
to redi...@googlegroups.com
Peter,

Sorry for the delayed response.

The problem I find with such a configuration is that under VM, only
values are swapped out. Eventually you will have a bunch of keys in
memory for nothing, because the slave instance will only be used for
persistence.

Actually I tested it, and it was weird that the slave ended up using a
*little bit more memory than the master*, as you can see here:

$ redis-server redis-6380-master.conf
[8275] 10 Jun 11:12:51 * Server started, Redis version 2.1.0
[8275] 10 Jun 11:12:51 # WARNING overcommit_memory is set to 0!
Background save may fail under low memory condition. To fix this issue
add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or
run the command 'sysctl vm.overcommit_memory=1' for this to take
effect.
[8275] 10 Jun 11:13:01 * DB loaded from disk: 10 seconds
[8275] 10 Jun 11:13:01 * The server is now ready to accept connections
on port 6380
[8275] 10 Jun 11:13:02 - DB 0: 6000000 keys (0 volatile) in 12582912 slots HT.
[8275] 10 Jun 11:13:02 - 0 clients connected (0 slaves), 406299876 bytes in use

$ egrep '^(append|vm)' redis-6381-slave.conf
appendonly no
appendfsync everysec
vm-enabled no
vm-swap-file ./redis-6381-master.swap
vm-max-memory 0
vm-page-size 32
vm-pages 134217728
vm-max-threads 4

$ redis-server redis-6381-slave.conf
[8392] 10 Jun 11:14:47 * Server started, Redis version 2.1.0
[8392] 10 Jun 11:14:47 # WARNING overcommit_memory is set to 0!
Background save may fail under low memory condition. To fix this issue
add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or
run the command 'sysctl vm.overcommit_memory=1' for this to take
effect.
[8392] 10 Jun 11:14:47 * The server is now ready to accept connections
on port 6381
[8392] 10 Jun 11:14:47 - 0 clients connected (0 slaves), 533456 bytes in use
[8392] 10 Jun 11:14:47 * Connecting to MASTER...
[8392] 10 Jun 11:14:54 * Receiving 100217500 bytes data dump from MASTER
[8392] 10 Jun 11:15:04 * MASTER <-> SLAVE sync succeeded
[8392] 10 Jun 11:15:09 - DB 0: 6000000 keys (0 volatile) in 12582912 slots HT.
[8392] 10 Jun 11:15:09 - 1 clients connected (0 slaves), 406300200 bytes in use

$ ps aux | egrep '(PID|redis-server)'
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
anibal 8275 4.6 24.5 506464 504884 pts/0 S+ 11:12 0:10
redis-server redis-6380-master.conf
anibal 8392 8.6 24.5 506464 504876 pts/1 S+ 11:14 0:09
redis-server redis-6381-slave.conf

$ egrep '^(append|vm)' redis-6382-slave-0-vm.conf
appendonly no
appendfsync everysec
vm-enabled yes
vm-swap-file ./redis-6382-master.swap
vm-max-memory 0
vm-page-size 32
vm-pages 134217728
vm-max-threads 4

$ redis-server redis-6382-slave-0-vm.conf
[9023] 10 Jun 11:21:11 * Using './redis-6382-master.swap' as swap file
[9023] 10 Jun 11:21:11 * Allocating 4294967296 bytes of swap file
[9023] 10 Jun 11:21:11 * Swap file allocated with success
[9023] 10 Jun 11:21:11 - Allocated 16777216 bytes page table for 134217728 pages
[9023] 10 Jun 11:21:11 * Server started, Redis version 2.1.0
[9023] 10 Jun 11:21:11 # WARNING overcommit_memory is set to 0!
Background save may fail under low memory condition. To fix this issue
add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or
run the command 'sysctl vm.overcommit_memory=1' for this to take
effect.
[9023] 10 Jun 11:21:11 * DB loaded from disk: 0 seconds
[9023] 10 Jun 11:21:11 * The server is now ready to accept connections
on port 6382
[9023] 10 Jun 11:21:12 - 0 clients connected (0 slaves), 17512360 bytes in use

$ ps aux | egrep '(PID|redis-server)'
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
anibal 8275 1.7 24.5 506464 504888 pts/0 S+ 11:12 0:11
redis-server redis-6380-master.conf
anibal 9023 33.9 29.8 622820 613148 pts/2 S+ 11:21 0:55
redis-server redis-6382-slave-0-vm.conf

$ egrep '^(append|vm)' redis-6382-slave-0-vm-aof.conf
appendonly yes
appendfsync everysec
vm-enabled yes
vm-swap-file ./redis-6382-master.swap
vm-max-memory 0
vm-page-size 32
vm-pages 134217728
vm-max-threads 4

$ redis-server redis-6382-slave-0-vm-aof.conf
[9711] 10 Jun 11:30:31 * Using './redis-6382-master.swap' as swap file
[9711] 10 Jun 11:30:31 * Allocating 4294967296 bytes of swap file
[9711] 10 Jun 11:30:31 * Swap file allocated with success
[9711] 10 Jun 11:30:31 - Allocated 16777216 bytes page table for 134217728 pages
[9711] 10 Jun 11:30:31 * Server started, Redis version 2.1.0
[9711] 10 Jun 11:30:31 # WARNING overcommit_memory is set to 0!
Background save may fail under low memory condition. To fix this issue
add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or
run the command 'sysctl vm.overcommit_memory=1' for this to take
effect.
[9711] 10 Jun 11:30:31 * The server is now ready to accept connections
on port 6382
[9711] 10 Jun 11:30:32 - 0 clients connected (0 slaves), 17512360 bytes in use
[9711] 10 Jun 11:30:32 # WARNING: vm-max-memory limit exceeded by more
than 10% but unable to swap more objects out!
[9711] 10 Jun 11:30:32 * Connecting to MASTER...
[9711] 10 Jun 11:30:38 * Receiving 100217500 bytes data dump from MASTER
[9711] 10 Jun 11:31:40 * MASTER <-> SLAVE sync succeeded
[9711] 10 Jun 11:31:40 * Background append only file rewriting started
by pid 9757
[9711] 10 Jun 11:31:45 - DB 0: 6000000 keys (0 volatile) in 8388608 slots HT.
[9711] 10 Jun 11:31:45 - 1 clients connected (0 slaves), 525072676 bytes in use

$ ps aux | egrep '(PID|redis-server)'
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
anibal 8275 1.1 24.5 506464 504888 pts/0 S+ 11:12 0:13
redis-server redis-6380-master.conf
anibal 9711 57.9 29.8 614624 613112 pts/2 S+ 11:30 0:56
redis-server redis-6382-slave-0-vm-aof.conf

With AOF, it should be easy (please correct me if I am wrong) to
have a "persistence only" Redis slave instance, dumping data to disk
without actually using memory (except maybe as a buffer). This is
something that would be great under heavy-use scenarios with lots of
keys and lots of updates, as none of the Redis processes would need to
fork.

--
Aníbal Rojas
Ruby on Rails Web Developer
http://www.google.com/profiles/anibalrojas

Pieter Noordhuis

unread,
Jun 10, 2010, 2:13:04 PM6/10/10
to redi...@googlegroups.com
Hi Aníbal,

The point of having all keys in memory (and all values either in
memory or stored using VM), is the ability to rewrite the AOF. Suppose
you have a list with 1000 items where you only RPUSH and LPOP. The AOF
will grow very large, while the real data is only a couple of
kilobytes. To counter the effect of the AOF growing infinitely, Redis
rewrites the AOF from time to time to only contain a snapshot of the
data at a single point in time.

If you really want a persistence-only slave without rewrite, you can
create your own simple script that implements a pure AOF, with no
rewriting going on. The downside is that your append-only file(s) will
only grow and are never shrunk. This can easily be done by
implementing a script that issues the MONITOR command to the master
and simply writes all non-read commands to disk.

I hope this answers some of your questions.

Cheers,
Pieter

P.S. The slave uses some extra bytes of memory because it has one
client connected (the master), so nothing to worry about :-).

2010/6/10 Aníbal Rojas <aniba...@gmail.com>:

Pieter Noordhuis

unread,
Jun 10, 2010, 3:37:50 PM6/10/10
to redi...@googlegroups.com
Couldn't let the idea of such a script go, so I threw something
together real quick. This script acts as a slave, but simply writes
the command stream to disk (so it's nothing more than netcat with a
little logic). You could think about parsing the stream to only write
complete commands to disk. Also, you could implement some chunking
(start writing to a new file once the current file reaches a size
threshold) to be able to perform the AOF rewrite on another machine or
something like that. Also, this script discards the database dump,
which would be quite easy to transform into AOF commands as well. On
my laptop, the Ruby process only consumes about 5% CPU when
redis-server is saturated by redis-benchmark. Code at:
http://gist.github.com/433508

Using MONITOR is not the way to go because it also feeds you read
commands, and formats them for readability instead of emitting the raw
protocol.
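A minimal sketch in the same spirit as that script (an assumption of how it might look, not the actual gist code) could be:

```ruby
require "socket"

# Parse a "$<length>\r\n" bulk header, as sent by the master in reply
# to SYNC, and return the payload length in bytes.
def bulk_length(line)
  raise "unexpected reply: #{line.inspect}" unless line && line.start_with?("$")
  Integer(line[1..-1].strip)
end

# Hypothetical sketch: send SYNC to the master, skip the initial RDB
# dump, then append the raw replication stream to out_path, fsyncing
# after every chunk. Host/port/path names are illustrative.
def dump_replication_stream(host, port, out_path)
  sock = TCPSocket.new(host, port)
  sock.write("SYNC\r\n")
  rdb_bytes = bulk_length(sock.gets) # master sends the dump as one bulk payload
  sock.read(rdb_bytes)               # discard the database dump
  File.open(out_path, "ab") do |f|
    loop do
      f.write(sock.readpartial(4096))
      f.fsync
    end
  end
rescue EOFError
  sock && sock.close
end
```

The discarded RDB payload is where the "transform the dump into AOF commands" idea from above would plug in.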

Cheers,
Pieter

Aníbal Rojas

unread,
Jun 10, 2010, 3:45:26 PM6/10/10
to redi...@googlegroups.com
Yes, I understand that rewriting the AOF requires the data in RAM, and
an ever-growing AOF file eventually won't serve any purpose.

But in this scenario you don't need or want the AOF to be rewritten in
the background; I would just spin off a new slave. The parent will
need to fork to complete the initial sync, but the persistence-only
slave will eat no memory.

--
Aníbal Rojas
Ruby on Rails Web Developer
http://www.google.com/profiles/anibalrojas
