I was doing some tests to figure out how much memory Redis will use
per key in my use case and found that redis-server dies when I reach
around 2.7GB in "used_memory_human" as reported by INFO. At the same
time, top reports that redis-server was using 4GB, which is the
memory limit since I compiled it in 32-bit mode.
Why did the redis-server process die? It just dies silently, with no
entry in any log. Is it a Redis bug, or is it possible that Linux killed
it because it tried to allocate more than 4GB?
Also, since redis-server has allocated 4GB but the database is taking
only 2.7GB, what is the other 1.3GB being used for?
Another odd thing is that I set maxmemory to 3.5GB, and yet redis-
server still allocated 4GB.
This happened on CentOS 5.4 64-bit running Redis master (downloaded
from GitHub yesterday) compiled for 32-bit.
Hello!
This sounds like the Linux OOM killer. Could you please verify using
the "dmesg" command?
When you set a lower limit, it still allocates 4GB because Redis can
only track the sum of all the malloc() calls it makes, while the actual
memory usage can differ because of malloc() overhead, fragmentation,
and so forth.
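The gap in this thread's own numbers makes the point. A rough sketch, using the ~4GB resident size reported by top and the last "bytes in use" figure from the log output quoted later in the thread:

```shell
# Rough overhead estimate from the figures reported in this thread:
# top saw ~4GB resident while INFO reported ~2.8GB in use.
rss_bytes=$((4 * 1024 * 1024 * 1024))   # 32-bit ceiling seen by top
used_memory=2836711677                  # last "bytes in use" from the log
overhead=$((rss_bytes - used_memory))
echo $((overhead * 100 / rss_bytes))    # percent of RSS Redis can't account for; prints 33
```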
Cheers,
Salvatore
--
Salvatore 'antirez' Sanfilippo
http://invece.org
"Once you have something that grows faster than education grows,
you’re always going to get a pop culture.", Alan Kay
php[2697]: segfault at 00000000000000c0 rip 0000000000569151 rsp
00007ffffb2397d0 error 4
And then nothing else.
On 18 mar, 12:07, Salvatore Sanfilippo <anti...@gmail.com> wrote:
> [...]
Please, can you start Redis under gdb?

gdb ./redis-server
run

Then try to make it crash and report the output? A backtrace (gdb's
"bt" command) would also help.

Many thanks,
Salvatore

P.S. I don't have a Linux box with enough RAM at hand currently...
> --
> You received this message because you are subscribed to the Google Groups "Redis DB" group.
> To post to this group, send email to redi...@googlegroups.com.
> To unsubscribe from this group, send email to redis-db+u...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/redis-db?hl=en.
--
Salvatore 'antirez' Sanfilippo
[3322] 18 Mar 19:28:04 - DB 1: 524289 keys (0 volatile) in 1048576
slots HT.
[3322] 18 Mar 19:28:04 - 2 clients connected (0 slaves), 2807565005
bytes in use, 0 shared objects
[3322] 18 Mar 19:28:09 - DB 1: 527010 keys (0 volatile) in 1048576
slots HT.
[3322] 18 Mar 19:28:09 - 2 clients connected (0 slaves), 2822146171
bytes in use, 0 shared objects
[3322] 18 Mar 19:28:14 - DB 1: 529729 keys (0 volatile) in 1048576
slots HT.
[3322] 18 Mar 19:28:14 - 2 clients connected (0 slaves), 2836711677
bytes in use, 0 shared objects
zmalloc: Out of memory trying to allocate 71 bytes
[New Thread 0xf7f4b6c0 (LWP 3322)]
Program received signal SIGABRT, Aborted.
0xffffe410 in __kernel_vsyscall ()
On 18 mar, 14:46, Salvatore Sanfilippo <anti...@gmail.com> wrote:
It's simply out of memory: there is no more physical memory left.
If you tune Linux's overcommit settings, I think it will start to
swap instead.
In order to avoid this the best thing to do is to set maxmemory, so
that Redis will start to reply with errors to write operations once
the limit is reached.
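In redis.conf that advice looks roughly like this (a sketch: the value is only an example based on this thread, and if your build predates memory-unit suffixes, give the value in plain bytes):

```
# redis.conf fragment (sketch): cap Redis below the 32-bit ceiling so
# writes fail with an error instead of the process being killed.
# 2560mb is an example figure, not a recommendation.
maxmemory 2560mb
```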
Cheers,
Salvatore
--
Salvatore 'antirez' Sanfilippo
Should I set maxmemory to 2.6GB then?
Your Redis binary is compiled for a 32-bit target, so I think it has
already used the whole "break", i.e. all 4GB of address space. Even if
Redis reports a lower memory usage, the real usage is possibly near
4GB because of malloc overhead, the stack, and so forth.
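The 4GB figure here is just the 32-bit address-space ceiling; a quick sanity check of the arithmetic:

```shell
# A 32-bit process can address at most 2^32 bytes, no matter how much
# RAM the machine has.
echo $((1 << 32))                        # prints 4294967296
echo $(( (1 << 32) / 1024 / 1024 / 1024 ))   # prints 4 (GB)
```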
> Should i set maxmemory to 2.6G then?
Please try to do this: find the limit at which it does not crash, and
check the Resident Set Size with ps.
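For example (a sketch: on the real box you would substitute redis-server's PID, e.g. via pidof; here the snippet inspects the current shell so it runs anywhere):

```shell
# Resident Set Size in kilobytes, straight from ps.
# For Redis you would use:  ps -o rss= -p "$(pidof redis-server)"
rss_kb=$(ps -o rss= -p $$)
echo "$rss_kb"
```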
Cheers,
Salvatore
--
Salvatore 'antirez' Sanfilippo
In the first message I said that the usage measured with top (the real
usage, then) was 4GB while the usage returned by INFO was 2.6GB. So it
seems that the 1.4GB difference really is overhead, right? That's a
lot of overhead :(
> Please try to do this: find the limit so that it does not crash, and
> check with PS the Resident Set Size.
2.6GB seems to be the limit in my use case. This is how I measured it:
I configured Redis to use AOF with everysec and started inserting data
until it crashed (that happened at 2.6GB; real usage returned by ps
was indeed around 4GB). Then I set maxmemory to 2.55GB and started
Redis again. It loaded everything fine from the AOF, but it seems it
didn't enforce maxmemory during startup (I had the same 2.6GB of data
after the AOF finished loading). I think that's a bug. Anyway, after
that I deleted some keys until usage came below 2.55GB and started to
insert them again, and this time the 2.55GB maxmemory limit was
enforced, denying new inserts, and I had no more crashes.
Well, supposing there's no way to handle or avoid this kind of crash
in the 32-bit version when real usage hits 4GB, I think you could place
a warning on the wiki telling people to monitor the real process usage
instead of the usage returned by INFO. This sounds dumb (and maybe I'm
dumb after all hehehe :D), but I got really confused about this.
On 18 mar, 16:01, Salvatore Sanfilippo <anti...@gmail.com> wrote: