weird memory leak


hymloth

Apr 17, 2012, 11:26:28 AM
to Redis DB
Hello guys

I am running Redis 2.4.10 on Ubuntu 11.10 and I am seeing a very weird
situation. The Redis server works fine for 3-4 days, but then it suddenly
starts to allocate more and more memory. I examined the logs, and while
the server held the same number of keys, the memory was "leaking":

[27086] 17 Apr 18:22:54 - DB 0: 130984 keys (0 volatile) in 131072 slots HT.
[27086] 17 Apr 18:22:54 - DB 1: 164 keys (0 volatile) in 256 slots HT.
[27086] 17 Apr 18:22:54 - DB 2: 286 keys (0 volatile) in 512 slots HT.
[27086] 17 Apr 18:22:54 - DB 4: 3 keys (0 volatile) in 4 slots HT.
[27086] 17 Apr 18:22:54 - 9 clients connected (0 slaves), 38927384 bytes in use
[27086] 17 Apr 18:22:59 - DB 0: 130984 keys (0 volatile) in 131072 slots HT.
[27086] 17 Apr 18:22:59 - DB 1: 164 keys (0 volatile) in 256 slots HT.
[27086] 17 Apr 18:22:59 - DB 2: 286 keys (0 volatile) in 512 slots HT.
[27086] 17 Apr 18:22:59 - DB 4: 3 keys (0 volatile) in 4 slots HT.
[27086] 17 Apr 18:22:59 - 9 clients connected (0 slaves), 42957840 bytes in use # what the hell?

After the first leak it seems to keep leaking every 2 hours or so,
progressively: from 38 MB it can reach 780 MB within 2 days.

Any clues?

Restarting the server solves the problem of course.
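
In case it helps, this is roughly how I've been watching the growth (a
minimal sketch; it assumes redis-cli is on the PATH and the server is
listening on the default port):

# sample used_memory once a minute, with a timestamp
while true; do
    echo "$(date '+%F %T') $(redis-cli INFO | grep '^used_memory:')" >> redis-mem.log
    sleep 60
done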

Pieter Noordhuis

Apr 17, 2012, 12:12:59 PM
to redi...@googlegroups.com
What is going on in those keys? What kind of values do you store and
how do you update them? Obviously, larger string values/lists/sets/etc
use more memory.

Cheers,
Pieter


Salvatore Sanfilippo

Apr 17, 2012, 12:14:37 PM
to redi...@googlegroups.com
On Tue, Apr 17, 2012 at 6:12 PM, Pieter Noordhuis <pcnoo...@gmail.com> wrote:
> What is going on in those keys? What kind of values do you store and
> how do you update them? Obviously, larger string values/lists/sets/etc
> use more memory.

In this case it's strange that restarting the system fixes the
issue... Maybe an INFO output and a CLIENT LIST could help us.
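
Both are easy to capture from the command line (a sketch, assuming a
local instance on the default port):

redis-cli INFO > redis-info.txt
redis-cli CLIENT LIST > redis-clients.txt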

Salvatore

--
Salvatore 'antirez' Sanfilippo
open source developer - VMware

http://invece.org
"We are what we repeatedly do. Excellence, therefore, is not an act,
but a habit." -- Aristotele

hymloth

Apr 18, 2012, 7:45:09 AM
to Redis DB
At this very moment the server seems to behave correctly:

INFO

redis_version:2.4.10
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
gcc_version:4.6.1
process_id:27086
uptime_in_seconds:73854
uptime_in_days:0
lru_clock:1354353
used_cpu_sys:11.80
used_cpu_user:11.62
used_cpu_sys_children:1.41
used_cpu_user_children:8.54
connected_clients:10
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:40409840
used_memory_human:38.54M
used_memory_rss:43380736
used_memory_peak:41046344
used_memory_peak_human:39.14M
mem_fragmentation_ratio:1.07
mem_allocator:jemalloc-2.2.5
loading:0
aof_enabled:0
changes_since_last_save:3120
bgsave_in_progress:0
last_save_time:1334748423
bgrewriteaof_in_progress:0
total_connections_received:12
total_commands_processed:1239298
expired_keys:2
evicted_keys:0
keyspace_hits:640942
keyspace_misses:6
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:918
vm_enabled:0
role:master
db0:keys=132451,expires=0
db1:keys=164,expires=0
db2:keys=285,expires=0
db4:keys=3,expires=0

CLIENT LIST

addr=127.0.0.1:34081 fd=5 idle=73887 flags=N db=1 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=hlen
addr=127.0.0.1:34083 fd=6 idle=37 flags=N db=1 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=hexists
addr=127.0.0.1:34084 fd=7 idle=73887 flags=N db=2 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=exec
addr=127.0.0.1:34085 fd=8 idle=130 flags=N db=2 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=exec
addr=127.0.0.1:34088 fd=9 idle=11739 flags=N db=2 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=get
addr=127.0.0.1:34092 fd=10 idle=73886 flags=N db=1 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=hlen
addr=127.0.0.1:34093 fd=11 idle=73886 flags=N db=2 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=exec
addr=127.0.0.1:34096 fd=12 idle=9233 flags=N db=2 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=get
addr=127.0.0.1:34102 fd=13 idle=286 flags=N db=0 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=exec
addr=127.0.0.1:46531 fd=14 idle=0 flags=N db=0 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=client


I had the exact same problem with version 2.4.5. I have a Python-based
stack, and Redis runs daemonized.



hymloth

Apr 20, 2012, 7:21:37 PM
to Redis DB
And at this very moment the server is in the "leaking state":

INFO

redis_version:2.4.10
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
gcc_version:4.6.1
process_id:27086
uptime_in_seconds:288484
uptime_in_days:3
lru_clock:1375816
used_cpu_sys:42.87
used_cpu_user:62.95
used_cpu_sys_children:5.61
used_cpu_user_children:35.69
connected_clients:10
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:228356592
used_memory_human:217.78M
used_memory_rss:239202304
used_memory_peak:231316344
used_memory_peak_human:220.60M
mem_fragmentation_ratio:1.05
mem_allocator:jemalloc-2.2.5
loading:0
aof_enabled:0
changes_since_last_save:361
bgsave_in_progress:0
last_save_time:1334963764
bgrewriteaof_in_progress:0
total_connections_received:18
total_commands_processed:5403807
expired_keys:2
evicted_keys:0
keyspace_hits:2787947
keyspace_misses:17
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:4298
vm_enabled:0
role:master
db0:keys=138210,expires=0
db1:keys=164,expires=0
db2:keys=284,expires=0
db4:keys=3,expires=0

CLIENT LIST

addr=127.0.0.1:34081 fd=5 idle=288614 flags=N db=1 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=hlen
addr=127.0.0.1:34083 fd=6 idle=41 flags=N db=1 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=hexists
addr=127.0.0.1:34084 fd=7 idle=288614 flags=N db=2 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=exec
addr=127.0.0.1:34085 fd=8 idle=95 flags=N db=2 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=exec
addr=127.0.0.1:34088 fd=9 idle=14162 flags=N db=2 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=del
addr=127.0.0.1:34092 fd=10 idle=288613 flags=N db=1 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=hlen
addr=127.0.0.1:34093 fd=11 idle=288613 flags=N db=2 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=exec
addr=127.0.0.1:34096 fd=12 idle=14142 flags=N db=2 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=get
addr=127.0.0.1:39192 fd=13 idle=191 flags=N db=0 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=exec
addr=127.0.0.1:58287 fd=14 idle=0 flags=N db=0 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=client

Salvatore Sanfilippo

Apr 21, 2012, 5:26:07 AM
to redi...@googlegroups.com
Hello,

when the server is in "leaking state" you should try to perform a
BGSAVE, take the output RDB, and run the following RDB memory
estimation tool against it:

https://github.com/sripathikrishnan/redis-rdb-tools
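
A typical run looks more or less like this (a sketch: the dump path is
illustrative, and the flags are those documented by the tool):

redis-cli BGSAVE
# wait for the background save to finish, then generate a per-key
# memory report from the dump:
rdb -c memory /var/lib/redis/dump.rdb > memory-report.csv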

Alternatively, take the RDB file, load it on another computer, and check
the memory usage with INFO, so that we can see whether the data set
really consumes more memory than needed.

Or you can simply SAVE, stop the server, restart it, and see whether the
memory usage is similar or not.
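
Concretely, something along these lines (a sketch; the config file path
is illustrative):

redis-cli INFO | grep used_memory:   # note the value while "leaking"
redis-cli SAVE
redis-cli SHUTDOWN
redis-server /etc/redis/redis.conf   # restart from the saved dump
redis-cli INFO | grep used_memory:   # compare once the dataset is loaded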

The problem is that, for an investigation that can lead to results, we
need to understand whether the dataset is really occupying this memory,
and if not, where the memory is lost.

If we have evidence that it is not, you can take a core dump with GDB
and send it to me, so that by analyzing the memory we can try to
understand where it is leaking.
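
One way to take such a dump without stopping the server (a sketch; it
assumes gdb's gcore utility is available and that 27086 is the
redis-server PID, as in your logs):

gcore -o redis-core 27086   # writes redis-core.27086 in the current directory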

Also note that you can use the Redis 2.4.11 redis-cli to check whether
there is some key that is growing a lot for some reason:

redis-cli --bigkeys

Thanks for your help.

Cheers,
Salvatore


hymloth

Apr 21, 2012, 10:05:46 AM
to Redis DB
Salvatore,

I can verify that the dataset is not occupying the memory. I've checked
it with redis-rdb-tools, plus the fact that restarting the server
remedies the problem (temporarily).
I have generated a core file, which I've sent you via email.
Hope that helps.

Salvatore Sanfilippo

Apr 21, 2012, 10:47:52 AM
to redi...@googlegroups.com
On Sat, Apr 21, 2012 at 4:05 PM, hymloth <santos.k...@gmail.com> wrote:
> Salvatore,
>
> I can verify that the dataset is not occupying the memory. I've
> checked it with redis-rdb-tools, plus the fact that restarting the
> server remedies the problem (temporarily).
> I have generated a core file, which I've sent you via email.
> Hope that helps.

Ok, that's interesting. Could you please give me a rough idea, even an
approximate one, of the difference between legitimate memory usage and
leaked memory? Thank you.


Salvatore Sanfilippo

Apr 22, 2012, 5:42:59 AM
to redi...@googlegroups.com
Hello,

an update on this issue: after investigating the core file I'm 99.99%
sure this is due to the SLOWLOG accumulating large HMSET commands.
We have seen this already in the past, so I finally fixed this issue in
all the branches. Example:

redis 127.0.0.1:6379> slowlog get
1) 1) (integer) 1
   2) (integer) 1335087637
   3) (integer) 1656
   4) 1) "set"
      2) "foo"
      3) "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA... (16 more bytes)"

The same happens with excess arguments: the last retained argument is
set to "... (5 more arguments)" and so forth.

Our friend will ack here in a few days if he no longer experiences the
problem with the updated version (or after setting the slowlog parameter
to -1 to disable it).
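
For the workaround, the slowlog can be inspected and disabled at runtime
(a sketch; note that CONFIG SET is not persisted across restarts unless
redis.conf is updated as well):

redis-cli SLOWLOG LEN                              # entries currently retained
redis-cli CONFIG SET slowlog-log-slower-than -1    # stop logging slow commands
redis-cli SLOWLOG RESET                            # free entries already accumulated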

Redis 2.4.12 with this fix will be released within 24h so that we avoid
running into this problem again.

Cheers,
Salvatore

hymloth

Apr 28, 2012, 9:49:15 AM
to Redis DB
SOLVED...

This was an already known SLOWLOG issue; just setting
slowlog-log-slower-than to -1 fixed the problem.
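
For anyone who finds this later, the equivalent redis.conf line, so the
workaround survives a restart, is:

slowlog-log-slower-than -1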

Thanks Salvatore!


Salvatore Sanfilippo

Apr 28, 2012, 9:57:46 AM
to redi...@googlegroups.com
On Sat, Apr 28, 2012 at 3:49 PM, hymloth <santos.k...@gmail.com> wrote:
> This was an already known SLOWLOG issue; just setting
> slowlog-log-slower-than to -1 fixed the problem.
>
> Thanks Salvatore!

Thanks for the ACK!

Cheers,
Salvatore