Re: Continual growth of memory usage although Number/Size of key and value stored remains the same. Help


M. Edward (Ed) Borasky

unread,
Sep 15, 2012, 1:06:55 PM
to redi...@googlegroups.com
Just out of curiosity, do you know what the users / application are
doing when one of these events occurs? Is there something out of the
ordinary in the calling pattern?

On Fri, Sep 14, 2012 at 4:11 PM, mangigo <workin...@gmail.com> wrote:
> Hi all,
>
> I've been running Redis in production, continually writing and reading a
> large number of keys using only HGET and HSET operations. On every key I
> create or update, I also set a 2-week EXPIRE. Here's the issue:
>
> A month ago, we experienced heavy swap usage on the box and saw that Redis
> had consumed all of the physical memory we had. So we upgraded to a bigger
> box. However, that didn't solve the problem; we experienced the same issue
> again. So we investigated what was wrong. Our findings:
>
> 1. The number of keys is not increasing; it's actually slightly decreasing.
> 2. Redis consumes a lot of memory only for a brief period
> (used_memory_human:33.53G), which causes the swapping. Then memory
> consumption falls back to the normal state (used_memory_human:19.39G). This
> has happened twice a week for the past two weeks.
> 3. AOF is disabled, and we don't run any special operation on the box (like
> SAVE) during the high-memory period.
> 4. I used redis-rdb-tools to generate a memory report and summed the size of
> each key during a normal period (I didn't capture one while the swapping was
> happening, though). The sum was 10.7G, which is less than the 19.39G of the
> normal state and far less than the 33.53G of the high-memory state. Here's
> an example of one of the keys we store, from redis-rdb-tools:
>
> 1,hash,"hash:keywordtopology:201237:34:10:17th st.
> cafe",1937,hashtable,4,325
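That redis-rdb-tools line is one CSV record. Assuming the usual memory-report column order (database, type, key, size_in_bytes, encoding, num_elements, len_largest_element), it can be pulled apart with a few lines of Python:

```python
import csv
import io

# One record from a redis-rdb-tools memory report (CSV). Assumed column
# order: database, type, key, size_in_bytes, encoding, num_elements,
# len_largest_element.
record = '1,hash,"hash:keywordtopology:201237:34:10:17th st. cafe",1937,hashtable,4,325'
db, rtype, key, size, encoding, num_elems, largest = next(csv.reader(io.StringIO(record)))

print(rtype, int(size), encoding, int(num_elems))  # hash 1937 hashtable 4
```

So this particular hash has 4 fields, is stored with the hashtable encoding, and is estimated at 1937 bytes; summing the size column over the whole report is how the 10.7G figure above was obtained.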
>
>
> Could you help me figure out what the possible issues are here?
>
> Thanks
> Matt
>
> Here's the details of my environment.
>
> 1. Redis INFO
>
> redis_version:2.4.15
> redis_git_sha1:00000000
> redis_git_dirty:0
> arch_bits:64
> multiplexing_api:epoll
> gcc_version:4.1.2
> process_id:5550
> uptime_in_seconds:440625
> uptime_in_days:5
> lru_clock:548519
> used_cpu_sys:121.16
> used_cpu_user:234.46
> used_cpu_sys_children:14.27
> used_cpu_user_children:86.86
> connected_clients:58
> connected_slaves:0
> client_longest_output_list:0
> client_biggest_input_buf:0
> blocked_clients:0
> used_memory:20823389320
> used_memory_human:19.39G
> used_memory_rss:25033265152
> used_memory_peak:36320215440
> used_memory_peak_human:33.83G
> mem_fragmentation_ratio:1.20
> mem_allocator:jemalloc-3.0.0
> loading:0
> aof_enabled:0
> changes_since_last_save:30740
> bgsave_in_progress:0
> last_save_time:1347659465
> bgrewriteaof_in_progress:0
> total_connections_received:690801
> total_commands_processed:69336281
> expired_keys:3495791
> evicted_keys:0
> keyspace_hits:15214393
> keyspace_misses:14499297
> pubsub_channels:0
> pubsub_patterns:0
> latest_fork_usec:114
> vm_enabled:0
> role:master
> db0:keys=8,expires=0
> db1:keys=6985575,expires=6985572
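One sanity check on the INFO output above: used_memory_rss divided by used_memory is exactly the reported mem_fragmentation_ratio, so in the steady state shown here the RSS overshoot is still modest (~1.20). A quick sketch recomputing it from the raw counters:

```python
# Recompute the fragmentation ratio from the raw INFO counters quoted above.
info_text = """used_memory:20823389320
used_memory_rss:25033265152
used_memory_peak:36320215440"""

info = {}
for line in info_text.splitlines():
    key, _, value = line.partition(":")
    info[key] = int(value)

ratio = info["used_memory_rss"] / info["used_memory"]
print(round(ratio, 2))  # 1.2, matching mem_fragmentation_ratio:1.20
```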
>
> 2. Machine Info
>
> kw-queue:/mnt/redis # cat /proc/cpuinfo
> processor : 0
> vendor_id : GenuineIntel
> cpu family : 6
> model : 26
> model name : Intel(R) Xeon(R) CPU X5550 @ 2.67GHz
> stepping : 5
> cpu MHz : 2666.760
> cache size : 8192 KB
> physical id : 0
> siblings : 1
> core id : 0
> cpu cores : 1
> fpu : yes
> fpu_exception : yes
> cpuid level : 11
> wp : yes
> flags : fpu de tsc msr pae cx8 apic sep cmov pat clflush acpi mmx
> fxsr sse sse2 ss ht syscall nx lm constant_tsc pni ssse3 cx16 sse4_1 sse4_2
> popcnt lahf_lm
> bogomips : 6671.90
> clflush size : 64
> cache_alignment : 64
> address sizes : 40 bits physical, 48 bits virtual
> power management:
>
> [processors 1-3 omitted for brevity: identical to processor 0 except for
> "physical id" (1, 2, and 3 respectively)]
>
> MemTotal: 35840000 kB
> MemFree: 161304 kB
> Buffers: 860 kB
> Cached: 13760992 kB
> SwapCached: 14364 kB
> Active: 21105648 kB
> Inactive: 13298840 kB
> HighTotal: 0 kB
> HighFree: 0 kB
> LowTotal: 35840000 kB
> LowFree: 161304 kB
> SwapTotal: 16777208 kB
> SwapFree: 16704552 kB
> Dirty: 4006948 kB
> Writeback: 8300 kB
> AnonPages: 20641232 kB
> Mapped: 11904 kB
> Slab: 418560 kB
> PageTables: 84624 kB
> NFS_Unstable: 0 kB
> Bounce: 0 kB
> CommitLimit: 34697208 kB
> Committed_AS: 41484328 kB
> VmallocTotal: 34359738367 kB
> VmallocUsed: 8756 kB
> VmallocChunk: 34359729591 kB
>
>



--
Twitter: http://twitter.com/znmeb; Computational Journalism Publishers
Workbench: http://j.mp/QCsXOr

How the Hell can the lion sleep with all those people singing "A weem
oh way!" at the top of their lungs?

mangigo

unread,
Sep 19, 2012, 8:34:21 PM
to redi...@googlegroups.com
Thanks for all the responses. It turns out to be this: we have a slave node set up, and we reset the master every week (save, stop, and start). On restart, the master saves the entire dataset and sends it to the slave as the initial replication sync, which makes memory consumption skyrocket during the save. On top of that, while the save is running there is ongoing activity between the master and its clients, which makes the problem worse: those changes have to be sent to the slave too, but they can't be sent until the initial save and transfer of the startup dataset are done, so they pile up and memory consumption grows for the whole duration.

Memory consumption goes back to normal once the transfer has completed.
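To put rough numbers on this (all figures below except the 19.39G baseline are hypothetical, chosen only for illustration): while the forked child is saving and transferring the RDB, copy-on-write duplicates every page the parent dirties, and every ongoing client write also accumulates in the slave's output buffer until the transfer finishes. A back-of-the-envelope sketch of how that reaches the 33G range:

```python
# Hypothetical estimate of the memory peak during a full resync. Only
# baseline_gb comes from the thread; the rest are assumed for illustration.
baseline_gb = 19.39        # used_memory_human in the normal state
write_rate_mb_s = 25       # assumed client write traffic during the sync
sync_duration_s = 300      # assumed time to BGSAVE + transfer the RDB
cow_fraction = 0.4         # assumed share of pages dirtied while forked

replication_buffer_gb = write_rate_mb_s * sync_duration_s / 1024
cow_overhead_gb = baseline_gb * cow_fraction
peak_gb = baseline_gb + replication_buffer_gb + cow_overhead_gb
print(round(peak_gb, 2))
```

Under these assumptions the peak lands in the mid-30G range, i.e. the same order as the spikes observed above, even though the dataset itself never grows.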

I think this explains what I've been seeing. Our solution is to stop resetting the MASTER every week (I have no idea why we were doing that :( ).

Thanks all for your help
- Matt

On Sunday, September 16, 2012 3:04:25 PM UTC-7, A wrote:
We had the same experience using remote replication when the slave node was not able to receive all the updated keys in a timely manner. That caused 10x memory consumption growth on the master node, with all the aftermath.