Redis MSOpenTech: maxmemory "OOM command not allowed when used memory > 'maxmemory'" error even though the RDB file after save is only 3 GB


tinti...@gmail.com

Aug 6, 2014, 12:10:04 PM
to redi...@googlegroups.com

The Redis server version I use is 2.8.9 from the MSOpenTech GitHub. Can anyone shed light on why the Redis "info" command indicates that used memory is 21 GB even though the RDB file saved on disk is less than 4 GB? I successfully ran a "save" command before noting down the size of the RDB file. The qfork heap file is 30 GB, as configured in redis.windows.conf.

Configuration :

maxheap 30gb
maxmemory 20gb
appendonly no
save 18000 1

The server has 192 GB of physical RAM, but unfortunately only about 60 GB of free disk space, so I had to set maxheap and maxmemory to 30 GB and 20 GB respectively to leave enough space to persist the data on disk.

I'm using redis as a cache and the save interval is large as seeding the data takes a long time and I don't want constant writing to file. Once seeding is done, the DB is updated with newer data once a day.

My questions are :

  1. How is the saved RDB file so small? Is it solely due to compression (rdbcompression yes)? If yes, can the same compression mechanism be used to store data in memory too? I make use of lists extensively.

  2. Before I ran the "save" command, the working set and private bytes in Process Explorer were very small. Is there a way I can break down memory usage by data structure? For example: lists use x amount, hashes use y amount, etc.?

  3. Is there any way I can store the AOF file (I turned off AOF and use RDB because the AOF files were filling up disk space fast) on a network path (shared drive or NAS)? I tried setting the dir config to \someip\some folder but the service failed to start with the message "Cant CHDIR to location".

I'm unable to post images, but this is what Process Explorer has to say about the redis-server instance:

  1. Virtual Memory:

    • Private Bytes : 72,920 K
    • Peak Private Bytes : 31,546,092 K
    • Virtual Size : 31,558,356 K
    • Page faults : 12,479,550
  2. Physical Memory:

    • Working Set : 26,871,240 K
    • WS Private : 63,260 K
    • WS Shareable : 26,807,980 K
    • WS Shared : 3,580 K
    • Peak Working Set : 27,011,488 K

The latest saved dump.rdb is 3.81 GB and the heap file is 30 GB.

# Server
redis_version:2.8.9
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:1fe181ad2447fe38
redis_mode:standalone
os:Windows  
arch_bits:64
multiplexing_api:winsock_IOCP
gcc_version:0.0.0
process_id:12772
run_id:553f2b4665edd206e632b7040aa76c0b76083f4d
tcp_port:6379
uptime_in_seconds:24087
uptime_in_days:0
hz:50
lru_clock:14825512
config_file:D:\RedisService/redis.windows.conf

# Clients
connected_clients:2
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:21484921736
used_memory_human:20.01G
used_memory_rss:21484870536
used_memory_peak:21487283360
used_memory_peak_human:20.01G
used_memory_lua:3156992
mem_fragmentation_ratio:1.00
mem_allocator:dlmalloc-2.8

# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1407328559
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:1407328560
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok

# Stats
total_connections_received:9486
total_commands_processed:241141370
instantaneous_ops_per_sec:0
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:30143
keyspace_misses:81
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:1341134

Josiah Carlson

Aug 6, 2014, 4:15:20 PM
to redi...@googlegroups.com
Replies inline.

On Wed, Aug 6, 2014 at 9:10 AM, <tinti...@gmail.com> wrote:

The Redis server version I use is 2.8.9 from the MSOpenTech GitHub. Can anyone shed light on why the Redis "info" command indicates that used memory is 21 GB even though the RDB file saved on disk is less than 4 GB? I successfully ran a "save" command before noting down the size of the RDB file. The qfork heap file is 30 GB, as configured in redis.windows.conf.

Configuration :

maxheap 30gb
maxmemory 20gb
appendonly no
save 18000 1

The server has 192 GB of physical RAM, but unfortunately only about 60 GB of free disk space, so I had to set maxheap and maxmemory to 30 GB and 20 GB respectively to leave enough space to persist the data on disk.

I'm using redis as a cache and the save interval is large as seeding the data takes a long time and I don't want constant writing to file. Once seeding is done, the DB is updated with newer data once a day.

My questions are :

  1. How is the saved RDB file so small? Is it solely due to compression (rdbcompression yes)? If yes, can the same compression mechanism be used to store data in memory too? I make use of lists extensively.

Redis stores data in-memory using one of a few different data structures. They are optimized for performance primarily, and memory use second. When performing a snapshot, Redis encodes those structures in a different way, eliminating much of the overhead of the structures, at the cost of them not being in any way accessible using the algorithms normally intended for the non-encoded structure. Incidentally, the decoding process is part of the reason why starting up Redis with a large dataset can take a long-ish time. The encoding, combined with a bit of internal data compression typically results in snapshots taking 10-20% of the in-memory data size. So if you have 10 gigs of memory used in Redis, it's not uncommon to see Redis have a 1-2 gig snapshot.
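The scale of the win is easy to demonstrate. The sketch below uses Python's zlib rather than the LZF compression Redis actually applies inside RDB files, and the list entries are fabricated to roughly match the shape described in the question (delimited strings of 100-250 characters, one per date), but the effect on repetitive data is comparable:

```python
import zlib

# Fabricated list entries, one delimited string per date, roughly the
# shape described in the question. The field names are made up.
entries = [
    "2014-08-%02d|field-a|field-b|%s" % (day, "v" * 150)
    for day in range(1, 31)
]
raw = "".join(entries).encode()

# zlib stands in here for the LZF compression Redis uses in RDB files.
packed = zlib.compress(raw)

print("raw bytes:", len(raw))
print("compressed bytes:", len(packed))
print("ratio: %.0f%%" % (100 * len(packed) / len(raw)))
```

On data this repetitive the compressed size typically lands well under 20% of the raw size, the same ballpark as the snapshot-to-memory ratio above. Note that snapshots also shed per-structure pointer overhead, which compression alone does not capture.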

In your case, there are a couple things that you might be able to do to reduce the amount of memory that Redis uses, but it depends on the data you are storing in your lists. If you can share, how big is your typical item in a list, and how large do your lists tend to be?
 
  2. Before I ran the "save" command, the working set and private bytes in Process Explorer were very small. Is there a way I can break down memory usage by data structure? For example: lists use x amount, hashes use y amount, etc.?

While Redis does allow you to introspect on individual keys, memory use on a per-key basis is not provided. There are several tools that can decode the Redis RDB snapshot, and some will give you estimates of the memory used by keys when resident. I'd check out redis-rdb-tools first, then look for others if it isn't able to do what you need.
 
  3. Is there any way I can store the AOF file (I turned off AOF and use RDB because the AOF files were filling up disk space fast) on a network path (shared drive or NAS)? I tried setting the dir config to \someip\some folder but the service failed to start with the message "Cant CHDIR to location".

Map a drive in Windows, then at least you get a drive letter that you can work from. Alternatively, the network paths for Windows are usually of the form \\hostname\path, not \hostname\path as you listed. One concern that you should have if you go this way is that if the drive disappears for some reason (I've got virtual machines running on my Windows box that Windows sometimes loses network drive mounts to), it is more than a little bit likely to cause the Redis process to hang or crash. I don't know which, and it's only a hypothesis, as I have no idea what Microsoft did to Redis in its port.

AOF rewriting should keep your AOF to a reasonable size if it can keep up with your write volume, but having a bigger disk in your server will most likely offer the most benefit. Are you sure you can't just slip in a 1-4TB drive without anyone noticing?

I'm unable to post images, but this is what Process Explorer has to say about the redis-server instance:

For being unable to post images, the image you posted is coming in just fine ;)

Your 3.81 GB dump vs. 20 GB in memory is not surprising, falling well within the 10-20% estimated dump vs. in-memory sizes I offered earlier.

 - Josiah


Sai

Aug 6, 2014, 6:36:55 PM
to redi...@googlegroups.com
Hi Josiah,

The RDB file being smaller makes sense, although I did not expect to see so much difference between the in-memory representation and on disk. 

In your case, there are a couple things that you might be able to do to reduce the amount of memory that Redis uses, but it depends on the data you are storing in your lists. If you can share, how big is your typical item in a list, and how large do your lists tend to be?

The list has one entry per date and each item is a delimited string which is between 100 and 250 characters long, depending on the date.

I'd check out redis-rdb-tools first, then look for others if it isn't able to do what you need.

I'll check this out. Thank you.

Alternatively, the network paths for Windows are usually of the form \\hostname\path, not \hostname\path as you listed.

This was a typo; the config entry I put in was of the form \\hostname\path. I also tried creating a mapped drive, but Redis couldn't read the path; I get the same "Cant CHDIR" error. I did ensure that the service account under which Redis runs has permissions. I suspect that it's unable to parse the Windows path format, e.g. T:\RedisRDB. Also, the documentation states that the heap file (QFork) must be on a local drive. I'm not sure how it determines whether a drive is local as opposed to being a mapped drive, but that's the biggest stumbling block for me right now.

Are you sure you can't just slip in a 1-4TB drive without anyone noticing?

Unfortunately no :( These are servers in the enterprise data center and it would take weeks, if not months, to add storage.

I will play around with list-max-ziplist-entries 512 and list-max-ziplist-value 64 to see if they make a difference, understanding that insertion/read speed may be affected.

Cheers,
Sai

Jonathan Pickett

Aug 7, 2014, 1:10:21 AM
to redi...@googlegroups.com
Hi Sai,

The latest MSOpenTech version on GitHub has a new 'heapdir' flag that allows you to specify the location of the memory-mapped file used in our fork() emulation code. The path now defaults to AppData\Local\Redis rather than the same folder as the executable. It must be on a physical drive with a local path for page sharing to work between parent and child processes. See the redis.windows.conf file included with the binaries for a detailed description of this flag.

I believe that the 2.8.9 version also required the 'dir' directive to have a trailing backslash. This is now optional in the latest version.
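Putting the two flags together, a minimal redis.windows.conf excerpt might look like the following (the drive letters and folder names here are placeholders for illustration, not defaults):

```conf
# Heap file for the fork() emulation; must be on a local physical drive.
heapdir D:\RedisHeap\

# Working directory for the RDB dump; 2.8.9 requires the trailing backslash.
dir D:\RedisService\
```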

I am expecting that this week there will be an official (NuGet and Chocolatey) release of 2.8.12.

-- Jonathan (MSOpenTech)

Josiah Carlson

Aug 7, 2014, 3:45:27 AM
to redi...@googlegroups.com
Replies inline.

On Wed, Aug 6, 2014 at 3:36 PM, Sai <sg.ka...@gmail.com> wrote:
Hi Josiah,

The RDB file being smaller makes sense, although I did not expect to see so much difference between the in-memory representation and on disk. 

Structures + compression = huge wins.

In your case, there are a couple things that you might be able to do to reduce the amount of memory that Redis uses, but it depends on the data you are storing in your lists. If you can share, how big is your typical item in a list, and how large do your lists tend to be?

The list has one entry per date and each item is a delimited string which is between 100 and 250 characters long, depending on the date.

How long are your lists, typically?

This probably won't lead anywhere, but you can expect that each entry in a list has 10-20% wasted space due to data structure overhead.

I'd check out redis-rdb-tools first, then look for others if it isn't able to do what you need.

I'll check this out. Thank you.

Alternatively, the network paths for Windows are usually of the form \\hostname\path, not \hostname\path as you listed.

This was a typo; the config entry I put in was of the form \\hostname\path. I also tried creating a mapped drive, but Redis couldn't read the path; I get the same "Cant CHDIR" error. I did ensure that the service account under which Redis runs has permissions. I suspect that it's unable to parse the Windows path format, e.g. T:\RedisRDB. Also, the documentation states that the heap file (QFork) must be on a local drive. I'm not sure how it determines whether a drive is local as opposed to being a mapped drive, but that's the biggest stumbling block for me right now.

Jonathan's post should address this better than I can.

Are you sure you can't just slip in a 1-4TB drive without anyone noticing?

Unfortunately no :( These are servers in the enterprise data center and it would take weeks, if not months, to add storage.

I will play around with list-max-ziplist-entries 512 and list-max-ziplist-value 64 to see if they make a difference, understanding that insertion/read speed may be affected.

"list-max-ziplist-value 64" means that the maximum length of an item for the list itself to be ziplist-encoded is 64 bytes long. If your items are 100-250 characters long, none of your lists will ever be ziplists. If you set "list-max-ziplist-value 300", then it will take effect, but only for new lists.
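For reference, the two settings described above would look like this in redis.conf; 300 is just the example threshold from this thread, sized to cover the 100-250 character items:

```conf
# A list stays ziplist-encoded only while BOTH limits hold:
# at most 512 entries, none longer than 300 bytes.
list-max-ziplist-entries 512
list-max-ziplist-value 300
```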

To quickly swap representations for lists that match the criteria, you can run the following Lua script...

for _, key in ipairs(KEYS) do
    if tonumber(redis.call('llen', key)) <= 512 then
        -- PTTL returns milliseconds, or a negative value when no TTL is
        -- set; RESTORE expects milliseconds, with 0 meaning "no expiration"
        local ttl = redis.call('pttl', key)
        if ttl < 0 then
            ttl = 0
        end
        local dump = redis.call('dump', key)
        redis.call('del', key)
        redis.call('restore', key, ttl, dump)
    end
end

That's kind-of a dirty hack, but it should work.

 - Josiah