Active defrag not working on Redis 4.0.6?


sb56637

Mar 9, 2018, 10:07:10 AM
to Redis DB
Hi there, I'm not finding much documentation on the active defrag settings. I use Redis 4.0.6 as a RAM cache, and fragmentation becomes a problem after a few days of uptime, since I'm running on a VPS with limited RAM. But active defrag doesn't seem to be working for me.

Here's the active defrag part of my Redis conf (all left at the defaults):

# Enabled active defragmentation
activedefrag yes

# Minimum amount of fragmentation waste to start active defrag
active-defrag-ignore-bytes 100mb

# Minimum percentage of fragmentation to start active defrag
active-defrag-threshold-lower 10

# Maximum percentage of fragmentation at which we use maximum effort
active-defrag-threshold-upper 100

# Minimal effort for defrag in CPU percentage
active-defrag-cycle-min 25

# Maximal effort for defrag in CPU percentage
active-defrag-cycle-max 75
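In case that conf file isn't the one the server is actually reading, the effective values can be double-checked on the live instance with standard CONFIG commands (assuming the default localhost:6379 here; adjust for your setup):

```shell
# Confirm the running instance actually picked up the settings:
redis-cli config get activedefrag
redis-cli config get active-defrag-ignore-bytes
redis-cli config get active-defrag-threshold-lower

# The feature can also be toggled at runtime, without a restart:
redis-cli config set activedefrag yes
```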



And here's my redis-cli info:

# Server
redis_version:4.0.6
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:4d160ab87e0cc243
redis_mode:standalone
os:Linux 4.4.114-42-default x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:4.8.5
process_id:19763
run_id:2e81509198bd2c3656a154159b5995746d4c559e
tcp_port:0
uptime_in_seconds:84615
uptime_in_days:0
hz:10
lru_clock:10657952
executable:/usr/sbin/redis-server
config_file:/etc/redis/default.conf

# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:786275016
used_memory_human:749.85M
used_memory_rss:936640512
used_memory_rss_human:893.25M
used_memory_peak:791285088
used_memory_peak_human:754.63M
used_memory_peak_perc:99.37%
used_memory_overhead:59666702
used_memory_startup:487144
used_memory_dataset:726608314
used_memory_dataset_perc:92.47%
total_system_memory:4143022080
total_system_memory_human:3.86G
used_memory_lua:7837696
used_memory_lua_human:7.47M
maxmemory:786432000
maxmemory_human:750.00M
maxmemory_policy:allkeys-lfu
mem_fragmentation_ratio:1.19
mem_allocator:jemalloc-4.0.3
active_defrag_running:0
lazyfree_pending_objects:0

# Persistence
loading:0
rdb_changes_since_last_save:8121
rdb_bgsave_in_progress:0
rdb_last_save_time:1520607300
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:10
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:19701760
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0

# Stats
total_connections_received:239204
total_commands_processed:31779176
instantaneous_ops_per_sec:382
total_net_input_bytes:3396434849
total_net_output_bytes:30090604614
instantaneous_input_kbps:35.07
instantaneous_output_kbps:217.58
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:104
evicted_keys:565240
keyspace_hits:20285357
keyspace_misses:5376657
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:28364
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0

# Replication
role:master
connected_slaves:0
master_replid:6b5b2875c987d6ddb318c804b9ede9808b51f5d1
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

# CPU
used_cpu_sys:685.21
used_cpu_user:1837.27
used_cpu_sys_children:762.66
used_cpu_user_children:4182.13

# Cluster
cluster_enabled:0

# Keyspace
db0:keys=661766,expires=661753,avg_ttl=31404202896



So how is active-defrag-threshold-lower calculated? Based on mem_fragmentation_ratio:1.19 above, it looks like I have about 19% fragmentation, so it should have kicked in already.
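For reference, here's the arithmetic behind that 19%, a rough sketch using the RSS-based numbers from the INFO output above. (I understand Redis may derive its defrag trigger from jemalloc's internal stats rather than RSS, so this is only an approximation of the real condition.)

```python
# Fragmentation math from the INFO output above (RSS-based approximation).
used_memory = 786_275_016      # used_memory
used_memory_rss = 936_640_512  # used_memory_rss

frag_bytes = used_memory_rss - used_memory
frag_pct = (used_memory_rss / used_memory - 1) * 100

print(f"waste: {frag_bytes / 1024 / 1024:.1f} MB")  # ~143.4 MB
print(f"fragmentation: {frag_pct:.1f}%")            # ~19.1%

# Thresholds from the conf fragment above:
ignore_bytes = 100 * 1024 * 1024  # active-defrag-ignore-bytes 100mb
threshold_lower = 10              # active-defrag-threshold-lower 10

# By this (approximate) math, both conditions are met:
print("should trigger:", frag_bytes > ignore_bytes and frag_pct > threshold_lower)
```

Both the ~143 MB of waste and the ~19% ratio clear the configured minimums, which is why I expected defrag to be running.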

I also read this bug report, which pointed to Transparent Huge Pages as the cause, so I have disabled THP on the VM and restarted Redis:
# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]

Any other ideas about what might be going wrong? Thanks a lot!
