redis latency issues with a full LRU cache (with big values)


Fabien MARTY

Dec 9, 2014, 11:08:36 AM
to redi...@googlegroups.com
Hi,


First, we have to say that Redis is a great product. We use it a lot on dozens of machines as a message bus, a cache... and it works really well. Thanks.

But we have an issue with a Redis instance on a big NUMA machine (lscpu output at the end), and we need some help from the Redis community.

This instance works as an LRU cache (maxmemory enabled with the volatile-lru policy; redis INFO output at the end).
We store "big values" in it (string keys, a few MB each).
We have latency issues when the instance is full (>500ms latency with redis-cli --latency-history against localhost). When Redis is not full, the latency is "normal".
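The relevant configuration is essentially (a sketch; the 12gb value is inferred from the INFO output at the end):

maxmemory 12gb
maxmemory-policy volatile-lru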

OK, Redis is not optimized for big values, but >500ms of latency is huge!

Basic tests don't help:

- the slow log doesn't show anything interesting (the top entry is a DEL of one single string key)
- there is no swapping, neither system-wide nor at the process level
- Redis is launched with numactl --interleave=all
- there are no huge pages and no persistence at all
[...]

The Redis watchdog (CONFIG SET watchdog-period 500) is more interesting, with outputs like:

[40790 | signal handler] (1418137810)
--- WATCHDOG TIMER EXPIRED ---
redis-server(logStackTrace+0x4b)[0x4468cb]
/lib64/libc.so.6(madvise+0x7)[0x39516e5637]
/lib64/libpthread.so.0[0x3951a0f710]
/lib64/libc.so.6(madvise+0x7)[0x39516e5637]
redis-server(je_pages_purge+0xe)[0x486dce]
redis-server[0x480b2c]
redis-server(je_arena_dalloc_large+0x9b)[0x4823ab]
redis-server(decrRefCount+0x52)[0x429742]
redis-server(_dictClear+0x77)[0x419507]
redis-server(dictEmpty+0x20)[0x419580]
redis-server(flushdbCommand+0x30)[0x42aec0]
redis-server(call+0x64)[0x41c0d4]
redis-server(processCommand+0x3f7)[0x41c6e7]
redis-server(processInputBuffer+0x4b)[0x427e0b]
redis-server(readQueryFromClient+0x18b)[0x427ffb]
redis-server(aeProcessEvents+0x168)[0x4177a8]
redis-server(aeMain+0x2b)[0x4179db]
redis-server(main+0x2a1)[0x41f341]
/lib64/libc.so.6(__libc_start_main+0xfd)[0x395161ed1d]
redis-server[0x416e09]
[40790 | signal handler] (1418137810) --------



or


[40790 | signal handler] (1418140219)
--- WATCHDOG TIMER EXPIRED ---
redis-server(logStackTrace+0x4b)[0x4468cb]
/lib64/libc.so.6(madvise+0x7)[0x39516e5637]
/lib64/libpthread.so.0[0x3951a0f710]
/lib64/libc.so.6(madvise+0x7)[0x39516e5637]
redis-server(je_pages_purge+0xe)[0x486dce]
redis-server[0x480b2c]
redis-server(je_arena_dalloc_large+0x9b)[0x4823ab]
redis-server(je_arena_ralloc+0x6f0)[0x484be0]
redis-server(je_realloc+0x239)[0x47ad19]
redis-server(zrealloc+0x35)[0x4210b5]
redis-server(sdsRemoveFreeSpace+0x18)[0x41f8e8]
redis-server(clientsCronResizeQueryBuffer+0x5e)[0x41d56e]
redis-server(clientsCron+0x88)[0x41eae8]
redis-server(serverCron+0x133)[0x41ec33]
redis-server(aeProcessEvents+0x316)[0x417956]
redis-server(aeMain+0x2b)[0x4179db]
redis-server(main+0x2a1)[0x41f341]
/lib64/libc.so.6(__libc_start_main+0xfd)[0x395161ed1d]
redis-server[0x416e09]
[40790 | signal handler] (1418140219) --------



Any help is welcome :-)

Thanks

Fabien




(output of lscpu)
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                48
On-line CPU(s) list:   0-47
Thread(s) per core:    1
Core(s) per socket:    12
Socket(s):             4
NUMA node(s):          8
Vendor ID:             AuthenticAMD
CPU family:            16
Model:                 9
Stepping:              1
CPU MHz:               2200.169
BogoMIPS:              4400.10
Virtualization:        AMD-V
L1d cache :            64K
L1i cache :            64K
L2 cache :             512K
L3 cache :             5118K
NUMA node0 CPU(s):     0,4,8,12,16,20
NUMA node1 CPU(s):     24,28,32,36,40,44
NUMA node2 CPU(s):     1,5,9,13,17,21
NUMA node3 CPU(s):     25,29,33,37,41,45
NUMA node4 CPU(s):     2,6,10,14,18,22
NUMA node5 CPU(s):     26,30,34,38,42,46
NUMA node6 CPU(s):     27,31,35,39,43,47
NUMA node7 CPU(s):     3,7,11,15,19,23


(output of redis info)
# Server
redis_version:2.8.12
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:ffdef713b3f3abf2
redis_mode:standalone
os:Linux 2.6.32-431.5.1.el6.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.1.2
process_id:40790
run_id:c1250e7eb61e42f2ac66a2587851eb44940b1ac8
tcp_port:6383
uptime_in_seconds:95858
uptime_in_days:1
hz:10
lru_clock:8852657
config_file:/home/synbase/config_auto/redis_gribcache.conf

# Clients
connected_clients:208
client_longest_output_list:0
client_biggest_input_buf:3030610
blocked_clients:0

# Memory
used_memory:12881895704
used_memory_human:12.00G
used_memory_rss:13449244672
used_memory_peak:12973397384
used_memory_peak_human:12.08G
used_memory_lua:60416
mem_fragmentation_ratio:1.04
mem_allocator:jemalloc-3.6.0

# Persistence
loading:0
rdb_changes_since_last_save:3076630
rdb_bgsave_in_progress:0
rdb_last_save_time:1418042943
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok

# Stats
total_connections_received:57848
total_commands_processed:15049420
instantaneous_ops_per_sec:2617
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:8792
evicted_keys:496146
keyspace_hits:8693809
keyspace_misses:1120605
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0

# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

# CPU
used_cpu_sys:4146.55
used_cpu_user:1017.01
used_cpu_sys_children:0.00
used_cpu_user_children:0.00

# Keyspace
db0:keys=12249,expires=12246,avg_ttl=584602074


Salvatore Sanfilippo

Dec 9, 2014, 11:11:30 AM
to Redis DB
Hello, thanks for writing, I read:

> - there is no huge pages, no persistence at all

Warning: huge pages cause issues even if persistence is disabled. Is it
actually totally switched off? Thanks.

Salvatore



--
Salvatore 'antirez' Sanfilippo
open source developer - GoPivotal
http://invece.org

"Fear makes the wolf bigger than he is."
— German proverb

Fabien MARTY

Dec 9, 2014, 11:15:39 AM
to redi...@googlegroups.com


On Tuesday, December 9, 2014 at 5:11:30 PM UTC+1, Salvatore Sanfilippo wrote:
Hello, thanks for writing, I read:

> - there is no huge pages, no persistence at all

Warning: huge pages cause issues even if persistence is disabled. Is it
actually totally switched off? Thanks.

Yes, totally switched off:

$ cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
always madvise [never]



Matt Stancliff

Dec 9, 2014, 11:20:17 AM
to redi...@googlegroups.com

> On Dec 9, 2014, at 11:08, Fabien MARTY <fabien...@gmail.com> wrote:
>
> The Redis watchdog (CONFIG SET watchdog-period 500) is more interesting, with outputs like:
>
> [40790 | signal handler] (1418137810)
> --- WATCHDOG TIMER EXPIRED ---
> redis-server(logStackTrace+0x4b)[0x4468cb]
> /lib64/libc.so.6(madvise+0x7)[0x39516e5637]
> /lib64/libpthread.so.0[0x3951a0f710]
> /lib64/libc.so.6(madvise+0x7)[0x39516e5637]
> redis-server(je_pages_purge+0xe)[0x486dce]

There’s a note about disabling pages_purge/madvise by using MALLOC_CONF=lg_dirty_mult:-1 at http://comments.gmane.org/gmane.comp.lib.jemalloc/885

Not sure if it'll help, but the latency introduced by purging across the NUMA topology could be the problem.
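A quick way to test that without rebuilding might be the environment variable (note: Redis builds its bundled jemalloc with the je_ prefix, so the variable is presumably JE_MALLOC_CONF rather than MALLOC_CONF):

# disable jemalloc's dirty-page purging for this run
JE_MALLOC_CONF="lg_dirty_mult:-1" ./redis-server redis.conf

# then watch whether the madvise() spikes disappear
redis-cli --latency-history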


-Matt

Salvatore Sanfilippo

Dec 9, 2014, 11:21:31 AM
to Redis DB
OK thanks, from the stack trace it looks like it's due to jemalloc's
internal efforts to defragment. Did you try to compile Redis with just
libc malloc support? make MALLOC=libc.
If this does not cause fragmentation with your workload, or if it
causes acceptable fragmentation, this may be a possible fix.

You may also try to switch to a different version of jemalloc, since
what you are using, 2.8.12, is the first Redis version to adopt jemalloc
3.6.0. Tuning parameters may also prevent this, but I don't have
experience with jemalloc parameter tuning, since it's the first time
I've seen this kind of latency.

Salvatore

Fabien MARTY

Dec 9, 2014, 11:32:21 AM
to redi...@googlegroups.com


On Tuesday, December 9, 2014 at 5:21:31 PM UTC+1, Salvatore Sanfilippo wrote:
OK thanks, from the stack trace it looks like it's due to jemalloc's
internal efforts to defragment. Did you try to compile Redis with just
libc malloc support? make MALLOC=libc.
If this does not cause fragmentation with your workload, or if it
causes acceptable fragmentation, this may be a possible fix.

You may also try to switch to a different version of jemalloc, since
what you are using, 2.8.12, is the first Redis version to adopt jemalloc
3.6.0. Tuning parameters may also prevent this, but I don't have
experience with jemalloc parameter tuning, since it's the first time
I've seen this kind of latency.

OK thanks, tomorrow I will try with MALLOC=libc.

If it fixes the problem, I will test a little further with the jemalloc parameters.
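For reference, the rebuild would be something like this (a sketch, run from the Redis source tree; my understanding is that a make distclean is needed because the build caches its settings, including the allocator choice):

# rebuild against libc malloc
make distclean
make MALLOC=libc

# and to come back to the bundled jemalloc afterwards
make distclean
make MALLOC=jemalloc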


 

Fabien MARTY

Dec 10, 2014, 4:38:53 AM
to redi...@googlegroups.com
On Tuesday, December 9, 2014 at 5:32:21 PM UTC+1, Fabien MARTY wrote:
On Tuesday, December 9, 2014 at 5:21:31 PM UTC+1, Salvatore Sanfilippo wrote:
OK thanks, from the stack trace it looks like it's due to jemalloc's
internal efforts to defragment. Did you try to compile Redis with just
libc malloc support? make MALLOC=libc.
If this does not cause fragmentation with your workload, or if it
causes acceptable fragmentation, this may be a possible fix.

You may also try to switch to a different version of jemalloc, since
what you are using, 2.8.12, is the first Redis version to adopt jemalloc
3.6.0. Tuning parameters may also prevent this, but I don't have
experience with jemalloc parameter tuning, since it's the first time
I've seen this kind of latency.

OK thanks, tomorrow I will try with MALLOC=libc.

If it fixes the problem, I will test a little further with the jemalloc parameters.

OK, after some preliminary tests:


(1) MALLOC=libc

=> no latency problem anymore
=> but increasing memory fragmentation (1.3 after 15 minutes), and we plan to give 200GB of maxmemory to this instance...


(2) JEMALLOC tuning

After:

diff -up redis-2.8.12/deps/jemalloc/include/jemalloc/internal/arena.h.orig redis-2.8.12/deps/jemalloc/include/jemalloc/internal/arena.h
--- redis-2.8.12/deps/jemalloc/include/jemalloc/internal/arena.h.orig   2014-12-10 09:19:33.000000000 +0100
+++ redis-2.8.12/deps/jemalloc/include/jemalloc/internal/arena.h        2014-12-10 09:19:46.000000000 +0100
@@ -41,7 +41,7 @@
  * So, supposing that opt_lg_dirty_mult is 3, there can be no less than 8 times
  * as many active pages as dirty pages.
  */
-#define        LG_DIRTY_MULT_DEFAULT   3
+#define        LG_DIRTY_MULT_DEFAULT   -1

 
 typedef struct arena_chunk_map_s arena_chunk_map_t;
 typedef struct arena_chunk_s arena_chunk_t;


(Matt's suggestion)


=> no latency problem anymore (but it seems the latency is not quite as good as with MALLOC=libc)
=> no special memory fragmentation

So it's great!

But... what about the cons of disabling pages_purge/madvise in jemalloc (in the Redis context)? Any chance of seeing that in the default configuration (for Redis)?

Many thanks

Fabien

Note: I'm ready to run some additional tests.



Salvatore Sanfilippo

Dec 10, 2014, 4:49:15 AM
to Redis DB
I'm interested in understanding the cons as well; if there are no
obvious cons in the Redis use case, I'm going to implement this by
default.
AFAIK we have little need for madvise, since Redis memory normally
should not (and does not) get swapped, and purging pages to reclaim
memory is something you rarely see pay off anyway (except with very
sequential access patterns) because of fragmentation. Latency, on the
other hand, is a known nightmare.

Waiting for more input before acting. Thank you.

Fabien MARTY

Dec 10, 2014, 8:10:03 AM
to redi...@googlegroups.com
On Wednesday, December 10, 2014 at 10:49:15 AM UTC+1, Salvatore Sanfilippo wrote:
I'm interested in understanding the cons as well; if there are no
obvious cons in the Redis use case, I'm going to implement this by
default.
AFAIK we have little need for madvise, since Redis memory normally
should not (and does not) get swapped, and purging pages to reclaim
memory is something you rarely see pay off anyway (except with very
sequential access patterns) because of fragmentation. Latency, on the
other hand, is a known nightmare.

Waiting for more input before acting. Thank you.

Bad news: it seems we have a memory fragmentation issue with the tuned jemalloc solution (memory fragmentation > 2 after a few hours) :-(

We are setting up a grafana/graphite dashboard to monitor these parameters (latency, fill factor, memory fragmentation, commands processed) over time, with something like the sketch below.
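A minimal feeder sketch (assuming a Graphite plaintext listener on graphite:2003 and GNU netcat; the metric prefix is made up):

# sample a few INFO fields every 10 seconds and push them to Graphite
while true; do
  ts=$(date +%s)
  redis-cli -p 6383 info memory | tr -d '\r' | \
    awk -F: -v ts="$ts" '/^(used_memory|used_memory_rss|mem_fragmentation_ratio):/ {print "redis.gribcache." $1, $2, ts}' | \
    nc -q 1 graphite 2003
  sleep 10
done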

I will be back soon with more details.

Salvatore Sanfilippo

Dec 10, 2014, 8:16:33 AM
to Redis DB

Could you please post the INFO output? Thx

Fabien MARTY

Dec 11, 2014, 10:07:43 AM
to redi...@googlegroups.com
On Wednesday, December 10, 2014 at 2:16:33 PM UTC+1, Salvatore Sanfilippo wrote:

Could you please post the INFO output? Thx


Some interesting updates on this problem.


(1) redis 2.8.12 with standard jemalloc

=> no memory fragmentation (when Redis is full)
=> huge latency spikes (>500ms)

https://dl.dropboxusercontent.com/u/14119069/redis/standard1.png
https://dl.dropboxusercontent.com/u/14119069/redis/standard2.png

https://dl.dropboxusercontent.com/u/14119069/redis/standard3.png
=> this one is very interesting (derivative of the Redis fill factor in green (left axis) and Redis latency in yellow (right axis))

The main latency spikes happen when Redis expires keys.

In this use case, Redis is used as an LRU cache (1-hour lifetime). Maybe we have a lot of keys (>1000) expiring at nearly the same time (easy to fix and test). But that doesn't explain the differences between allocators.

redis INFO output (at the end of the graph):
https://dl.dropboxusercontent.com/u/14119069/redis/standard.info


(2) redis 2.8.12 with patched jemalloc (Matt's proposal)

=> some memory fragmentation (1.3 with a full Redis)
=> better latency, but not really good (spikes at 50-100ms)

redis INFO output:
https://dl.dropboxusercontent.com/u/14119069/redis/modified.info


(3) redis 2.8.12 with the libc allocator

=> huge memory fragmentation (>50)

(redis INFO with huge memory fragmentation, Redis not full)
https://dl.dropboxusercontent.com/u/14119069/redis/huge.info

(redis INFO with moderate memory fragmentation, Redis full)
https://dl.dropboxusercontent.com/u/14119069/redis/libc.info

=> but really good latency (spikes at 10ms)


I'm totally lost :-(

Tomorrow we will try adding some random jitter to the key lifetimes, to make sure most keys don't expire at the same time (see the sketch below). But I don't think it's the root cause of the problem.
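Something along these lines (a bash sketch; the key and payload names are made up):

# spread the 1-hour lifetime with 0-10 minutes of random jitter so
# thousands of keys don't expire in the same cycle
redis-cli -p 6383 SET "grib:$id" "$payload" EX $((3600 + RANDOM % 600))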

Regards

Fabien



 

Fabien MARTY

Dec 12, 2014, 10:46:07 AM
to redi...@googlegroups.com
Some additional (interesting) tests:

redis 2.8.12 with standard jemalloc (just compiled)

./redis-server
(default conf)

in another terminal:
redis-cli
config set save ""
flushdb

in another terminal
redis-cli --latency-history

in another terminal
# let's make a ~100MB file
dd if=/dev/zero of=blob count=100024 bs=1024

# let's set a new key with this blob every 5 seconds
while true; do redis-cli -x SET `date +%s` <blob; sleep 5; done

# latency output (not good)
min: 0, max: 168, avg: 1.79 (1260 samples) -- 15.00 seconds range
min: 0, max: 258, avg: 5.26 (976 samples) -- 15.00 seconds range
min: 0, max: 256, avg: 1.78 (1262 samples) -- 15.00 seconds range
min: 0, max: 214, avg: 5.85 (940 samples) -- 15.00 seconds range
min: 0, max: 198, avg: 2.13 (1225 samples) -- 15.00 seconds range
min: 0, max: 212, avg: 3.33 (1119 samples) -- 15.03 seconds range


same test with redis 2.8.12 with libc malloc (just compiled)


[...]

# latency output (normal) :
min: 0, max: 1, avg: 0.20 (1456 samples) -- 15.01 seconds range
min: 0, max: 1, avg: 0.19 (1455 samples) -- 15.00 seconds range
min: 0, max: 1, avg: 0.21 (1454 samples) -- 15.00 seconds range
min: 0, max: 1, avg: 0.19 (1455 samples) -- 15.00 seconds range
min: 0, max: 1, avg: 0.20 (1455 samples) -- 15.00 seconds range
min: 0, max: 1, avg: 0.18 (1457 samples) -- 15.01 seconds range
min: 0, max: 1, avg: 0.21 (1455 samples) -- 15.01 seconds range
min: 0, max: 1, avg: 0.22 (1454 samples) -- 15.01 seconds range
min: 0, max: 1, avg: 0.21 (1456 samples) -- 15.01 seconds range
min: 0, max: 1, avg: 0.20 (1455 samples) -- 15.00 seconds range
min: 0, max: 1, avg: 0.18 (1456 samples) -- 15.01 seconds range


Same results with redis 2.8.18 and with redis 2.8.11 (before jemalloc 3.6.0).


The machine is a quad Opteron 6174 (48 cores total, on 8 NUMA nodes).


Any ideas?

Salvatore Sanfilippo

Dec 12, 2014, 10:51:54 AM
to Redis DB
THP disabled? It's basically the same behavior, latency-wise: a few
peaks. Otherwise it's clear that jemalloc is not the holy allocator
that gets all the tradeoffs right, because libc malloc provides a much
better worst-case latency experience.

Fabien MARTY

Dec 12, 2014, 3:35:40 PM
to redi...@googlegroups.com
On Friday, December 12, 2014 at 4:51:54 PM UTC+1, Salvatore Sanfilippo wrote:
THP disabled? It's basically the same behavior, latency-wise: a few
peaks. Otherwise it's clear that jemalloc is not the holy allocator
that gets all the tradeoffs right, because libc malloc provides a much
better worst-case latency experience.

Yes, totally switched off:

$ cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
always madvise [never]

Salvatore Sanfilippo

Dec 13, 2014, 3:18:49 AM
to Redis DB
On Fri, Dec 12, 2014 at 9:35 PM, Fabien MARTY <fabien...@gmail.com> wrote:

> Yes, totally switched off:

OK, I guess that jemalloc + big allocations = latency, at least in the
default configuration. The price here is more fragmentation from libc
malloc(), but if the allocations are large, libc malloc should do a
decent job.
This is what I see from your previous messages:

used_memory_human:311.55M
used_memory_rss:19263098880
used_memory_peak:12889720904
used_memory_peak_human:12.00G
used_memory_lua:33792
mem_fragmentation_ratio:58.97
mem_allocator:libc

Peak 12GB, used memory 311M. This is not fragmentation, it's just
unreclaimed memory (the ratio is used_memory_rss / used_memory, so an
almost-empty instance that has not returned memory to the OS shows a
huge value).

When the server was full, fragmentation was 1.52, which is not stellar
but is acceptable if it gives you much better latency for your
workload.

So I think you should test whether fragmentation for your workload is
bounded with libc malloc, and whether it is a reasonable value. If it
is fixed at ~1.52 or so, it can be a good tradeoff.
More or less there are no alternatives, since a slab allocator is
going to have overhead as well. The other option I see is to tune
jemalloc for latency, but this may result in it having the same
fragmentation as libc malloc in the end...

Salvatore

Fabien MARTY

Dec 13, 2014, 8:07:09 AM
to redi...@googlegroups.com


On Saturday, December 13, 2014 at 9:18:49 AM UTC+1, Salvatore Sanfilippo wrote:
On Fri, Dec 12, 2014 at 9:35 PM, Fabien MARTY <fabien...@gmail.com> wrote:

> Yes, totally switched off:

OK, I guess that jemalloc + big allocations = latency, at least in the
default configuration. The price here is more fragmentation from libc
malloc(), but if the allocations are large, libc malloc should do a
decent job.
This is what I see from your previous messages:

used_memory_human:311.55M
used_memory_rss:19263098880
used_memory_peak:12889720904
used_memory_peak_human:12.00G
used_memory_lua:33792
mem_fragmentation_ratio:58.97
mem_allocator:libc

Peak 12GB, used memory 311M. This is not fragmentation, it's just
unreclaimed memory.

When the server was full, fragmentation was 1.52, which is not stellar
but is acceptable if it gives you much better latency for your
workload.

So I think you should test whether fragmentation for your workload is
bounded with libc malloc, and whether it is a reasonable value. If it
is fixed at ~1.52 or so, it can be a good tradeoff.

Already tested. In my use case, I stopped the test with fragmentation > 2 and a full instance :-(


 
More or less there are no alternatives, since a slab allocator is
going to have overhead as well. The other option I see is to tune
jemalloc for latency, but this may result in it having the same
fragmentation as libc malloc in the end...

Note that the jemalloc results are much better on some other machines. I think there is also a NUMA/jemalloc angle here.

So... I'm in trouble :-(

It's very difficult to leave Redis for this use case (a distributed cache with random access inside blobs via Lua scripts). Maybe we will try to split blobs over multiple keys to get smaller allocations (something like the sketch below)?
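For example (a rough sketch; the key names and the 1MB chunk size are made up):

# split a blob into 1MB chunks stored under suffixed keys, so each
# value (and each allocation) stays small
split -b 1M blob blob.part.
i=0
for part in blob.part.*; do
  redis-cli -x SET "myblob:chunk:$i" < "$part"
  redis-cli EXPIRE "myblob:chunk:$i" 3600
  i=$((i+1))
done
redis-cli SET "myblob:chunks" "$i"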

Thanks for the help. Redis is still a great product and we are using it a lot at Météo-France (the French national meteorological service). For this particular use case, we have just reached one of its limits.

Regards,

Fabien

Yiftach Shoolman

Dec 13, 2014, 10:16:19 AM
to redi...@googlegroups.com
Salvatore, have you ever considered/tested tcmalloc?

We found some use cases where it performs better (much better) than jemalloc. We were about to send you some info about it, but haven't finished testing it internally yet.


Sent from my iPhone

Salvatore Sanfilippo

Dec 13, 2014, 10:56:25 AM
to Redis DB
Hello Yiftach, we used to have tcmalloc support, but from what I
recall it rarely performed better than jemalloc. However, the support
is still integrated in the Makefile AFAIK, and both allocators have
changed over the years.

Btw, I would be much happier supporting a single allocator better than
we do currently; for example, since jemalloc has multiple tuning
parameters, as a first step IMHO we should be able to understand/set
them up better than we do today. However, a possible path would be to
*substitute* jemalloc with tcmalloc, if we find that we are better
served by tcmalloc in the average case. But the gist is the same: it
would be great to cover all the use cases with the allocator we ship
together with Redis and that is used by default for Linux builds.

It would be great to have the results of your tests, to better
understand whether there are arguments for switching allocators...
Salvatore

Yiftach Shoolman

Dec 13, 2014, 2:01:05 PM
to redi...@googlegroups.com
Agreed, we plan to put them on redis-dev soon.

Yiftach Shoolman
+972-54-7634621

Thomas Love

Dec 14, 2014, 8:55:51 AM
to redi...@googlegroups.com

On Saturday, 13 December 2014 17:56:25 UTC+2, Salvatore Sanfilippo wrote:
Hello Yiftach, we used to have tcmalloc support, but from what I
recall it rarely performed better than jemalloc. However, the support
is still integrated in the Makefile AFAIK, and both allocators have
changed over the years.

 
I don't know how you resist the temptation to write your own allocator. Could be a lot of fun. :)

I also know nothing about NUMA, but from what I've read, and since jemalloc apparently spawns threads here, I would be scrutinizing the affinity situation very, very closely; a first experiment is sketched below.
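For instance (hypothetical; the node number and config path are made up), binding both CPUs and memory to a single node instead of interleaving:

# bind redis-server's CPUs and memory to NUMA node 0, so page
# operations stay node-local
numactl --cpunodebind=0 --membind=0 ./redis-server redis.conf

# then check where the process's pages actually landed
numastat -p $(pgrep -f redis-server)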

Thomas