zmalloc performance


Bobby Walks

Feb 2, 2014, 3:27:16 PM
to la...@googlegroups.com
Hello Jin,

Can you give us a little more detail about zmalloc?
Have you used a known memory allocation algorithm, or have you rolled your own?
How, in your opinion, did you manage to beat the jemalloc and glibc malloc numbers?
Do you think your test suite can cover the jemalloc and glibc test suites as well?

Best Regards, Bobby.

Jin Mingjian

Feb 2, 2014, 9:23:42 PM
to Bobby Walks, la...@googlegroups.com

Hi Bobby, thanks very much for your interest. Your questions are very good. I am about to go to a friend's home in my hometown, but I hope to reply within a few hours. Sorry about that.

best regards,
Jin

--
You received this message because you are subscribed to the Google Groups "landz" group.
To unsubscribe from this group and stop receiving emails from it, send an email to landz+un...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Jin Mingjian

Feb 3, 2014, 9:26:54 AM
to Bobby Walks, la...@googlegroups.com
Sorry for the delay; my answers follow:


> Can you give us a little more detail about zmalloc?
I have just pushed a commit that gives more detail on the design:


There are still quite a few new concepts in this field. If you want to hack on it, the details should not be hard to reason out, since I have been careful about the naming in the sources. If you are a newcomer starting from the ground up, there may be tons of new things :)

> Have you used a known memory allocation algorithm, or have you rolled your own?

The king is in the implementation details. The algorithm is quite unique, since most of it came out of my several rounds of refactoring, so I am rather proud of it myself :) The common designs are well known: for example, the thread-local/global pool (some references call it an arena) is used by jemalloc, tcmalloc, and Netty's own buffer allocator. But no paper will tell you how to write the code line by line. And if you have read the philosophies of landz, you will see that such an important component of landz could not simply reuse any existing work: landz needs full control over it!
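As a rough illustration of the thread-local/global pool design mentioned above, here is a minimal Java sketch. The class and field names are hypothetical (not zmalloc's real classes), and it pools fixed-size byte[] blocks rather than raw off-heap memory, just to show the structure:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of a per-thread pool (the lock-free fast path)
// backed by a global pool ("arena") shared by all threads.
public class PooledAllocator {
    static final int BLOCK_SIZE = 4096;

    // Global pool: the slow-path fallback, safe for concurrent access.
    static final ConcurrentLinkedQueue<byte[]> GLOBAL = new ConcurrentLinkedQueue<>();

    // Per-thread cache: no synchronization needed on the fast path.
    static final ThreadLocal<Deque<byte[]>> LOCAL =
            ThreadLocal.withInitial(ArrayDeque::new);

    static byte[] allocate() {
        byte[] b = LOCAL.get().pollFirst();      // 1) thread-local fast path
        if (b == null) b = GLOBAL.poll();        // 2) global pool
        if (b == null) b = new byte[BLOCK_SIZE]; // 3) fresh allocation
        return b;
    }

    static void free(byte[] b) {
        Deque<byte[]> mine = LOCAL.get();
        if (mine.size() < 64) mine.addFirst(b);  // keep a bounded local cache
        else GLOBAL.offer(b);                    // overflow to the global pool
    }

    public static void main(String[] args) {
        byte[] a = allocate();
        free(a);
        // The freed block is recycled by the same thread without locking.
        System.out.println(allocate() == a);     // prints true
    }
}
```

The point of the two tiers is that the common case (a thread freeing and reallocating its own blocks) never touches shared state.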

Almost all of the work was done by me from scratch. The exceptions are:

1. Some primitive helper classes from Guava (they are simple, but as I reviewed them they are well implemented, so why not reuse them?);
2. Parts of JFFI as the current runtime dependency (we expect this functionality to arrive in Java 9, so it is acceptable).

There are also two components, HyperLoop and MPMC, which could be counted as modifications or ports. But we still need to control all of their details.


> How, in your opinion, did you manage to beat the jemalloc and glibc malloc numbers?
zmalloc is practical to use from Java in production. The others: how would you use them from Java? OK, JNI :)

From the first day I was aware that we should have this allocator. My plan was to use the allocate method in Unsafe. But after I read an NH article about allocators and did a simple benchmark of Unsafe, I found that the existing FB/Google allocators are far faster than Unsafe.
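For reference, a minimal sketch of that Unsafe baseline in plain Java. The class name is mine; Unsafe itself is obtained by the usual reflection trick, since it is not public API:

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

// The plain-Java baseline: sun.misc.Unsafe's raw allocateMemory/freeMemory,
// which hit the JVM's native allocator on every call. A pooled allocator
// like zmalloc amortizes exactly this per-call cost.
public class UnsafeBaseline {
    public static void main(String[] args) throws Exception {
        // Unsafe has no public constructor; grab the singleton via reflection.
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        long addr = unsafe.allocateMemory(4096);  // raw off-heap "malloc"
        unsafe.putLong(addr, 42L);                // write to the raw memory
        System.out.println(unsafe.getLong(addr)); // prints 42
        unsafe.freeMemory(addr);                  // no GC here: free manually
    }
}
```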

And if you are familiar with this field, you should know that glibc's malloc has a long history of memory problems. This is why FB and Google each have their own malloc.

Please note: our simple benchmark only measures speed, and speed is definitely not the most important factor for an allocator. The most important factor is stability: we definitely do not want any crash from a memory leak. That, however, does not show up in the benchmark :)

That is why I designed zmalloc from scratch in the KISS spirit. It is a big challenge, I know, but repeating others' logic would be a stupid thing. While coding zmalloc, I often asked myself why I designed it the way I did.

For the current implementation there is only one known problem, which I mentioned on mechanical-sympathy the day before yesterday. It is the so-called "pathological" case or usage: you allocate a big batch of chunks but then "jump" when freeing them. "Jumping" while freeing means:

assume I allocated chunks 1, 2, 3, 4, 5, 6, ..., 10, 11, but I only free 1, 2, 3, 5, 6, 7, 9, 10, 11, keeping 4 and 8.

In this case, because all allocators have some level of cache unit, they may not be able to release anything for that size class. This can mean that chunks of other sizes cannot be allocated even though you have apparently freed most of the space. Exactly how the jumps hurt depends on the details of the allocator's cache unit.
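The pathological case can be sketched numerically. Assume a hypothetical allocator whose cache unit is a page holding 4 chunks, and which can only hand a page back for reuse by other size classes once every chunk in it is free. Keeping just chunks 4 and 8 alive then pins 2 of the 3 pages:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrates the "pathological" free pattern: one live chunk pins an
// entire page, so freeing 9 of 11 chunks can still reclaim almost nothing.
public class Fragmentation {
    static final int CHUNKS_PER_PAGE = 4;

    // Given the set of still-live chunk ids (1-based), count pinned pages.
    static int pinnedPages(Set<Integer> live, int totalChunks) {
        int pages = (totalChunks + CHUNKS_PER_PAGE - 1) / CHUNKS_PER_PAGE;
        int pinned = 0;
        for (int p = 0; p < pages; p++) {
            int first = p * CHUNKS_PER_PAGE + 1;
            int last = Math.min((p + 1) * CHUNKS_PER_PAGE, totalChunks);
            for (int c = first; c <= last; c++) {
                if (live.contains(c)) { pinned++; break; } // one live chunk pins the page
            }
        }
        return pinned;
    }

    public static void main(String[] args) {
        // Allocate chunks 1..11, then free everything except 4 and 8.
        Set<Integer> live = new HashSet<>(Set.of(4, 8));
        System.out.println(pinnedPages(live, 11)); // prints 2: 2 of 3 pages stay pinned
    }
}
```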

I have seen such bug reports for ptmalloc and tcmalloc, and Netty's new allocator of course has this too (see more in the landz.kernel.test module). jemalloc does not seem to have a dedicated bug-reporting site, and I have not found useful info; but if we assume Netty's implementation is similar to jemalloc, then I think jemalloc has the same problem.

Memcached has another problem, called "classification": one class (size) of chunks may take all the space and never give it back to the other classes. This is why I set the page-level size to one, so zmalloc does not have this problem.

I will add more stats and tracing/logging options and expose them in the web console of landz. From the Java point of view there is no built-in memory-allocator concept: an allocator is an attempt to organize an off-heap area to work around Java's GC overhead problem. Besides zmalloc, you can try Netty's buffer allocator, whose design sits close to zmalloc's, so we happened to meet in our thinking! But by that time I had already finished coding the thread-local part, and a quick review and benchmark of Netty just confirmed that zmalloc should be good for backend use :)
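For readers coming from pure Java: the standard JDK route to off-heap memory, without any allocator library, is a direct ByteBuffer. This is the baseline whose slow allocation pooled allocators such as zmalloc or Netty's amortize:

```java
import java.nio.ByteBuffer;

// Plain-JDK off-heap baseline: a direct ByteBuffer lives outside the Java
// heap, so its contents add no GC pressure. Allocating one is expensive,
// which is exactly the cost a pooled allocator avoids paying repeatedly.
public class DirectBufferDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(4096);
        buf.putInt(0, 123);                 // write at absolute offset 0
        System.out.println(buf.getInt(0));  // prints 123
        System.out.println(buf.isDirect()); // prints true
    }
}
```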


> Do you think your test suite can cover the jemalloc and glibc test suites as well?
I am only an entry-level user of jemalloc and glibc's ptmalloc. I tried to find jemalloc's test cases in its sources, but it seems they are driven by a complex configuration, and from the several files I looked at I could not figure out anything useful. The current simple benchmark comes from that NH article, with a little modification to align it with the Java implementation. But if you are interested, you are welcome to challenge zmalloc against jemalloc; I am glad to help with any problem you run into :)


All right, how about giving zmalloc a try? Any feedback is welcome! :)

best regards,
Jin Mingjian





