ChannelOutboundBuffer flushed buffer leak?


Chris Conroy

Jul 18, 2016, 4:02:49 PM
to ne...@googlegroups.com

I’ve been trying to track down a NIO memory leak that occurs in a Netty application I am porting from Netty 3 to Netty 4. This leak does not occur in the Netty 3 version of the application.

For now, I’m using only unpooled heap buffers in Netty 4, but NIO buffers do come into play for socket communication.

I've captured a few heap dumps from affected instances, and in each it appears that the leaked DirectByteBuf Java objects are rooted in an io.netty.util.Recycler.

These buffers remain indefinitely: I can disable the application to drain traffic and force GCs, but the number of NIO buffers and the amount of NIO memory allocated stay flat.
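For reference, a minimal sketch (standard JDK management APIs only, nothing Netty-specific; the class name is just for illustration) of how the NIO direct-buffer count and allocated bytes can be sampled, e.g. before and after forcing a GC:

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectMemoryStats {
    public static void main(String[] args) {
        // The "direct" pool tracks NIO direct buffers; "mapped" tracks memory-mapped files.
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s: count=%d, memoryUsed=%d bytes%n",
                    pool.getName(), pool.getCount(), pool.getMemoryUsed());
        }
    }
}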

The issue is likely related to slow readers. However, the leak persists long after all channels have been closed.

I implemented a writability listener and the leak does appear to go away if I stop writing to a channel after it goes unwritable. This is good, but I’m still worried that this just makes the problem less likely since it’s still possible to write/flush and have pending data: writability just limits how much data will be buffered.
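For readers following along, a minimal sketch of the kind of writability listener described above, assuming the stock Netty 4.1 channel APIs (the handler and the flag are illustrative, not the actual listener used here):

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import java.util.concurrent.atomic.AtomicBoolean;

public class BackpressureHandler extends ChannelInboundHandlerAdapter {

    // The application's write path checks this flag before queueing more data.
    private final AtomicBoolean writable = new AtomicBoolean(true);

    public boolean isWritable() {
        return writable.get();
    }

    @Override
    public void channelWritabilityChanged(ChannelHandlerContext ctx) throws Exception {
        // Fired when the outbound buffer crosses the channel's configured
        // write-buffer high/low water marks.
        writable.set(ctx.channel().isWritable());
        ctx.fireChannelWritabilityChanged();
    }
}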

Digging into ChannelOutboundBuffer, I see the following stanza in close():


// Release all unflushed messages.
try {
    Entry e = unflushedEntry;
    while (e != null) {
        // Just decrease; do not trigger any events via decrementPendingOutboundBytes()
        int size = e.pendingSize;
        TOTAL_PENDING_SIZE_UPDATER.addAndGet(this, -size);

        if (!e.cancelled) {
            ReferenceCountUtil.safeRelease(e.msg);
            safeFail(e.promise, cause);
        }
        e = e.recycleAndGetNext();
    }
} finally {
    inFail = false;
}
clearNioBuffers();

This seems a bit curious to me: why are flushed buffers not released here? Since the leak seems to be rooted in the Recycler, this could be the culprit…What do you think?

Norman Maurer

Jul 18, 2016, 4:36:31 PM
to ne...@googlegroups.com
failFlushed(...) should be called to fail and release all flushed messages.

Are you saying this is not happening?

Chris Conroy

Jul 18, 2016, 6:21:57 PM
to ne...@googlegroups.com
Just another data point: I confirmed the leak is still there with a writability listener; it just occurs much more slowly. This isn't a big surprise if the above is indeed a correct analysis of the issue.

Chris Conroy

Jul 19, 2016, 2:04:40 PM
to Netty discussions
Ah okay: I didn't see the calls to failFlushed since they occur above the stanza I found suspicious. 

So, the above explanation is probably not correct. Still, I am seeing a leak where DirectByteBufs are rooted to the recycler, and the speed at which these buffers leak appears to be correlated with slow/partial readers.

Norman Maurer

Jul 19, 2016, 2:17:25 PM
to ne...@googlegroups.com
Can you provide a reproducer? Also, did you try running with paranoid leak detection?
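For reference, paranoid leak detection can be enabled with the JVM flag -Dio.netty.leakDetection.level=paranoid (older releases spell it -Dio.netty.leakDetectionLevel), or programmatically, as in this minimal sketch:

import io.netty.util.ResourceLeakDetector;

public class LeakDetectionSetup {
    public static void main(String[] args) {
        // Set this before the first ByteBuf is allocated so every buffer is tracked.
        ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);
    }
}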

Chris Conroy

Jul 19, 2016, 2:42:16 PM
to Netty discussions
I have not been able to reproduce it locally yet, but I do see it in a cluster that takes a lot of varied traffic. The leak detector has not fired for this at the advanced level. I will give paranoid a shot to be safe, but it's my understanding that the leak detection framework is mostly for catching pooled ByteBuf misuse; in this case I am exclusively using unpooled heap ByteBufs, and it's just the socket direct ByteBufs that appear to be leaking.

I meant to add this earlier: The path to GC root goes:

io.netty.buffer.ByteBufUtil$ThreadLocalUnsafeDirectByteBuf
  io.netty.util.Recycler$DefaultHandle
    io.netty.util.Recycler$DefaultHandle[]
      io.netty.util.Recycler$Stack
        java.lang.Object[]
          io.netty.util.internal.InternalThreadLocalMap
            ... (more thread local map refs up to java.lang.Thread)

Norman Maurer

Jul 19, 2016, 2:48:28 PM
to ne...@googlegroups.com
Are you using 4.0 or 4.1 ?

Chris Conroy

Jul 19, 2016, 2:51:27 PM
to ne...@googlegroups.com
4.1.0.Final


Norman Maurer

Jul 19, 2016, 2:59:21 PM
to ne...@googlegroups.com
Can you do me a favour and run with: 

-Dio.netty.recycler.maxCapacity=0

And let me know if you still see some leaks ?



Chris Conroy

Jul 19, 2016, 3:30:28 PM
to ne...@googlegroups.com
I ran with paranoid while observing the leak and got no messages from the ResourceLeakDetector.

Initial results with a recycler maxCapacity of 0 are looking positive with respect to the leak. However, I did see a large spike of NIO memory allocated (1G), whereas the Netty 3 version of this app receiving similar traffic peaks at about 1/10th of that. I'll let this run for a while and report back later with the full results.

Norman Maurer

Jul 19, 2016, 3:40:20 PM
to ne...@googlegroups.com
Btw, are all the "leaking" buffers always ThreadLocalUnsafeDirectByteBuf?


Chris Conroy

Jul 19, 2016, 4:39:40 PM
to ne...@googlegroups.com
Yes, all the leaked buffers look to be ThreadLocalUnsafeDirectByteBuf.

Disabling the recycler without backpressure on slow readers resulted in several massive allocation events. I was able to free them by forcing GC, so no leaks there. Memory usage with backpressure on slow readers is back down to the levels the Netty 3 app showed without backpressure. I don't see any leaks so far.

Assuming that disabling the recycler does indeed prevent leaks, where do we go from there? Any ideas of things to look for in my application that might cause the recycler to get into a bad state?

Norman Maurer

Jul 20, 2016, 12:41:30 AM
to ne...@googlegroups.com
Are you actually sure these are really leaked ?

The point of the "ThreadLocalUnsafeDirectByteBuf" is that it can be reused, so it is expected that it does not get released after it is written but is put back into the recycler instead. Or are you saying there are too many of these?


Chris Conroy

Jul 20, 2016, 1:06:49 PM
to ne...@googlegroups.com
This leak resulted in exhaustion of 4G of NIO memory. The same application under Netty 3 only ever uses <200MB of NIO memory. I have run several experiments where I take some traffic, disable the node from serving traffic, and then force a full GC. The allocated NIO memory does not return to normal levels.

Spot-checking the ByteBuf handles from my heap dump, I see lots (all?) with `recycleId` and `lastRecycledId` values of `int -2147483648 = 0x80000000`.

The experiment yesterday with the recycler disabled did not result in any such leaks over several hours of traffic.

Norman Maurer

Jul 20, 2016, 1:11:29 PM
to ne...@googlegroups.com
Thanks Chris,

this sounds really "fishy". Let me try to debug this a bit more (not sure yet how, though).

Norman Maurer

Jul 20, 2016, 1:17:17 PM
to ne...@googlegroups.com
One last thing…

Can you tell me what the capacity() of the “leaked” buffer is ?

Chris Conroy

Jul 20, 2016, 1:20:13 PM
to ne...@googlegroups.com
Here's an example leaked buf

(screenshot: Screen Shot 2016-07-20 at 1.18.24 PM.png)

Norman Maurer

Jul 20, 2016, 1:22:50 PM
to ne...@googlegroups.com
And how many of these do you have?

Sorry for all the questions :(


Norman Maurer

Jul 20, 2016, 1:29:34 PM
to ne...@googlegroups.com
And also, what is stored under the "Stack.thread" field? Is it the same for all of them?

Maybe you could even share a dump with me ?

Chris Conroy

Jul 20, 2016, 1:39:57 PM
to ne...@googlegroups.com
It's no problem! I'm sorry for all the back and forth. I'd just send you the heap dump if I could, but alas it will be difficult to impossible to sanitize it from sensitive data. (As an aside, I really wish there were tools that let you interact with java heap dumps more programmatically...)

In the particular heap dump I'm looking at, I have 12,762 such buffers. Interestingly, I see 123k Recycler$DefaultHandles in the heap...
(screenshot: Screen Shot 2016-07-20 at 1.31.40 PM.png)


Here's the only path to GC roots from the leaked byte bufs:

(screenshot: Screen Shot 2016-07-20 at 1.34.50 PM.png)

The threads appear to all be server worker threads. The Biggest Objects - Dominators view for strong references in YourKit shows me that the server worker threads are the dominant roots in the heap:

(screenshot: Screen Shot 2016-07-20 at 1.38.19 PM.png)





Norman Maurer

Jul 20, 2016, 1:45:50 PM
to ne...@googlegroups.com
Thanks I will have another look.

One thing you could try is to use “-Dio.netty.threadLocalDirectBufferSize=16” and see if the leaks are gone then. 

This is not a solution but would help me a bit :)






Chris Conroy

Jul 20, 2016, 1:55:24 PM
to ne...@googlegroups.com
Sure thing. I'll give that a shot.

Chris Conroy

Jul 21, 2016, 6:10:32 PM
to ne...@googlegroups.com
I was a bit delayed as I had to roll back for an unrelated reason.

Using “-Dio.netty.threadLocalDirectBufferSize=16” appears to fix the leak.

Can you help me understand how much memory we should expect to sit resident in the recycler under the default settings? The documentation is very scarce... At first glance, the default of 266k objects in the recycler and 64k thread local buffer size limit would consume 17GB of NIO memory? But, that can't be right...
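For a back-of-the-envelope check of that figure, assuming the 4.1.0 defaults of roughly 262,144 recycler handles per thread and a 64 KiB thread-local direct buffer (both values are assumptions; check the defaults of the version you run):

public class RecyclerWorstCase {
    public static void main(String[] args) {
        long maxCapacity = 262_144;                  // assumed -Dio.netty.recycler.maxCapacity default
        long bufferBytes = 64 * 1024;                // assumed -Dio.netty.threadLocalDirectBufferSize default
        long worstCase = maxCapacity * bufferBytes;  // 17,179,869,184 bytes, i.e. ~16 GiB per thread
        System.out.println(worstCase + " bytes");
    }
}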

Norman Maurer

Jul 22, 2016, 8:43:14 AM
to ne...@googlegroups.com
Hey Chris,

the thing is that I would not expect to have so much stuff in there, as it should pick things from the per-thread Stack. Or do you create and destroy a lot of threads?


Chris Conroy

Jul 25, 2016, 1:30:53 PM
to ne...@googlegroups.com
Nope: we use fixed size thread pools and are not killing threads.

I'm not saying that we've gotten that much allocated (we'd OOME well before then), but the defaults seem to imply that the recycler will hold on to 17G (per thread?) of memory if you were to allocate that much. This seems wrong.

I'm running a few experiments changing the recycler max capacity and the thread local direct buffer size which I think should unblock us.

Norman Maurer

Jul 25, 2016, 1:37:50 PM
to ne...@googlegroups.com
Yeah, I agree the default is not super good… What do you think would be a "sane" default?

Chris Conroy

Jul 25, 2016, 2:00:55 PM
to ne...@googlegroups.com

If the recycler is used by each EventLoopGroup, then it should probably have a per-EventLoopGroup configuration, since you'll need lower thresholds for more threads. In practice I'm only seeing much usage of the recycler on one of my EventLoopGroups, but I would be worried about running out of memory unnecessarily in some other situation where another group ends up buffering a large amount of data due to some slowdown.

It would be a bit easier to configure safe automatic defaults if it were a global (instead of per-thread) recycler. How crazy would that be? Without that, the recyclers need to be small enough to multiply per thread in the app, or there needs to be some kind of coordination mechanism to disable recycler growth in some threads while other threads are using a lot of capacity. There also might be some value in expiring older buffers so that they can be reclaimed after high-pressure periods (I have had success doing this in MRU object pools elsewhere; a rough sketch follows at the end of this message).


For HTTP applications, the default chunk size is 4k, so I imagine most buffers would be under that size? I'm currently seeing good memory usage results without too much extra GC using -Dio.netty.recycler.maxCapacity=4096 -Dio.netty.threadLocalDirectBufferSize=8192, but I haven't explored too many other options yet. For whatever reason, I did not seem to get much utilization with a thread-local buffer size threshold of 4k, though.
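A rough sketch of the MRU-with-expiry idea mentioned above (a generic, hypothetical pool, not Netty's Recycler): release() pushes to the head, acquire() pops from the head, and a periodic trim() drops entries that have sat unused at the tail longer than the idle limit, letting the GC reclaim them.

import java.util.ArrayDeque;
import java.util.Deque;

final class MruPool<T> {

    private static final class Entry<T> {
        final T value;
        final long releasedAtNanos;
        Entry(T value, long releasedAtNanos) {
            this.value = value;
            this.releasedAtNanos = releasedAtNanos;
        }
    }

    private final Deque<Entry<T>> stack = new ArrayDeque<>();
    private final long maxIdleNanos;

    MruPool(long maxIdleNanos) {
        this.maxIdleNanos = maxIdleNanos;
    }

    /** Returns a pooled instance, or null if the pool is empty. */
    synchronized T acquire() {
        Entry<T> e = stack.pollFirst();  // MRU: take the most recently released entry
        return e == null ? null : e.value;
    }

    synchronized void release(T value) {
        stack.addFirst(new Entry<>(value, System.nanoTime()));
    }

    /** Drops entries idle longer than maxIdleNanos; the oldest entries sit at the tail. */
    synchronized void trim() {
        long now = System.nanoTime();
        Entry<T> tail;
        while ((tail = stack.peekLast()) != null && now - tail.releasedAtNanos > maxIdleNanos) {
            stack.pollLast();  // drop the reference and let the GC reclaim it
        }
    }
}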


Norman Maurer

Jul 25, 2016, 2:35:07 PM
to ne...@googlegroups.com
On 25 Jul 2016, at 20:00, Chris Conroy <cco...@squareup.com> wrote:

If the recycler is used by each EventLoopGroup, then it should probably have a per-EventLoopGroup configuration, since you'll need lower thresholds for more threads. In practice I'm only seeing much usage of the recycler on one of my EventLoopGroups, but I would be worried about running out of memory unnecessarily in some other situation where another group ends up buffering a large amount of data due to some slowdown.



At the moment these Recyclers are not tied to an EventLoopGroup, as they are not even aware of anything like EventLoops. For example, you may use buffers without EventLoops at all.

It would be a bit easier to configure safe automatic defaults if it were a global (instead of per-thread) recycler. How crazy would that be? Without that, the recyclers need to be small enough to multiply per thread in the app, or there needs to be some kind of coordination mechanism to disable recycler growth in some threads while other threads are using a lot of capacity. There also might be some value in expiring older buffers so that they can be reclaimed after high-pressure periods (I have had success doing this in MRU object pools elsewhere).



So you are talking about automatically dropping stuff if it's not used for some timeframe?

Chris Conroy

Jul 25, 2016, 2:44:30 PM
to ne...@googlegroups.com

On Mon, Jul 25, 2016 at 2:35 PM, 'Norman Maurer' via Netty discussions <ne...@googlegroups.com> wrote:

So you are talking about automatically dropping stuff if it's not used for some timeframe?

Exactly! This works best if the pool uses MRU retrieval instead of LRU or random-access retrieval.

Norman Maurer

Jul 25, 2016, 3:21:55 PM
to ne...@googlegroups.com
Something unrelated, but why do you use the UnpooledByteBufAllocator and not the PooledByteBufAllocator?



Chris Conroy

Jul 25, 2016, 3:39:00 PM
to ne...@googlegroups.com
In order to de-risk the rollout of the Netty 4 upgrade, we're starting out with the unpooled allocator. This was actually very helpful for digging into this issue since a full GC should have been able to reclaim most of the NIO memory.

We plan to transition to the pooled allocator once we shake out any other issues in the upgrade. Hopefully this is the last of those!
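A minimal sketch (assuming Netty 4.1's public bootstrap and allocator APIs; the group sizes and the flag are placeholders) of how that allocator choice is typically wired so it can be flipped later:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.UnpooledByteBufAllocator;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class AllocatorConfig {
    public static void main(String[] args) {
        boolean usePooled = false;  // flip to true once the rollout is de-risked
        ByteBufAllocator allocator = usePooled
                ? PooledByteBufAllocator.DEFAULT
                : new UnpooledByteBufAllocator(false);  // preferDirect=false: unpooled heap buffers

        ServerBootstrap b = new ServerBootstrap()
                .group(new NioEventLoopGroup(1), new NioEventLoopGroup())
                .channel(NioServerSocketChannel.class)
                .option(ChannelOption.ALLOCATOR, allocator)
                .childOption(ChannelOption.ALLOCATOR, allocator);
        // ... add the usual childHandler(...) and bind(...) here
    }
}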


Chris Conroy

Jul 28, 2016, 7:11:28 PM
to ne...@googlegroups.com

Ping: What do you think about a global recycler instead of many thread-local recyclers?

Also, can you provide some more context on the rationale behind the recycler? Especially with the PooledByteBufAllocator, NIO allocations should be very cheap, so why bother to reuse the buffers?

Norman Maurer

Jul 29, 2016, 12:26:35 AM
to ne...@googlegroups.com
Comments inside..

On 29 Jul 2016, at 01:10, 'Chris Conroy' via Netty discussions <ne...@googlegroups.com> wrote:

Ping: What do you think about a global recycler instead of many thread-local recyclers?


I'm not sure this can be done without too much overhead. But if you want to cook up a PR and show it with benchmarks, I would be interested for sure :)


Also, can you provide some more context on the rationale behind the recycler? Especially with the PooledByteBufAllocator, NIO allocations should be very cheap, so why bother to reuse the buffers?


It's because of object allocation. It basically reuses the "ByteBuf" container object (not the actual memory here).
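For context, the general shape of that pattern (io.netty.util.Recycler is an internal API, so treat this as an approximate sketch for 4.1; PooledContainer and its payload are hypothetical):

import io.netty.util.Recycler;

// The *container* instance is what gets pooled; the data it points to is managed separately.
final class PooledContainer {

    private static final Recycler<PooledContainer> RECYCLER = new Recycler<PooledContainer>() {
        @Override
        protected PooledContainer newObject(Handle<PooledContainer> handle) {
            return new PooledContainer(handle);
        }
    };

    private final Recycler.Handle<PooledContainer> handle;
    Object payload;  // e.g. the memory region a ByteBuf wraps

    private PooledContainer(Recycler.Handle<PooledContainer> handle) {
        this.handle = handle;
    }

    static PooledContainer newInstance(Object payload) {
        PooledContainer c = RECYCLER.get();  // reuses a previously recycled instance if available
        c.payload = payload;
        return c;
    }

    void recycle() {
        payload = null;         // drop references so the parked container does not pin memory
        handle.recycle(this);   // return the container to the calling thread's Stack
    }
}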

I'm working on another fix for the problem you see. And you may also be interested in these:



Chris Conroy

Jul 29, 2016, 12:52:09 PM
to ne...@googlegroups.com

On Fri, Jul 29, 2016 at 12:26 AM, ‘Norman Maurer’ via Netty discussions <ne...@googlegroups.com> wrote:

It's because of object allocation. It basically reuses the "ByteBuf" container object (not the actual memory here).

The ByteBuf objects do pin the NIO memory with an unpooled allocator. Are you saying that this is not the case in the pooled allocator?

Object allocation is always very cheap. Garbage collection in the eden space is incredibly cheap, and most buffers are short-lived. I suspect that this may be a premature micro-optimization. I see no difference in JVM pause time or GC run rates when I disable the recycler completely.

Norman Maurer

Jul 29, 2016, 12:57:55 PM
to ne...@googlegroups.com
On 29 Jul 2016, at 18:51, 'Chris Conroy' via Netty discussions <ne...@googlegroups.com> wrote:

The ByteBuf objects do pin the NIO memory with an unpooled allocator. Are you saying that this is not the case in the pooled allocator?


What do you mean here? In the PooledByteBufAllocator the memory is "pooled" separately from the ByteBuf instance.


Object allocation is always very cheap. Garbage collection in the eden space is incredibly cheap, and most buffers are short-lived. I suspect that this may be a premature micro-optimization. I see no difference in JVM pause time or GC run rates when I disable the recycler completely.



In the past I saw issues because of heavy object allocation (even for these short-lived objects). If you do not have this issue, you could just disable the recycler.


Norman Maurer

Jul 29, 2016, 1:10:49 PM
to ne...@googlegroups.com
Also, another thing we considered for a long time was whether we should have the ability to configure recyclers for different objects differently. For example, allowing you to disable them for buffers but still keep them for other objects, etc.

WDYT ?

Chris Conroy

Jul 29, 2016, 1:15:08 PM
to ne...@googlegroups.com
On Fri, Jul 29, 2016 at 12:57 PM, 'Norman Maurer' via Netty discussions <ne...@googlegroups.com> wrote:

What do you mean here? In the PooledByteBufAllocator the memory is "pooled" separately from the ByteBuf instance.

With the unpooled allocator, holding on to `ByteBuf` references causes the corresponding NIO memory to be held for the lifetime of the `ByteBuf`. This is why we exhausted our NIO space with the default settings as the recycler held `ByteBuf`s were taking up all available NIO memory. I haven't tested, but it looks like perhaps this is not the case when using pooled allocation since I don't see any `retain` or `release` calls inside the recycler.
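A small sketch of the distinction being discussed, assuming the public Netty 4.1 allocator APIs (the sizes are arbitrary):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.UnpooledByteBufAllocator;

public class AllocatorLifetimes {
    public static void main(String[] args) {
        // Unpooled: the direct memory belongs to the ByteBuf object itself, so anything that
        // keeps the ByteBuf reachable (e.g. a recycler Stack) keeps its NIO memory pinned.
        ByteBuf unpooled = UnpooledByteBufAllocator.DEFAULT.directBuffer(64 * 1024);
        unpooled.release();

        // Pooled: release() hands the memory back to the arena right away; only the small
        // ByteBuf container object is a candidate for recycling.
        ByteBuf pooled = PooledByteBufAllocator.DEFAULT.directBuffer(64 * 1024);
        pooled.release();
    }
}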

 


In the past I saw issues because of heavy object allocation (even for these short-lived objects). If you do not have this issue, you could just disable the recycler.

Yep we can definitely do that.

Norman Maurer

Jul 29, 2016, 1:16:46 PM
to ne...@googlegroups.com
On 29 Jul 2016, at 19:14, 'Chris Conroy' via Netty discussions <ne...@googlegroups.com> wrote:



With the unpooled allocator, holding on to `ByteBuf` references causes the corresponding NIO memory to be held for the lifetime of the `ByteBuf`. This is why we exhausted our NIO space with the default settings as the recycler held `ByteBuf`s were taking up all available NIO memory. I haven't tested, but it looks like perhaps this is not the case when using pooled allocation since I don't see any `retain` or `release` calls inside the recycler.

Yeah, this is different with the PooledByteBufAllocator. I hope to have a fix for the UnpooledByteBufAllocator in the next week. I was just too busy to finish it yet :(


 


Chris Conroy

Jul 29, 2016, 1:38:47 PM
to ne...@googlegroups.com

Configuration by type could be useful, but so far I’m unable to detect any performance degradation when leaving the recycler out altogether. Before adding any more complexity here, I think it would be illuminating to poll the community of Netty 4 users to see what sorts of workloads, if any, impact JVM pause time or GC rates.

Object pooling makes a lot of sense when object setup is expensive. For example, in my Netty based proxy, I pool Channels since connection setup (especially with TLS) is an expensive operation. There are definitely individual circumstances where it may make sense to pool objects without this characteristic, but it’s usually better to just let the JVM handle this. After all, an object pool ends up duplicating the same work that the GC would be doing.
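As an aside for readers, Netty 4.1 ships a channel-pool abstraction that covers exactly this kind of connection reuse; a minimal sketch (the remote address and handler setup are placeholders, and error handling is omitted):

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.pool.AbstractChannelPoolHandler;
import io.netty.channel.pool.SimpleChannelPool;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;

public class ChannelPoolExample {
    public static void main(String[] args) {
        EventLoopGroup group = new NioEventLoopGroup();
        Bootstrap bootstrap = new Bootstrap()
                .group(group)
                .channel(NioSocketChannel.class)
                .remoteAddress("upstream.example.com", 443);  // placeholder upstream

        SimpleChannelPool pool = new SimpleChannelPool(bootstrap, new AbstractChannelPoolHandler() {
            @Override
            public void channelCreated(Channel ch) {
                // Expensive one-time setup (TLS handlers, codecs) goes here and is paid
                // once per pooled connection rather than once per request.
            }
        });

        pool.acquire().addListener(new FutureListener<Channel>() {
            @Override
            public void operationComplete(Future<Channel> f) {
                if (f.isSuccess()) {
                    Channel ch = f.getNow();
                    // ... use the channel, then return it to the pool instead of closing it
                    pool.release(ch);
                }
            }
        });
    }
}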



Norman Maurer

Jul 29, 2016, 1:42:01 PM
to ne...@googlegroups.com
I fully agree that if you can operate without object pooling, it's a lot "nicer".

Like I said, I saw problems without it. That said, I'm not sure whether pooling or not pooling is the better default. The problem you described with the UnpooledByteBufAllocator should be fixed without the need to disable recycling altogether. Like I said, I will have a fix soon.


Leonardo Gomes

Aug 1, 2016, 3:48:16 AM
to ne...@googlegroups.com
I think this article from Trustin explains a bit why buffer pooling was added in Netty 4:



Chris Conroy

Aug 1, 2016, 1:22:40 PM
to ne...@googlegroups.com

That blog post touches on the PooledByteBufAllocator, which operates at a level below the Recycler object pool.

Norman Maurer

Aug 1, 2016, 1:24:25 PM
to ne...@googlegroups.com
Actually, it touches on both: the PooledByteBufAllocator uses the Recycler as well, for "pooling" the ByteBuf container (not the memory it refers to, though).


Dennis Ju

Aug 29, 2016, 8:14:24 PM
to Netty discussions
Is there a ticket number or link for the issue with the UnpooledByteBufAllocator?