Re: Netty 4 read throttling

"이희승 (Trustin Lee)"

Jul 16, 2012, 9:38:06 PM
to ne...@googlegroups.com
You are correct. They are gone and here's the plan.

Instead of Channel.setReadable(), I'll let a user return a bounded
inbound buffer. If the buffer is full, Netty will stop reading and
resume reading once the buffer is no longer full. To disable inbound
traffic, you can simply limit the capacity of the inbound buffer to 0.
Note that this has not been implemented yet.
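
The bounded-buffer rule above ("stop reading when the handler's buffer is full, resume when it drains") can be sketched in miniature. Nothing below is Netty API; the class and method names are made up for illustration:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of the bounded-inbound-buffer plan described above.
// Nothing here is Netty API; the names are hypothetical.
public class BoundedInboundBufferDemo {
    static final int CAPACITY = 4; // a capacity of 0 would disable inbound traffic
    static final Queue<Integer> inboundBuffer = new ArrayDeque<>();

    // The transport may read only while the handler's buffer has room.
    static boolean isReadable() {
        return inboundBuffer.size() < CAPACITY;
    }

    // Simulated read loop: stops as soon as the buffer fills,
    // even though more data is still available on the "wire".
    static int readAll(int available) {
        int read = 0;
        while (read < available && isReadable()) {
            inboundBuffer.add(read++);
        }
        return read;
    }

    public static void main(String[] args) {
        System.out.println("read " + readAll(10) + " of 10"); // prints "read 4 of 10"
        inboundBuffer.clear();                                // handler drains its buffer
        System.out.println("readable=" + isReadable());       // prints "readable=true"
    }
}
```

Setting CAPACITY to 0 models the "disable inbound traffic" case from the plan: isReadable() is then always false.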

Memory awareness should also be added to DefaultEventExecutor before
4.0 enters the beta phase.

If you have a good or better idea, I'd love to hear it.

HTH,
T

On Tue 17 Jul 2012 07:29:34 AM KST, m.d.poi...@gmail.com wrote:
> I've been looking at the changes in Netty 4 Alpha 1 to see what it
> would take to change some of my code to use the new API, and it all
> seems pretty straightforward with one exception:
> Channel.setReadable() is gone. Is there a new way to suspend reads
> for a channel (for example to throttle bandwidth usage)? In addition,
> it seems like the memory awareness of OMATPE is gone from the
> EventExecutor code...is this something that is going to be added
> later? It seems like perhaps now that handlers control their buffers
> it might be possible for the channel to automatically suspend reads if
> the first handler's buffer is full, but looking at the code as it
> stands the NIO channel at least seems to always try to grow the buffer
> if there is data to be read.

--
https://twitter.com/trustin
https://twitter.com/trustin_ko
https://twitter.com/netty_project

Mike Poindexter

Jul 16, 2012, 11:46:28 PM
to ne...@googlegroups.com
In general, the idea of having Netty automatically throttle reads based on buffer fullness seems like a good one, but there are a few things that may make this design difficult to implement.

In many cases, a handler fairly far upstream will need to do the read suspension. For example, consider an HTTP proxy that accepts large uploads: it will want to throttle reads to match the bandwidth available to its origin server, but most likely this logic will live in a handler that sits after all the codecs. If read throttling is controlled by the first handler's input buffer, how does an upstream handler propagate its desire to stop reads downstream? There seem to be three ways to do it:

1.) Add some sort of read-throttling handler as the first handler in the pipeline. When an upstream handler wants to suspend reads, it can look that handler up in the pipeline, cast it to the right class, and call a method to suspend reads (which would internally set the handler's buffer to not-writable). This seems pretty hacky and inflexible to me.

2.) Every handler checks whether the next handler's buffer has space for its output; if not, it closes its own inbound buffer and defers writing the output until space becomes available. In this way, the "stop reading" intent propagates downstream until the channel eventually turns off reads. The disadvantage is a lot of duplicated code in each handler. The advantage is that each handler can define a reasonable buffer size specific to its handled object types, and reads are suspended automatically when the pipeline as a whole gets full, without relying on ObjectSizeEstimators, etc.

3.) Instead of handlers creating and talking directly to each other's buffers, have the buffers created and managed by the pipeline. Handlers could provide some sort of metadata object indicating what type of buffer they require, or call methods on the context to configure the buffer. Since the pipeline owns the buffers, it could provide implementations that suspend reads on the "head" buffer (the first in the pipeline) when any link in the pipeline becomes full, and resume reads when that link is drained. A handler could suspend reading manually by calling a context method that sets its buffer to not-writable (max size = 0); automatic throttling would be accomplished by setting a max buffer/queue size (again via a context method). The disadvantage of this approach is that handlers have less control over the buffer implementations. The advantages are that handler code stays very simple, and the pipeline suspends reads automatically when it becomes "full", as in approach 2.
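
Option 3 can be sketched roughly as follows. This is only a toy model of the idea, not Netty API: the class name, the per-link queues, and headReadable() are all hypothetical stand-ins for pipeline-owned buffers.

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

// Toy model of option 3: the pipeline owns one bounded queue per handler
// link and suspends reads at the head while any link is full.
// All names are hypothetical; this is not Netty API.
public class PipelineOwnedBuffersDemo {
    static final int MAX_PER_LINK = 2;
    static final List<Queue<String>> links = List.of(
            new ArrayDeque<>(),   // decoder -> aggregator
            new ArrayDeque<>(),   // aggregator -> business handler
            new ArrayDeque<>());  // business handler's own inbound queue

    // The channel stays readable only while every link has room;
    // a single full link suspends reads for the whole pipeline.
    static boolean headReadable() {
        return links.stream().allMatch(q -> q.size() < MAX_PER_LINK);
    }

    public static void main(String[] args) {
        System.out.println(headReadable()); // true: everything is empty
        links.get(1).add("msg1");
        links.get(1).add("msg2");           // middle link is now full
        System.out.println(headReadable()); // false: reads suspended
        links.get(1).poll();                // a handler consumes one message
        System.out.println(headReadable()); // true: reads resume
    }
}
```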


Trustin Lee

Jul 23, 2012, 10:14:49 PM
to ne...@googlegroups.com
Hi Mike,

First of all, thank you so much for your detailed feedback and patience.

What do you think about letting each handler tell Netty whether it is OK to read from the socket? That is, each handler context would have a boolean flag, and the read operation would be suspended if any flag is set (i.e. reads suspend as soon as at least one handler sets its flag) and resumed once all flags are clear (i.e. reads resume only when every handler has cleared its flag).

Allowing a bounded inbound buffer (my initial idea) introduces unwanted complication, because a handler needs to be careful not to remove or read data from its inbound buffer until the next handler's inbound buffer is ready.
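
A toy model of this per-context flag idea, with hypothetical names (none of this is Netty API): every handler context toggles only its own flag, and the channel reads only while no context asks for suspension.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of the per-context flag proposal (hypothetical names, not
// Netty API): each context sets or clears only its own flag, and the
// channel reads only while no context asks for suspension.
public class PerContextReadableDemo {
    static final Map<String, Boolean> suspended = new ConcurrentHashMap<>();

    // A context touches only its own entry.
    static void setReadable(String ctx, boolean readable) {
        if (readable) {
            suspended.remove(ctx);
        } else {
            suspended.put(ctx, Boolean.TRUE);
        }
    }

    // Reads are suspended if at least one flag is set,
    // and resume only once every context has cleared its flag.
    static boolean channelReadable() {
        return suspended.isEmpty();
    }

    public static void main(String[] args) {
        setReadable("codec", false);     // codec suspends reads
        setReadable("business", false);  // business handler does too
        setReadable("codec", true);      // codec is ready again...
        System.out.println(channelReadable()); // false: business still suspends
        setReadable("business", true);
        System.out.println(channelReadable()); // true: all flags clear
    }
}
```

One property of this scheme: a handler clearing its own flag cannot accidentally re-enable reads that another handler still wants suspended.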

Norman Maurer

Jul 24, 2012, 1:47:52 AM
to ne...@googlegroups.com
Hi Trustin,

So, something similar to what we have with Channel.setReadable(false) now?

-- 
Norman Maurer

Netty project

Jul 24, 2012, 3:08:59 AM
to ne...@googlegroups.com

Yeah, except that it's per context rather than per channel. Lemme know if you have a better idea. :-)

--
Sent from a mobile device.

Norman Maurer

Jul 24, 2012, 3:09:53 AM
to ne...@googlegroups.com
What would we gain by having it per context rather than per channel?

-- 
Norman Maurer

Netty project

Jul 24, 2012, 3:15:46 AM
to ne...@googlegroups.com

The biggest difference is that setReadable() is no longer a downstream operation. Netty determines whether to resume or suspend reads depending on whether all of the flags allow reading.

--
Sent from a mobile device.

Netty project

Jul 24, 2012, 3:19:35 AM
to ne...@googlegroups.com

As a result, inbound traffic throttling becomes simpler. However, it means a handler cannot deny a request from upstream to suspend inbound traffic. But I don't think it makes much sense to do that anyway. Let me know if you find a use case.

--
Sent from a mobile device.

Frederic

Jul 24, 2012, 5:01:32 AM
to ne...@googlegroups.com
Hi,

I'm not 100% sure I understand, so I will try to lay out what I would like to have in Netty 4:

1- Inbound traffic shaping must be blockable in a way that does not consume memory, to prevent OOME
2- Inbound traffic shaping could be driven by a handler far away in the pipeline, close to the business logic, for instance because it knows what will happen in the business handler if traffic is not blocked

Maybe I'm wrong, but I suppose point 1 implies that the buffer in the first handler is handled correctly and that it is the one that really blocks the "real" inbound traffic. If not, then I suppose (I may be wrong, since I haven't yet gone deeply into Netty 4, only skimmed the surface due to other priorities) that the first handler will continue to read from the network and pass the data to the next handler; it will then be blocked somewhere in the middle, but nothing prevents the first one from continuing to read from the network => OOM.

For the second point, I suppose (again) it would be great to have a way to "simply" tell the first handler to stop or restart inbound traffic. If this is easily feasible (without, as you said, an event travelling from the last handler to the first), then that's fine for point 2.

However, I can understand that each handler could have a way to block internally, as you proposed, but at the price of handling memory correctly again, as the memory-aware thread pool executor did previously.




Norman Maurer

Jul 24, 2012, 8:54:40 AM
to ne...@googlegroups.com
After talking this over with Trustin (via IM), I now understand what he proposes :)

So the idea is that each context can setReadable "only" in its own
context, and so cannot influence other handlers that may have set the
channel non-readable (for example) for good reason.
This makes a lot of sense; I even fixed a bug that was caused by
exactly such a side effect.

So +1, go ahead :)


Norman Maurer

Jul 27, 2012, 7:30:45 AM
to ne...@googlegroups.com
@Trustin: Did you start yet? If not, I would like to tackle it, as I need
the feature now ;)

Bye,
Norman



Norman Maurer

Jul 27, 2012, 2:42:39 PM
to ne...@googlegroups.com
Hi there,

I think I'm almost done:

https://github.com/netty/netty/tree/suspend_feature

Check it out and let me know what you think ;)
