I'm the author of the hacky-fill in the closure library and Paul Irish asked me to chime in as to our use cases. Reading James' post I think a lot of the confusion comes from the impression that people who want a fast setImmediate-like scheduling mechanism are interested in drawing things. There might be drawing as the ultimate side effect, but initially the relevant code is just application logic.
1. Guarantee async execution – e.g. in promise implementations, to ensure that adding a callback to a promise never triggers executing the callback before the current JS execution ends. This avoids very hard-to-find bugs.
Example:
var foo = 1;
promiseThingie.addCallback(function() { console.log(foo) });
foo++;
In the above code we never want foo to be output with the value 1, and the average JS programmer doesn't expect that. There is a significantly large chunk of real-world code written like that, and it leads to deeply nested timers (think 5 or 10 levels deep). With 4ms of clamping per timeout it is no longer possible to maintain 60fps given sufficiently deeply nested timers.
Essentially the real world code ends up being equivalent to
onAnimationFrame = function() {
  setTimeout(function() {
    setTimeout(function() {
      setTimeout(function() {
        setTimeout(function() {
          setTimeout(function() {
            setTimeout(function() {
              setTimeout(function() {
                // Draw!
              }, 0);
            }, 0);
          }, 0);
        }, 0);
      }, 0);
    }, 0);
  }, 0);
};
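To put rough numbers on that: once the clamp described later in this thread kicks in, each of the deeper setTimeout levels adds a minimum of 4ms, so the last three or four levels of the chain above alone contribute roughly 12–16ms of pure scheduling delay – essentially the whole 16.7ms frame budget at 60fps – before any real work runs.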
2. Allow pushing stuff to a queue during JS execution and then handling the pushed stuff after execution ends – e.g. to avoid sending more than one HTTP request to an endpoint that could answer everything that is currently needed in a single response. Another classic example is setting a dirty flag and then running code once to check for that flag.
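As an illustrative sketch of that pattern (not the actual production code; the endpoint and names are made up), assuming a setImmediate-like primitive is available:

var pendingIds = [];
var flushScheduled = false;

function requestItem(id) {
  pendingIds.push(id);
  if (!flushScheduled) {
    flushScheduled = true;
    // Everything pushed during the current JS execution is coalesced
    // into a single HTTP request once execution ends.
    setImmediate(function() {
      flushScheduled = false;
      var ids = pendingIds.splice(0, pendingIds.length);
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/items?ids=' + ids.join(','));
      xhr.send();
    });
  }
}

Calling requestItem() ten times in one event handler then results in a single request covering all ten ids instead of ten separate requests.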
We see much larger than expected performance wins from using our hacky-fill for setImmediate over setTimeout(…, 0). The reason is that we often use it to avoid making too many HTTP requests, but calling setTimeout will queue us too far in the future. In a scenario where you compete for resources with other (possibly non-cooperating) scripts on a page, this can have a very large impact in the upper percentiles (as opposed to the 4ms that one would expect).
Arguably for these use cases it would be better to use a solution that does not require yielding to the event loop, but unfortunately that cannot be hacky-filled outside of Chrome without controlling all stack entry points.
I'm somewhat troubled by the argument that the existence of a polyfill would be a reason not to implement this natively – I thought we wanted to make the web platform good. Ironically, the hacky-fill for old IE is very simple. The worst case currently is Firefox, as it requires creating an iframe and thus carries significant memory overhead: imagine a page with 2 different banner providers and 3 social buttons. That makes 5 additional iframes per window, just because there is no setImmediate implementation in the browser.
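For reference, a minimal sketch of the kind of postMessage-based shim being discussed (illustrative only; the actual Closure hacky-fill handles ordering, multiple pending callbacks, and per-browser fallbacks):

(function(global) {
  if (global.setImmediate) return;
  var queue = [];
  var KEY = 'setImmediate$' + Math.random();
  global.addEventListener('message', function(e) {
    if (e.source === global && e.data === KEY && queue.length) {
      queue.shift()();
    }
  }, false);
  global.setImmediate = function(callback) {
    queue.push(callback);
    // A same-window message schedules a task without the 4ms clamp
    // that deeply nested setTimeout(..., 0) calls are subject to.
    global.postMessage(KEY, '*');
  };
})(window);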
Igor started a thread about this here (although I believe there has been discussion elsewhere as well): http://lists.w3.org/Archives/Public/www-dom/2013JulSep/0019.html
Keep in mind that the issue here is what happens when the promise gets resolved -- not when it is created.
Both of the use cases I've mentioned are ones I've been achieving with setTimeout(0) for a long time. I understand how timers work (and gave a presentation on them at Velocity two years ago), and yet this is still the way I'm forced to do things unless I want to use the hacked-up postMessage shim.
Given that there are use cases for this behavior, a spec defining how it works, an implementation in a browser, an implementation in Node, and demand for that implementation, I don't understand the resistance. By all accounts it doesn't seem like a tough implementation (at a basic level, you could include the postMessage polyfill natively).
What other data or insights are needed to make the case for setImmediate?
Note that Google Feedback's rendering engine does exactly as jamesr describes; I've even presented on it. We define a work function as function -> function? and then a run loop that says:

while (not out of time):
    fn = fn()
if (fn):
    schedule async completion of loop  // setTimeout(0)
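A rough JavaScript rendering of that loop, with illustrative helper names (a sketch, not the actual Feedback code):

function runLoop(fn, budgetMs) {
  var deadline = Date.now() + budgetMs;
  // Each work function does one slice of work and returns the next
  // work function, or null when there is nothing left to do.
  while (fn && Date.now() < deadline) {
    fn = fn();
  }
  if (fn) {
    // Out of time: yield to the event loop and resume later.
    setTimeout(function() { runLoop(fn, budgetMs); }, 0);
  }
}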
...
darin + jamesr: What are you expected to do for async work sharding once you hit 5 shards of work? This appears to be the issue everyone is discussing here. You always end up with nested setTimeout calls, since each async completion is a nested call.
Why is the possibility of misbehavior such a concern here?
I see no greater risk with setImmediate() than I do with setTimeout() or requestAnimationFrame().
There's also nothing preventing a postMessage()-backed approach from pegging the CPU when used too frequently.
From what I recall, the fact that requestAnimationFrame() callbacks must be explicitly re-requested (nested) was specifically to prevent the very situations that people are describing in this thread. If requestAnimationFrame() is now deemed "safe enough", it doesn't seem too far from what setImmediate() is doing already.
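To make the comparison concrete, a sketch of what "nested" re-requesting looks like for both APIs (doWork() is a placeholder):

function rafStep() {
  doWork();
  // Must be explicitly re-requested; runs at most once per frame.
  requestAnimationFrame(rafStep);
}
requestAnimationFrame(rafStep);

function immediateStep() {
  doWork();
  // Must also be explicitly re-requested; runs again as soon as control
  // returns to the event loop, so it can fire many times per frame.
  setImmediate(immediateStep);
}
setImmediate(immediateStep);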
As pointed out by Darin, getting help from the kernel here is hard and imperfect. And there are at least 5 OSes in various versions to think through once we do latch on to any kernel-assisted technique.
The web of today is still, I think, cooperatively threaded. Even in a multi-process browser, we still rely on each tab behaving well for the responsiveness of the other apps.
While we're waiting for such a robust solution to come online on all 5 platforms, how do we handle setImmediate? Do we hope that it never misbehaves? Do we info-bar if it does? At least some contingency planning here is definitely worthwhile.
The most compelling argument I have heard for setImmediate is in large codebase situations where a module of non-UI code wants to yield but doesn't necessarily know how it was reached or know about any top-level, application-defined run loop that it can integrate with. (I haven't decided if this tips me over the edge.)
That's true, but both setTimeout and requestAnimationFrame have mitigations that help web sites avoid burning battery. If we implemented setImmediate and it became popular, we'd likely need to add a similar mitigation for battery drain, which means we won't have gained anything over setTimeout(..., 0).
Given the approach of "if that becomes a problem, we'll mitigate it", isn't that an argument for implementing setImmediate() given that its characteristics are similar to postMessage() in this regard?
And perhaps I'm over-simplifying, but wouldn't you rather have setImmediate() as a signal of what the developer intends so that you can appropriately optimize the behavior and mitigate performance risk as opposed to people continuing to use postMessage()-based polyfills that blur the meaning of postMessage()? It would suck to throttle postMessage() because that's how it's being used.
The way timer clamping works [1] is every task has an associated timer nesting level. If the task originates from a setTimeout() or setInterval() call, the nesting level is one greater than the nesting level of the task that invoked setTimeout() or the task of the most recent iteration of that setInterval(), otherwise it's zero. The 4ms clamp only applies once the nesting level is 4 or higher. Timers set within the context of an event handler, animation callback, or a timer that isn't deeply nested are not subject to the clamping. ... The practical effect of this is that setTimeout(..., x) means exactly what you think it would even for x in [0, 4) so long as the nesting level isn't too high.
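A quick way to observe this behaviour in a console (illustrative; exact thresholds can differ between browsers and spec versions):

function measureNestedTimeouts(depth) {
  var last = performance.now();
  (function step(level) {
    if (level > depth) return;
    setTimeout(function() {
      var now = performance.now();
      console.log('level', level, 'delay', (now - last).toFixed(2), 'ms');
      last = now;
      step(level + 1);
    }, 0);
  })(1);
}
// Delays stay near 0-1ms for the first few levels, then jump to ~4ms.
measureNestedTimeouts(10);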
Many of the concerns you list are well-known. To me, the main new discovery, and the critical one, is that fixing crbug.com/402694 to make setTimeout(..., 0) actually be a 0-delay when it can be is going to be hard due to compatibility, because it changes the ordering of tasks that are posted. Because of these compatibility concerns, this makes me wonder if we should implement setImmediate but with throttling. Thoughts?
On Tue, Jan 6, 2015 at 2:38 AM, Alex Clarke <alexc...@google.com> wrote:
There is a 1ms clamp in the DOMTimer constructor, and I've investigated removing that. At best I have seen 0.1ms callbacks, but some caveats apply:
1. The first time the callback fires is going to be slow (likely 2+ms) since V8 needs to compile the function.
2. If the nesting level reaches 5, then the timer gets bumped up to 4ms.
3. You're at the mercy of whatever else is in the Chromium run loop.
4. My timings were on a very high-end Linux PC.
5. It did not help this benchmark: http://jsperf.com/setimmediate-test/17 (I don't know why).
Unfortunately I think trying to make setTimeout 0 really be 0 will cause a lot of problems with legacy code. It would be safer (and likely more performant) to implement setImmediate, but I understand that is somewhat controversial.
On Mon, Jan 5, 2015 at 6:41 PM, Nat Duca <nd...@chromium.org> wrote:
+alex clarke, who's been poking at this.
On Sun, Jan 4, 2015 at 3:05 PM, <a...@zenexity.com> wrote:
That seems about right; I've never seen a timer take less than 2ms, even in optimal conditions.
On Thursday, January 9, 2014 2:19:58 PM UTC+1, jer...@duckware.com wrote:
James, thanks for that great explanation of how timers 'should' work in Chrome. But Chrome has not worked the way you describe for over five years! Chrome since Dec 2008 has used a global nesting variable that is not reset properly, and is not 'worker' safe. The end result is that a timer in Chrome will 'randomly' get clamped or not clamped. Also, there is no setTimeout(..., 0), as Chrome will promote that to setTimeout(..., 1) -- and then, due to even more internal bugs, it effectively becomes setTimeout(..., 2).
I think there's something to be said for fixing setTimeout (or implementing setImmediate) for use in event handlers, but for use cases like maps I wonder if we'd get better results by trying something else. The Blink Scheduler has the concept of idle time, which occurs after the compositor commit and ends right before input is delivered for the next frame. When an idle task is executed, it's given a deadline. If idle tasks were exposed in JS, it should be possible to implement a JS task scheduler which hands execution back to the browser when it needs it.
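A sketch of what such a scheduler could look like, assuming an idle-callback API that passes a deadline (along the lines of what later shipped as requestIdleCallback; the names here are illustrative):

var idleTasks = [];

function postIdleTask(task) {
  idleTasks.push(task);
  if (idleTasks.length === 1) {
    requestIdleCallback(runIdleTasks);
  }
}

function runIdleTasks(deadline) {
  // Hand execution back to the browser as soon as the deadline is reached.
  while (idleTasks.length > 0 && deadline.timeRemaining() > 0) {
    idleTasks.shift()();
  }
  if (idleTasks.length > 0) {
    requestIdleCallback(runIdleTasks);
  }
}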
Again, we want to have work in <4ms increments so that we remain responsive. If we do work in 4ms chunks, we risk blowing frames, right?
On Tue, Jan 27, 2015 at 2:35 PM, Michael Davidson <m...@google.com> wrote:
Again, we want to have work in <4ms increments so that we remain responsive. If we do work in 4ms chunks, we risk blowing frames, right?
You have 16ms for a frame; if the browser isn't giving you 4ms (25%) to run script per frame, then the browser is too slow and should be fixed.
I would support a proposal for a setIdle(fn)-like API.