From time to time I return to my computer to find that a tab has crashed, and my strong suspicion is that the process has run out of memory and decided to crash. I suspect that author code in the page or an extension is leaking, and that it's happening in a timer (setTimeout, etc.). (I *don't* know how prevalent this crash is, though. Maybe it's just me.)

So here's the crazy idea: we could ameliorate these crashes by throttling a page's timers as the page's memory consumption increases.
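(A minimal sketch of that behaviour, assuming Chrome's non-standard performance.memory API; the 500 MB threshold and 10x cap are invented for illustration, and in practice the browser would do this internally rather than via page script:)

// Sketch only: stretch setTimeout delays as the JS heap grows.
(function() {
  var nativeSetTimeout = window.setTimeout;
  var THRESHOLD = 500 * 1024 * 1024; // bytes; arbitrary cut-off

  function memoryPressure() {
    if (!performance.memory) return 0; // API unavailable: no throttling
    var used = performance.memory.usedJSHeapSize;
    return Math.max(0, (used - THRESHOLD) / THRESHOLD);
  }

  window.setTimeout = function(callback, delay) {
    // Scale the requested delay by up to 10x as pressure rises.
    var factor = 1 + Math.min(9, 9 * memoryPressure());
    var args = Array.prototype.slice.call(arguments, 2);
    return nativeSetTimeout.apply(
        window, [callback, (delay || 0) * factor].concat(args));
  };
})();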
Aside: I suspect it's setTimeout that is causing web page and extension authors to leak this memory. Take the "a clock created with timing events" example from the setTimeout reference of that venerable resource, W3Schools. It allocates this useless closure:

function startTime() {
  // ... allocate a tiny bit of "stuff" ...
  setTimeout(function() { startTime() }, 500);
}

Does V8 do anything special here to avoid that creating a chain of closures, all leaking "stuff"? (Can it?)
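(For comparison, the allocation-free shape: passing the named function itself means no per-tick wrapper closure is created. Whether the wrapper version actually pins "stuff" is exactly the question above, and allocateATinyBitOfStuff is a made-up stand-in:)

function startTime() {
  var stuff = allocateATinyBitOfStuff(); // stands in for the "stuff" above
  // Named reference: setTimeout gets startTime itself, so no new
  // function object is allocated on each tick.
  setTimeout(startTime, 500);
}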
My late Thursday night crazy counter-idea :) Pages that leak should crash faster, not slower.
We're not doing anyone any favours by keeping them on life support. The only amelioration should be to give the user a chance to save any data - but I suspect they would be better served by having the page designed to be resilient to unexpected death in the first place (e.g. persist the user's novel-in-progress to the server or to local storage, as in the sketch below).
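(A bare-bones sketch of that, using plain localStorage; the 'novel' textarea id and 'novel-draft' key are made up:)

// Autosave the draft on every keystroke and restore it on load, so an
// OOM kill loses at most a few characters.
var draft = document.getElementById('novel');
draft.value = localStorage.getItem('novel-draft') || '';
draft.addEventListener('input', function() {
  localStorage.setItem('novel-draft', draft.value);
});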
OOM has been a big issue for the memory team, but the hard part is that we cannot reproduce the reported OOMs in most cases. If we can get a list of URLs that cause OOM reliably, that would be very helpful (especially when we ship Oilpan).

Rather than suppressing OOM somehow, I'm interested in getting that list of URLs and fixing the OOMs.
> OOM has been a big issue in the memory team, but the hard part is that we cannot reproduce the reported OOM in most cases. If we can get a list of URLs that cause OOM reliably, that is very helpful (especially when we ship Oilpan).
>
> Rather than suppressing OOM somehow, I'm interested in getting the list of URLs and fixing the OOM.
+1 to collecting this data. Do we have any such data at present in crash reporting?
+1. Plus, perhaps prior to killing the page, we should dispatch an event to the page to let it know that memory is getting tight - something like the sketch below. A web developer could respond to that by clearing their own caches, etc.
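(Entirely hypothetical sketch - no 'memorypressure' event ships today, and the cache objects are made up; this is just the shape such a hook could take:)

window.addEventListener('memorypressure', function(e) {
  myImageCache.clear();     // hypothetical app-level image cache
  myThumbnailPool.evict();  // hypothetical
  // e.level could distinguish, say, 'moderate' from 'critical'.
});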
I am concerned about the actionability of this UseCounter (not about its usefulness as a thought experiment; it's definitely worthwhile). If you find out that X% of web pages reach the limit, what would the next step(s) be? Without some extra information (e.g. a URL) to point back to and investigate / correlate, there is little that can be done, and we may as well instrument with Telemetry.
We already have 'Memory.Renderer', which is 'The private working set used by each renderer process. Each renderer process provides one sample. Recorded once per UMA ping.' Units are KB. Does this give pretty much the same information already?

Just for fun, here is mine from Mac 43.0.2342.2 (Official Build) dev (64-bit). I'm a little suspicious that the top one is exactly 500000 KB, though.

Histogram: Memory.Renderer recorded 2017 samples, average = 120310.7 (flags = 0x1)
0     O (1 = 0.0%)
1000 ...
25447 -O (3 = 0.1%) {0.0%}
28965 -------------------------------O (92 = 4.6%) {0.2%}
32969 ----------------------------------------------------------------------O (209 = 10.4%) {4.8%}
37526 ------------------------------------O (108 = 5.4%) {15.1%}
42713 -----------------------------------O (104 = 5.2%) {20.5%}
48617 --------------------------------------------------O (148 = 7.3%) {25.6%}
55338 ----------------------------------------------O (137 = 6.8%) {33.0%}
62988 ------------------------------------O (108 = 5.4%) {39.8%}
71695 ----------------------O (64 = 3.2%) {45.1%}
81606 ----------------O (49 = 2.4%) {48.3%}
92887 -------------------O (57 = 2.8%) {50.7%}
105727 ----------------------------------O (100 = 5.0%) {53.5%}
120342 ------------------------------------------------------------------------O (214 = 10.6%) {58.5%}
136978 ------------------------------------------O (126 = 6.2%) {69.1%}
155913 -----------------------------------------O (122 = 6.0%) {75.4%}
177466 --------------------------O (78 = 3.9%) {81.4%}
201998 ----------------------O (65 = 3.2%) {85.3%}
229921 -----------------------O (68 = 3.4%) {88.5%}
261704 ----------------O (48 = 2.4%) {91.9%}
297881 ----------O (30 = 1.5%) {94.2%}
339058 ------O (19 = 0.9%) {95.7%}
385928 -------O (22 = 1.1%) {96.7%}
439277 --------O (23 = 1.1%) {97.8%}
500000 -------O (22 = 1.1%) {98.9%}