Trove: A proposal for sharing memory caches across renderers

Tony Gentilcore

Sep 23, 2013, 7:00:28 PM
to blink-dev
A few of us on the Speed Team have recently been investigating why web
pages can load several times slower in cold renderers than in warm
ones. A good deal of tracing suggests that nearly all of the
additional time is spent filling caches (HTTP, V8 compilation, Skia
font, etc.).

Significant performance, memory and architectural benefits could be
had if we were able to come up with a system for sharing memory caches
among renderers.

Here's a proposal which largely fell out of a sketch by jschuh@:
https://docs.google.com/a/chromium.org/document/d/1VjZfAaMw9vdTLUNcGbXhjDc_feiQ4HQq8JE0r7zxhzA/edit

What do folks think? Is this worth pursuing?

-Tony

Peter Kasting

Sep 23, 2013, 7:07:01 PM
to Tony Gentilcore, blink-dev
This definitely seems worth a closer look.

Originally, one of the reasons to assume separate caches made sense was that it allowed each tab's cache to serve just that tab, and thus hopefully have a much higher proportion of useful cached stuff, higher hit rates, etc.  A key piece of weighing your architecture against the current one will be figuring out how much real usage follows patterns like your examples.  In theory, we might even find that particular usage patterns benefit from shared caches and others benefit from separate ones, and use this to tune our heuristics or to create a multi-layer (shared cache behind separate caches) system.

PK

Marcus Bulach

Sep 24, 2013, 9:21:59 AM
to Peter Kasting, Tony Gentilcore, blink-dev, p...@chromium.org, Philippe Liard
+ppi, pliard

A few considerations from an android point of view:

1) Extending Peter's comment: on Android we use the multi-process architecture extensively as a key part of overall memory management.
That is, all tabs are on the system OOM-killer hit list, background tabs get lower priority, and under memory pressure the Android framework will kill a renderer and free all the memory associated with it. If we move such caches outside the renderers, we need a good eviction mechanism in place to ensure "dead" cached items are freed as necessary.
Perhaps a good metric to start with would be how much data is actually duplicated right now across multiple renderers versus how much is specific to a particular renderer. The claim is "it'll be faster _and_ use less memory"; it'd be really great if we could somehow quantify that :)

2) Having it backed by purgeable memory would be a really great benefit for Android! It would allow the framework to manage some of this memory for us. However, we would have to be careful if we are to have different data types with different costs: AFAICT, purgeable memory doesn't offer many guarantees or priority levels, so every data type would be equally purgeable.

Thanks,
Marcus

Philippe Liard

Sep 25, 2013, 7:20:30 AM
to Marcus Bulach, Peter Kasting, Tony Gentilcore, blink-dev, p...@chromium.org
Thanks Marcus, and +1! FWIW, I'm preparing a (much less ambitious) document, which I will send to blink-dev@ soon, proposing to store resources in discardable memory so that at least the dead ones can be evicted by the kernel under memory pressure (by "unpinning" them). There is obviously overlap with Trove, but I still think this would be a desirable short-term goal, on Android at least, while Trove is much more ambitious and long-term.

Vyacheslav Egorov

Sep 26, 2013, 7:22:45 AM
to Philippe Liard, Marcus Bulach, Peter Kasting, Tony Gentilcore, blink-dev, p...@chromium.org, Daniel Clifford
+danno

To me, sharing the V8 compilation cache among renderers seems like a non-trivial exercise, given how interconnected it is with the rest of the V8 heap.

Vyacheslav Egorov