I put together a design document on the work I'm going to be doing on
multiprocess image handling for Electrolysis. It's on the wiki at https://wiki.mozilla.org/Multiprocess_Images
I'd appreciate any input or feedback on the matter.
Cheers,
-bholley
Some thoughts:
1) What are the benefits of sharing the image cache between all content
processes? Some of the drawbacks are obvious: stopping animations in
one tab stops animations of the same images in other tabs, for example.
How common is it that different tabs/windows really share the same
images? Would things be better in that regard if we used
process-per-domain or some such? Or otherwise grouped multiple tabs
into a single process?
2) The current plan assumes that content/layout consumers will just
keep calling loadImage. But they need to make a content policy call
before doing that; will content policies run in the renderer processes
or in chrome? If the latter, then presumably loadImage shouldn't
run in the renderer process either.
3) Similar for security checks (which also need to happen before loadImage).
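The ordering constraint in points 2 and 3 could be sketched roughly like this (a minimal sketch; `MaybeLoadImage`, `Check`, and the callback names are invented for illustration, not the real Gecko APIs):

```cpp
#include <cassert>
#include <functional>
#include <string>

// Invented stand-in for a content-policy or security check on a URL.
using Check = std::function<bool(const std::string& /* url */)>;

// Sketch of the required ordering: both checks must pass before the
// load starts. If these checks have to run in the chrome process, the
// renderer cannot simply call loadImage directly.
bool MaybeLoadImage(const std::string& url,
                    const Check& contentPolicyAllows,
                    const Check& securityCheckPasses,
                    const std::function<void(const std::string&)>& loadImage) {
    if (!contentPolicyAllows(url)) return false;  // content policy veto
    if (!securityCheckPasses(url)) return false;  // security veto
    loadImage(url);                               // only now start the load
    return true;
}
```

The open question is which process hosts the two check callbacks; if it's chrome, the whole sequence has to be driven from there.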
-Boris
It's not clear to me that a shared image cache is necessary or desirable.
From the document, the main reason for a shared cache is to avoid
"downloading and decoding the same image multiple times unnecessarily
(TODO-elaborate)."
I would hope, though, that the network cache, which is planned to be shared,
would take care of making sure we don't download images twice. Is image
decoding really an expensive-enough operation that we want to add lots of
extra machinery?
I figured the major reason to share the image cache would be to avoid the
memory overhead of having the same decoded image data in multiple processes.
But I think we'd need to collect data on whether users actually end up
with multiple unrelated tabs sharing many images before trying to
implement a complex shared solution.
--BDS
> I would hope, though, that the network cache, which is planned to be
> shared, would take care of making sure we don't download images twice.
> Is image decoding really an expensive-enough operation that we want to
> add lots of extra machinery?
We already have that extra machinery in the form of the image cache.
Decoding on its own isn't that expensive, but doing it over and over
for the same images sucks, and breaks some web consumers (see bug
466586).
I believe that with multi-process we're unlikely to be using the same image
in multiple tabs, and we could just use a per-process image cache. Are you
saying there are web-compat issues where if we had a per-process image cache
backed by the necko HTTP cache we'd somehow cause problems for websites?
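A per-process image cache backed by a shared HTTP cache could be sketched roughly like this (all names are invented for illustration, not the real imagelib interfaces; the `fetch` callback stands in for necko's shared cache):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <memory>
#include <string>

struct DecodedImage {
    std::string url;
    std::string pixels;  // stand-in for decoded pixel data
};

// Sketch: each content process keeps its own cache of decoded images,
// so the decode cost is paid at most once per process, while the actual
// bytes come through a shared fetch path (the necko HTTP cache).
class PerProcessImageCache {
public:
    using Fetcher = std::function<std::string(const std::string&)>;

    explicit PerProcessImageCache(Fetcher fetch) : mFetch(std::move(fetch)) {}

    std::shared_ptr<DecodedImage> LoadImage(const std::string& url) {
        auto it = mCache.find(url);
        if (it != mCache.end()) {
            return it->second;  // decode already paid for in this process
        }
        std::string bytes = mFetch(url);  // may hit the shared HTTP cache
        auto img = std::make_shared<DecodedImage>(
            DecodedImage{url, Decode(bytes)});
        mCache[url] = img;
        return img;
    }

    size_t DecodeCount() const { return mDecodes; }

private:
    std::string Decode(const std::string& bytes) {
        ++mDecodes;  // count decode work for illustration
        return "decoded:" + bytes;
    }

    Fetcher mFetch;
    std::map<std::string, std::shared_ptr<DecodedImage>> mCache;
    size_t mDecodes = 0;
};
```

Under this sketch the duplicate cost across processes is only the decoded pixels, never the network transfer, which is why the memory question below is the interesting one.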
--BDS
> I believe that with multi-process we're unlikely to be using the same
> image in multiple tabs, and we could just use a per-process image cache.
> Are you saying there are web-compat issues where if we had a per-process
> image cache backed by the necko HTTP cache we'd somehow cause problems
> for websites?
No, definitely not. I was perhaps reading too much into what you'd
written, which sounded like it was advocating getting rid of the image
cache altogether.
> No, definitely not. I was perhaps reading too much into what you'd
> written, which sounded like it was advocating getting rid of the image
> cache altogether.
Oh, not at all! Just not doing new work to share the image cache across
multiple processes.
--BDS
I'm also wondering if there might be concerns about the same image
being out of sync between tabs, but maybe there's no such guarantee
anyway, given that cache behavior isn't very predictable from a web
dev perspective.
Lucas.
> _______________________________________________
> dev-tech-dom mailing list
> dev-te...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-tech-dom
I suspect that same-domain loads will normally be handled by the same
process, even if they happen to occur in multiple tabs. But again, without
measurement this seems like a premature optimization.
We shouldn't incur any network cost in the common cases, assuming the HTTP
cache behaves reasonably. Duplicate memory usage is more concerning...
perhaps we can arrange for global control of all the caches in some manner
if that shows up as a significant issue.
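One way to picture "global control of all the caches" is a chrome-side coordinator that tracks each content process's cache footprint and asks the biggest one to shed entries when a global budget is exceeded. This is purely a sketch under invented names; in a real design the calls below would go over IPC rather than direct pointers:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Stand-in for one content process's image cache footprint.
class EvictableCache {
public:
    void Insert(const std::string& url, size_t bytes) {
        mEntries[url] = bytes;
        mBytes += bytes;
    }
    size_t SizeBytes() const { return mBytes; }

    // Evict entries until the cache is at or below `target` bytes.
    void ShrinkTo(size_t target) {
        for (auto it = mEntries.begin();
             it != mEntries.end() && mBytes > target;) {
            mBytes -= it->second;
            it = mEntries.erase(it);
        }
    }

private:
    std::map<std::string, size_t> mEntries;
    size_t mBytes = 0;
};

// Chrome-side coordinator enforcing a global budget across per-process
// caches (invented sketch; real coordination would be over IPC).
class ChromeCacheCoordinator {
public:
    explicit ChromeCacheCoordinator(size_t budget) : mBudget(budget) {}
    void Register(EvictableCache* cache) { mCaches.push_back(cache); }

    void EnforceBudget() {
        if (mCaches.empty()) return;
        size_t total = 0;
        for (auto* c : mCaches) total += c->SizeBytes();
        while (total > mBudget) {
            EvictableCache* biggest = mCaches.front();
            for (auto* c : mCaches)
                if (c->SizeBytes() > biggest->SizeBytes()) biggest = c;
            size_t before = biggest->SizeBytes();
            biggest->ShrinkTo(before / 2);
            total -= before - biggest->SizeBytes();
            if (biggest->SizeBytes() == before) break;  // nothing evictable
        }
    }

private:
    size_t mBudget;
    std::vector<EvictableCache*> mCaches;
};
```

The point of the sketch is just that the caches stay per-process; only the eviction pressure is coordinated globally, and only if measurement shows duplicate memory is actually a problem.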
--BDS