https://wiki.mozilla.org/Gecko:Layers
The details are all listed there, but the main goals are to accelerate
graphics using OpenGL or D3D, or even Core Animation itself (the layers
API hides those APIs), and to enable off-main-thread rendering, in
particular animation and video playback that stay smooth even if the main
thread blocks. We think we can build this incrementally, without
penalizing non-accelerated systems and without adding extra "layers vs
non-layers" code paths.
If that sounds interesting, take a look and send feedback.
Rob
This sounds like it will be very helpful, perhaps even essential, for Mac
OOP plugin rendering. The current plan is to use a native Core Animation
cross-process graphics context API to render plugins (and perhaps tabs too,
later?). For transparent plugins and plugin z-order to work
correctly, Gecko graphics would have to use separate Core Animation contexts
for content that draws "under" and "over" the plugin, right?
This sounds really good to me in general. I'd like to hear more about the
multi-process plans and how you think this might fit together with content
process sandboxing: I'm assuming that we can't give the content process any
direct access to the OS 3D APIs.
--BDS
Basically what happens is that we have a new layer type for
out-of-process plugins that abstracts over whatever the platform is
doing. The core functionality of layout's integration with layers is to
automatically separate content "under" and "over" a layer into their own
layers.
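That under/over split can be sketched roughly as follows. This is a toy model, not Gecko's actual API: `DisplayItem`, `ItemKind`, and `BuildLayers` are illustrative names I'm inventing here; the real integration works on layout's display list, but the z-order partitioning idea is the same.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical sketch: split a z-ordered display list around one
// out-of-process plugin item into "under", "plugin", and "over" layers.
enum class ItemKind { Content, Plugin };

struct DisplayItem {
    ItemKind kind;
    std::string name;
};

struct Layer {
    std::string role;                // "under", "plugin", or "over"
    std::vector<std::string> items;  // content painted into this layer
};

// Walk the display list in z-order: everything before the plugin item
// goes into the "under" layer, everything after it into the "over" layer.
std::vector<Layer> BuildLayers(const std::vector<DisplayItem>& displayList) {
    Layer under{"under", {}}, plugin{"plugin", {}}, over{"over", {}};
    bool seenPlugin = false;
    for (const auto& item : displayList) {
        if (item.kind == ItemKind::Plugin) {
            plugin.items.push_back(item.name);
            seenPlugin = true;
        } else if (!seenPlugin) {
            under.items.push_back(item.name);
        } else {
            over.items.push_back(item.name);
        }
    }
    return {under, plugin, over};
}
```

With a separate layer on each side of the plugin, a transparent plugin can be composited over the "under" layer, and "over" content composited on top of it, preserving z-order.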
> This sounds really good to me in general. I'd like to hear more about the
> multi-process plans and how you think this might fit together with content
> process sandboxing: I'm assuming that we can't give the content process any
> direct access to the OS 3D APIs.
Right. So we add another layer implementation that supports remoting.
LayerManager::endTransaction conceptually publishes the content process'
layer subtree up to the master. The master's layer tree contains layers
of a new type, say "RemoteLayer", specifying a point where the content
process subtree should be grafted in.
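The grafting step might look something like this sketch. "RemoteLayer" is from the proposal above; the `Graft` function, the string ids, and the tree representation are illustrative assumptions, not a real interface.

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>
#include <vector>

// A RemoteLayer placeholder in the master tree names the content process
// whose published subtree should be attached at that point.
struct Layer {
    std::string name;
    std::string remoteId;  // non-empty => this is a RemoteLayer placeholder
    std::vector<std::shared_ptr<Layer>> children;
};

// Subtrees published by content processes (conceptually, at the end of
// each LayerManager::endTransaction), keyed by content-process id.
using PublishedTrees = std::map<std::string, std::shared_ptr<Layer>>;

// Walk the master tree; under each RemoteLayer, attach the matching
// published subtree.
void Graft(const std::shared_ptr<Layer>& node, const PublishedTrees& published) {
    if (!node->remoteId.empty()) {
        auto it = published.find(node->remoteId);
        if (it != published.end()) {
            node->children = {it->second};
        }
        return;  // don't descend into grafted content
    }
    for (auto& child : node->children) {
        Graft(child, published);
    }
}
```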
In a non-accelerated implementation we don't actually publish the child
process layer tree, we just manually composite its tree together on the
CPU and send the result to the master (in shared memory, of course).
(Note that this can still happen off the main thread.) In an accelerated
implementation we actually do publish the child process layer tree;
RenderedLayers push the updates to their surfaces through shared memory.
The master copies those updates to the real accelerated surfaces backing
those RenderedLayers and composites everything together. Voilà: secure
hardware acceleration. Not only that, but we can keep animation and
video playback in sync across processes even if their main threads hang!
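The "push updates through shared memory" step in the accelerated path amounts to copying a dirty region from a shared buffer into the surface the master owns. A minimal sketch, with `Surface`, `Rect`, and `ApplyUpdate` as assumed names (real shared memory, strides, and GPU upload are elided):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy pixel surface; in reality the shared side lives in a shared-memory
// segment and the master side backs an accelerated (GPU) surface.
struct Surface {
    int width, height;
    std::vector<uint32_t> pixels;  // ARGB, row-major
    Surface(int w, int h) : width(w), height(h), pixels(w * h, 0) {}
};

struct Rect { int x, y, w, h; };

// Copy only the dirty rect from the child's shared buffer into the
// master's surface. Assumes both surfaces have identical dimensions.
void ApplyUpdate(Surface& master, const Surface& shared, const Rect& dirty) {
    for (int row = 0; row < dirty.h; ++row) {
        for (int col = 0; col < dirty.w; ++col) {
            int idx = (dirty.y + row) * master.width + (dirty.x + col);
            master.pixels[idx] = shared.pixels[idx];
        }
    }
}
```

Because the master never executes content-supplied GPU commands, only copies pixels and composites its own layer tree, the content process needs no direct access to the OS 3D APIs.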
Rob
On second thoughts, it might be better to not have "RemoteLayer", and
just have something higher-level than layers responsible for
deserializing content process layer trees, validating them, and creating
an explicit copy of that tree in the master's layer tree.
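That alternative, validating a deserialized tree and making an explicit copy rather than grafting it directly, might be sketched like this. The validation rules shown (a depth limit, no nested remote placeholders) are my own illustrative examples, not a specification:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Toy deserialized layer tree received from a content process.
struct Layer {
    std::string kind;  // e.g. "rendered", "container", "remote"
    std::vector<std::shared_ptr<Layer>> children;
};

// Reject trees that are suspiciously deep or that try to embed further
// remote placeholders (content processes shouldn't nest remotes).
bool Validate(const std::shared_ptr<Layer>& node, int depth, int maxDepth) {
    if (depth > maxDepth) return false;
    if (node->kind == "remote") return false;
    for (const auto& child : node->children) {
        if (!Validate(child, depth + 1, maxDepth)) return false;
    }
    return true;
}

// Build the master's own copy of the validated tree, so the master never
// holds references into content-process-controlled structures.
std::shared_ptr<Layer> DeepCopy(const std::shared_ptr<Layer>& node) {
    auto copy = std::make_shared<Layer>();
    copy->kind = node->kind;
    for (const auto& child : node->children) {
        copy->children.push_back(DeepCopy(child));
    }
    return copy;
}
```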
Rob
Off-main-thread compositing requires retained buffers. Other than that,
these axes seem orthogonal. In particular if you have no usable GPU,
off-main-thread compositing in software could make sense.
The initial implementation matches what we do today --- non-accelerated,
immediate mode, main-thread. After that we can expand in various
possible directions.
Rob