Primary eng (and PM) emails
Engineering: Alex Chiculita [ach...@adobe.com], Dirk Schulze [dsch...@chromium.org], Max Vujovic [mvuj...@adobe.com], Michelangelo De Simone [michel...@adobe.com]
Product Mgmt: Divya Manian [man...@adobe.com]
Spec
CSS Filter Effects Spec: https://dvcs.w3.org/hg/FXTF/raw-file/tip/filters/index.html#custom-filter
Summary
CSS Custom Filters enable filter effects on DOM elements using custom authored WebGL (GLSL) shaders. Authors can pass in custom parameters from CSS to their shaders. Parameters are animatable using CSS Transitions and Animations.
Motivation
CSS Custom Filters enable rich WebGL-like effects, but in the context of the DOM. They are particularly useful for transitions and animations. Custom Filters can also enable engaging experiences when combined with touch interaction.
The CSS syntax makes it easy for web authors to reuse other authors’ effects without necessarily needing to learn GLSL. For example, an author could use a pre-written page curl filter like so:
#page { filter: custom(page-curl, direction 90, amount 0.5); }
HTML5Rocks Post and Presentation by Paul Lewis: http://updates.html5rocks.com/2013/03/Introduction-to-Custom-Filters-aka-CSS-Shaders
CSS FilterLab (a playground for CSS Filters): http://html.adobe.com/webplatform/graphics/customfilters/cssfilterlab/
Some other coverage:
http://www.webmonkey.com/2013/01/google-chrome-now-with-cinema-style-3d-effects/
http://blog.alexmaccaw.com/the-next-web
http://experiments.hertzen.com/css-shaders/index.html
http://alteredqualia.com/css-shaders/article/
http://venturebeat.com/2012/09/24/adobe-css-filterlab/
http://blattchat.com/2012/09/26/reveal-js-with-css-custom-filters/
Note: The specified CSS syntax for Custom Filters has recently changed to use an @filter rule. The new syntax is currently being implemented.
Security
To prevent timing attacks, direct access to the DOM element texture is disallowed. Instead, authors can blend and composite the fragment shader output with the DOM element texture. Note that direct access to same-origin textures is allowed in the fragment shader. The W3C wiki describes the security approach*.
*: http://www.w3.org/Graphics/fx/wiki/CSS_Shaders_Security
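To make the blend-based model concrete, here is a minimal sketch (in TypeScript, holding GLSL source as a string) of an author fragment shader under this restriction. It assumes the css_MixColor output variable described in the draft spec, which the implementation declares and blends with the element texture on the author's behalf; the parameter and varying names are illustrative.

// Sketch only: an author fragment shader under the blend/composite security model.
// The shader never samples the DOM element's pixels; it only produces a color
// (css_MixColor, per the draft spec) that the browser blends with the element.
const pageCurlFragmentShader: string = `
  precision mediump float;
  uniform float amount;    // author-defined parameter, passed in from CSS
  varying vec2 v_uv;       // author-defined varying from the vertex shader

  void main() {
    // Darken toward the curled edge using only author-supplied inputs --
    // no access to the element texture, so its pixels cannot leak.
    css_MixColor = vec4(vec3(1.0 - amount * v_uv.x), 1.0);
  }
`;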
Implementation Status
Blink has inherited the CSS Custom Filters implementation from WebKit, and we intend to continue implementing the feature. One important next step is accelerating Custom Filters in Skia and/or the Chromium Compositor. The current implementation uses a “software” path, relying on readbacks from the GPU.
Compatibility Risk
Apple has expressed public support* for CSS Custom Filters. An implementation in WebKit / Safari is also proceeding according to the W3C Spec. Apple is co-editing the spec.
Mozilla has neither raised objections to the feature nor publicly announced interest in it. They have contributed to spec and security discussions regarding CSS Custom Filters on the public-fx mailing list.
*: http://lists.w3.org/Archives/Public/www-style/2011Oct/0076.html
OWP launch tracking bug?
https://code.google.com/p/chromium/issues/detail?id=233383
Row on feature dashboard?
Yes (Search for “custom filters”)
Requesting simultaneous permission to ship?
No. Current implementation is behind a runtime flag.
On Tue, Apr 23, 2013 at 3:03 PM, <mvuj...@adobe.com> wrote:
> Implementation Status
> Blink has inherited the CSS Custom Filters implementation from WebKit, and we intend to continue implementing the feature. One important next step is accelerating Custom Filters in Skia and/or the Chromium Compositor. The current implementation uses a “software” path, relying on readbacks from the GPU.

I have serious concerns about the implementation complexity of supporting CSS Custom Filters within the compositor that I would like to have addressed before implementation proceeds any further. The current implementation basically ignores the compositor, which is never going to be a viable route to shipping, and I want to make sure we do have a viable path forward before committing to more complex changes.
Specifically I'm concerned that moving forward as is will calcify the architecture and make more ambitious efforts like out-of-process iframes, ubercompositor, and our many performance optimizations difficult.
The basic premise of CSS shaders is that author-provided vertex and fragment shaders run as part of the compositing pass of the page. I'm concerned about the implementation complexity and security aspects of this change.

First, security. You mentioned the security aspect of not allowing authors to access pixels in the content, but there are additional concerns with running untrusted shaders in the compositor's context. Today we isolate all shaders from untrusted sources (WebGL, NaCl, etc.) into their own contexts so we can isolate their effects and do things like forcibly lose the context if it runs for too long. If we run untrusted shaders in the compositor's context, we have no way to isolate the effects of the untrusted shaders from the other compositor commands. There are many ways a malicious (or just poorly coded) shader could easily DoS the GPU and cause the compositor context to become unusable, even if the shader passes any static validation we can think of.
Since we're moving to running the compositing pass from the browser process, it'll be difficult to isolate the shader execution into a separate context.
A related concern here is out-of-process iframes where the textures inside the iframe will not be available to GL contexts running anywhere except for the iframe's process or the browser. A different renderer simply won't have access to any GL resources containing the iframe's content.
My other concern is that this requires exposing intermediate compositor stages directly to the author in the form of inputs/outputs to the CSS shader. This precludes a number of optimizations and changes in rendering techniques that we have applied and wish to apply within the compositor.
The proposal requires that we produce the specific geometry required by the vertex shaders, feed these into a GLSL shader, then produce an RGBA texture to feed into a GLSL fragment shader for blending. These map reasonably well to things we currently do most of the time, but they don't map very well to software compositing, using a different GL version, using DirectX for compositing (as Mozilla does), using more exotic color formats in order to get per-channel alpha (as Mozilla is at least experimenting with), or doing more advanced geometry/culling optimizations. Remember, if this is something we want to accept as part of the web platform, it means we have to support it forever.
For the compositor concerns, I'd like to see at least a fleshed-out design proposal that can satisfy these concerns, approved by our compositor team, before moving forward.
I also think the compatibility section is overstated a bit. Apple has definitely expressed interest, but they won't be able to actually ship support for CSS shaders without changes to the CoreAnimation system framework, which will come as part of the next OS releases at the earliest. They might have support in OS X 10.9 or might not. Other than Apple, has anyone said that they are able to implement this feature?
- James
We’d like to look at the feature in two parts. One part is the Blink CSS syntax and resource loading work. The second part is the accelerated pipeline rendering work. These two are orthogonal to each other, and implementation of both can occur fairly independently.
> Specifically I'm concerned that moving forward as is will calcify the architecture and make more ambitious efforts like out-of-process iframes, ubercompositor, and our many performance optimizations difficult.

We understand your concerns. However, we’ve created a prototype of CSS Custom Filters rendering using the compositor, and we are confident that the feature is small enough to not impact existing or future optimizations. In our prototype, Custom Filters used a very similar approach to the existing CSS Filters like blur or drop-shadow.

We built the prototype in order to test accelerated Custom Filters on Android devices. We’ve posted details and links to the source in cr-bug 229069 [1].

> The basic premise of CSS shaders is that author-provided vertex and fragment shaders run as part of the compositing pass of the page. I'm concerned about the implementation complexity and security aspects of this change. First, security. You mentioned the security aspect of not allowing authors to access pixels in the content, but there are additional concerns with running untrusted shaders in the compositor's context. Today we isolate all shaders from untrusted sources (WebGL, NaCl, etc.) into their own contexts so we can isolate their effects and do things like forcibly lose the context if it runs for too long. If we run untrusted shaders in the compositor's context, we have no way to isolate the effects of the untrusted shaders from the other compositor commands. There are many ways a malicious (or just poorly coded) shader could easily DoS the GPU and cause the compositor context to become unusable, even if the shader passes any static validation we can think of.

We had security in mind from the beginning with our prototype implementation. The prototype runs Custom Filters in their own isolated GL context, separate from the compositor context. This makes the implementation similar to WebGL’s.

We will mitigate DoS attacks the same way as WebGL [2]. In addition, we can detect context crashes due to DoS attacks and prevent Custom Filters from running again on the offending page.
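As a reference point for "the same way as WebGL," here is a minimal sketch of how GL context loss is already surfaced to pages via the standard WebGL events; the Custom Filters implementation would detect the analogous loss internally rather than through DOM events.

// Sketch: how a page observes GPU context loss with WebGL today.
// A runaway shader killed by the GPU watchdog shows up as a lost context.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl');

canvas.addEventListener('webglcontextlost', (event) => {
  // Cancel the default so the browser will later attempt a restore.
  event.preventDefault();
  console.warn('GL context lost; stop submitting work for this context');
});

canvas.addEventListener('webglcontextrestored', () => {
  console.info('GL context restored; all GL resources must be recreated');
});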
> A related concern here is out-of-process iframes where the textures inside the iframe will not be available to GL contexts running anywhere except for the iframe's process or the browser. A different renderer simply won't have access to any GL resources containing the iframe's content.

As mentioned above, we use the same approach as built-in filters, so out-of-process iframes won’t raise any new issues related to Custom Filters. Built-in filters like blur, etc. already have access to the GL resources.
The iframe process will run in a sandbox and have no access to OpenGL directly [3]. Even though we have a separate process for iframes, the rendering of the iframe (either a set of ubercompositor frames or textures) will need to be passed to a parent compositor for on-screen rendering. Consequently, both the iframe process and the embedding page’s compositor (that might apply a filter on the iframe element) will end up running OpenGL commands through the same GPU process with access to the necessary resources. Thus, we don’t see an issue with out-of-process iframes. Are there other specific concerns that we didn’t consider?
On Wed, Apr 24, 2013 at 5:35 PM, Alexandru Chiculita <ach...@adobe.com> wrote:
> We’d like to look at the feature in two parts. One part is the Blink CSS syntax and resource loading work. The second part is the accelerated pipeline rendering work. These two are orthogonal to each other, and implementation of both can occur fairly independently.

There are two parts, but without a path forward for each there's no point in keeping either in Blink.

> We understand your concerns. However, we’ve created a prototype of CSS Custom Filters rendering using the compositor, and we are confident that the feature is small enough to not impact existing or future optimizations. In our prototype, Custom Filters used a very similar approach to the existing CSS Filters like blur or drop-shadow. We built the prototype in order to test accelerated Custom Filters on Android devices. We’ve posted details and links to the source in cr-bug 229069 [1].

I've taken a look at the patch. It's technically interesting, but it doesn't address any of my concerns. Specifically, it's not compatible with compositing out of process, using rendering techniques other than OpenGL, using alternate pixel formats, or any of the other innovations we have down the pipe.

> We had security in mind from the beginning with our prototype implementation. The prototype runs Custom Filters in their own isolated GL context, separate from the compositor context. This makes the implementation similar to WebGL’s. We will mitigate DoS attacks the same way as WebGL [2]. In addition, we can detect context crashes due to DoS attacks and prevent Custom Filters from running again on the offending page.

It's fundamentally different from WebGL, since it's part of the compositing path, not a separate rendering area that is later combined with the page.

> The iframe process will run in a sandbox and have no access to OpenGL directly [3]. Even though we have a separate process for iframes, the rendering of the iframe (either a set of ubercompositor frames or textures) will need to be passed to a parent compositor for on-screen rendering. Consequently, both the iframe process and the embedding page’s compositor (that might apply a filter on the iframe element) will end up running OpenGL commands through the same GPU process with access to the necessary resources. Thus, we don’t see an issue with out-of-process iframes. Are there other specific concerns that we didn’t consider?

Yes. With out-of-process iframes, the embedding page's compositor will not have access to any of the textures of the iframe, either directly or through the command buffer.
- James
Attendees
Blink: James Robinson, Adrienne Walker, Vangelis Kokkevis, Stephen White, Alex Danilo, Adam Barth, Max Heinritz
Adobe: achicu (Alex), mvujovic (Max)
Conclusions
We need to think long-term about the primitives we want to add to the platform.
In general, it’s better to start with simpler platform primitives (e.g. CSS transitions) and expand their capabilities over time than to take something powerful (e.g. shaders) and try to restrict its functionality.
Ultimately, we can’t allow arbitrary author-provided shaders to run in the compositing pass for security and architectural reasons. CSS Shaders is not LGTMed for Blink.
Instead, let’s explore in ongoing technical discussion:
Meshing within CSS.
Adding capabilities to WebGL to let it reference content and enable shader-like effects.
How use cases can be addressed with other existing platform primitives.
Other possible paths to enable this type of functionality in the browser.
Notes
We must be thinking very long-term about new features because it’s a big commitment
Two routes for web platform features to be added:
1) it doesn’t get adoption and dies and can be removed
2) it becomes part of the platform and all browsers must support it indefinitely, until it falls to 1)
High-level technical constraints of the compositor architecture
James drew this pic of the compositor architecture:
Shader problems
Security: Allows author-provided code to become part of compositor pass
Notice that in this picture most of the arrows point left.
The compositor contains information the page shouldn’t be able to access; there are security concerns, e.g. the color of links.
Running author code on intermediate data would give the page access to more information.
Putting restrictions on the primitive
The GPU doesn’t have a notion of a restricted shader.
It’s easier to start with limited capabilities and then expand them than to start from lots of complexity and restrict it afterwards.
By providing the capabilities in CSS, we let the compositor generate the shader.
Architecture: Restricts the compositing approaches we can take in the future if we decide to rearchitect.
Use cases for custom filters
The current set seems kind of limited, e.g. vertex shaders and hit testing -- you can only do it for non-interactive content.
You can apply them to interactive content while it’s not being interacted with. You want to use such an effect for a small amount of time.
Adobe showed a cool demo on a Nexus 7
Pinch to pull content apart, peel back, accordion
Mostly done with vertex shaders
This could be done with pure fragment shaders, but not with the security restrictions.
Adobe on security:
We don’t leak information back through the timing
Preventing DoS -- if WebGL is off on the side (see pic), could we isolate the part of the compositor that runs the shader? Not the same way as WebGL.
It’s not right to think that in the end the GPU will run it anyway. The implementation may change. We need to consider the primitives that the web platform exposes.
Are there other use cases besides crinkling and manipulating content?
Some lighting effects...
What are some other ways to accomplish the same effects using existing platform primitives?
What if we had a way to refer to subcontent for CSS modification? E.g. CSS Reflection, but for a subset of an element.
How to define this in CSS? You could use transforms and keyframes
How do you smoothly animate the transforms? We could use better animation primitives. Web Animations?
What are we missing for shaders? You can define gradients. What about a pageflip?
Fragment shaders could be needed for some use cases.
Consensus is yes, we could target CSS modifications to parts of elements. It works for small effects, but not for large effects.
Shader does not have access to textures.
We don’t want to force the compositor to use GL shaders. We don’t have to do sharing. We could apply the shader in a separate pass.
This is the opposite direction from where Google is going. UberComp creates a unified compositing pass. We want all web content to appear atomically.
Taking DOM content and processing it in WebGL? Not clear how we could do this.
Ask: How would you allow web authors to express the same effects if GLSL didn’t exist? It’s not clear this is the most natural approach for the web platform.
Adobe: Still need to define instructions that run for each pixel.
Google: Perhaps there are other ways to accomplish this.
Adobe: It comes down to how generic you want the capabilities to be.
Being more generic unlocked innovation, e.g. in game lighting.
Google: For webby elements, it seems there are fewer use cases.
Further thoughts on using WebGL to manipulate the page: Apply an effect on a static part of the page and make it available later on. Addresses atomicity. Then you just get a bitmap. Security concerns still exist. Restrict readbacks and restrict shaders.
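As a side note on the "better animation primitives" question above, here is a minimal sketch of smoothly animating a transform with the Web Animations API (element.animate), which at the time was still a proposal; the selector and keyframes are illustrative.

// Sketch: animating a transform with the Web Animations API, one of the
// existing-primitive directions floated in the notes above.
const page = document.querySelector('#page') as HTMLElement;

page.animate(
  [
    { transform: 'perspective(800px) rotateY(0deg)' },
    { transform: 'perspective(800px) rotateY(-160deg)' },
  ],
  { duration: 600, easing: 'ease-in-out', fill: 'forwards' },
);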
Thanks for the notes Max. I had a question on one of the items.
> Taking DOM content and processing it in WebGL? Not clear how we could do this.

One way I can imagine this working is changing tex{Sub}Image2D to accept any DOM element instead of just <video>, <img> or <canvas>. Is that what you had in mind or something different?
On Tue, May 14, 2013 at 4:18 AM, Sami Kyostila <skyo...@google.com> wrote:
> Thanks for the notes Max. I had a question on one of the items.
>
> > Taking DOM content and processing it in WebGL? Not clear how we could do this.
>
> One way I can imagine this working is changing tex{Sub}Image2D to accept any DOM element instead of just <video>, <img> or <canvas>. Is that what you had in mind or something different?

Roughly, yes. The main question is whether the texture updates as the appearance of the DOM element changes, or whether the texture is just a static readback of the element's current visual appearance. Having the texture be dynamic is probably more desirable from a developer point of view, but it might have the same implementation issues as CSS shaders. If the API just does a static readback, then there's a risk that developers would try to call the API every frame, which would be slow.
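A hypothetical sketch of the API shape being discussed; passing an arbitrary element to texImage2D is not part of any shipped WebGL API, so the final call is cast past the real typings.

// Hypothetical sketch only: texImage2D extended to accept any DOM element.
// Real WebGL accepts only ImageData, <img>, <canvas>, and <video> sources.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl') as WebGLRenderingContext;

const panel = document.querySelector('#panel') as HTMLElement; // arbitrary subtree

const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);

// Static-readback interpretation: the texture captures the element's current
// appearance at call time; animated content would have to be re-uploaded,
// which is the "call it every frame" risk mentioned above.
(gl as any).texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, panel);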
On Tue, May 14, 2013 at 8:59 AM, Adam Barth <aba...@chromium.org> wrote:
> Roughly, yes. The main question is whether the texture updates as the appearance of the DOM element changes, or whether the texture is just a static readback of the element's current visual appearance. Having the texture be dynamic is probably more desirable from a developer point of view, but it might have the same implementation issues as CSS shaders. If the API just does a static readback, then there's a risk that developers would try to call the API every frame, which would be slow.

Where do we stand wrt security concerns for this approach? I remember that the fear of timing attacks prevented us in the past from getting web content into WebGL. Would imposing same-domain restrictions help?
Note, this also interacts with the compositor (only the compositor is capable of generating a texture from the DOM).
Also, security-wise, the plan for OOP iframes is to prevent the hosting page from accessing the iframe contents, be it through CPU pixels or GPU textures. The compositor in the hosting page would be unable to generate the texture for a DOM that contains an OOP iframe.
At least for Chromium, I think it would be feasible to implement a static snapshot API that is quick enough to be called every frame; it's essentially a GPU-side composite-to-texture pass, which we already do for render surfaces (e.g., CSS opacity). The only complication is when the DOM actually changes significantly and we have to re-rasterize, which takes longer. In that case an asynchronous API would be a better fit.

I'd also prefer the snapshot over dynamic updating, since the latter would make some use cases more difficult -- for example, having two textures of the same element with two different styles.
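One hypothetical shape for the asynchronous variant described above; ElementSnapshotter and captureSnapshot are invented names for illustration, while the texture upload at the end uses the real WebGL API.

// Hypothetical sketch only: an async element-snapshot API that resolves once
// the compositor has a rasterized, composited copy of the element available.
interface ElementSnapshotter {
  captureSnapshot(element: Element): Promise<ImageBitmap>; // invented API
}

async function snapshotIntoTexture(
  snapshotter: ElementSnapshotter,
  gl: WebGLRenderingContext,
  element: Element,
): Promise<WebGLTexture | null> {
  // Asynchronous so that a significant DOM change can trigger re-rasterization
  // without blocking the caller.
  const bitmap = await snapshotter.captureSnapshot(element);

  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // ImageBitmap is a standard TexImageSource, so this part is real WebGL.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
  return texture;
}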
On Tue, May 14, 2013 at 9:48 AM, Vangelis Kokkevis <vang...@google.com> wrote:
> Where do we stand wrt security concerns for this approach? I remember that the fear of timing attacks prevented us in the past from getting web content into WebGL. Would imposing same-domain restrictions help?

Same-origin restrictions don't help in this situation because even same-origin elements contain secret information in their rasterization (e.g., whether links are blue/purple representing visited/unvisited).

Instead, what we'd need to do is have a flag when creating the WebGL context:
1) (default) Current security rules, with no ability to draw arbitrary elements onto the canvas.
2) The ability to draw arbitrary elements onto the canvas, but now the shaders are restricted in the way we currently restrict CSS shaders.

It's still an open question whether (2) is secure. There's been a bunch of research on the security properties of CSS shaders, and my understanding is that folks continue to find holes. That's not particularly confidence inspiring, but potentially not the end of the world.
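A hypothetical sketch of what option (2) might look like at context-creation time; the drawsArbitraryElements attribute is invented for illustration and is not part of WebGLContextAttributes.

// Hypothetical sketch only: opting a WebGL context into restricted-shader mode
// in exchange for permission to texture from arbitrary DOM elements.
const canvas = document.createElement('canvas');

// Option 1 (today's default): normal shaders, no arbitrary-element textures.
// const gl = canvas.getContext('webgl');

// Option 2 (proposed): arbitrary-element textures allowed, but every shader is
// validated under the same constraints currently applied to CSS shaders.
const gl = canvas.getContext('webgl', { drawsArbitraryElements: true } as any);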
On Tue, May 14, 2013 at 9:50 AM, Antoine Labour <pi...@chromium.org> wrote:
> Note, this also interacts with the compositor (only the compositor is capable of generating a texture from the DOM).

We already need to support readbacks from the compositor for other features. For example, WebKit::WebWidget::paint is an existing API that does the readback. Ideally, the compositor would render directly into the texture, but it is possible to implement this without changes to the compositor.

> Also, security-wise, the plan for OOP iframes is to prevent the hosting page from accessing the iframe contents, be it through CPU pixels or GPU textures. The compositor in the hosting page would be unable to generate the texture for a DOM that contains an OOP iframe.

I guess that depends on whether we're doing a read-back or a raster-to-texture... With the uber compositor, do we get into trouble if the window containing the element is undergoing some sort of window transition (e.g., being displayed at 50% opacity)? If we're compositing in the window manager effects at the same time, maybe there isn't a buffer to read back from? Hum...
We can always do a separate rasterization of the element when loading it into a texture (e.g., pass ForceSoftwareRenderingAndIgnoreGPUResidentContent to WebKit::WebWidget::paint), but that's going to be slow, and we're hoping to remove the ForceSoftwareRenderingAndIgnoreGPUResidentContent option once Android stops using it from the link disambiguation popup.
On Tue, May 14, 2013 at 10:57 AM, Antoine Labour <pi...@chromium.org> wrote:
> And you can't for OOP iframes.

I see. More generally, your point is that ForceSoftwareRenderingAndIgnoreGPUResidentContent spills the guts about which things are GPU resident, which is ok for the link disambiguation popup but not ok for a web-exposed API.
I'm not very familiar with OOP iframes, but isn't this analogous to how video textures work? The renderer does not have direct access to the media stream but it can ask the GPU process to grab a texture from it. In this case it'd be the browser compositor that satisfies that request instead.
The proposal put forward before was to have WebGL mark shaders as safe or not. Safe shaders would follow the CSS Custom FX spec (no texture access period, only a color multiplier/matrix).

The simplest implementation is: A WebGL context starts out as safe and clean. Safe = no non-safe shaders. Clean = no unsafe content. If a non-safe shader is compiled, the safe flag is cleared. If unsafe content is uploaded, the clean flag is cleared. If both flags are cleared, the context is lost. If the safe flag is cleared, no unsafe content may be uploaded (same as today, where no unsafe content may be uploaded). If the clean flag is cleared, readPixels, toDataURL, and using the canvas as a source to other APIs (drawImage, texImage2D, etc.) fail with a Security exception (this is/was already implemented before we disallowed all unsafe content).

That seems like it would work. What am I missing?
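To pin down those rules, here is a small sketch of the described state machine; the class and method names are illustrative and sit above the real WebGL API rather than being part of it.

// Sketch of the "safe/clean" scheme described above (illustrative names only).
// safe  = no non-safe shaders have been compiled in this context
// clean = no unsafe (cross-origin / privileged) content has been uploaded
class GuardedContextState {
  private safe = true;
  private clean = true;
  private lost = false;

  compileShader(isSafeShader: boolean): void {
    if (!isSafeShader) {
      this.safe = false;
      this.maybeLoseContext();
    }
  }

  uploadContent(isUnsafeContent: boolean): void {
    // With the safe flag cleared, unsafe content may not be uploaded at all.
    if (isUnsafeContent && !this.safe) {
      throw new Error('SecurityError: unsafe content in a non-safe context');
    }
    if (isUnsafeContent) {
      this.clean = false;
      this.maybeLoseContext();
    }
  }

  readbackAllowed(): boolean {
    // Gates readPixels, toDataURL, and using the canvas as a source elsewhere.
    return this.clean && !this.lost;
  }

  private maybeLoseContext(): void {
    if (!this.safe && !this.clean) {
      this.lost = true; // both flags cleared: the context is lost
    }
  }
}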
On Tue, May 14, 2013 at 6:14 PM, Gregg Tavares <gm...@chromium.org> wrote:
> That seems like it would work. What am I missing?

The attack model OOP iframes want to protect against is a compromised renderer, not just malicious JavaScript.

The scheme you suggest would need to be implemented in the GPU process (and in terms of GL/command-buffer calls, not in terms of the WebGL API). It would also mean that the entire renderer would have to be disallowed from doing glReadPixels, not just a single context -- otherwise a compromised renderer can just use share groups or shared textures to read back the same thing from another context. Because of side channel attacks, I think you'd need to restrict the shaders the compositor can use too, and that wouldn't be good. I suppose you could whitelist the compositor ones, but Skia dynamically generates its shaders for filters and stuff.

Maybe it can be solved, but it seems like a lot of complexity and work.
On Tue, May 14, 2013 at 6:28 PM, Antoine Labour <pi...@chromium.org> wrote:
> The attack model OOP iframes want to protect against is a compromised renderer, not just malicious JavaScript.

Sorry if I'm not up on OOP iframes. If we're protecting against compromised renderers that's already a problem today. A compromised renderer can read all memory in the renderer space (passwords, documents, email) and access all textures used by that process.
How about we just mark the entire page as not clean if there are any OOP iframes or is that too restricting?
Since the Custom Filters discussion has touched a few different topics, I think it would help to organize it around three distinct themes: implementability, security, and feature fit.
Implementability
Alex and I understand and agree that executing Custom Filters during a compositing pass is undesirable for Chromium’s architecture.
As a possible alternative, Chromium can execute Custom Filters outside of the compositing pass. During our meeting last Friday at Google, we just started exploring this approach, its challenges, and some potential solutions. Alex and I think this is the right direction, and we’ve worked out some more details. We will follow up to this thread with the details, and we’re looking forward to collaborating on the solution.
Security
There are two aspects to the security issue I would like to call out and clarify.
1) DoS Attacks
For some background, WebGL executes outside of the compositing pass. With most current graphics hardware, this does not solve the DoS attack issue. Current hardware supports only one thread of GL command execution. Since the GPU can only execute commands serially, a long-running WebGL shader can block the compositor’s commands from executing. Future graphics hardware may have multiple threads of GL command execution. With future hardware, running WebGL outside of the compositing pass can prevent DoS attacks. For example, WebGL can execute a long-running shader on one thread of GL command execution, while the compositor can execute its commands on a different, parallel thread of execution.
Regarding Custom Filters, we solve the DoS attack issue for future hardware in the same manner as WebGL, by executing Custom Filters outside of the compositing pass.
2) Timing Attacks
Timing attacks are prevented by design in Custom Filters. We disallow author access to the DOM element texture. Instead, the author outputs a color that the browser blends with the DOM element texture. Authors cannot leak information that they don’t have access to.
We are not aware of any ways to perform a timing attack with this security model, conceptually or in its current implementation. Please do point any concerns our way or file bugs on the existing implementations.
There is a W3C wiki page that provides additional details regarding the security discussion: http://www.w3.org/Graphics/fx/wiki/CSS_Shaders_Security
Feature Fit
If there are concerns regarding how the feature fits into the web platform, we should move that discussion to the W3C public-fx mailing list. This is a great place to discuss the definition of the feature and its use cases with other vendors who are implementing it.
A few prior discussions include:
Defining a CSS syntax for Custom Filters that can support alternative and future filter languages or formats (other than GLSL): http://lists.w3.org/Archives/Public/public-fx/2012OctDec/0029.html
Discussion around the security model: http://lists.w3.org/Archives/Public/public-fx/2012AprJun/0010.html
On Wed, May 15, 2013 at 2:12 AM, Gregg Tavares <gm...@chromium.org> wrote:
> Sorry if I'm not up on OOP iframes. If we're protecting against compromised renderers that's already a problem today. A compromised renderer can read all memory in the renderer space (passwords, documents, email) and access all textures used by that process.

That's correct today. Today we only worry about compromised render processes reading or writing the user's local file system. With out-of-process iframes, we're hoping to do better and protect against compromised renderers learning passwords, documents, emails, etc. from other origins.

> How about we just mark the entire page as not clean if there are any OOP iframes or is that too restricting?

There are two issues with that approach:
1) It's pretty restrictive. Many web sites have like buttons or other social widgets that are iframes from another origin.
2) We also need to protect against sensitive information from the same origin (e.g., the color of links reveals whether the user has visited certain URLs).

As I wrote in the other thread, I don't think the security issues are significant enough to prevent implementation of this feature behind a runtime flag. The more significant issues are the constraints custom shaders impose on the rest of the system.
We need to be careful not to commit to web-exposed APIs that constrain future development of the browser, especially if other browser vendors don't have those constraints. That's a recipe for falling behind other browsers.
On Wednesday, May 15, 2013 8:18:11 AM UTC-7, Adam Barth wrote:
> As I wrote in the other thread, I don't think the security issues are significant enough to prevent implementation of this feature behind a runtime flag. The more significant issues are the constraints custom shaders impose on the rest of the system.

Could you be specific about the constraints, so we can make sure they are addressed?
On Wed, May 15, 2013 at 8:18 AM, Adam Barth <aba...@chromium.org> wrote:
> There are two issues with that approach:
> 1) It's pretty restrictive. Many web sites have like buttons or other social widgets that are iframes from another origin.
> 2) We also need to protect against sensitive information from the same origin (e.g., the color of links reveals whether the user has visited certain URLs).

Comment #2 makes me feel like maybe some people haven't understood the proposals? The current proposals, both CSS Custom FX and the DOM Element to WebGL textures, do not allow revealing the color of links. JavaScript has no access to the pixel color, and the shader language proposed is constrained such that no timing attacks are possible, because even the shader does not have access to the colors.
I think the reason this is difficult is that the validation is applied at a program-rewriting layer on top of OpenGL, but all of our GPU infrastructure (and the driver's infrastructure) operates at a lower, more powerful level.
I think we would end up having to invent new primitives for a renderer and the GPU process to communicate with each other about what level of access is provided to a given resource.
This may be possible but would require a lot of careful thought to get a robust design.
Longer term, even without iframes it'd be really nice to restrict Renderer A's access to cross-origin data completely. For instance, if renderer A has an <img> pointed at a cross-origin resource, it needs access to the metadata for that image in order to do layout, but it really doesn't need access to the actual pixels for anything other than rendering, which the compositor could take care of. In the not-so-distant future one could imagine the image decoding and caching logic living in a different process (GPU or otherwise) and renderers only having opaque handles to them that are passed off to the compositor to rasterize out of process. We should be careful that we don't introduce additional barriers to longer-term security improvements like this.
- James
On Wed, May 15, 2013 at 2:27 PM, Gregg Tavares <gm...@chromium.org> wrote:
> Comment #2 makes me feel like maybe some people haven't understood the proposals? The current proposals, both CSS Custom FX and the DOM Element to WebGL textures, do not allow revealing the color of links. JavaScript has no access to the pixel color, and the shader language proposed is constrained such that no timing attacks are possible, because even the shader does not have access to the colors.

I would say that's the goal of the constraints on the shader language. It's not 100% clear to me that the constraints achieve that goal.
Hi all,
I think this discussion has been incredibly useful and has really helped define how Custom Filters can fit with Chromium's architectural plans. I want to thank everyone for participating! :)
To help organize the output of the discussion, I've created a table [1] of the concerns and the proposed resolutions that were brought up regarding Custom Filters. I've tried to be complete, so please comment if I've missed anything or if there are any new concerns.
[1]: https://docs.google.com/document/d/1qDGmAIXdShzO7J9o6_lrCENGgkq-gKWcOiGqERppHCo/edit?usp=sharing
Thanks,
Max