Intent to Implement: CSS Custom Filters (aka CSS Shaders)


mvuj...@adobe.com

Apr 23, 2013, 10:03:47 PM
to blin...@chromium.org

Primary eng (and PM) emails

Engineering: Alex Chiculita [ach...@adobe.com], Dirk Schulze [dsch...@chromium.org], Max Vujovic [mvuj...@adobe.com], Michelangelo De Simone [michel...@adobe.com]

Product Mgmt: Divya Manian [man...@adobe.com]


Spec

CSS Filter Effects Spec: https://dvcs.w3.org/hg/FXTF/raw-file/tip/filters/index.html#custom-filter


Summary

CSS Custom Filters enable filter effects on DOM elements using custom authored WebGL (GLSL) shaders. Authors can pass in custom parameters from CSS to their shaders. Parameters are animatable using CSS Transitions and Animations.


Motivation

CSS Custom Filters enable rich WebGL-like effects, but in the context of the DOM. They are particularly useful for transitions and animations. Custom Filters can also enable engaging experiences when combined with touch interaction.


The CSS syntax makes it easy for web authors to reuse other authors’ effects, without necessarily needing to learn GLSL. For example, an author could use a pre-written page-curl filter like so:


#page { filter: custom(page-curl, direction 90, amount 0.5); }


HTML5Rocks Post and Presentation by Paul Lewis: http://updates.html5rocks.com/2013/03/Introduction-to-Custom-Filters-aka-CSS-Shaders


CSS FilterLab (A playground for CSS Filters):

http://html.adobe.com/webplatform/graphics/customfilters/cssfilterlab/


Some other coverage:

http://www.webmonkey.com/2013/01/google-chrome-now-with-cinema-style-3d-effects/

http://blog.alexmaccaw.com/the-next-web

http://experiments.hertzen.com/css-shaders/index.html

http://alteredqualia.com/css-shaders/article/

http://venturebeat.com/2012/09/24/adobe-css-filterlab/

http://blattchat.com/2012/09/26/reveal-js-with-css-custom-filters/

http://blogs.adobe.com/webplatform/2012/09/21/css-custom-filters-now-available-under-flag-in-chrome-canary/


Note: The specified CSS syntax for Custom Filters has recently changed to use an @filter rule. The new syntax is currently being implemented.


Security

To prevent timing attacks, direct access to the DOM element texture is disallowed. Instead, authors can blend and composite the fragment shader output with the DOM element texture. Note that direct access to same-origin textures is allowed in the fragment shader. The W3C wiki describes the security approach*.


*: http://www.w3.org/Graphics/fx/wiki/CSS_Shaders_Security


Implementation Status

Blink has inherited the CSS Custom Filters implementation from WebKit, and we intend to continue implementing the feature. One important next step is accelerating Custom Filters in Skia and/or the Chromium Compositor. The current implementation uses a “software” path, relying on readbacks from the GPU.


Compatibility Risk

Apple has expressed public support* for CSS Custom Filters. An implementation in WebKit / Safari is also proceeding according to the W3C Spec. Apple is co-editing the spec.


Mozilla has neither raised objections to the feature nor publicly announced its interest yet. They have contributed to spec and security discussions regarding CSS Custom Filters on the public-fx mailing list.


*: http://lists.w3.org/Archives/Public/www-style/2011Oct/0076.html


OWP launch tracking bug?

https://code.google.com/p/chromium/issues/detail?id=233383


Row on feature dashboard?

Yes (Search for “custom filters”)


Requesting simultaneous permission to ship?

No. Current implementation is behind a runtime flag.


James Robinson

Apr 24, 2013, 12:07:42 AM
to Max Vujovic, blink-dev
On Tue, Apr 23, 2013 at 3:03 PM, <mvuj...@adobe.com> wrote:

Implementation Status

Blink has inherited the CSS Custom Filters implementation from WebKit, and we intend to continue implementing the feature. One important next step is accelerating Custom Filters in Skia and/or the Chromium Compositor. The current implementation uses a “software” path, relying on readbacks from the GPU.



I have serious concerns about the implementation complexity of supporting CSS Custom Filters within the compositor that I would like to have addressed before implementation proceeds any further.  The current implementation basically ignores the compositor, which is never going to be a viable route to shipping, and I want to make sure we do have a viable path forward before committing to more complex changes.  Specifically I'm concerned that moving forward as is will calcify the architecture and make more ambitious efforts like out-of-process iframes, ubercompositor, and our many performance optimizations difficult.

The basic premise of CSS shaders is that author-provided vertex and fragment shaders run as part of the compositing pass of the page.  I'm concerned about the implementation complexity and security aspects of this change.  First, security.  You mentioned the security aspect of not allowing authors to access pixels in the content, but there are additional concerns with running untrusted shaders in the compositor's context.  Today we isolate all shaders from untrusted sources (WebGL, nacl, etc) into their own contexts so we can isolate their effects and do things like forcibly lose the context if it runs for too long.  If we run untrusted shaders in the compositor's context, we have no way to isolate the effects of the untrusted shaders from the other compositor commands.  There are many ways a malicious (or just poorly coded) shader could easily DOS the GPU and cause the compositor context to become unusable, even if the shader passes any static validation we can think of.  Since we're moving to running the compositing pass from the browser process, it'll be difficult to isolate the shader execution into a separate context.  A related concern here is out-of-process iframes where the textures inside the iframe will not be available to GL contexts running anywhere except for the iframe's process or the browser.  A different renderer simply won't have access to any GL resources containing the iframe's content.

My other concern is that this requires exposing intermediate compositor stages directly to the author in the form of inputs/outputs to the CSS shader.  This precludes a number of optimizations and changes in rendering techniques that we have applied and wish to apply within the compositor.  The proposal requires that we produce the specific geometry required by the vertex shaders, feed these into a GLSL shader, then produce a RGBA texture to feed into a GLSL fragment shader for blending.  These map reasonably well to things we currently do most of the time, but don't map very well to software compositing or using a different GL version or using DirectX for compositing (as Mozilla does) or using more exotic color formats in order to get per-channel alpha (as Mozilla is at least experimenting with) or doing more advanced geometry/culling optimizations.  Remember, if this is something we want to accept as part of the web platform, it means we have to support it forever.

For the compositor concerns, I'd like to see at least a fleshed out design proposal that can satisfy these concerns approved by our compositor team before moving forward.

I also think the compatibility section is overstated a bit.  Apple has definitely expressed interest but they won't be able to actually ship support for CSS shaders without changes to the CoreAnimation system framework, which will come as part of the next OS releases at the earliest.  They might have support in OS X 10.9 or might not.  Other than Apple has anyone said that they are able to implement this feature?

- James


Alexandru Chiculita

Apr 25, 2013, 12:35:28 AM
to James Robinson, Max Vujovic, blink-dev
Hi James,

Thank you for your comments. We’ve considered your concerns and addressed them below. We believe there exists a clear path forward for hardware accelerating Custom Filters, and we are looking forward to working with the Blink community to flesh it out.

On Apr 23, 2013, at 5:07 PM, James Robinson <jam...@chromium.org> wrote:

On Tue, Apr 23, 2013 at 3:03 PM, <mvuj...@adobe.com> wrote:
Implementation Status
Blink has inherited the CSS Custom Filters implementation from WebKit, and we intend to continue implementing the feature. One important next step is accelerating Custom Filters in Skia and/or the Chromium Compositor. The current implementation uses a “software” path, relying on readbacks from the GPU.


I have serious concerns about the implementation complexity of supporting CSS Custom Filters within the compositor that I would like to have addressed before implementation proceeds any further.  The current implementation basically ignores the compositor, which is never going to be a viable route to shipping, and I want to make sure we do have a viable path forward before committing to more complex changes.  

Agreed. Hardware acceleration of Custom Filters is an important next step in the path to shipping the feature. We want to define the path forward as well.

We’d like to look at the feature in two parts. One part is the Blink CSS syntax and resource loading work. The second part is the accelerated pipeline rendering work. These two are orthogonal to each other, and implementation of both can occur fairly independently.

Specifically I'm concerned that moving forward as is will calcify the architecture and make more ambitious efforts like out-of-process iframes, ubercompositor, and our many performance optimizations difficult.

We understand your concerns. However, we’ve created a prototype of CSS Custom Filters rendering using the compositor, and we are confident that the feature is small enough to not impact existing or future optimizations. In our prototype, Custom Filters used a very similar approach to the existing CSS Filters like blur or drop-shadow.

We built the prototype in order to test accelerated Custom Filters on Android devices. We’ve posted details and links to the source in cr-bug 229069 [1].


The basic premise of CSS shaders is that author-provided vertex and fragment shaders run as part of the compositing pass of the page.  I'm concerned about the implementation complexity and security aspects of this change.  First, security.  You mentioned the security aspect of not allowing authors to access pixels in the content, but there are additional concerns with running untrusted shaders in the compositor's context.  Today we isolate all shaders from untrusted sources (WebGL, nacl, etc) into their own contexts so we can isolate their effects and do things like forcibly lose the context if it runs for too long.  If we run untrusted shaders in the compositor's context, we have no way to isolate the effects of the untrusted shaders from the other compositor commands.  There are many ways a malicious (or just poorly coded) shader could easily DOS the GPU and cause the compositor context to become unusable, even if the shader passes any static validation we can think of.  

We had security in mind from the beginning with our prototype implementation. The prototype runs Custom Filters in their own isolated GL context, separate from the compositor context. This makes the implementation similar to WebGL’s.

We will mitigate DOS attacks the same way as WebGL [2]. In addition, we can detect context crashes due to DOS attacks and prevent Custom Filters from running again on the offending page.

Since we're moving to running the compositing pass from the browser process, it'll be difficult to isolate the shader execution into a separate context.  

Chromium already executes GL commands in the GPU process. Moving the compositor closer to the GPU process should in fact make it easier to isolate shader execution in a separate context.

All of the other built-in CSS Filters already run in their own GL context created for Skia. In our prototype implementation, we followed the same model by creating a new GL context shared by all Custom Filters on the page. So, there’s one context for the compositor, one for built-in filters, and one for custom filters.

A related concern here is out-of-process iframes where the textures inside the iframe will not be available to GL contexts running anywhere except for the iframe's process or the browser.  A different renderer simply won't have access to any GL resources containing the iframe's content.

As mentioned above, we use the same approach as built-in filters, so out-of-process iframes won’t raise any new issues related to Custom Filters. Built-in filters like blur, etc. already have access to the GL resources.

The iframe process will run in a sandbox and have no access to OpenGL directly [3]. Even though we have a separate process for iframes, the rendering of the iframe (either a set of ubercompositor frames or textures) will need to be passed to a parent compositor for on-screen rendering. Consequently, both the iframe process and the embedding page’s compositor (that might apply a filter on the iframe element) will end up running OpenGL commands through the same GPU process with access to the necessary resources. Thus, we don’t see an issue with out-of-process iframes. Are there other specific concerns that we didn’t consider?


My other concern is that this requires exposing intermediate compositor stages directly to the author in the form of inputs/outputs to the CSS shader.  This precludes a number of optimizations and changes in rendering techniques that we have applied and wish to apply within the compositor.  

We agree that CSS Filters that only manipulate color values like sepia or brightness can be optimized and rendered in a single pass in the compositor, but other filters like blur, drop-shadow, or custom filters will require more than one pass. Moreover, SVG filters already define an even more complex input/output graph, and there’s already code in Blink & Skia to handle these scenarios.

The proposal requires that we produce the specific geometry required by the vertex shaders, feed these into a GLSL shader, then produce a RGBA texture to feed into a GLSL fragment shader for blending.  These map reasonably well to things we currently do most of the time, but don't map very well to software compositing or using a different GL version or using DirectX for compositing (as Mozilla does) or using more exotic color formats in order to get per-channel alpha (as Mozilla is at least experimenting with) or

Indeed, software compositors are not able to draw WebGL shaders yet [4]. A solution for this would benefit both WebGL and Custom Filters.

Even though DirectX is used in some compositors, WebGL is able to integrate with this pipeline using the ANGLE project. Custom Filters follow the same pattern. Moreover, Chromium uses DirectX on Windows even though Blink and the compositor use OpenGL APIs. We understand that this might change in the future, but the solutions for WebGL will be similar for Custom Filters.

doing more advanced geometry/culling optimizations.  Remember, if this is something we want to accept as part of the web platform, it means we have to support it forever.

In our implementation, Custom Filters do not change the quad geometry of the layers. A Custom Filter is just a draw pass that is applied on the layer’s texture before it is finally drawn by the compositor. The compositor still only sees one quad.


For the compositor concerns, I'd like to see at least a fleshed out design proposal that can satisfy these concerns approved by our compositor team before moving forward.

We are working on a proposal document that explains the architecture in detail. We will share that as soon as it’s ready. We are looking forward to your feedback!


I also think the compatibility section is overstated a bit.  Apple has definitely expressed interest but they won't be able to actually ship support for CSS shaders without changes to the CoreAnimation system framework, which will come as part of the next OS releases at the earliest.  They might have support in OS X 10.9 or might not.  Other than Apple has anyone said that they are able to implement this feature?

James Robinson

Apr 26, 2013, 12:34:34 AM
to Alexandru Chiculita, Max Vujovic, blink-dev
On Wed, Apr 24, 2013 at 5:35 PM, Alexandru Chiculita <ach...@adobe.com> wrote:
We’d like to look at the feature in two parts. One part is the Blink CSS syntax and resource loading work. The second part is the accelerated pipeline rendering work. These two are orthogonal to each other, and implementation of both can occur fairly independently.

There are two parts, but without a path forward for each there's no point in keeping either in Blink.
 

Specifically I'm concerned that moving forward as is will calcify the architecture and make more ambitious efforts like out-of-process iframes, ubercompositor, and our many performance optimizations difficult.

We understand your concerns. However, we’ve created a prototype of CSS Custom Filters rendering using the compositor, and we are confident that the feature is small enough to not impact existing or future optimizations. In our prototype, Custom Filters used a very similar approach to the existing CSS Filters like blur or drop-shadow.

We built the prototype in order to test accelerated Custom Filters on Android devices. We’ve posted details and links to the source in cr-bug 229069 [1].

I've taken a look at the patch.  It's technically interesting, but doesn't address any of my concerns.  Specifically it's not compatible with compositing out of process, using rendering techniques other than OpenGL, using alternate pixel formats, or any of the other innovations we have down the pipe.
 


The basic premise of CSS shaders is that author-provided vertex and fragment shaders run as part of the compositing pass of the page.  I'm concerned about the implementation complexity and security aspects of this change.  First, security.  You mentioned the security aspect of not allowing authors to access pixels in the content, but there are additional concerns with running untrusted shaders in the compositor's context.  Today we isolate all shaders from untrusted sources (WebGL, nacl, etc) into their own contexts so we can isolate their effects and do things like forcibly lose the context if it runs for too long.  If we run untrusted shaders in the compositor's context, we have no way to isolate the effects of the untrusted shaders from the other compositor commands.  There are many ways a malicious (or just poorly coded) shader could easily DOS the GPU and cause the compositor context to become unusable, even if the shader passes any static validation we can think of.  

We had security in mind from the beginning with our prototype implementation. The prototype runs Custom Filters in their own isolated GL context, separate from the compositor context. This makes the implementation similar to WebGL’s.  

We will mitigate DOS attacks the same way as WebGL [2]. In addition, we can detect context crashes due to DOS attacks and prevent Custom Filters from running again on the offending page.

It's fundamentally different from WebGL since it's part of the compositing path, not a separate rendering area that is later combined with the page.
 
A related concern here is out-of-process iframes where the textures inside the iframe will not be available to GL contexts running anywhere except for the iframe's process or the browser.  A different renderer simply won't have access to any GL resources containing the iframe's content.

As mentioned above, we use the same approach as built-in filters, so out-of-process iframes won’t raise any new issues related to Custom Filters. Built-in filters like blur, etc. already have access to the GL resources.

The iframe process will run in a sandbox and have no access to OpenGL directly [3]. Even though we have a separate process for iframes, the rendering of the iframe (either a set of ubercompositor frames or textures) will need to be passed to a parent compositor for on-screen rendering. Consequently, both the iframe process and the embedding page’s compositor (that might apply a filter on the iframe element) will end up running OpenGL commands through the same GPU process with access to the necessary resources. Thus, we don’t see an issue with out-of-process iframes. Are there other specific concerns that we didn’t consider?

Yes.  With out of process iframes, the embedding page's compositor will not have access to any of the textures of the iframe either directly or through the command buffer.


- James

Alexandru Chiculita

Apr 26, 2013, 1:13:40 AM
to James Robinson, Max Vujovic, blink-dev
On Apr 25, 2013, at 5:34 PM, James Robinson <jam...@chromium.org> wrote:

On Wed, Apr 24, 2013 at 5:35 PM, Alexandru Chiculita <ach...@adobe.com> wrote:
We’d like to look at the feature in two parts. One part is the Blink CSS syntax and resource loading work. The second part is the accelerated pipeline rendering work. These two are orthogonal to each other, and implementation of both can occur fairly independently.

There are two parts, but without a path forward for each there's no point in keeping either in Blink.

The CSS parsing follows the specification and has a clear path forward; therefore, there's no need to remove it. We are certain that CSS Custom Filters can be implemented in the compositor without major refactoring.

 

Specifically I'm concerned that moving forward as is will calcify the architecture and make more ambitious efforts like out-of-process iframes, ubercompositor, and our many performance optimizations difficult.

We understand your concerns. However, we’ve created a prototype of CSS Custom Filters rendering using the compositor, and we are confident that the feature is small enough to not impact existing or future optimizations. In our prototype, Custom Filters used a very similar approach to the existing CSS Filters like blur or drop-shadow.

We built the prototype in order to test accelerated Custom Filters on Android devices. We’ve posted details and links to the source in cr-bug 229069 [1].

I've taken a look at the patch.  It's technically interesting, but doesn't address any of my concerns.  Specifically it's not compatible with compositing out of process, using rendering techniques other than OpenGL, using alternate pixel formats, or any of the other innovations we have down the pipe.

The patch is not fully complete yet. Also, the ubercompositor was not complete and was not using DirectX at the time we forked; admittedly, we don't have a patch that works without OpenGL.

We are aware of the out of process work and we are working on the proposal for it. The code for custom filter shaders is pixel format independent, so there was no need to treat it differently.

We are looking forward to updating our proposal for the upcoming compositor features. Is there a compositor road-map that you can share?

 


The basic premise of CSS shaders is that author-provided vertex and fragment shaders run as part of the compositing pass of the page.  I'm concerned about the implementation complexity and security aspects of this change.  First, security.  You mentioned the security aspect of not allowing authors to access pixels in the content, but there are additional concerns with running untrusted shaders in the compositor's context.  Today we isolate all shaders from untrusted sources (WebGL, nacl, etc) into their own contexts so we can isolate their effects and do things like forcibly lose the context if it runs for too long.  If we run untrusted shaders in the compositor's context, we have no way to isolate the effects of the untrusted shaders from the other compositor commands.  There are many ways a malicious (or just poorly coded) shader could easily DOS the GPU and cause the compositor context to become unusable, even if the shader passes any static validation we can think of.  

We had security in mind from the beginning with our prototype implementation. The prototype runs Custom Filters in their own isolated GL context, separate from the compositor context. This makes the implementation similar to WebGL’s.  

We will mitigate DOS attacks the same way as WebGL [2]. In addition, we can detect context crashes due to DOS attacks and prevent Custom Filters from running again on the offending page.

It's fundamentally different from WebGL since it's part of the compositing path, not a separate rendering area that is later combined with the page.

ANGLE is an OpenGL abstraction on top of DirectX and can be used to create a separate OpenGL context and inject DirectX textures to execute the Custom Filters.


 
A related concern here is out-of-process iframes where the textures inside the iframe will not be available to GL contexts running anywhere except for the iframe's process or the browser.  A different renderer simply won't have access to any GL resources containing the iframe's content.

As mentioned above, we use the same approach as built-in filters, so out-of-process iframes won’t raise any new issues related to Custom Filters. Built-in filters like blur, etc. already have access to the GL resources.

The iframe process will run in a sandbox and have no access to OpenGL directly [3]. Even though we have a separate process for iframes, the rendering of the iframe (either a set of ubercompositor frames or textures) will need to be passed to a parent compositor for on-screen rendering. Consequently, both the iframe process and the embedding page’s compositor (that might apply a filter on the iframe element) will end up running OpenGL commands through the same GPU process with access to the necessary resources. Thus, we don’t see an issue with out-of-process iframes. Are there other specific concerns that we didn’t consider?

Yes.  With out of process iframes, the embedding page's compositor will not have access to any of the textures of the iframe either directly or through the command buffer.

What part of the compositor is going to render the out-of-process iframes on screen? Is your concern related to the readbacks that we have in the software pipeline today? With the accelerated CSS Custom Filters we don't have any readbacks from the GPU.



- James


Greetings,
Alex

Eric Seidel

Apr 30, 2013, 5:34:58 PM
to Max Vujovic, blink-dev
Thank you for the heads up! The API OWNERS discussed this in our
weekly meeting this morning.

It sounds like James Robinson has some unanswered questions here.
We've set up an in-person meeting between Max, Alex and James where
they can hash things out further.

On Tue, Apr 23, 2013 at 3:03 PM, <mvuj...@adobe.com> wrote:


James Robinson

May 10, 2013, 8:06:19 PM
to Alexandru Chiculita, Max Vujovic, blink-dev
Alexandru, Max and several folks representing the compositor team met earlier today to discuss this intent to implement in more detail.  The technical discussion focused on the Chromium compositor since, even though it is not part of Blink proper, Blink depends on the compositor for rendering and for this feature in particular.

The key takeaway of the meeting was that we cannot support the CSS Custom Filters proposal within Chromium's compositor and, as a result, we cannot support this feature in Blink.  As such, we should not continue implementation for this feature and existing code should be removed.

We spent some time talking about alternate ways to satisfy the use cases this proposal has such as extending the CSS element() function and/or reflections to reference portions of content or adding capabilities for manipulating page content within WebGL.  I think both of these have potential merits and should be pursued through the usual processes.  Neither proposal is quite concrete enough for an intent to implement at this point.

- James

Max Heinritz

May 11, 2013, 12:55:50 AM
to James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev
For more context, here are the notes from this morning's meeting. Feel free to clarify or correct.

Attendees


  • Blink: James Robinson, Adrienne Walker, Vangelis Kokkevis, Stephen White, Alex Danilo, Adam Barth, Max Heinritz

  • Adobe: achicu (Alex), mvujovic (Max)


Conclusions


  • We need to think long-term about the primitives we want to add to the platform.

  • In general, it’s better to start with simpler platform primitives (eg CSS transitions) and expand their capabilities over time than to take something powerful (eg shaders) and try to restrict its functionality.

  • Ultimately, we can’t allow arbitrary author-provided shaders to run in the compositing pass for security and architectural reasons. CSS Shaders is not LGTMed for Blink.

  • Instead, let’s explore in ongoing technical discussion:

    • Meshing within CSS.

    • Adding capabilities to WebGL to let it reference content and enable shader-like effects.

    • How usecases can be addressed with other existing platform primitives.

    • Other possible paths to enable this type of functionality in the browser.


Notes


  • We must be thinking very long-term about new features because it’s a big commitment

    • Two routes for web platform features to be added:

      • 1) it doesn’t get adoption and dies and can be removed

      • 2) becomes part of the platform, all browsers must support it indefinitely, until it falls to 1)

  • High-level technical constraints of the compositor architecture

    • High-level logical flow

      • Compositor input: browser chrome, foo.com and bar.com content (where foo.com contains bar.com), and a WebGL canvas contained in bar.com

      • Compositor output: rendered output, timing sent back to page.


James drew this pic of the compositor architecture:


    • Shaders problems

      • Security: Allows author-provided code to become part of compositor pass

        • Notice that in this picture most of the arrows point left.

        • Compositor contains information the page shouldn’t be able to access; there are security concerns eg color of link.

        • Provides the page more information by being able to run code on intermediate data.

        • Putting restrictions on primitive

          • The GPU doesn’t have a notion of a shader that’s restricted

        • It’s easier to start with limited capabilities and then expand them, rather than start from a lot of complexity and restrict it.

        • By providing the capabilities in CSS, we let the compositor generate the shader.

      • Architecture: Restricts the compositor pass we can take in the future if we decide to rearchitect.

  • Use cases for custom filters

    • Current set seems kind of limited eg vertex shaders and hit testing -- you can only do it for non-interactive content.

      • You can apply on interactive content when it’s not interactive. You want to use something for a small amount of time.

    • Adobe showed a cool demo on a Nexus 7

      • Pinch to pull content apart, peel back, accordion

      • Mostly done with vertex shaders

      • This could be done with pure fragment shaders, but not with security restrictions

    • Adobe on security:

      • We don’t leak information back through the timing

      • Preventing DDOS - if WebGL is off on the side (see pic), could we isolate the part of the compositor that runs the shader? Not the same way as WebGL.

      • It’s not right to think that in the end the GPUs will run it anyway. The implementation may change. We need to consider the primitives that the web platform exposes.

    • Are there other use cases besides crinkling and manipulating content?

      • Some lighting effects...

  • What are some other ways to accomplish the same effects using existing platform primitives?

    • What if we had a way to refer to subcontent for CSS modification? eg CSS Reflection, but for a subset of an element

    • How to define this in CSS? You could use transforms and keyframes

    • How do you smoothly animate the transforms? We could use better animation primitives. Web Animations?

    • What are we missing for shaders? You can define gradients. What about a pageflip?

    • Fragment shaders could be needed for some usecases.

    • Consensus is yes, we could target CSS modifications to parts of elements. It works for small effects, but not for large effects.

  • Shader does not have access to textures.

    • We don’t want to force the compositor to use GL shaders. We don’t have to do sharing. We could apply the shader in a separate pass.

    • This is the opposite direction of where Google is going. UberComp creates a unified compositing pass. We want all web content to appear atomically.

  • Taking DOM content and processing it in WebGL? Not clear how we could do this.

  • Ask: How would you allow web authors to express the same effects if GLSL didn’t exist? It’s not clear this is the most natural approach for the web platform.

    • Adobe: Still need to define instructions that run for each pixel.

    • Google: Perhaps there are other ways to accomplish this.

    • A: It comes down to how generic you want the capabilities to be

      • Being more generic unlocked innovation eg in game lighting.

    • G: For webby elements it seems there are fewer usecases.

  • Further thoughts on using WebGL to manipulate the page: Apply an effect on a static part of the page, make it available for later use. Addresses atomicity. Then you just get a bitmap. Security concerns still exist. Restrict readbacks and restrict shaders.

Eric Seidel

May 13, 2013, 8:28:32 PM
to Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev
Thank you for the notes Max.

It is clear that this feature has an architectural impact on the
Chromium compositor. It also seems that the people working on the
compositor are not ready to accept the constraints necessary for this
custom shaders approach at this time. Blink depends on the Chromium
compositor to render, and thus lack of buy-in there results in this
intent to implement being "not lgtm" at this time.

I think some of the use cases demonstrated are very interesting to see
on the web, and we should find other ways to answer them without
needing to execute authors' code at this very low layer in our
pipeline. Perhaps there is a way to serve these use cases without
involving GLSL at all?

That said, I can imagine at least two paths to shipping Shaders as
written. One path involves finding an approach which does not require
changes to the compositor. Another path involves becoming more
involved with work on the compositor, understanding its constraints
and helping shape its future.

I hope we can discuss this at greater length here (and the compositor
constraints on graphics-dev@chromium) and come up with solutions for
these use cases which we believe are compatible with Chromium's
compositor's current and planned architecture.

Sami Kyostila

May 14, 2013, 11:18:49 AM
to Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev
Thanks for the notes Max. I had a question on one of the items.

> Taking DOM content and processing it in WebGL? Not clear how we could do this.

One way I can imagine this working is changing tex{Sub}Image2D to
accept any DOM element instead of just <video>, <img> or <canvas>. Is
that what you had in mind or something different?

- Sami

Adam Barth

May 14, 2013, 3:59:34 PM
to Sami Kyostila, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev
On Tue, May 14, 2013 at 4:18 AM, Sami Kyostila <skyo...@google.com> wrote:
Thanks for the notes Max. I had a question on one of the items.

> Taking DOM content and processing it in WebGL? Not clear how we could do this.

One way I can imagine this working is changing tex{Sub}Image2D to
accept any DOM element instead of just <video>, <img> or <canvas>. Is
that what you had in mind or something different?

Roughly, yes.  The main question is whether the texture updates as the appearance of DOM element changes, or whether the texture is just a static readback of the element's current visual appearance.  Having the texture be dynamic is probably more desirable from a developer point of view but might have the same implementation issues as CSS shaders.  If the API just does a static readback, then there's a risk that developers would try to call the API every frame, which would be slow.
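
For concreteness, a rough sketch of how the static-readback variant might look from script. This is purely hypothetical: texImage2D only accepts <img>, <video>, <canvas>, and pixel data today, and the element overload shown below does not exist.

// Hypothetical sketch only; the Element overload is invented for illustration.
const canvas = document.querySelector('canvas')!;
const gl = canvas.getContext('webgl')!;
const element = document.getElementById('page')!;

const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

// Static variant: snapshot the element's current rendering once. A dynamic
// variant would instead keep this texture in sync with the element.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE,
              element as unknown as TexImageSource);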

Adam

Vangelis Kokkevis

May 14, 2013, 4:48:01 PM
to Adam Barth, Sami Kyostila, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev, Kenneth Russell
On Tue, May 14, 2013 at 8:59 AM, Adam Barth <aba...@chromium.org> wrote:
On Tue, May 14, 2013 at 4:18 AM, Sami Kyostila <skyo...@google.com> wrote:
Thanks for the notes Max. I had a question on one of the items.

> Taking DOM content and processing it in WebGL? Not clear how we could do this.

One way I can imagine this working is changing tex{Sub}Image2D to
accept any DOM element instead of just <video>, <img> or <canvas>. Is
that what you had in mind or something different?

Roughly, yes.  The main question is whether the texture updates as the appearance of DOM element changes, or whether the texture is just a static readback of the element's current visual appearance.  Having the texture be dynamic is probably more desirable from a developer point of view but might have the same implementation issues as CSS shaders.  If the API just does a static readback, then there's a risk that developers would try to call the API every frame, which would be slow.

Where do we stand wrt security concerns for this approach? I remember that the fear of timing attacks prevented us in the past from getting web content into webgl. Would imposing same-domain restrictions help? 

Vangelis

Antoine Labour

May 14, 2013, 4:50:36 PM
to Adam Barth, Sami Kyostila, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev
On Tue, May 14, 2013 at 8:59 AM, Adam Barth <aba...@chromium.org> wrote:
On Tue, May 14, 2013 at 4:18 AM, Sami Kyostila <skyo...@google.com> wrote:
Thanks for the notes Max. I had a question on one of the items.

> Taking DOM content and processing it in WebGL? Not clear how we could do this.

One way I can imagine this working is changing tex{Sub}Image2D to
accept any DOM element instead of just <video>, <img> or <canvas>. Is
that what you had in mind or something different?

Roughly, yes.  The main question is whether the texture updates as the appearance of DOM element changes, or whether the texture is just a static readback of the element's current visual appearance.  Having the texture be dynamic is probably more desirable from a developer point of view but might have the same implementation issues as CSS shaders.  If the API just does a static readback, then there's a risk that developers would try to call the API every frame, which would be slow.

Note, this also interacts with the compositor (only the compositor is capable of generating a texture from the DOM).

Also, security-wise, the plan for OOP iframes is to prevent the hosting page from accessing the iframe contents, be it through CPU pixels or GPU textures. The compositor in the hosting page would be unable to generate the texture for a DOM that contains an OOP iframe.

Antoine

Sami Kyostila

May 14, 2013, 4:58:53 PM
to Adam Barth, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev
At least for Chromium I think it would be feasible to implement a static
snapshot API that is quick enough to be called every frame; it's
essentially a GPU-side composite-to-texture pass, which we already do
for render surfaces (e.g., CSS opacity). The only complication is when
the DOM actually changes significantly and we have to re-rasterize,
which takes longer. In that case an asynchronous API would be a better
fit.

I'd also prefer the snapshot over dynamic updating since the latter
would make some use cases more difficult -- for example, having two
textures of the same element with two different styles.

- Sami

Adam Barth

May 14, 2013, 5:29:04 PM
to Vangelis Kokkevis, Sami Kyostila, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev, Kenneth Russell
On Tue, May 14, 2013 at 9:48 AM, Vangelis Kokkevis <vang...@google.com> wrote:
On Tue, May 14, 2013 at 8:59 AM, Adam Barth <aba...@chromium.org> wrote:
On Tue, May 14, 2013 at 4:18 AM, Sami Kyostila <skyo...@google.com> wrote:
Thanks for the notes Max. I had a question on one of the items.

> Taking DOM content and processing it in WebGL? Not clear how we could do this.

One way I can imagine this working is changing tex{Sub}Image2D to
accept any DOM element instead of just <video>, <img> or <canvas>. Is
that what you had in mind or something different?

Roughly, yes.  The main question is whether the texture updates as the appearance of DOM element changes, or whether the texture is just a static readback of the element's current visual appearance.  Having the texture be dynamic is probably more desirable from a developer point of view but might have the same implementation issues as CSS shaders.  If the API just does a static readback, then there's a risk that developers would try to call the API every frame, which would be slow.

Where do we stand wrt security concerns for this approach? I remember that the fear of timing attacks prevented us in the past from getting web content into webgl. Would imposing same-domain restrictions help? 

Same-origin restrictions don't help in this situation because even same-origin elements contain secret information in their rasterization (e.g., whether links are blue/purple representing visited/unvisited).

Instead, what we'd need to do is have a flag when creating the WebGL context:

1) (default) Current security rules with no ability to draw arbitrary elements onto the canvas.
2) The ability to draw arbitrary elements onto the canvas, but now the shaders are restricted in the way we currently restrict CSS shaders.

It's still an open question whether (2) is secure.  There's been a bunch of research on the security properties of CSS shaders, and my understanding is that folks continue to find holes.  That's not particularly confidence inspiring, but potentially not the end of the world.
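
As a sketch of how (2) might surface to authors (the flag name below is invented for illustration; no such context attribute exists today):

// Hypothetical sketch; "allowElementTextures" is an invented name.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl', { allowElementTextures: true } as any);
// In this mode the context could accept arbitrary elements as texture sources,
// but its shaders would be restricted the way we currently restrict CSS shaders.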

On Tue, May 14, 2013 at 9:50 AM, Antoine Labour <pi...@chromium.org> wrote:
Note, this also interacts with the compositor (only the compositor is capable of generating a texture from the DOM).

We already need to support readbacks from the compositor for other features.  For example, WebKit::WebWidget::paint is an existing API that does the readback.  Ideally, the compositor would render directly into the texture, but it is possible to implement without changes to the compositor.
 
Also, security-wise, the plan for OOP iframes is to prevent the hosting page from accessing the iframe contents, be it through CPU pixels or GPU textures. The compositor in the hosting page would be unable to generate the texture for a DOM that contains an OOP iframe.

I guess that depends if we're doing a read-back or a raster-to-texture...  With the uber compositor, do we get into trouble if the window containing the element is undergoing some sort of window transition (e.g., being displayed at 50% opacity)?  If we're compositing in the window manager effects at the same time, maybe there isn't a buffer to read back from?  Hum...

We can always do a separate rasterization of the element when loading it into a texture (e.g., pass ForceSoftwareRenderingAndIgnoreGPUResidentContent to WebKit::WebWidget::paint), but that's going to be slow, and we're hoping to remove the ForceSoftwareRenderingAndIgnoreGPUResidentContent option once Android stops using it from the link disambiguation popup.

On Tue, May 14, 2013 at 9:58 AM, Sami Kyostila <skyo...@google.com> wrote:
At least for Chromium I think would be feasible to implement a static
snapshot API that is quick enough to be called every frame; it's
essentially a GPU-side composite-to-texture pass, which we already do
for render surfaces (e.g., CSS opacity). The only complication is when
the DOM actually changes significantly and we have to re-rasterize,
which takes longer. In that case an asynchronous API would be a better
fit.

How would that work with OOP iframes, as asked by Antoine Labour?
 
I'd also prefer the snapshot over dynamic updating since the latter
would make some use cases more difficult -- for example, having two
textures of the same element with two different styles.

Given that the content isn't going to be interactive in either approach (nor with CSS shaders for that matter), perhaps you're right.

Dana Jansens

May 14, 2013, 5:33:16 PM
to Adam Barth, Vangelis Kokkevis, Sami Kyostila, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev, Kenneth Russell
On Tue, May 14, 2013 at 1:29 PM, Adam Barth <aba...@chromium.org> wrote:
On Tue, May 14, 2013 at 9:48 AM, Vangelis Kokkevis <vang...@google.com> wrote:
On Tue, May 14, 2013 at 8:59 AM, Adam Barth <aba...@chromium.org> wrote:
On Tue, May 14, 2013 at 4:18 AM, Sami Kyostila <skyo...@google.com> wrote:
Thanks for the notes Max. I had a question on one of the items.

> Taking DOM content and processing it in WebGL? Not clear how we could do this.

One way I can imagine this working is changing tex{Sub}Image2D to
accept any DOM element instead of just <video>, <img> or <canvas>. Is
that what you had in mind or something different?

Roughly, yes.  The main question is whether the texture updates as the appearance of DOM element changes, or whether the texture is just a static readback of the element's current visual appearance.  Having the texture be dynamic is probably more desirable from a developer point of view but might have the same implementation issues as CSS shaders.  If the API just does a static readback, then there's a risk that developers would try to call the API every frame, which would be slow.

Where do we stand wrt security concerns for this approach? I remember that the fear of timing attacks prevented us in the past from getting web content into webgl. Would imposing same-domain restrictions help? 

Same-origin restrictions don't help in this situation because even same-origin elements contain secret information in their rasterization (e.g., whether links are blue/purple representing visited/unvisited).

Instead, what we'd need to do is have a flag when creating the WebGL context:

1) (default) Current security rules with no ability to draw arbitrary elements unto the canvas.
2) The ability to draw arbitrary elements onto the canvas, but now the shaders are restricted in the way we currently restrict CSS shaders.

It's still an open question whether (2) is secure.  There's been a bunch of research on the security properties of CSS shaders, and my understanding is that folks continue to find holes.  That's not particularly confidence inspiring, but potentially not the end of the world.

On Tue, May 14, 2013 at 9:50 AM, Antoine Labour <pi...@chromium.org> wrote:

Note, this also interacts with the compositor (only the compositor is capable of generating a texture from the DOM).

We already need to support readbacks from the compositor for other features.  For example, WebKit::WebWidget::paint is an existing API that does the readback.  Ideally, the compositor would render directly into the texture, but it is possible to implement without changes to the compositor.
 
Also, security-wise, the plan for OOP iframes is to prevent the hosting page from accessing the iframe contents, be it through CPU pixels or GPU textures. The compositor in the hosting page would be unable to generate the texture for a DOM that contains an OOP iframe.

I guess that depends if we're doing a read-back or a raster-to-texture...  With the uber compositor, do we get into trouble if the window containing the element is undergoing some sort of window transition (e.g., being displayed at 50% opacity)?  If we're compositing in the window manager effects at the same time, maybe there isn't a buffer to read back from?  Hum...

No, that isn't a problem: opacity/transforms are applied when drawing the contents into its target, not when filling in the contents itself, which is what would be read from.

Antoine Labour

May 14, 2013, 5:57:22 PM
to Adam Barth, Vangelis Kokkevis, Sami Kyostila, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev, Kenneth Russell
On Tue, May 14, 2013 at 10:29 AM, Adam Barth <aba...@chromium.org> wrote:
On Tue, May 14, 2013 at 9:48 AM, Vangelis Kokkevis <vang...@google.com> wrote:
On Tue, May 14, 2013 at 8:59 AM, Adam Barth <aba...@chromium.org> wrote:
On Tue, May 14, 2013 at 4:18 AM, Sami Kyostila <skyo...@google.com> wrote:
Thanks for the notes Max. I had a question on one of the items.

> Taking DOM content and processing it in WebGL? Not clear how we could do this.

One way I can imagine this working is changing tex{Sub}Image2D to
accept any DOM element instead of just <video>, <img> or <canvas>. Is
that what you had in mind or something different?

Roughly, yes.  The main question is whether the texture updates as the appearance of DOM element changes, or whether the texture is just a static readback of the element's current visual appearance.  Having the texture be dynamic is probably more desirable from a developer point of view but might have the same implementation issues as CSS shaders.  If the API just does a static readback, then there's a risk that developers would try to call the API every frame, which would be slow.

Where do we stand wrt security concerns for this approach? I remember that the fear of timing attacks prevented us in the past from getting web content into webgl. Would imposing same-domain restrictions help? 

Same-origin restrictions don't help in this situation because even same-origin elements contain secret information in their rasterization (e.g., whether links are blue/purple representing visited/unvisited).

Instead, what we'd need to do is have a flag when creating the WebGL context:

1) (default) Current security rules with no ability to draw arbitrary elements unto the canvas.
2) The ability to draw arbitrary elements onto the canvas, but now the shaders are restricted in the way we currently restrict CSS shaders.

It's still an open question whether (2) is secure.  There's been a bunch of research on the security properties of CSS shaders, and my understanding is that folks continue to find holes.  That's not particularly confidence inspiring, but potentially not the end of the world.

The problem with site isolation is that (2) will not even be possible once we tighten the restrictions. Only the browser process will be able to access the texture(s) representing the contents of an out-of-process iframe. The renderer process for the containing page cannot access the contents of the iframe, whether on the CPU or the GPU. It can't draw it using WebGL on a canvas.


On Tue, May 14, 2013 at 9:50 AM, Antoine Labour <pi...@chromium.org> wrote:

Note, this also interacts with the compositor (only the compositor is capable of generating a texture from the DOM).

We already need to support readbacks from the compositor for other features.  For example, WebKit::WebWidget::paint is an existing API that does the readback.  Ideally, the compositor would render directly into the texture, but it is possible to implement without changes to the compositor.
 
Also, security-wise, the plan for OOP iframes is to prevent the hosting page from accessing the iframe contents, be it through CPU pixels or GPU textures. The compositor in the hosting page would be unable to generate the texture for a DOM that contains an OOP iframe.

I guess that depends if we're doing a read-back or a raster-to-texture...  With the uber compositor, do we get into trouble if the window containing the element is undergoing some sort of window transition (e.g., being displayed at 50% opacity)?  If we're compositing in the window manager effects at the same time, maybe there isn't a buffer to read back from?  Hum...

It's a goal for ubercompositor that there isn't a buffer to read back from. That said, we can produce said buffer from the browser process, but see above re: site isolation restrictions.
 

We can always do a separate rasterization of the element when loading it into a texture (e.g., pass ForceSoftwareRenderingAndIgnoreGPUResidentContent to WebKit::WebWidget::paint), but that's going to be slow, and we're hoping to remove the ForceSoftwareRenderingAndIgnoreGPUResidentContent option once Android stops using it from the link disambiguation popup.

And you can't for OOP iframes.

Antoine

Adam Barth

May 14, 2013, 5:59:45 PM
to Antoine Labour, Vangelis Kokkevis, Sami Kyostila, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev, Kenneth Russell
I see.  More generally, your point is that ForceSoftwareRenderingAndIgnoreGPUResidentContent spills the guts about which things are GPU resident, which is ok for the link disambiguation popup but not ok for a web-exposed API.

Adam

James Robinson

May 14, 2013, 6:12:28 PM
to Adam Barth, Antoine Labour, Vangelis Kokkevis, Sami Kyostila, Max Heinritz, Alexandru Chiculita, Max Vujovic, blink-dev, Kenneth Russell
On Tue, May 14, 2013 at 10:59 AM, Adam Barth <aba...@chromium.org> wrote:
On Tue, May 14, 2013 at 10:57 AM, Antoine Labour <pi...@chromium.org> wrote:

We can always do a separate rasterization of the element when loading it into a texture (e.g., pass ForceSoftwareRenderingAndIgnoreGPUResidentContent to WebKit::WebWidget::paint), but that's going to be slow, and we're hoping to remove the ForceSoftwareRenderingAndIgnoreGPUResidentContent option once Android stops using it from the link disambiguation popup.

And you can't for OOP iframes.

I see.  More generally, your point is that ForceSoftwareRenderingAndIgnoreGPUResidentContent spills the guts about which things are GPU resident, which is ok for the link disambiguation popup but not ok for a web-exposed API.

Well, it's not OK for the link disambiguation popup either but since that isn't web-exposed we can refactor it to work in a different fashion.

- James

Sami Kyostila

May 14, 2013, 6:38:36 PM
to Antoine Labour, Adam Barth, Vangelis Kokkevis, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev, Kenneth Russell
I'm not very familiar with OOP iframes, but isn't this analogous to how video textures work? The renderer does not have direct access to the media stream but it can ask the GPU process to grab a texture from it. In this case it'd be the browser compositor that satisfies that request instead.

- Sami

Antoine Labour

May 14, 2013, 7:33:25 PM
to Sami Kyostila, Adam Barth, Vangelis Kokkevis, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev, Kenneth Russell
On Tue, May 14, 2013 at 11:38 AM, Sami Kyostila <skyo...@google.com> wrote:
I'm not very familiar with OOP iframes, but isn't this analogous to how video textures work? The renderer does not have direct access to the media stream but it can ask the GPU process to grab a texture from it. In this case it'd be the browser compositor that satisfies that request instead.

No, the idea is that we want to prevent the renderer from manipulating the OOP iframe's contents altogether.
If it had access to the texture, a compromised renderer could simply read it back, defeating some of the protections OOP iframes are intending to provide - e.g. evil.com could read back mybank.com info (e.g. your bank account), without even resorting to timing tricks / side channel attacks.

Antoine

Gregg Tavares

May 15, 2013, 1:14:48 AM
to Antoine Labour, Sami Kyostila, Adam Barth, Vangelis Kokkevis, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev, Kenneth Russell
The proposal put forward before was to have WebGL mark shaders as safe or not. Safe shaders would follow the CSS Custom FX spec (no texture access, period; only a color multiplier/matrix).

The simplest implementation is:  A WebGL context starts out as safe and clean. Safe = no non-safe shaders. Clean = no unsafe content.  If a non-safe shader is compiled, the safe flag is cleared. If non-safe content is uploaded, the clean flag is cleared. If both flags are cleared, the context is lost. If the safe flag is cleared, no unsafe content may be uploaded (same as today, where no unsafe content may be uploaded). If the clean flag is cleared, readPixels, toDataURL, and using the canvas as a source for other APIs (drawImage, texImage2D, etc.) fail with a Security exception (this is/was already implemented before we disallowed all unsafe content)
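
In rough state-machine form, the bookkeeping would look something like this (an illustrative model only, not an implemented API; the names are invented):

// Illustrative model of the safe/clean scheme described above; names invented.
// "safe"  = no non-safe shaders have been compiled on this context.
// "clean" = no unsafe content has been uploaded to this context.
class ContextSecurityState {
  private safe = true;
  private clean = true;
  contextLost = false;

  compileShader(shaderIsSafe: boolean): void {
    if (!shaderIsSafe) this.safe = false; // non-safe shader clears "safe"
    this.maybeLoseContext();
  }

  uploadContent(contentIsUnsafe: boolean): void {
    // Once "safe" is cleared, unsafe content may not be uploaded at all
    // (same rule as today).
    if (contentIsUnsafe && !this.safe) {
      throw new Error('SecurityError: unsafe content on a non-safe context');
    }
    if (contentIsUnsafe) this.clean = false; // unsafe content clears "clean"
    this.maybeLoseContext();
  }

  // readPixels, toDataURL, and using the canvas as a source for other APIs
  // (drawImage, texImage2D, etc.) would check this and throw a security
  // exception when it returns false.
  canReadBack(): boolean {
    return this.clean;
  }

  private maybeLoseContext(): void {
    if (!this.safe && !this.clean) this.contextLost = true; // both cleared => lost
  }
}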

That seems like it would work. What am I missing?

Antoine Labour

unread,
May 15, 2013, 1:28:21 AM5/15/13
to Gregg Tavares, Sami Kyostila, Adam Barth, Vangelis Kokkevis, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev, Kenneth Russell
On Tue, May 14, 2013 at 6:14 PM, Gregg Tavares <gm...@chromium.org> wrote:
The proposal put forward before was to have WebGL mark shaders as safe or not. Safe shaders would follow the CSS Custom FX spec (no texture access period, only a color multiplier/matrix).

The simplest implementation is:  A WebGL context starts out as safe and clean. Safe = no non-safe shaders. Clean = no unsafe content.  If a non-safe shader is compiled the safe flag is cleared. If non safe content is uploaded the clean flag is cleared. If both flags are cleared the context is lost. If the safe flag is cleared no unsafe content may be uploaded (same as today where no unsafe content may be uploaded). If the clean flag is cleared readPixels, toDataURL and using the canvas a source to other APIs (drawImage, texImage2D, ) fail with a Security exception (this is/was already implemented before we disallowed all unsafe content)

That seems like it would work. What am I missing?

The attack model OOP iframes want to protect against is a compromised renderer, not just malicious javascript.

The scheme you suggest would need to be implemented in the GPU process (and in terms of GL/command-buffer calls, not in terms of the WebGL API). It would also mean that the entire renderer would have to be disallowed from doing glReadPixels, not just a single context - otherwise a compromised renderer can just use share groups or shared textures to read back the same thing from another context. Because of side channel attacks I think you'd need to restrict the shaders the compositor can use too, and that wouldn't be good. I suppose you could whitelist the compositor ones, but Skia dynamically generates its shaders for filters and stuff.

Maybe it can be solved, but it seems like a lot of complexity and work.

Gregg Tavares

unread,
May 15, 2013, 9:12:16 AM5/15/13
to Antoine Labour, Gregg Tavares, Sami Kyostila, Adam Barth, Vangelis Kokkevis, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev, Kenneth Russell
On Tue, May 14, 2013 at 6:28 PM, Antoine Labour <pi...@chromium.org> wrote:



On Tue, May 14, 2013 at 6:14 PM, Gregg Tavares <gm...@chromium.org> wrote:
The proposal put forward before was to have WebGL mark shaders as safe or not. Safe shaders would follow the CSS Custom FX spec (no texture access period, only a color multiplier/matrix).

The simplest implementation is:  A WebGL context starts out as safe and clean. Safe = no non-safe shaders. Clean = no unsafe content.  If a non-safe shader is compiled the safe flag is cleared. If non safe content is uploaded the clean flag is cleared. If both flags are cleared the context is lost. If the safe flag is cleared no unsafe content may be uploaded (same as today where no unsafe content may be uploaded). If the clean flag is cleared readPixels, toDataURL and using the canvas a source to other APIs (drawImage, texImage2D, ) fail with a Security exception (this is/was already implemented before we disallowed all unsafe content)

That seems like it would work. What am I missing?

The attack model OOP iframes want to protect against is a compromised renderer, not just malicious javascript.

The scheme you suggest would need to be implemented in the GPU process (and in terms of GL/command-buffer calls, not in terms of WebGL API). It would also mean that the entire renderer would have to be disallowed to do glReadPixels, not just a single context - otherwise a compromised renderer can just use share groups or shared textures to readback the same thing from another context. Because of side channel attacks I think you'd need to restrict the shaders the compositor can use too, and that wouldn't be good. I suppose you could whitelist the compositor ones, but skia dynamically generates its shaders for filters and stuff.

Maybe it can be solved, but it seems like a lot of complexity and work.

Sorry if I'm not up on OOP iframes. If we're protecting against compromised renderers, that's already a problem today. A compromised renderer can read all memory in the renderer space (passwords, documents, email) and access all textures used by that process.

How about we just mark the entire page as not clean if there are any OOP iframes, or is that too restricting?

Sami Kyostila

unread,
May 15, 2013, 10:40:10 AM5/15/13
to Antoine Labour, Adam Barth, Vangelis Kokkevis, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev, Kenneth Russell
Thanks Antoine, I think I understand the issue now. Would it work if, when generating the texture, we just omitted or obscured the OOP iframe layers? A more restrictive way would be to disallow the whole API on pages with OOP iframes.

- Sami

Adam Barth

unread,
May 15, 2013, 3:18:11 PM5/15/13
to Gregg Tavares, Antoine Labour, Sami Kyostila, Vangelis Kokkevis, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev, Kenneth Russell
On Wed, May 15, 2013 at 2:12 AM, Gregg Tavares <gm...@chromium.org> wrote:
On Tue, May 14, 2013 at 6:28 PM, Antoine Labour <pi...@chromium.org> wrote:
On Tue, May 14, 2013 at 6:14 PM, Gregg Tavares <gm...@chromium.org> wrote:
The proposal put forward before was to have WebGL mark shaders as safe or not. Safe shaders would follow the CSS Custom FX spec (no texture access period, only a color multiplier/matrix).

The simplest implementation is:  A WebGL context starts out as safe and clean. Safe = no non-safe shaders. Clean = no unsafe content.  If a non-safe shader is compiled the safe flag is cleared. If non safe content is uploaded the clean flag is cleared. If both flags are cleared the context is lost. If the safe flag is cleared no unsafe content may be uploaded (same as today where no unsafe content may be uploaded). If the clean flag is cleared readPixels, toDataURL and using the canvas a source to other APIs (drawImage, texImage2D, ) fail with a Security exception (this is/was already implemented before we disallowed all unsafe content)

That seems like it would work. What am I missing?

The attack model OOP iframes want to protect against is a compromised renderer, not just malicious javascript.

The scheme you suggest would need to be implemented in the GPU process (and in terms of GL/command-buffer calls, not in terms of WebGL API). It would also mean that the entire renderer would have to be disallowed to do glReadPixels, not just a single context - otherwise a compromised renderer can just use share groups or shared textures to readback the same thing from another context. Because of side channel attacks I think you'd need to restrict the shaders the compositor can use too, and that wouldn't be good. I suppose you could whitelist the compositor ones, but skia dynamically generates its shaders for filters and stuff.

Maybe it can be solved, but it seems like a lot of complexity and work.

Sorry if I'm not up on OOP iframes. If we're protecting against compromised renderers that's already a problem today. A compromised renderer can read all memory in the renderer space (passwords, documents, email) and access all textures used by that process.

That's correct today.  Today we only worry about compromised render processes reading or writing the user's local file system.  With out-of-process iframes, we're hoping to do better and protect against compromised renderers learning passwords, documents, emails, etc from other origins.

How about we just mark the entire page as not clean if there are any OOP iframes or is that too restricting?
 
There are two issues with that approach:

1) It's pretty restrictive.  Many web sites have like buttons or other social widgets that are iframes from another origin.
2) We also need to protect against sensitive information from the same origin (e.g., the color of links reveals whether the user has visited certain URLs).

As I wrote in the other thread, I don't think the security issues are significant enough to prevent implementation of this feature behind a runtime flag.  The more significant issues are the constraints custom shaders impose on the rest of the system.  We need to be careful not to commit to web-exposed APIs that constrain future development of the browser, especially if other browser vendors don't have those constraints.  That's a recipe for falling behind other browsers.

Adam

mvuj...@adobe.com

unread,
May 15, 2013, 4:43:53 PM5/15/13
to blin...@chromium.org

Since the Custom Filters discussion has touched a few different topics, I think it would help to organize it around three distinct themes: implementability, security, and feature fit.


Implementability


Alex and I understand and agree that executing Custom Filters during a compositing pass is undesirable for Chromium’s architecture.


As a possible alternative, Chromium can execute Custom Filters outside of the compositing pass. During our meeting last Friday at Google, we just started exploring this approach, its challenges, and some potential solutions. Alex and I think this is the right direction, and we’ve worked out some more details. We will follow up to this thread with the details, and we’re looking forward to collaborating on the solution.


Security


There are two aspects to the security issue I would like to call out and clarify.


1) DoS Attacks


For some background, WebGL executes outside of the compositing pass. With most current graphics hardware, this does not solve the DoS attack issue. Current hardware supports only one thread of GL command execution. Since the GPU can only execute commands serially, a long-running WebGL shader can block the compositor’s commands from executing. Future graphics hardware may have multiple threads of GL command execution. With future hardware, running WebGL outside of the compositing pass can prevent DoS attacks. For example, WebGL can execute a long-running shader on one thread of GL command execution, while the compositor can execute its commands on a different, parallel thread of execution.


Regarding Custom Filters, we solve the DoS attack issue for future hardware in the same manner as WebGL, by executing Custom Filters outside of the compositing pass.


2) Timing Attacks


Timing attacks are prevented by design in Custom Filters. We disallow author access to the DOM element texture. Instead, the author outputs a color that the browser blends with the DOM element texture. Authors cannot leak information that they don’t have access to.
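
To make this concrete, an author's fragment shader in this model looks roughly like the sketch below: it never samples the element's pixels; it only produces a blend color and a color matrix that the browser later combines with the DOM element texture. The css_MixColor / css_ColorMatrix names follow the spec draft of the time and the TypeScript wrapper is just a container for the shader string; treat the exact details as illustrative.

const customFilterFragmentShader = `
  precision mediump float;
  varying vec2 v_texCoord;

  void main() {
    // No sampler for the element texture is in scope, so the shader cannot
    // observe the element's pixel values, and therefore cannot make its
    // running time depend on them.
    css_MixColor = vec4(v_texCoord.x, v_texCoord.y, 0.0, 1.0);
    css_ColorMatrix = mat4(1.0); // identity: element colors pass through
  }
`;

The browser then performs the blend of that output with the element texture on the author's behalf, outside the author's code.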


We are not aware of any ways to perform a timing attack with this security model, conceptually or in its current implementation. Please do point any concerns our way or file bugs on the existing implementations.


There is a W3C wiki page that provides additional details regarding the security discussion: http://www.w3.org/Graphics/fx/wiki/CSS_Shaders_Security


Feature Fit


If there are concerns regarding how the feature fits into the web platform, we should move that discussion to the W3C public-fx mailing list. This is a great place to discuss the definition of the feature and its use cases with other vendors who are implementing it.


A few prior discussions include:


Defining a CSS syntax for Custom Filters that can support alternative and future filter languages or formats (other than GLSL): http://lists.w3.org/Archives/Public/public-fx/2012OctDec/0029.html


Discussion around the security model: http://lists.w3.org/Archives/Public/public-fx/2012AprJun/0010.html

mvuj...@adobe.com

unread,
May 15, 2013, 5:45:37 PM5/15/13
to blin...@chromium.org, Gregg Tavares, Antoine Labour, Sami Kyostila, Vangelis Kokkevis, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, Kenneth Russell
On Wednesday, May 15, 2013 8:18:11 AM UTC-7, Adam Barth wrote:
On Wed, May 15, 2013 at 2:12 AM, Gregg Tavares <gm...@chromium.org> wrote:
On Tue, May 14, 2013 at 6:28 PM, Antoine Labour <pi...@chromium.org> wrote:
On Tue, May 14, 2013 at 6:14 PM, Gregg Tavares <gm...@chromium.org> wrote:
The proposal put forward before was to have WebGL mark shaders as safe or not. Safe shaders would follow the CSS Custom FX spec (no texture access period, only a color multiplier/matrix).

The simplest implementation is:  A WebGL context starts out as safe and clean. Safe = no non-safe shaders. Clean = no unsafe content.  If a non-safe shader is compiled the safe flag is cleared. If non safe content is uploaded the clean flag is cleared. If both flags are cleared the context is lost. If the safe flag is cleared no unsafe content may be uploaded (same as today where no unsafe content may be uploaded). If the clean flag is cleared readPixels, toDataURL and using the canvas a source to other APIs (drawImage, texImage2D, ) fail with a Security exception (this is/was already implemented before we disallowed all unsafe content)

That seems like it would work. What am I missing?

The attack model OOP iframes want to protect against is a compromised renderer, not just malicious javascript.

The scheme you suggest would need to be implemented in the GPU process (and in terms of GL/command-buffer calls, not in terms of WebGL API). It would also mean that the entire renderer would have to be disallowed to do glReadPixels, not just a single context - otherwise a compromised renderer can just use share groups or shared textures to readback the same thing from another context. Because of side channel attacks I think you'd need to restrict the shaders the compositor can use too, and that wouldn't be good. I suppose you could whitelist the compositor ones, but skia dynamically generates its shaders for filters and stuff.

Maybe it can be solved, but it seems like a lot of complexity and work.

Sorry if I'm not up on OOP iframes. If we're protecting against compromised renderers that's already a problem today. A compromised renderer can read all memory in the renderer space (passwords, documents, email) and access all textures used by that process.

That's correct today.  Today we only worry about compromised render processes reading or writing the user's local file system.  With out-of-process iframes, we're hoping to do better and protect against compromised renderers learning passwords, documents, emails, etc from other origins.

How about we just mark the entire page as not clean if there are any OOP iframes or is that too restricting?
 
There are two issues with that approach:

1) It's pretty restrictive.  Many web sites have like buttons or other social widgets that are iframes from another origin.
2) We also need to protect against sensitive information from the same origin (e.g., the color of links reveals whether the user has visited certain URLs).

As I wrote in the other thread, I don't think the security issues are significant enough to prevent implementation of this feature behind a runtime flag.  The more significant issues are the constraints custom shaders impose on the rest of the system.

Could you be specific about the constraints, so we can make sure they are addressed?
 
 We need to be careful not to commit to web-exposed APIs that constrain future development of the browser, especially if other browser vendors don't have those constraints.  That's a recipe for falling behind other browsers.

Absolutely, we shouldn't commit to APIs that constrain the future development of the browser. However, we should also be careful to avoid architectural constraints that prevent future innovation. That will also make us fall behind other browsers.

In my opinion, graphical innovation in HTML and CSS won't stop at fixed-pipeline primitives like built-in filters. OpenGL itself had to make the leap from fixed pipeline to programmable pipeline at one point. I want Chromium to be at the forefront of graphical innovation, and I'm sure we all share this desire :)
 
(Sorry if you received multiple copies of this response. I've had trouble getting it to appear on the blink-dev thread.)

- Max

mvuj...@adobe.com

unread,
May 15, 2013, 7:13:50 PM5/15/13
to blin...@chromium.org
On Wednesday, May 15, 2013 9:43:53 AM UTC-7, mvuj...@adobe.com wrote:

Since the Custom Filters discussion has touched a few different topics, I think it would help to organize it around three distinct themes: implementability, security, and feature fit.


Implementability


Alex and I understand and agree that executing Custom Filters during a compositing pass is undesirable for Chromium’s architecture.


As a possible alternative, Chromium can execute Custom Filters outside of the compositing pass. During our meeting last Friday at Google, we just started exploring this approach, its challenges, and some potential solutions. Alex and I think this is the right direction, and we’ve worked out some more details. We will follow up to this thread with the details, and we’re looking forward to collaborating on the solution.


Here are the details of a possible approach to implementing Custom Filters outside of the compositing pass:

Feedback and collaboration are welcome. Everyone is invited to comment on the doc, reply to this thread, and help us shape it.

Thanks everyone!
Max

James Robinson

unread,
May 15, 2013, 8:09:49 PM5/15/13
to Max Vujovic, blink-dev, Gregg Tavares, Antoine Labour, Sami Kyostila, Vangelis Kokkevis, Max Heinritz, Alexandru Chiculita, Kenneth Russell
On Wed, May 15, 2013 at 10:45 AM, <mvuj...@adobe.com> wrote:
On Wednesday, May 15, 2013 8:18:11 AM UTC-7, Adam Barth wrote:
Sorry if I'm not up on OOP iframes. If we're protecting against compromised renderers that's already a problem today. A compromised renderer can read all memory in the renderer space (passwords, documents, email) and access all textures used by that process.

That's correct today.  Today we only worry about compromised render processes reading or writing the user's local file system.  With out-of-process iframes, we're hoping to do better and protect against compromised renderers learning passwords, documents, emails, etc from other origins.

How about we just mark the entire page as not clean if there are any OOP iframes or is that too restricting?
 
There are two issues with that approach:

1) It's pretty restrictive.  Many web sites have like buttons or other social widgets that are iframes from another origin.
2) We also need to protect against sensitive information from the same origin (e.g., the color of links reveals whether the user has visited certain URLs).

As I wrote in the other thread, I don't think the security issues are significant enough to prevent implementation of this feature behind a runtime flag.  The more significant issues are the constraints custom shaders impose on the rest of the system.

Could you be specific about the constraints, so we can make sure they are addressed?

Here's one way to reason about it that might be helpful.  This is a relatively high-level description but when thinking about long-term platform capabilities I think it's more productive to reason at a high level as the details are likely to change.  The practical considerations follow as corollaries from the higher level ones.

The compositing pass is a privileged operation (relative to web content) in a few ways. First, it has access to resources that web content should not have access to. The compositing pass can access the rendered pixels of web content, including cross-origin data, as well as privileged browser UI components. Second, the compositor has to be responsive to user interactions. We've worked hard to make sure that, as much as possible, slow web content does not interfere with the user's overall experience by allowing scrolling and video playback while JavaScript is blocking the main thread, for instance. In order to meet these goals while compositing arbitrary web content, it's essential that the compositor have an understanding of the operations required to composite a page. In terms of privilege levels, we have to validate the data passed from the web content (a less privileged context) to the compositor (a more privileged context) to ensure we aren't accidentally elevating the privileges of web content. We can analyse a layer tree with transforms, filter chains, animations, etc. to make sure that we aren't leaking information that we shouldn't and to decide the right rendering technique to hit our performance goals. If the input to the compositor is a program, we do not have any good way to inspect and understand what the program does (other than simulating it, which defeats the purpose of the proposal).

WebGL, as an example, doesn't have the same constraints since it only has access to resources that web content has access to, and we can choose to render it at whatever framerate is appropriate for the user. In fact, the compositor's scheduler is in control of when we allow the web to produce WebGL frames. Since WebGL is less privileged, we can provide the author far more direct control. Control and privilege have an inverse relationship.

I think there are two primary ways to address the use cases while satisfying this constraint. The first is to express the author's intent in a way that the compositor can understand and validate. For instance, if there were a way to express the mesh or desired deformations in a more direct way, we could use that as input to the compositor and determine how best to render it. The second is to move the visual effects portion of this outside of the compositor. For instance, if we could have the content the author wanted to manipulate in a WebGL texture, and do so in a way that avoids accidental privilege escalations, we could then let the author use WebGL to manipulate it with the precise control we allow WebGL.

- James

Gregg Tavares

unread,
May 15, 2013, 9:27:55 PM5/15/13
to Adam Barth, Gregg Tavares, Antoine Labour, Sami Kyostila, Vangelis Kokkevis, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev, Kenneth Russell
On Wed, May 15, 2013 at 8:18 AM, Adam Barth <aba...@chromium.org> wrote:
On Wed, May 15, 2013 at 2:12 AM, Gregg Tavares <gm...@chromium.org> wrote:
On Tue, May 14, 2013 at 6:28 PM, Antoine Labour <pi...@chromium.org> wrote:
On Tue, May 14, 2013 at 6:14 PM, Gregg Tavares <gm...@chromium.org> wrote:
The proposal put forward before was to have WebGL mark shaders as safe or not. Safe shaders would follow the CSS Custom FX spec (no texture access period, only a color multiplier/matrix).

The simplest implementation is:  A WebGL context starts out as safe and clean. Safe = no non-safe shaders. Clean = no unsafe content.  If a non-safe shader is compiled the safe flag is cleared. If non safe content is uploaded the clean flag is cleared. If both flags are cleared the context is lost. If the safe flag is cleared no unsafe content may be uploaded (same as today where no unsafe content may be uploaded). If the clean flag is cleared readPixels, toDataURL and using the canvas a source to other APIs (drawImage, texImage2D, ) fail with a Security exception (this is/was already implemented before we disallowed all unsafe content)

That seems like it would work. What am I missing?

The attack model OOP iframes want to protect against is a compromised renderer, not just malicious javascript.

The scheme you suggest would need to be implemented in the GPU process (and in terms of GL/command-buffer calls, not in terms of WebGL API). It would also mean that the entire renderer would have to be disallowed to do glReadPixels, not just a single context - otherwise a compromised renderer can just use share groups or shared textures to readback the same thing from another context. Because of side channel attacks I think you'd need to restrict the shaders the compositor can use too, and that wouldn't be good. I suppose you could whitelist the compositor ones, but skia dynamically generates its shaders for filters and stuff.

Maybe it can be solved, but it seems like a lot of complexity and work.

Sorry if I'm not up on OOP iframes. If we're protecting against compromised renderers that's already a problem today. A compromised renderer can read all memory in the renderer space (passwords, documents, email) and access all textures used by that process.

That's correct today.  Today we only worry about compromised render processes reading or writing the user's local file system.  With out-of-process iframes, we're hoping to do better and protect against compromised renderers learning passwords, documents, emails, etc from other origins.

How about we just mark the entire page as not clean if there are any OOP iframes or is that too restricting?
 
There are two issues with that approach:

1) It's pretty restrictive.  Many web sites have like buttons or other social widgets that are iframes from another origin.
2) We also need to protect against sensitive information from the same origin (e.g., the color of links reveals whether the user has visited certain URLs).

Comment #2 makes me feel like maybe some people haven't understood the proposals? The current proposals, both CSS Custom FX and DOM element to WebGL textures, do not allow revealing the color of links. JavaScript has no access to the pixel color, and the shader language proposed is constrained such that no timing attacks are possible, because even the shader does not have access to the colors.

Adam Barth

unread,
May 15, 2013, 11:00:30 PM5/15/13
to Gregg Tavares, Antoine Labour, Sami Kyostila, Vangelis Kokkevis, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, blink-dev, Kenneth Russell
On Wed, May 15, 2013 at 2:27 PM, Gregg Tavares <gm...@chromium.org> wrote:
On Wed, May 15, 2013 at 8:18 AM, Adam Barth <aba...@chromium.org> wrote:
On Wed, May 15, 2013 at 2:12 AM, Gregg Tavares <gm...@chromium.org> wrote:
On Tue, May 14, 2013 at 6:28 PM, Antoine Labour <pi...@chromium.org> wrote:
On Tue, May 14, 2013 at 6:14 PM, Gregg Tavares <gm...@chromium.org> wrote:
The proposal put forward before was to have WebGL mark shaders as safe or not. Safe shaders would follow the CSS Custom FX spec (no texture access period, only a color multiplier/matrix).

The simplest implementation is:  A WebGL context starts out as safe and clean. Safe = no non-safe shaders. Clean = no unsafe content.  If a non-safe shader is compiled the safe flag is cleared. If non safe content is uploaded the clean flag is cleared. If both flags are cleared the context is lost. If the safe flag is cleared no unsafe content may be uploaded (same as today where no unsafe content may be uploaded). If the clean flag is cleared readPixels, toDataURL and using the canvas a source to other APIs (drawImage, texImage2D, ) fail with a Security exception (this is/was already implemented before we disallowed all unsafe content)

That seems like it would work. What am I missing?

The attack model OOP iframes want to protect against is a compromised renderer, not just malicious javascript.

The scheme you suggest would need to be implemented in the GPU process (and in terms of GL/command-buffer calls, not in terms of WebGL API). It would also mean that the entire renderer would have to be disallowed to do glReadPixels, not just a single context - otherwise a compromised renderer can just use share groups or shared textures to readback the same thing from another context. Because of side channel attacks I think you'd need to restrict the shaders the compositor can use too, and that wouldn't be good. I suppose you could whitelist the compositor ones, but skia dynamically generates its shaders for filters and stuff.

Maybe it can be solved, but it seems like a lot of complexity and work.

Sorry if I'm not up on OOP iframes. If we're protecting against compromised renderers that's already a problem today. A compromised renderer can read all memory in the renderer space (passwords, documents, email) and access all textures used by that process.

That's correct today.  Today we only worry about compromised render processes reading or writing the user's local file system.  With out-of-process iframes, we're hoping to do better and protect against compromised renderers learning passwords, documents, emails, etc from other origins.

How about we just mark the entire page as not clean if there are any OOP iframes or is that too restricting?
 
There are two issues with that approach:

1) It's pretty restrictive.  Many web sites have like buttons or other social widgets that are iframes from another origin.
2) We also need to protect against sensitive information from the same origin (e.g., the color of links reveals whether the user has visited certain URLs).

Comment #2 makes me feel like maybe some people haven't understood the proposals? The current proposals, both CSS Custom FX and the DOM Element to WebGL textures do not allow revealing the color of links. JavaScript has no access to the pixel color and the shader language proposed is constrained such that no timing attacks are possible because even the shader does not have access to the colors.
 
I would say that's the goal of the constraints on the shader language.  It's not 100% clear to me that the constraints achieve that goal.

Adam

James Robinson

unread,
May 15, 2013, 11:43:17 PM5/15/13
to Gregg Tavares, Adam Barth, Antoine Labour, Sami Kyostila, Vangelis Kokkevis, Max Heinritz, Alexandru Chiculita, Max Vujovic, blink-dev, Kenneth Russell
On Wed, May 15, 2013 at 2:27 PM, Gregg Tavares <gm...@chromium.org> wrote:



On Wed, May 15, 2013 at 8:18 AM, Adam Barth <aba...@chromium.org> wrote:
On Wed, May 15, 2013 at 2:12 AM, Gregg Tavares <gm...@chromium.org> wrote:
On Tue, May 14, 2013 at 6:28 PM, Antoine Labour <pi...@chromium.org> wrote:
On Tue, May 14, 2013 at 6:14 PM, Gregg Tavares <gm...@chromium.org> wrote:
The proposal put forward before was to have WebGL mark shaders as safe or not. Safe shaders would follow the CSS Custom FX spec (no texture access period, only a color multiplier/matrix).

The simplest implementation is:  A WebGL context starts out as safe and clean. Safe = no non-safe shaders. Clean = no unsafe content.  If a non-safe shader is compiled the safe flag is cleared. If non safe content is uploaded the clean flag is cleared. If both flags are cleared the context is lost. If the safe flag is cleared no unsafe content may be uploaded (same as today where no unsafe content may be uploaded). If the clean flag is cleared readPixels, toDataURL and using the canvas a source to other APIs (drawImage, texImage2D, ) fail with a Security exception (this is/was already implemented before we disallowed all unsafe content)

That seems like it would work. What am I missing?

The attack model OOP iframes want to protect against is a compromised renderer, not just malicious javascript.

The scheme you suggest would need to be implemented in the GPU process (and in terms of GL/command-buffer calls, not in terms of WebGL API). It would also mean that the entire renderer would have to be disallowed to do glReadPixels, not just a single context - otherwise a compromised renderer can just use share groups or shared textures to readback the same thing from another context. Because of side channel attacks I think you'd need to restrict the shaders the compositor can use too, and that wouldn't be good. I suppose you could whitelist the compositor ones, but skia dynamically generates its shaders for filters and stuff.

Maybe it can be solved, but it seems like a lot of complexity and work.

Sorry if I'm not up on OOP iframes. If we're protecting against compromised renderers that's already a problem today. A compromised renderer can read all memory in the renderer space (passwords, documents, email) and access all textures used by that process.

That's correct today.  Today we only worry about compromised render processes reading or writing the user's local file system.  With out-of-process iframes, we're hoping to do better and protect against compromised renderers learning passwords, documents, emails, etc from other origins.

How about we just mark the entire page as not clean if there are any OOP iframes or is that too restricting?
 
There are two issues with that approach:

1) It's pretty restrictive.  Many web sites have like buttons or other social widgets that are iframes from another origin.
2) We also need to protect against sensitive information from the same origin (e.g., the color of links reveals whether the user has visited certain URLs).

Comment #2 makes me feel like maybe some people haven't understood the proposals? The current proposals, both CSS Custom FX and the DOM Element to WebGL textures do not allow revealing the color of links. JavaScript has no access to the pixel color and the shader language proposed is constrained such that no timing attacks are possible because even the shader does not have access to the colors.

There are multiple aspects here.  The shader validation is intended to have the end effect that running the shader program does not reveal information about the values of pixels.  That's an interesting topic for discussion on its own.  However, GPUs and OpenGL know nothing about these shader language restrictions - they just know GLSL.  That means in order to actually run the shader program the compositor has to set up a context that has the program and that has access to the source texture.  In terms of our implementation, this means running the shader program in a context that has access to the compositor's texture.  Consider a simple example with foo.com that has an <iframe> pointed to bar.com.  Assuming OOP iframes and ubercompositor, we would have 4 relevant processes:

Renderer A hosting foo.com. Trusted with access to textures from foo.com only.
Renderer B hosting bar.com. Trusted with access to textures from bar.com.
GPU process, trusted with all textures and responsible for managing each context's access to them.
Browser process, trusted with access to everything.

The compositor in the browser process is responsible for compositing together the resources from renderer A, renderer B, and the browser's UI into the final frame. Now say we want to run a shader program from foo.com on the composited output via either CSS Custom FX or DOM<->WebGL integration. Where is it validated, and where does it run? We can't give any context that Renderer A can reach access to textures that contain content from Renderer B or the browser, because at the context layer, access to a texture means access to all of its pixels. I also don't think we can reasonably run WebGL commands for a renderer from the browser process, unless you know of a good way to do that.

I think the reason this is difficult is that the validation is applied at a program-rewriting layer on top of OpenGL, but all of our GPU infrastructure (and the driver's infrastructure) operates at a lower, more powerful level. I think we would end up having to invent new primitives for a renderer and the GPU process to communicate with each other about what level of access is provided to a given resource. This may be possible but would require a lot of careful thought to get a robust design.

Longer term, even without iframes it'd be really nice to restrict Renderer A's access to cross-origin data completely. For instance, if renderer A has an <img> pointed to a cross-origin resource, it needs access to the metadata for that image in order to do layout, but it really doesn't need access to the actual pixels for anything other than rendering, which the compositor could take care of. In the not-so-distant future one could imagine the image decoding and caching logic living in a different process (GPU or otherwise), with renderers only having opaque handles that are passed off to the compositor to rasterize out of process. We should be careful that we don't introduce additional barriers to longer-term security improvements like this.


- James

chic...@gmail.com

unread,
May 16, 2013, 12:16:22 AM5/16/13
to blin...@chromium.org, Gregg Tavares, Adam Barth, Antoine Labour, Sami Kyostila, Vangelis Kokkevis, Max Heinritz, Alexandru Chiculita, Max Vujovic, Kenneth Russell, jam...@google.com
The texture of an iframe never returns to the parent renderer process. It is only handled between the UberCompositor and the GPU process. We've explained this in our proposal document here: https://docs.google.com/document/d/1plAtevLg4179nEP32oOjeGI9SDxaGfy1TgnBPpPR2-c/edit?usp=sharing

Moreover, Custom Filters and WebGL are validated and executed in the GPU process. If this process is compromised in any way, it would already have access to the whole of GPU memory, because it is the one responsible for making the OpenGL calls. Custom Filters do not expose any new vulnerabilities in the GPU process, and they actually have a much stricter API surface than WebGL itself.
 

I think the reason this is difficult is the validation is applied at a program rewriting layer applied on top of OpenGL, but all of our GPU infrastructure (and the driver's infrastructure) operates at a lower more powerful level.

WebGL already validates and rewrites GLSL shaders before sending them to the OpenGL subsystem. There are a couple of security mitigations, like out-of-bounds array access clamping [1], that ANGLE applies to WebGL shaders.
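
For example, the kind of rewrite that index clamping performs looks roughly like this (sketch only; the helper name and exact output are made up, and ANGLE's real output differs by backend):

// Author-supplied GLSL, where `i` could be out of range:
const authorShader = `
  uniform float weights[4];
  float pick(int i) { return weights[i]; }
`;

// What the validator conceptually turns it into before it reaches the driver:
const rewrittenShader = `
  uniform float weights[4];
  int clampIndex(int i) { return i < 0 ? 0 : (i > 3 ? 3 : i); }
  float pick(int i) { return weights[clampIndex(i)]; }
`;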
 
 I think we would end up having to invent new primitives for a renderer and the GPU process to communicate with each other about what level of access is provided to a given resource.

The only resources sent from the renderer process to the GPU process are the shader strings. The GPU process is ultimately responsible for validating the shaders and executing them on textures obtained directly from the UberCompositor process. The renderer process never has to touch rendered content, be it from its own domain or others.
 
 This may be possible but would require a lot of careful thought to get a robust design.

Longer term, even without iframes it'd be really nice to restrict Renderer A's access to cross-origin data completely.  For instance if renderer A has an <img> pointed to a cross-origin resource, it needs access to the metadata for that image in order to do layout but it really doesn't need access to the actual pixels for anything other than rendering, which the compositor could take care of.  In the not-so-distant future one could imaging the image decoding and caching logic living in a different process (GPU or otherwise) and renderers only having opaque handles to them that are passed off the compositor to rasterize out of process.  We should be careful that we don't introduce additional barriers to longer-term security improvements like this.

This proposal is orthogonal to security improvements regarding the visibility of cross-domain resources across different renderer processes. The only processes that ultimately have access to the rendered content are the UberCompositor and the GPU process, which already have access to everything, since they have access to OpenGL (or any other platform technology).
 


- James

-Alex 


mvuj...@adobe.com

unread,
May 17, 2013, 6:53:05 PM5/17/13
to blin...@chromium.org, Gregg Tavares, Antoine Labour, Sami Kyostila, Vangelis Kokkevis, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, Kenneth Russell
On Wednesday, May 15, 2013 4:00:30 PM UTC-7, Adam Barth wrote:
On Wed, May 15, 2013 at 2:27 PM, Gregg Tavares <gm...@chromium.org> wrote:
On Wed, May 15, 2013 at 8:18 AM, Adam Barth <aba...@chromium.org> wrote:
On Wed, May 15, 2013 at 2:12 AM, Gregg Tavares <gm...@chromium.org> wrote:
On Tue, May 14, 2013 at 6:28 PM, Antoine Labour <pi...@chromium.org> wrote:
On Tue, May 14, 2013 at 6:14 PM, Gregg Tavares <gm...@chromium.org> wrote:
The proposal put forward before was to have WebGL mark shaders as safe or not. Safe shaders would follow the CSS Custom FX spec (no texture access period, only a color multiplier/matrix).

The simplest implementation is:  A WebGL context starts out as safe and clean. Safe = no non-safe shaders. Clean = no unsafe content.  If a non-safe shader is compiled the safe flag is cleared. If non safe content is uploaded the clean flag is cleared. If both flags are cleared the context is lost. If the safe flag is cleared no unsafe content may be uploaded (same as today where no unsafe content may be uploaded). If the clean flag is cleared readPixels, toDataURL and using the canvas a source to other APIs (drawImage, texImage2D, ) fail with a Security exception (this is/was already implemented before we disallowed all unsafe content)

That seems like it would work. What am I missing?

The attack model OOP iframes want to protect against is a compromised renderer, not just malicious javascript.

The scheme you suggest would need to be implemented in the GPU process (and in terms of GL/command-buffer calls, not in terms of WebGL API). It would also mean that the entire renderer would have to be disallowed to do glReadPixels, not just a single context - otherwise a compromised renderer can just use share groups or shared textures to readback the same thing from another context. Because of side channel attacks I think you'd need to restrict the shaders the compositor can use too, and that wouldn't be good. I suppose you could whitelist the compositor ones, but skia dynamically generates its shaders for filters and stuff.

Maybe it can be solved, but it seems like a lot of complexity and work.

Sorry if I'm not up on OOP iframes. If we're protecting against compromised renderers that's already a problem today. A compromised renderer can read all memory in the renderer space (passwords, documents, email) and access all textures used by that process.

That's correct today.  Today we only worry about compromised render processes reading or writing the user's local file system.  With out-of-process iframes, we're hoping to do better and protect against compromised renderers learning passwords, documents, emails, etc from other origins.

How about we just mark the entire page as not clean if there are any OOP iframes or is that too restricting?
 
There are two issues with that approach:

1) It's pretty restrictive.  Many web sites have like buttons or other social widgets that are iframes from another origin.
2) We also need to protect against sensitive information from the same origin (e.g., the color of links reveals whether the user has visited certain URLs).

Comment #2 makes me feel like maybe some people haven't understood the proposals? The current proposals, both CSS Custom FX and the DOM Element to WebGL textures do not allow revealing the color of links. JavaScript has no access to the pixel color and the shader language proposed is constrained such that no timing attacks are possible because even the shader does not have access to the colors.
 
I would say that's the goal of the constraints on the shader language.  It's not 100% clear to me that the constraints achieve that goal.

I've written up a document [1] that covers the shader validation and rewriting in detail. I hope it clarifies how the constraints on the language prevent sensitive information from leaking through the timing channel. Comments / clarifications / suggestions are welcome :)


- Max

mvuj...@adobe.com

unread,
May 30, 2013, 10:15:50 PM5/30/13
to blin...@chromium.org, Gregg Tavares, Antoine Labour, Sami Kyostila, Vangelis Kokkevis, Max Heinritz, James Robinson, Alexandru Chiculita, Max Vujovic, Kenneth Russell, Adam Barth, Eric Seidel

Hi all,

I think this discussion has been incredibly useful and has really helped define how Custom Filters can fit with Chromium's architectural plans. I want to thank everyone for participating! :)

To help organize the output of the discussion, I've created a table [1] of the concerns and the proposed resolutions that were brought up regarding Custom Filters. I've tried to be complete, so please comment if I've missed anything or if there are any new concerns.

[1]: https://docs.google.com/document/d/1qDGmAIXdShzO7J9o6_lrCENGgkq-gKWcOiGqERppHCo/edit?usp=sharing

Thanks,
Max

Adam Barth

unread,
May 31, 2013, 9:25:15 AM5/31/13
to Max Vujovic, blink-dev, Gregg Tavares, Antoine Labour, Sami Kyostila, Vangelis Kokkevis, Max Heinritz, James Robinson, Alexandru Chiculita, Kenneth Russell, Eric Seidel
On Thu, May 30, 2013 at 3:15 PM, <mvuj...@adobe.com> wrote:

I think this discussion has been incredibly useful and has really helped define how Custom Filters can fit with Chromium's architectural plans. I want to thank everyone for participating! :)

To help organize the output of the discussion, I've created a table [1] of the concerns and the proposed resolutions that were brought up regarding Custom Filters. I've tried to be complete, so please comment if I've missed anything or if there are any new concerns.

[1]: https://docs.google.com/document/d/1qDGmAIXdShzO7J9o6_lrCENGgkq-gKWcOiGqERppHCo/edit?usp=sharing

Thanks for taking the time to write up this summary of the discussion.  I've left a number of comments in the doc.

Adam