Mixed Content / CSP in browser process


Charlie Reis

Oct 6, 2015, 1:38:08 PM
to mk...@chromium.org, est...@chromium.org, Alex Moshchuk, Nasko Oskov, David Benjamin, cl...@chromium.org, site-isol...@chromium.org
Hi Mike, Emily, et al--
  Now that the Blink repo merge has happened, we should be ready to move the mixed content and CSP checks to the browser process (https://crbug.com/486936).  That will be especially useful once Site Isolation is enabled, so that a compromised renderer can't skip them.  I think it's also a prerequisite for PlzNavigate.

  Mike, you raised a good question about where they should live.  Much of the relevant info about frames lives on the UI thread in the browser process (e.g., FrameTreeNode, RenderFrameHost, FrameReplicationState), so I imagine that would make things easier than having it on the IO thread.  Then again, there could be performance reasons to have it on the IO thread.

  There are some similar checks in ChildProcessSecurityPolicy (which is one of the rare autolock classes available across threads), but it seems like the logic might belong better in its own classes than in there.

  Ultimately, it probably depends on what input these checks need, and what points in time they need to happen.  Does anyone have a good sense for this, and could they write up some notes to guide the design?

Thanks!
Charlie

Emily Stark

Oct 8, 2015, 9:45:29 AM
to Charlie Reis, Joel Weinberger, mk...@chromium.org, Emily Stark, Alex Moshchuk, Nasko Oskov, David Benjamin, cl...@chromium.org, site-isol...@chromium.org
Mike, Joel, and I had a chat about mixed content this morning and I wrote up some notes and a strawman proposal here. However, before we get too deep in here, I'm starting to get worried about two somewhat fundamental concerns:

- What happens when Blink loads resources from its memory cache? Presumably we don't want Blink to go asking the browser process whether it's allowed to use cached resources.

- I'm worried that doing mixed content checks in the browser is sort of fundamentally racy, because the browser doesn't know which navigation entry or document a given request corresponds to. Imagine that Blink initiates a navigation and then, before it commits, issues a mixed request for a subresource. Events could shake out in the following order, regardless of whether the mixed content check itself happens on the IO thread or the UI thread:

1. Browser process receives the subresource request on the IO thread.
2. Renderer receives navigation response and commits the navigation.
3. UI thread receives notification that navigation has committed.
4. IO thread tells UI thread about the mixed content.

Now the UI thread updates the display for the wrong navigation entry. (I'm not sure if this is something that a class like ChildProcessSecurityPolicy could help us with.) (David: I think this is similar to the race you pointed out in talking about the broken-HTTPS subresources case.)
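[Editor's note: the four-step ordering above can be modeled in a few lines. This is an illustrative toy, not Chromium code; the point is that a mixed-content signal carrying no document identity gets attributed to whatever entry is current when it arrives.]

```python
# Toy model of the race: the UI thread applies a mixed-content signal to
# whichever navigation entry happens to be current when the signal arrives.

class UiThread:
    def __init__(self):
        self.current_entry = "old_document"
        self.mixed_content_flag_on = None

    def on_navigation_committed(self, entry):
        self.current_entry = entry

    def on_mixed_content(self):
        # The signal carries no document identity, so it is attributed
        # to whatever entry is current at this moment.
        self.mixed_content_flag_on = self.current_entry

ui = UiThread()
# The mixed request was issued by "old_document", but the commit
# notification wins the race (steps 3 and 4 in the list above):
ui.on_navigation_committed("new_document")
ui.on_mixed_content()
assert ui.mixed_content_flag_on == "new_document"  # wrong entry flagged
```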

Thanks,
Emily

Emily Stark

Oct 12, 2015, 3:35:53 AM
to Charlie Reis, Joel Weinberger, mk...@chromium.org, Alex Moshchuk, Nasko Oskov, David Benjamin, cl...@chromium.org, site-isol...@chromium.org
I'm going to reply to my own email, but hopefully someone else will reply soon or else I'll just keep talking to myself. :P

Here is where I'm leaning on this right now... I think we might be able to resolve the race I mentioned (see inline comment below), though it comes at the cost of always relying on Blink to tell the browser when to update the UI, which is a little sad. In general, I've gone back and forth on whether there is actual security benefit to having mixed content checks in the browser, and I think that there is, but it's not a huge obvious unadulterated security win because we will still have to trust the renderer to some extent (e.g. for loading resources from the memory cache, and for providing the resource context to the browser when not loading from the memory cache).

Moreover, it seems that Blink will always need to be able to do mixed content checks itself for resources loaded from the memory cache, using its idea of the frame tree instead of relying on the browser to do the check.

Therefore, I think a reasonable goal for this quarter would be for me to adapt the existing Blink MixedContentChecker to work with RemoteFrames. This will prevent OOPIFs from getting blocked on mixed content, and it's work that we will have to do no matter what: we might end up moving the actual mixed content check into a component and calling it from both Blink and the browser process, but Blink will still need to be able to provide the inputs to the mixed content check from RemoteFrames.

On Thu, Oct 8, 2015 at 3:45 PM, Emily Stark <est...@chromium.org> wrote:
Mike, Joel, and I had a chat about mixed content this morning and I wrote up some notes and a strawman proposal here. However, before we get too deep in here, I'm starting to get worried about two somewhat fundamental concerns:

- What happens when Blink loads resources from its memory cache? Presumably we don't want Blink to go asking the browser process whether it's allowed to use cached resources.

- I'm worried that doing mixed content checks in the browser is sort of fundamentally racy, because the browser doesn't know which navigation entry or document a given request corresponds to. Imagine that Blink initiates a navigation and then, before it commits, issues a mixed request for a subresource. Events could shake out in the following order, regardless of whether the mixed content check itself happens on the IO thread or the UI thread:

1. Browser process receives the subresource request on the IO thread.
2. Renderer receives navigation response and commits the navigation.
3. UI thread receives notification that navigation has committed.
4. IO thread tells UI thread about the mixed content.

I think we can resolve this as follows: when the browser process makes a decision about a mixed request, it can mark the request with that decision and send a signal back down to Blink. Blink can then decide whether the request corresponds to a document that is still current, and tell the browser to update the UI if so.
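[Editor's note: a minimal sketch of the round trip proposed here, with invented names (these are not Chromium APIs): the browser stamps its decision onto the request, and Blink only forwards a UI update if the requesting document is still current.]

```python
# Sketch of the tag-and-confirm protocol: browser decides, renderer
# gates the UI update on document liveness.

def browser_check_mixed(request):
    # Browser-side decision, stamped onto the request itself.
    request["is_mixed"] = request["url"].startswith("http://")
    return request

def renderer_confirm(request, current_document):
    # Blink knows which document issued the request; it only asks the
    # browser to update the UI if that document is still current.
    if request["is_mixed"] and request["document"] == current_document:
        return "update_ui"
    return "drop"

req = browser_check_mixed(
    {"url": "http://insecure.example/img.png", "document": "doc_a"})
assert renderer_confirm(req, current_document="doc_a") == "update_ui"
# After doc_a is replaced by a committed navigation, the stale signal
# is dropped instead of flagging the new entry:
assert renderer_confirm(req, current_document="doc_b") == "drop"
```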

Joel Weinberger

Oct 12, 2015, 3:58:42 AM
to Emily Stark, Charlie Reis, mk...@chromium.org, Alex Moshchuk, Nasko Oskov, David Benjamin, cl...@chromium.org, site-isol...@chromium.org
I'm going to be unpopular opinion puffin here for a moment and state that I don't care if mixed content checks happen in the renderer.

The goal of mixed content is to alert the user to untrusted and potentially dangerous content entering the renderer. However, the premise of moving mixed content checking to the browser process is that we are worried about a compromised renderer. But if that's the case, then the renderer is already filled with untrustworthy and potentially dangerous content, so does it actually matter if we stop -new- dangerous content from entering? I'm just not sure what we're trying to stop at that point.

In short, I like estark's plan.
--Joel

Emily Stark (Dunn)

Oct 12, 2015, 4:56:10 AM
to Joel Weinberger, joc...@chromium.org, Emily Stark, Charlie Reis, mk...@chromium.org, Alex Moshchuk, Nasko Oskov, David Benjamin, cl...@chromium.org, site-isol...@chromium.org
+jochen

Charlie Reis

Oct 12, 2015, 4:45:31 PM
to Emily Stark (Dunn), Joel Weinberger, Jochen Eisinger, Emily Stark, mk...@chromium.org, Alex Moshchuk, Nasko Oskov, David Benjamin, cl...@chromium.org, site-isol...@chromium.org
Sorry for the delayed reply!

On Mon, Oct 12, 2015 at 1:55 AM, Emily Stark (Dunn) <est...@google.com> wrote:
+jochen

On Mon, Oct 12, 2015 at 9:58 AM, Joel Weinberger <j...@google.com> wrote:
I'm going to be unpopular opinion puffin here for a moment and state that I don't care if mixed content checks happen in the renderer.

The goal of mixed content is to alert the user to untrusted and potentially dangerous content entering the renderer. However, the premise of moving mixed content checking to the browser process is that we are worried about a compromised renderer. But if that's the case, then the renderer is already filled with untrustworthy and potentially dangerous content, so does it actually matter if we stop -new- dangerous content from entering? I'm just not sure what we're trying to stop at that point.


I think I disagree with this.  If a top-level frame is secure, then we might need to block subframes from navigating to insecure pages, loading insecure scripts, etc.  The subframe could be an OOPIF in a compromised process, and if the checks to prevent loading insecure content were in the subframe's process, it could skip them.  That violates the assumptions of the top-level frame and the user, even though there's no evil code in its process.

Am I misunderstanding the way mixed content works?  I admit that I don't have a lot of familiarity with it, and maybe we'll have other checks that prevent the case I describe (e.g., cross-site document blocking)?
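[Editor's note: for readers following along, the rule at issue can be sketched as a toy predicate. This is a drastic simplification of the W3C Mixed Content algorithm, not Chromium's MixedContentChecker: a request is mixed if the requesting frame or any of its ancestors is secure while the request itself is not.]

```python
# Simplified mixed-content predicate over a frame's ancestor chain.

def is_secure_url(url):
    return url.startswith("https://") or url.startswith("wss://")

def is_mixed(request_url, frame_ancestor_urls):
    # frame_ancestor_urls: URLs of the requesting frame and its
    # ancestors, innermost first.
    has_secure_context = any(is_secure_url(u) for u in frame_ancestor_urls)
    return has_secure_context and not is_secure_url(request_url)

# An insecure script requested from inside an https:// page is mixed,
# even when the requesting subframe itself is http://
assert is_mixed("http://evil.example/x.js",
                ["http://sub.example/", "https://top.example/"])
assert not is_mixed("https://ok.example/x.js", ["https://top.example/"])
```

This is exactly the check a compromised subframe process could skip if it were enforced only in that process, since the process itself supplies the ancestor chain.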

I'd like to hear Alex's thoughts on the mixed content checks vs Site Isolation, since I know he looked at some of the code.

I'd also like to hear Camille's thoughts, since PlzNavigate depends on at least some of the mixed content and CSP code being in the browser process.  (We won't even have a renderer process in some cases when the request is being made.)

Also, is CSP a totally different topic, or are we looking at some of the same issues there?  Enforcing CSP in the browser process doesn't seem like it adds a lot of security, but I think PlzNavigate might depend on it.  And maybe the browser can help enforce how CSP applies to subframes.


 
In short, I like estark's plan.
--Joel

On Mon, Oct 12, 2015 at 9:35 AM Emily Stark <est...@chromium.org> wrote:
I'm going to reply to my own email, but hopefully someone else will reply soon or else I'll just keep talking to myself. :P

Here is where I'm leaning on this right now... I think we might be able to resolve the race I mentioned (see inline comment below), though it comes at the cost of always relying on Blink to tell the browser when to update the UI, which is a little sad. In general, I've gone back and forth on whether there is actual security benefit to having mixed content checks in the browser, and I think that there is, but it's not a huge obvious unadulterated security win because we will still have to trust the renderer to some extent (e.g. for loading resources from the memory cache, and for providing the resource context to the browser when not loading from the memory cache).

Moreover, it seems that Blink will always need to be able to do mixed content checks itself for resources loaded from the memory cache, using its idea of the frame tree instead of relying on the browser to do the check.

Therefore, I think a reasonable goal for this quarter would be for me to adapt the existing Blink MixedContentChecker to work with RemoteFrames. This will prevent OOPIFs from getting blocked on mixed content, and it's work that we will have to do no matter what: we might end up moving the actual mixed content check into a component and calling it from both Blink and the browser process, but Blink will still need to be able to provide the inputs to the mixed content check from RemoteFrames.

On Thu, Oct 8, 2015 at 3:45 PM, Emily Stark <est...@chromium.org> wrote:
Mike, Joel, and I had a chat about mixed content this morning and I wrote up some notes and a strawman proposal here. However, before we get too deep in here, I'm starting to get worried about two somewhat fundamental concerns:

- What happens when Blink loads resources from its memory cache? Presumably we don't want Blink to go asking the browser process whether it's allowed to use cached resources.

I don't see the memory cache as quite as big a deal, since that particular renderer process had already been allowed to load that resource in the past.

For example, suppose we made sure that HTTP and HTTPS frames never shared a process.  An HTTP process wouldn't be able to load HTTPS pages from its memory cache, or vice versa.  (We're not planning to do this HTTP vs HTTPS policy imminently, but it's one of the options we're considering.)

I'm fine with there being renderer-side checks to make sure we do the right thing in those cases, but we may still want the browser process to have some stronger (possibly conservative) checks.
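[Editor's note: a hypothetical sketch of the policy floated above, with invented class names (not real Chromium classes): if HTTP and HTTPS documents never share a renderer process, a memory-cache hit can be vetted against the scheme the process is locked to.]

```python
# Toy model: a renderer process locked to one scheme refuses
# cross-scheme reuse from its in-process memory cache.

class RendererProcess:
    def __init__(self, locked_scheme):
        self.locked_scheme = locked_scheme  # "http" or "https"
        self.memory_cache = {}

    def cache_store(self, url, body):
        self.memory_cache[url] = body

    def cache_lookup(self, url):
        scheme = url.split("://", 1)[0]
        if scheme != self.locked_scheme:
            return None  # cross-scheme reuse refused
        return self.memory_cache.get(url)

p = RendererProcess("https")
p.cache_store("https://a.example/app.js", "...")
assert p.cache_lookup("https://a.example/app.js") == "..."
assert p.cache_lookup("http://a.example/app.js") is None
```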

 

- I'm worried that doing mixed content checks in the browser is sort of fundamentally racy, because the browser doesn't know which navigation entry or document a given request corresponds to. Imagine that Blink initiates a navigation and then, before it commits, issues a mixed request for a subresource. Events could shake out in the following order, regardless of whether the mixed content check itself happens on the IO thread or the UI thread:

1. Browser process receives the subresource request on the IO thread.
2. Renderer receives navigation response and commits the navigation.
3. UI thread receives notification that navigation has committed.
4. IO thread tells UI thread about the mixed content.

I think we can resolve this as follows: when the browser process makes a decision about a mixed request, it can mark the request with that decision and send a signal back down to Blink. Blink can then decide whether the request corresponds to a document that is still current, and tell the browser to update the UI if so.

We might end up with some NavigationHandle IDs in the browser process that could help resolve this race, without depending on Blink.  Camille has more context there.
 
Charlie

Alex Moshchuk

Oct 12, 2015, 7:41:28 PM
to Charlie Reis, Emily Stark (Dunn), Joel Weinberger, Jochen Eisinger, Emily Stark, Mike West, Nasko Oskov, David Benjamin, cl...@chromium.org, site-isol...@chromium.org
On Mon, Oct 12, 2015 at 1:45 PM, Charlie Reis <cr...@google.com> wrote:
I'd like to hear Alex's thoughts on the mixed content checks vs Site Isolation, since I know he looked at some of the code.

I think our biggest problem currently is that with --site-per-process, if someone embeds an OOPIF https://B, https://B can freely load scripts and embed frames from http://B even though those should be blocked.  That's the repro I gave on https://crbug.com/486936.  But to fix this, we don't need to move the mixed content check to the browser process; Emily's proposal will work just fine.

I'm actually a bit split on whether or not the check belongs in the browser process.  If https://B gets compromised, I agree there isn't much point in stopping https://B from loading http://B resources.  The case I'm curious about, which I think Charlie brought up above, is if https://A embeds an OOPIF https://B, which gets compromised, bypasses the mixed content check, and embeds a frame http://A or runs a script from http://A.  This seems bad, since these shouldn't have been loaded in the first place, and since there's no UI warning that some of what the user is looking at for A could now be controlled by MITM.  However, all of this happens in a different process, and the https://A main frame is not affected, so this is nowhere near as bad as putting mixed content into the main frame.  I'd be ok if we go with Emily's proposal to start with and harden it with browser-side checks later.

I'm also curious to hear more about PlzNavigate's dependency on mixed content, since that might be a bigger reason to do it in the browser process.

Joel Weinberger

Oct 13, 2015, 3:58:02 AM
to Alex Moshchuk, Charlie Reis, Emily Stark (Dunn), Jochen Eisinger, Emily Stark, Mike West, Nasko Oskov, David Benjamin, cl...@chromium.org, site-isol...@chromium.org
A couple of points, although I may be very confused, so I'm looking forward to being corrected :-)
  • We're not talking about blocking resources, but only how we mark the omnibox, because we can't know the actual use type of the resource. For example, if the renderer is compromised, even if it's trying to get an HTTP script (which should be blocked), it can request it as an image and then treat it as a script.
  • In the embedded case, couldn't a compromised renderer request an insecure resource, quickly cause an event that forces a navigation (fake a link click, for example) back to itself (and thus now have a "fresh" omnibox without mixed content, even if it's an embedded frame), and then use the insecure resource from the memory cache?
And just to clarify, I'm not particularly arguing against moving whatever we can and need to move into the browser process, I'm just arguing against it being a big security win. Really what I'm arguing is that I think Emily's approach is a good one :-)
--Joel

Camille Lamy

Oct 13, 2015, 6:50:33 AM
to Joel Weinberger, Alex Moshchuk, Charlie Reis, Emily Stark (Dunn), Jochen Eisinger, Emily Stark, Mike West, Nasko Oskov, David Benjamin, site-isol...@chromium.org

I'd also like to hear Camille's thoughts, since PlzNavigate depends on at least some of the mixed content and CSP code being in the browser process.  (We won't even have a renderer process in some cases when the request is being made.)

Also, is CSP a totally different topic, or are we looking at some of the same issues there?  Enforcing CSP in the browser process doesn't seem like it adds a lot of security, but I think PlzNavigate might depend on it.  And maybe the browser can help enforce how CSP applies to subframes.

I'm not very familiar with the way mixed content works, but I can shed some light on the constraints of PlzNavigate. Our main issue with PlzNavigate is that any check that can block navigation prior to commit should be executed in the browser process. For CSP, this entails checking in the browser process whether subframes can follow redirects.

If the mixed content check is only there to show a warning, then we can probably have it execute in the renderer just before it actually commits the navigation and have the result returned to the browser. If it can potentially block the navigation, then it needs to happen in the browser for it to work with PlzNavigate.

When it comes to potential races, I don't think the one described by Emily is possible in practice. The commit IPC goes through the IO thread first before being processed by the UI thread. So, if a notification from the IO thread comes to the UI thread afterwards, it can only come from a request that was still ongoing after the commit of the navigation. However, on the renderer side, we stop those requests before we send the Commit IPC.
Thanks!
Camille 

Mike West

Oct 13, 2015, 7:26:34 AM
to Camille Lamy, Joel Weinberger, Alex Moshchuk, Charlie Reis, Emily Stark (Dunn), Jochen Eisinger, Emily Stark, Nasko Oskov, David Benjamin, site-isol...@chromium.org
> I'm not very familiar with the way mixed content works, but I can shed
> some light on the constraints of PlzNavigate. Our main issue with
> PlzNavigate is that any check that can block navigation prior to commit
> should be executed in the browser process. For CSP, this entails
> checking in the browser process whether subframes can follow redirects.

1. Mixed Content needs to check whether a redirect from a secure
origin to an insecure origin is allowed for nested browsing contexts.

2. CSP needs to check redirects for frame navigations, form
submissions, Beacon, Ping and probably others, and also needs to walk
the frame tree to determine if a response can be framed (similar to
X-Frame-Options, which I assume you'll also be pulling up out of the
renderer?).
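[Editor's note: the frame-tree walk described in point 2 can be sketched roughly as below. This is a toy with exact-origin matching only; real CSP source-expression matching (schemes, wildcards, paths) is far richer.]

```python
# Simplified frame-ancestors / X-Frame-Options-style check: every
# ancestor origin of the frame receiving the response must be allowed
# by the response's policy.

def may_frame(allowed_ancestors, ancestor_origins):
    # allowed_ancestors: origins permitted by the frame-ancestors
    # directive, with ["*"] meaning no restriction.
    if allowed_ancestors == ["*"]:
        return True
    return all(o in allowed_ancestors for o in ancestor_origins)

# https://a.example embeds the response; the policy only allows b.example:
assert not may_frame(["https://b.example"], ["https://a.example"])
assert may_frame(["https://a.example"], ["https://a.example"])
```

Walking the full ancestor chain is precisely what requires either browser-side enforcement or replicated frame-tree state once ancestors live in other processes.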

> If it can
> potentially block the navigation, then it needs to happen in the browser
> for it to work with PlzNavigate.

MIX, CSP, and XFO can all block navigations. It sounds like this is
enough of a reason to pull at least the navigational portion of both
MIX and CSP up out of the renderer.

-mike

Emily Stark

Oct 14, 2015, 10:48:39 AM
to Camille Lamy, Joel Weinberger, Alex Moshchuk, Charlie Reis, Jochen Eisinger, Emily Stark, Mike West, Nasko Oskov, David Benjamin, site-isol...@chromium.org
Thanks, all! It sounds like if this is a prerequisite for PlzNavigate then the question of whether it's a security win might be moot.
Forgive my ignorance, but is this how all IPCs work? (Handled by the IO thread first) Because I was looking at didCommitProvisionalLoad and thought it looked like it went straight to the UI thread.

Camille Lamy

Oct 14, 2015, 10:51:51 AM
to Emily Stark, Joel Weinberger, Alex Moshchuk, Charlie Reis, Jochen Eisinger, Mike West, Nasko Oskov, David Benjamin, site-isol...@chromium.org
Yes, all IPCs are handled by the IO thread first, then sent to the UI thread if the IO thread did not handle them. This is done so that the UI thread does not become bloated with all the resource messages (see https://www.chromium.org/developers/design-documents/inter-process-communication).
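[Editor's note: a toy model of this routing, not Chromium's actual message pump: every message arrives on the IO thread first, and anything the IO thread does not consume is forwarded to the UI thread in arrival order, so forwarded messages cannot be reordered relative to each other.]

```python
# Model of IO-first IPC dispatch with order-preserving forwarding.

def route(messages, io_handles):
    ui_queue = []
    for msg in messages:           # the IO thread sees everything, in order
        if msg in io_handles:
            continue               # consumed on the IO thread
        ui_queue.append(msg)       # forwarded; relative order preserved
    return ui_queue

msgs = ["SubresourceRequest", "CommitNavigation"]
# Resource traffic is handled on the IO thread; the UI thread only
# sees the commit, and in the order it was sent:
assert route(msgs, io_handles={"SubresourceRequest"}) == ["CommitNavigation"]
```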

Charlie Reis

Oct 14, 2015, 3:24:23 PM
to Camille Lamy, Emily Stark, Joel Weinberger, Alex Moshchuk, Jochen Eisinger, Mike West, Nasko Oskov, David Benjamin, site-isol...@chromium.org
Great-- sounds like the navigation blocking for Mixed content, CSP, and XFO should live in the browser process for PlzNavigate.  I think I agree that the security wins would instead come from our upcoming cross-site document blocking effort (which would prevent most off-limits docs from getting into the renderer in the first place, even when requested as an image).  In the meantime, just making the renderer checks work with OOPIFs sounds fine to me.

Thanks!
Charlie

Charlie Reis

Nov 25, 2015, 5:15:42 PM
to Emily Stark, Camille Lamy, Joel Weinberger, Alex Moshchuk, Jochen Eisinger, Mike West, Nasko Oskov, David Benjamin, site-isol...@chromium.org
Emily, I wanted to follow up on this, since we're preparing to do some limited trials of --isolate-extensions on Canary.

Is there something that we can do in the short term to make the renderer checks for CSP and Mixed Content work with OOPIFs?  I'm concerned that we're failing open right now (https://crbug.com/376522 comment 4, https://crbug.com/486936).  For CSP, I imagine that affects things like child-src, default-src, frame-src, and possibly frame-ancestors when RemoteFrames are present.  For mixed content, we load things without warnings.

Any thoughts on what we can do in the short term to mitigate this, and when we can start moving these to the browser process?

Thanks!
Charlie

Emily Stark

Nov 25, 2015, 6:01:14 PM
to Charlie Reis, Camille Lamy, Joel Weinberger, Alex Moshchuk, Jochen Eisinger, Mike West, Nasko Oskov, David Benjamin, site-isol...@chromium.org
Hi Charlie!

1.) I definitely have make-mixed-content-work-with-OOPIFs on my to-do list for this quarter. It sounds like I should probably bump up the priority, though, and I can try to get something working in the next week or so. When I last looked I think I concluded that it wouldn't be too hard to make this work (famous last words) -- mostly a matter of plumbing some additional data in a similar manner as WebSandboxFlags on the replication state.

2.) Nothing is really blocking moving mixed content checks to the browser process except someone having time to work on it, AFAIK. I was planning to focus on #1 first because we'll need to do it anyway (for memory cache loads) and it'll be the fastest path to unblocking OOPIFs.

3.) CSP I'm less sure about... I can take a look to see if there are similar quick fixes we can make to get the current renderer-side checks working with OOPIFs. One hesitation we had about moving it to the browser process is that parsing CSP in the browser made us sad from a security standpoint. I'm not sure there's much we can do about this concern, though, because calling out to a utility process on every response with a CSP is probably a no-go.

Emily

Charlie Reis

Nov 25, 2015, 6:11:46 PM
to Emily Stark, Camille Lamy, Joel Weinberger, Alex Moshchuk, Jochen Eisinger, Mike West, Nasko Oskov, David Benjamin, site-isol...@chromium.org
Ok, thanks!

It sounds like Alex had an idea for CSP in the short term: send the header value to each affected renderer process and give it to the SecurityContext for each RemoteFrame, without parsing it in the browser process.
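[Editor's note: a hypothetical sketch of that idea, with invented class and function names (not real Chromium APIs): the browser forwards the raw header verbatim to every process hosting a RemoteFrame for the navigating frame, and each renderer does its own parsing.]

```python
# Replicating an unparsed CSP header to remote security contexts, so
# the (security-sensitive) CSP parser never runs in the browser process.

class RemoteSecurityContext:
    def __init__(self):
        self.raw_csp_headers = []

    def add_replicated_csp(self, raw_header):
        # Stored verbatim; parsing happens renderer-side.
        self.raw_csp_headers.append(raw_header)

def replicate_csp_header(raw_header, remote_contexts):
    for ctx in remote_contexts:
        ctx.add_replicated_csp(raw_header)

contexts = [RemoteSecurityContext(), RemoteSecurityContext()]
replicate_csp_header("frame-src 'self'", contexts)
assert all(c.raw_csp_headers == ["frame-src 'self'"] for c in contexts)
```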

Charlie

Nasko Oskov

Nov 30, 2015, 2:05:25 PM
to Charlie Reis, Emily Stark, Camille Lamy, Joel Weinberger, Alex Moshchuk, Jochen Eisinger, Mike West, David Benjamin, site-isol...@chromium.org

On Wed, Nov 25, 2015 at 3:00 PM, Emily Stark <est...@chromium.org> wrote:
Hi Charlie!

1.) I definitely have make-mixed-content-work-with-OOPIFs on my to-do list for this quarter. It sounds like I should probably bump up the priority, though, and I can try to get something working in the next week or so. When I last looked I think I concluded that it wouldn't be too hard to make this work (famous last words) -- mostly a matter of plumbing some additional data in a similar manner as WebSandboxFlags on the replication state.

2.) Nothing is really blocking moving mixed content checks to the browser process except someone having time to work on it, AFAIK. I was planning to focus on #1 first because we'll need to do it anyway (for memory cache loads) and it'll be the fastest path to unblocking OOPIFs.

3.) CSP I'm less sure about... I can take a look to see if there are similar quick fixes we can make to get the current renderer-side checks working with OOPIFs. One hesitation we had about moving it to the browser process is that parsing CSP in the browser made us sad from a security standpoint. I'm not sure there's much we can do about this concern, though, because calling out to a utility process on every response with a CSP is probably a no-go.

There is also PlzNavigate, which will need some form of CSP on the browser side. I don't recall the details or whether we thoroughly investigated what the cases were.

Carlos Knippschild

unread,
Mar 31, 2016, 12:01:22 PM3/31/16
to Nasko Oskov, Charlie Reis, Emily Stark, Camille Lamy, Joel Weinberger, Alex Moshchuk, Jochen Eisinger, Mike West, David Benjamin, site-isol...@chromium.org, plzna...@chromium.org
Hi all

Necro-threading this to say I'm going to start working on these issues (and possibly other related ones), focusing on supporting PlzNavigate by moving the needed security checks to the browser. I just created an umbrella issue to track this work and added some blocking issues covering what I've found so far from reading this thread and searching for existing bugs.

I see Emily and Mike have already been working on parts of this. Could you give me an update on your status? Do you have any suggestions about which part of this work I should help with or start on?

Keep in mind that these security checks (and almost all of Blink) are new to me, so there will be questions!

Emily Stark

unread,
Mar 31, 2016, 10:26:36 PM3/31/16
to Carlos Knippschild, Nasko Oskov, Charlie Reis, Emily Stark, Camille Lamy, Joel Weinberger, Alex Moshchuk, Jochen Eisinger, Mike West, David Benjamin, site-isol...@chromium.org, plzna...@chromium.org
Hi Carlos,

I did some work to make the mixed content checker work with OOPIFs, but haven't made any progress lately on moving it to the browser process. So, sadly, I don't have much wisdom to impart. :( For mixed content checking with PlzNavigate, it sounds like the main concern is that the main resource of a subframe gets run through the mixed content checker in the browser process. If you only check the main resource of a subframe rather than trying to check all subresources in the browser process, that might (?) simplify things a bit. (You don't have to handle both passive and active mixed content, for example, though I'm not sure if that actually simplifies things in practice that much.)

Do you know where in the code the browser-side mixed content check will happen? That might help me form a better idea on what code should move where.

I haven't looked too much at what needs to happen for CSP; Mike might have more state on that.

I'm happy to help where I can with specific questions as you go along.

Emily

Carlos Knippschild

unread,
Apr 1, 2016, 12:13:25 PM4/1/16
to Emily Stark, Carlos Knippschild, Nasko Oskov, Charlie Reis, Camille Lamy, Joel Weinberger, Alex Moshchuk, Jochen Eisinger, Mike West, David Benjamin, site-isol...@chromium.org, plzna...@chromium.org
From my current understanding of what mixed content checks are and what has been discussed in this thread and linked references, this is how I imagine it will work with only PlzNavigate interests in mind:
  • For any resource loaded through a navigation request we want to perform mixed content checks in the browser process, most probably by using a NavigationThrottle (runs in the UI thread).
    • To allow this to be executed in the browser we'll need to make the required information available in the browser (listed in the strawman doc; anything missing?).
  • For all other cases, mixed content checks would continue as they are, in the renderer. I think this would cover the case of Blink loading stuff from its memory cache mentioned above.
If, for site isolation, the checks for other resources should also move to the browser, I think a ResourceThrottle (which runs on the IO thread) implementation could be used. Both implementations should share the same information sources, which have yet to be made available in the browser.

Finally, this might be a dumb question: do main frames need mixed content checks too? Can a main frame's context disallow mixed content while its origin is insecure? It seems the answer is no, but I'm uncertain. Form submissions, for one, seem to be an exception...

Emily Stark

unread,
Apr 5, 2016, 2:41:26 AM4/5/16
to Carlos Knippschild, Emily Stark, Nasko Oskov, Charlie Reis, Camille Lamy, Joel Weinberger, Alex Moshchuk, Jochen Eisinger, Mike West, David Benjamin, site-isol...@chromium.org, plzna...@chromium.org
On Fri, Apr 1, 2016 at 9:13 AM, Carlos Knippschild <car...@chromium.org> wrote:
From my current understanding of what mixed content checks are and what has been discussed in this thread and linked references, this is how I imagine it will work with only PlzNavigate interests in mind:
  • For any resource loaded through a navigation request we want to perform mixed content checks in the browser process, most probably by using a NavigationThrottle (runs in the UI thread).
    • To allow this to be executed in the browser we'll need to make the required information available in the browser (listed in the strawman doc; anything missing?).

That looks more or less complete, though I always seem to miss something that ends up requiring a massive amount of plumbing to get to the browser process. :) It looks like the settings that the browser process will need to learn about are allowRunningOfInsecureContent, allowDisplayOfInsecureContent, and strictMixedContentChecking.
 
  • For all other cases, mixed content checks would continue as they are, in the renderer. I think this would cover the case of Blink loading stuff from its memory cache mentioned above.
That sounds to me like a good place to start. Once we have the checks in the browser process for navigation requests, it probably won't be too hard to later refactor them if we want to check more types of requests in the browser for security purposes. As you can see upthread, there was some dissent about whether there's a real security win from doing these checks in the browser process, so starting only with PlzNavigate's needs seems reasonable.
 
If for site isolation the other resources case should also be moved to the browser, I think a ResourceThrottle (runs in the IO thread) implementation could be used. Both implementations should share the same information sources yet to be made available in the browser.

Finally, this might be a dumb question: do main frames need mixed content checks too? Can its context disallow mixed content and its origin be insecure? It seems the answer is no but I'm uncertain. Form submissions for one seem to be an exception...

Generally main frames shouldn't need mixed content checks, I think, though I'm not quite sure I follow your second question ("Can its context disallow..."). Form submissions are spec'ed as MAY be blocked, but we don't currently block them AFAIK -- we only downgrade the lock UI when we have a form with an insecure action attribute on the page. And if we were to start blocking the actual submission, I would think we could do that in Blink before telling the browser that the form has been submitted and a navigation needs to happen. (Though I might be completely misunderstanding how PlzNavigate + form submissions works.) But, anyway, for Chrome's current behavior where we just downgrade the lock icon for an insecure form attribute but don't actually block the form submission, I don't think we should need to do mixed content checks on main frame navigations.

One other consideration is that there's a bit of plumbing that tells DevTools about individual network requests that were marked as mixed content, and we should try not to break that. So when the browser blocks a subframe navigation request as mixed content, it'll need to tell the parent frame renderer about the blocked request so that the renderer can tell DevTools about it. I think it's okay to deal with this as a follow-up after you've done the actual mixed content checking, and in the meantime I can try to outline in more detail what needs to happen. (I need to refresh my memory on how the devtools plumbing works.)

Nasko Oskov

unread,
Apr 5, 2016, 10:14:32 AM4/5/16
to Emily Stark, Carlos Knippschild, Charlie Reis, Camille Lamy, Joel Weinberger, Alex Moshchuk, Jochen Eisinger, Mike West, David Benjamin, site-isol...@chromium.org, plzna...@chromium.org
On Mon, Apr 4, 2016 at 11:41 PM, Emily Stark <est...@chromium.org> wrote:

On Fri, Apr 1, 2016 at 9:13 AM, Carlos Knippschild <car...@chromium.org> wrote:
From my current understanding of what mixed content checks are and what has been discussed in this thread and linked references, this is how I imagine it will work with only PlzNavigate interests in mind:
  • For any resource loaded through a navigation request we want to perform mixed content checks in the browser process, most probably by using a NavigationThrottle (runs in the UI thread).
    • To allow this to be executed in the browser we'll need to make the required information available in the browser (listed in the strawman doc; anything missing?).

That looks more or less complete, though I always seem to miss something that ends up requiring a massive amount of plumbing to get to the browser process. :) It looks like the settings that the browser process will need to learn about are allowRunningOfInsecureContent, allowDisplayOfInsecureContent, and strictMixedContentChecking.
 
  • For all other cases, mixed content checks would continue as they are, in the renderer. I think this would cover the case of Blink loading stuff from its memory cache mentioned above.
That sounds to me like a good place to start. Once we have the checks in the browser process for navigation requests, it probably won't be too hard to later refactor them if we want to check more types of requests in the browser for security purposes. As you can see upthread, there was some dissent about whether there's a real security win from doing these checks in the browser process, so starting only with PlzNavigate's needs seems reasonable.

I usually tend to agree with this philosophy, but in this case I think the more code we have that behaves identically between regular mode and PlzNavigate, the better. I think the idea is to use NavigationThrottle, which is supported in both modes, to do these types of checks and block navigations.
 

--
You received this message because you are subscribed to the Google Groups "PlzNavigate Team" group.
To view this discussion on the web visit https://groups.google.com/a/chromium.org/d/msgid/plznavigate/CAPP_2SY6L8W9rnWfOb6Wmg%3DMVdpWd6nFKj6BjL_nG%2BPRdxakMA%40mail.gmail.com.

Emily Stark

unread,
Apr 5, 2016, 2:23:43 PM4/5/16
to Nasko Oskov, Emily Stark, Carlos Knippschild, Charlie Reis, Camille Lamy, Joel Weinberger, Alex Moshchuk, Jochen Eisinger, Mike West, David Benjamin, site-isol...@chromium.org, plzna...@chromium.org
On Tue, Apr 5, 2016 at 7:14 AM, Nasko Oskov <na...@chromium.org> wrote:


On Mon, Apr 4, 2016 at 11:41 PM, Emily Stark <est...@chromium.org> wrote:

On Fri, Apr 1, 2016 at 9:13 AM, Carlos Knippschild <car...@chromium.org> wrote:
From my current understanding of what mixed content checks are and what has been discussed in this thread and linked references, this is how I imagine it will work with only PlzNavigate interests in mind:
  • For any resource loaded through a navigation request we want to perform mixed content checks in the browser process, most probably by using a NavigationThrottle (runs in the UI thread).
    • To allow this to be executed in the browser we'll need to make the required information available in the browser (listed in the strawman doc; anything missing?).

That looks more or less complete, though I always seem to miss something that ends up requiring a massive amount of plumbing to get to the browser process. :) It looks like the settings that the browser process will need to learn about are allowRunningOfInsecureContent, allowDisplayOfInsecureContent, and strictMixedContentChecking.
 
  • For all other cases, mixed content checks would continue as they are, in the renderer. I think this would cover the case of Blink loading stuff from its memory cache mentioned above.
That sounds to me like a good place to start. Once we have the checks in the browser process for navigation requests, it probably won't be too hard to later refactor them if we want to check more types of requests in the browser for security purposes. As you can see upthread, there was some dissent about whether there's a real security win from doing these checks in the browser process, so starting only with PlzNavigate's needs seems reasonable.

I usually tend to agree with this philosophy, but in this case I think the more code we have that behaves identically between regular mode and PlzNavigate, the better. I think the idea is to use NavigationThrottle to do these types of checks and blocking navigations, which is supported in both modes.

Hmm, sorry, I'm not sure I understand. I didn't mean to contrast PlzNavigate with regular mode; instead, I was contrasting navigation requests (which we need to move into the browser in order to unblock PlzNavigate) with non-navigation subresource requests like images (which we may at some point want to move into the browser process for purely security reasons). My point was that I think it's fine to start with blocking navigation requests in the browser and for now continue to rely on mixed content checking in Blink for other types of requests.

Nasko Oskov

unread,
Apr 5, 2016, 7:06:23 PM4/5/16
to Emily Stark, Carlos Knippschild, Charlie Reis, Camille Lamy, Joel Weinberger, Alex Moshchuk, Jochen Eisinger, Mike West, David Benjamin, site-isol...@chromium.org, plzna...@chromium.org
Ah, I misread it, sorry about that. I completely agree with your assessment: we need to handle only navigation requests for now. Doing resource checks in the browser process is a completely different matter.