Network Service in Chrome


John Abd-El-Malek

Feb 26, 2016, 11:39:40 AM
to loadi...@chromium.org, net-dev
Hey folks,

As part of the effort to move Chrome towards Mojo IPC and a service architecture, here's a proposal to use a mojo network service in Chrome: https://docs.google.com/document/d/1wAHLw9h7gGuqJNCgG1mP1BmLtCGfZ2pys-PdZQ1vg7M/edit

The first stage is mostly concerned with switching to Mojo IPC. The second stage is about completely separating this code out from chrome so that we have the ability to run it in a separate process.

Please feel free to comment or edit the doc, and I'm sure we'll sync up over many meetings to discuss more.

Thanks

John Abd-El-Malek

Feb 26, 2016, 11:59:14 AM
to loadi...@chromium.org, net-dev, chromium-mojo
+chromium-mojo 

Ryan Sleevi

Feb 26, 2016, 12:32:03 PM
to John Abd-El-Malek, chromium-mojo, net-dev, loadi...@chromium.org

I don't think the doc spells out sufficiently what you view as "a network service," and that makes it hard to evaluate.

Networking is used extensively throughout Chrome, with increasingly porous layering as things like URLFetcher got moved to content and then net, and also through the development of things like Blimp, ChromeCast, Chromoting, and WebPush, and the many other exciting, but unrelated, things.

So from this, I can think of at least three possible definitions of what you mean by "networking service":
1) A URL loading service (essentially, a service-oriented URLFetcher)
2) A monolithic service that provides high-level service implementations for all of the many (not-in-//net) consumers of networking
3) A service that IPCs the ~70% of the //net API consumed by //net's consumers

I have real concerns and objections to 3; I think 2 would be a significantly worse place to go for code health, given concerns about layering and just "shoving things into //net"; and 1 is not intrinsically unreasonable - but it's also not clearly specified.

Also, is going to Mojo directly the best (safest, most-productive) path? From the Mojo thread, there are still very real questions about security and performance that do not appear to have been answered. From the discussions of the new Chromium task scheduler for //base, there were many real questions about the ordering and interdependency of messages, which equally apply to Mojo's (lack of) guarantees. And from the Code Yellow discussions of moving the IO thread into IPC vs Networking, it was clear that there are many interdependencies to resolve. My concern is that "rewrite it all in Mojo" will just reintroduce all of those same concerns, so it would be useful to know how they're being addressed.

Considering that performance is at stake here, and how much time is being spent trying to understand and optimize //net performance, it would be great to see whether Mojo has yet been able to address the performance regressions that seemingly every conversion to Mojo IPC has encountered.

--
You received this message because you are subscribed to the Google Groups "Chromium Loading Performance" group.
To unsubscribe from this group and stop receiving emails from it, send an email to loading-dev...@chromium.org.
To post to this group, send email to loadi...@chromium.org.
To view this discussion on the web visit https://groups.google.com/a/chromium.org/d/msgid/loading-dev/CALhVsw3uUuXUZRGZi2Gg55jbUMsYQRzs46Rh7D%3DDmr09HSQ%3Deg%40mail.gmail.com.

John Abd-El-Malek

Feb 26, 2016, 1:50:27 PM
to Ryan Sleevi, chromium-mojo, net-dev, loadi...@chromium.org
Thanks for the feedback. To clarify: for the most part this is about updating how the content module consumes the net module. It's not about using Mojo internally in net.

There is a spectrum of how we can change how content, in particular child processes, consume net. At one end, the work to switch to mojo ipc can be thought of as a natural improvement to content as we switch all of content to use mojo this year. This can be done without any changes to src/net, and the mojo interfaces and implementation can live in src/components. At the other end, some of these interfaces could be in //net and can be the same API that is exposed by cronet. This would take advantage of Mojo's support for generating bindings for different languages. How we pick where on the spectrum this work falls is completely up in the air; the purpose of getting the conversation started now is to figure this out with networking and loading teams. That's why the document doesn't go into specifics.
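As a rough illustration of what the child-process end of this spectrum could look like, here is a hypothetical mojom sketch (interface and field names are invented for this thread, not the actual Mandoline or Cronet API):

```mojom
// Hypothetical sketch only: one way a child process might consume
// loading over Mojo IPC instead of Chrome IPC. Names and shapes are
// illustrative, not the real Mandoline interfaces.
module content.mojom;

struct URLRequest {
  string url;
  string method;
  array<string> headers;
};

struct URLResponse {
  int32 status_code;
  array<string> headers;
  // The body streams over a data pipe rather than through IPC messages.
  handle<data_pipe_consumer> body;
};

interface URLLoader {
  Start(URLRequest request) => (URLResponse response);
};

interface URLLoaderFactory {
  // The browser hands each child process a factory scoped to that
  // process's privileges, rather than direct access to //net.
  CreateLoader(URLLoader& loader);
};
```

Because mojom generates bindings for multiple languages, the same definitions could in principle back both a renderer-facing interface and a Cronet-style API, which is what makes the //net end of the spectrum conceivable.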

You're right that Mojo IPC, in some respects, is not done or doesn't completely measure up to Chrome IPC. There are folks working on the performance part (amistry, rockot) and security (dcheng). However, we're not waiting for Mojo IPC to reach parity, because these tasks are parallelizable and we know that none of the issues are insurmountable or block usage of Mojo. Happy to go into more specifics, but this is probably going off track for this thread. In summary, though, you can rest assured that the mass migration from Chrome to Mojo IPC isn't going to happen with outstanding performance or security regressions.

Some of us have synced up in person, but not all. I've set up a meeting with Ryan next Wednesday at 10am; if anyone else wants to join and chat through some of this, please let me know and I'll add you to the meeting.

David Benjamin

Feb 26, 2016, 2:20:33 PM
to John Abd-El-Malek, Ryan Sleevi, car...@chromium.org, chromium-mojo, net-dev, loadi...@chromium.org
+Carlos Knippschild who will likely be happy about this for PlzNavigate work.

You received this message because you are subscribed to the Google Groups "net-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to net-dev+u...@chromium.org.
To post to this group, send email to net...@chromium.org.
To view this discussion on the web visit https://groups.google.com/a/chromium.org/d/msgid/net-dev/CALhVsw0EkPxj%3DysHGZo0QEzC-F8Mi6huNeoxQub3meUrrfK8DA%40mail.gmail.com.

Ryan Sleevi

Feb 26, 2016, 6:58:14 PM
to John Abd-El-Malek, Ryan Sleevi, chromium-mojo, net-dev, Chromium Loading Performance
On Fri, Feb 26, 2016 at 10:50 AM, John Abd-El-Malek <j...@chromium.org> wrote:
> Thanks for the feedback. To clarify: for the most part this is about updating how the content module consumes the net module. It's not about using Mojo internally in net.

This is the bit of context that I couldn't be sure of from the doc, and is super helpful to clarify.

I agree that we have a reasonably good cut point at the content layer, if we're strictly speaking about user-initiated, content-bounded, web loads (e.g. things that go through the ResourceLoader).

We're in a much harder position if we're talking about general requests (for example, fetching Spellcheck dictionaries via URLFetcher from //chrome) or talking about general networking (for example, the GCM code used to enable WebPush, Sync's custom-ish HTTP-ish stack, WebRTC).

I think there's still some concern when it comes to Phase 2, and trying to define a suitable boundary for how things like the NetworkDelegate and Extensions interplay here, since these are very much ordering-sensitive and things we don't necessarily want implemented in a network service, but that's a separate bridge.

I still think the general design of a "network service" will be one that requires a lot of care and thought - do we want multiple services handling core networking (for example, a WebRTC service handling audio/video parsing, a GCM service handling WebPush, a Chromoting service, etc) or do we want them aggregated? I realize that the design of Mojo accommodates the ability to distinguish "service" vs "process", and that we can possibly end up with reasonable layering - but it sounds like you're not trying to solve that problem right now, if I understand your response. If that's correct, I would say that's the chief concern that wasn't called out in the design doc.

John Abd-El-Malek

Feb 26, 2016, 10:01:08 PM
to Ryan Sleevi, chromium-mojo, net-dev, Chromium Loading Performance
On Fri, Feb 26, 2016 at 3:57 PM, Ryan Sleevi <rsl...@chromium.org> wrote:


> On Fri, Feb 26, 2016 at 10:50 AM, John Abd-El-Malek <j...@chromium.org> wrote:
>> Thanks for the feedback. To clarify: for the most part this is about updating how the content module consumes the net module. It's not about using Mojo internally in net.

> This is the bit of context that I couldn't be sure of from the doc, and is super helpful to clarify.

> I agree that we have a reasonably good cut point at the content layer, if we're strictly speaking about user-initiated, content-bounded, web loads (e.g. things that go through the ResourceLoader).

> We're in a much harder position if we're talking about general requests (for example, fetching Spellcheck dictionaries via URLFetcher from //chrome)

Why is this different? It's a very thin API to make http requests. It could use the same interface that content uses from a renderer process to service blink.
 
> or talking about general networking (for example, the GCM code used to enable WebPush, Sync's custom-ish HTTP-ish stack, WebRTC).

> I think there's still some concern when it comes to Phase 2, and trying to define a suitable boundary for how things like the NetworkDelegate and Extensions interplay here, since these are very much ordering-sensitive

I agree this work (stage 2 in the doc) will have to be done carefully to ensure ordering is kept right.

Let's focus on stage 1 at this point. The document was describing stage 2 to describe the long term goal of being able to run this code outside of the browser process (for security/stability), or without chrome running at all (as a low level service in chromeos for example). We should keep this in mind when implementing stage 1, so that we don't do anything that would make it harder. But we can put off stage 2 until we're done with stage 1.
 
> and things we don't necessarily want implemented in a network service, but that's a separate bridge.

> I still think the general design of a "network service" will be one that requires a lot of care and thought - do we want multiple services handling core networking (for example, a WebRTC service handling audio/video parsing, a GCM service handling WebPush, a Chromoting service, etc) or do we want them aggregated? I realize that the design of Mojo accommodates the ability to distinguish "service" vs "process", and that we can possibly end up with reasonable layering - but it sounds like you're not trying to solve that problem right now, if I understand your response. If that's correct, I would say that's the chief concern that wasn't called out in the design doc.

The document links to interfaces that are used to implement loading for child processes (as well as the main one) in Mandoline. We don't need all of these interfaces initially, but it's an example of what we would start with.

I'm not sure how audio/video parsing or chromoting fits into this conversation. We can clear this up in Wednesday's sync up.

Ryan Sleevi

Feb 26, 2016, 11:34:57 PM
to John Abd-El-Malek, Ryan Sleevi, chromium-mojo, net-dev, Chromium Loading Performance
On Fri, Feb 26, 2016 at 7:01 PM, John Abd-El-Malek <j...@chromium.org> wrote:
> Why is this different? It's a very thin API to make http requests. It could use the same interface that content uses from a renderer process to service blink.

Not really; I'll try to explain some, so that it's archived on the lists (yay for public discussions), but perhaps some will benefit from an in-person meeting (as long as we remember to capture on the list).

The TL;DR is that you don't want to mix user-initiated, Web-loading requests with those that aren't, because inevitably you run into user interface, security, or privacy mismatches that end up negatively affecting the Web loading experience.

(Longer explanation for context)

As a concrete example, there was a team that used a URLFetcher to talk to a Google backend. URLFetcher does not support any form of authentication - so all authentication prompts are cancelled/aborted. In this case, the team was talking to servers that would require TLS client authentication (which is optional). Their URLFetcher requests ended up poisoning the socket pools used for //content (Web) loading. When using the //content ResourceLoader (aka a renderer-initiated request), such requests for authentication would have bounced back through the UI prompts - which, due to tab-modal dialogs, always required an explicit renderer ID (which means such prompts are impossible for that team's non-renderer initiated requests).

For a further example, consider how the //content layer handles TLS errors - it shows an interstitial. For URLFetcher, all TLS errors are prevented from being bypassed. If these shared the same socket pools, a user who bypassed a TLS error in //content would cause the URLFetcher load to ignore the TLS error - creating a security issue.

Or consider the sending of cookies, and how many (most) URLFetcher requests shouldn't include cookies. This can adversely affect socket pools, since we try to maintain distinct socket pools for //content CORS-anonymous fetches vs //content fetches (as defined in the Fetch spec).

It's not to say we can't solve these issues, but it's certainly why I'm opposed to wholesale converting URLFetchers over to using the same resource loading stack as ResourceLoader - unless and until we holistically address the UI, security, privacy, and performance issues. That's not to say we can't or shouldn't investigate how to migrate the //content ResourceLoader over, but I do want to stress that "Things which are web visible" fundamentally behave differently than things that aren't - and just converting them to use the same stack, without working through those issues, is problematic.

It's also worth reiterating that there are plenty of "non-request" users of networking in //components and //content (the layering between which is admittedly blurry, between what's "above" content and what's "below", since that's per-component); they could be doing anything from wanting to access low-level sockets to wanting to interface with proxies (as mentioned below). So that's part of the concern with "Networking Service": it's neither "//net as a service" nor "anything that does networking in a monolithic service".
 
> The document links to interfaces that are used to implement loading for child processes (as well as the main one) in Mandoline. We don't need all of these interfaces initially, but it's an example of what we would start with.

And I'm (mostly) on board with the Mojofication of the //content Loader abstraction (in part because, admittedly, I have to deal with it less, and the people that do have to deal with it are constantly confounded by it). Plus it opens the way to solve long-standing bugs that I find personally bothersome (like how our Service Worker loading code works or handles origin security), so yay for some of the Mojo ability there.
 

> I'm not sure how audio/video parsing or chromoting fits into this conversation. We can clear this up in Wednesday's sync up.

They consume //net interfaces to service their goals. In the past conversations I've had with people (jschuh@, erg@, darin@) about a networking service powered by Mojo, the vision was clearly the goal of moving "everything that lives in //net" to another process. My point was that there are plenty of consumers of //net, not doing URL loading things, not doing Web-facing things, and getting those converted over is going to be a more careful and nuanced thing. It's likely that we don't want just a "connect a socket" service - that ends up running counter to performance and, arguably, security goals. So that will take care, and it's why I was trying to raise concerns with a "networking service" concept; really we want a number of services, some of which may perform networking. That is, I'm explicitly arguing against "Networking as a service", and trying to think more in terms of a "URL fetching service" or a "Chromoting service" or a "WebRTC service" or a "GCM service" - which may or may not live in the same process, but certainly will slice layers above and below //net as the logical code layer. Figuring out how best to do that - and keep some of the desired extensibility options (the URLRequestContext contains more than just 'URL fetching fiddly bits') - will be tricky.

John Abd-El-Malek

Feb 28, 2016, 10:27:50 PM
to Ryan Sleevi, chromium-mojo, net-dev, Chromium Loading Performance
On Fri, Feb 26, 2016 at 8:34 PM, Ryan Sleevi <rsl...@chromium.org> wrote:


> On Fri, Feb 26, 2016 at 7:01 PM, John Abd-El-Malek <j...@chromium.org> wrote:
>> Why is this different? It's a very thin API to make http requests. It could use the same interface that content uses from a renderer process to service blink.

> Not really; I'll try to explain some, so that it's archived on the lists (yay for public discussions), but perhaps some will benefit from an in-person meeting (as long as we remember to capture on the list).

> The TL;DR is that you don't want to mix user-initiated, Web-loading requests with those that aren't, because inevitably you run into user interface, security, or privacy mismatches that end up negatively affecting the Web loading experience.

> (Longer explanation for context)

> As a concrete example, there was a team that used a URLFetcher to talk to a Google backend. URLFetcher does not support any form of authentication - so all authentication prompts are cancelled/aborted. In this case, the team was talking to servers that would require TLS client authentication (which is optional). Their URLFetcher requests ended up poisoning the socket pools used for //content (Web) loading. When using the //content ResourceLoader (aka a renderer-initiated request), such requests for authentication would have bounced back through the UI prompts - which, due to tab-modal dialogs, always required an explicit renderer ID (which means such prompts are impossible for that team's non-renderer initiated requests).

> For a further example, consider how the //content layer handles TLS errors - it shows an interstitial. For URLFetcher, all TLS errors are prevented from being bypassed. If these shared the same socket pools, a user who bypassed a TLS error in //content would cause the URLFetcher load to ignore the TLS error - creating a security issue.

> Or consider the sending of cookies, and how many (most) URLFetcher requests shouldn't include cookies. This can adversely affect socket pools, since we try to maintain distinct socket pools for //content CORS-anonymous fetches vs //content fetches (as defined in the Fetch spec).

> It's not to say we can't solve these issues, but it's certainly why I'm opposed to wholesale converting URLFetchers over to using the same resource loading stack as ResourceLoader - unless and until we holistically address the UI, security, privacy, and performance issues. That's not to say we can't or shouldn't investigate how to migrate the //content ResourceLoader over, but I do want to stress that "Things which are web visible" fundamentally behave differently than things that aren't - and just converting them to use the same stack, without working through those issues, is problematic.

Thanks for the background. These all seem like properties we can configure when making a request in the mojo api. We would have to be careful when migrating code to ensure that subtle behavior like this doesn't change.

This doesn't preclude using the same interfaces for making requests and receiving responses. 
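To make that concrete, the behavioral differences Ryan describes might surface as explicit per-request parameters in a shared interface. This is a hypothetical mojom sketch; the field names are invented for illustration and don't exist anywhere today:

```mojom
// Hypothetical: per-request options that make the URLFetcher vs
// ResourceLoader behavioral differences explicit rather than implicit
// in the choice of class. Illustrative only.
module content.mojom;

enum CredentialsMode {
  OMIT,     // most URLFetcher-style requests: no cookies sent or saved
  INCLUDE,  // web-initiated loads carrying the profile's cookies
};

struct URLRequestOptions {
  CredentialsMode credentials_mode;

  // Whether auth challenges may surface UI prompts. Requests with no
  // associated renderer/tab cannot show tab-modal dialogs, so this
  // would default off.
  bool allow_auth_prompts = false;

  // Whether a user's earlier TLS-error bypass decision applies.
  // URLFetcher-style requests should never inherit a bypass made
  // for a web load.
  bool honor_tls_error_bypass = false;
};
```

Note that flags alone wouldn't resolve the socket pool poisoning Ryan describes: requests with different credential and TLS-error state would also need to be keyed to separate socket pools, much as the CORS-anonymous split works today.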


> It's also worth reiterating that there are plenty of "non-request" users of networking in //components and //content (the layering between which is admittedly blurry, between what's "above" content and what's "below", since that's per-component); they could be doing anything from wanting to access low-level sockets to wanting to interface with proxies (as mentioned below). So that's part of the concern with "Networking Service": it's neither "//net as a service" nor "anything that does networking in a monolithic service".
 
>> The document links to interfaces that are used to implement loading for child processes (as well as the main one) in Mandoline. We don't need all of these interfaces initially, but it's an example of what we would start with.

> And I'm (mostly) on board with the Mojofication of the //content Loader abstraction (in part because, admittedly, I have to deal with it less, and the people that do have to deal with it are constantly confounded by it). Plus it opens the way to solve long-standing bugs that I find personally bothersome (like how our Service Worker loading code works or handles origin security), so yay for some of the Mojo ability there.
 

>> I'm not sure how audio/video parsing or chromoting fits into this conversation. We can clear this up in Wednesday's sync up.

> They consume //net interfaces to service their goals. In the past conversations I've had with people (jschuh@, erg@, darin@) about a networking service powered by Mojo, the vision was clearly the goal of moving "everything that lives in //net" to another process. My point was that there are plenty of consumers of //net, not doing URL loading things, not doing Web-facing things, and getting those converted over is going to be a more careful and nuanced thing. It's likely that we don't want just a "connect a socket" service - that ends up running counter to performance and, arguably, security goals.

To add some background: a mojo service vends different interfaces. Some, like raw sockets, are not something we would expose to all code. Just as with Chrome IPC, we need extra checks when giving access to privileged APIs. Mojo makes this easier because it can check who the other side is, capabilities can be delegated, and security checks can follow a single path, instead of Chrome IPCs, which usually duplicate these checks.
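A rough sketch of the vending pattern being described (hypothetical; in practice the brokering would go through Mojo's shell/service-manager machinery, and the interface names here are invented):

```mojom
// Hypothetical: the network service vends separate interfaces at
// different privilege levels, and the browser decides which a given
// client may bind. Illustrative only.
// (URLLoaderFactory/UDPSocket definitions elided.)
module content.mojom;

interface NetworkService {
  // Low-privilege: URL loading, handed to every renderer.
  CreateURLLoaderFactory(URLLoaderFactory& factory);

  // High-privilege: raw UDP, brokered only to clients the browser
  // has already vetted (e.g. a renderer granted WebRTC access),
  // mirroring the checks Chrome IPC does today.
  CreateUDPSocket(UDPSocket& socket);
};
```

The point of the pattern is that the security check lives in one place - whoever decides whether to satisfy the CreateUDPSocket request - rather than being re-implemented in each IPC message handler.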

 
> So that will take care, and it's why I was trying to raise concerns with a "networking service" concept; really we want a number of services, some of which may perform networking. That is, I'm explicitly arguing against "Networking as a service", and trying to think more in terms of a "URL fetching service" or a "Chromoting service" or a "WebRTC service" or a "GCM service" - which may or may not live in the same process, but certainly will slice layers above and below //net as the logical code layer.

We are in agreement about this. Splitting different request types, like http vs sockets, is something that we follow for readability of interfaces, implementation sanity, and security. The interfaces of the current mojo network service are an example: https://code.google.com/p/chromium/codesearch#chromium/src/mojo/services/network/public/interfaces/

Ryan Sleevi

Feb 29, 2016, 12:34:26 AM
to John Abd-El-Malek, Ryan Sleevi, chromium-mojo, net-dev, Chromium Loading Performance
On Sun, Feb 28, 2016 at 7:27 PM, John Abd-El-Malek <j...@chromium.org> wrote:
> Thanks for the background. These all seem like properties we can configure when making a request in the mojo api. We would have to be careful when migrating code to ensure that subtle behavior like this doesn't change.

If experience with URLFetcher teaches us anything, it's that the more options and properties we expose, the harder it becomes to use correctly/safely.

URLFetcher vs URLRequest already exhibits this behaviour - it's virtually impossible for someone to get URLFetcher 'right' (in that it does the thing people expect for redirects, timeouts, size limits, etc.), and in part that's because we've simultaneously kept adding options while also not introducing certain behaviours, because of how many consumers there already are and the difficulty of measuring.

Similarly, URLRequest is hard to get right, precisely because it exposes so many options and requires a lot of careful thinking - but it's easier to reason about changes to its behaviour.

While I can understand and appreciate the desire to harmonize, I want to make sure that we're not deciding that all birds happen to be ducks simply because they fly and have feathers - but that we're also looking at how they sound and look. As I tried to indicate in the previous reply, we've got eagles, egrets, and platypi at play as well.
 
> To add some background: a mojo service vends different interfaces. Some, like raw sockets, are not something we would expose to all code. Just as with Chrome IPC, we need extra checks when giving access to privileged APIs. Mojo makes this easier because it can check who the other side is, capabilities can be delegated, and security checks can follow a single path, instead of Chrome IPCs, which usually duplicate these checks.

I appreciate this perspective, but I suspect we're not on the same page, because I don't believe it's relevant to the concerns I was trying to raise. Hopefully we can meet and document some of this better. Capability-based systems are easy to stuff up in spectacular ways, and the precision of where the security boundary exists - both in terms of "browser<->network" and in terms of "renderer<->network" - is thorny and varies.

I would say WebRTC serves as an excellent example of this nuance and challenge, and while it's hardly the only one, we can use this as a discussion point to better understand the concerns.

WebRTC has complex protocol parsing (SRTP/SCTP/RTP/DTLS) - that presently all happens in the renderer process (for security). It interfaces with low-level sockets (UDP) - but the access any given renderer has is mediated on a variety of checks controlled by the browser process, and is not a general socket service API. The parsed messages of the protocol are delivered to media services, such as the GPU.

This is a prime example of a complex service, one where the current Chrome IPC boundaries don't leave people terribly thrilled. At the same time, if we just mirrored this in Mojo, *or* used Mojo as an excuse to bring things into the Network Process (hypothetical), we'd end up making things _less_ secure (at least, based on the Mojo spelunking I did several months ago; perhaps this has changed).

An 'ideal' world might be a WebRTC service - one capable of performing the networking and fast-dispatching to audio or video services (in process or not TBD), and separate from any general networking process. Having to mediate all the networking through IPCs from a networking service is arguably *less* performant (whether we're talking Mojo or Chrome, but especially Mojo, given UDP's sensitivities), and it'd be much better if this supposed WebRTC service spoke with //net directly.

In any event, this is just one sketch of the set of concerns that await.
 

 
>> So that will take care, and it's why I was trying to raise concerns with a "networking service" concept; really we want a number of services, some of which may perform networking. That is, I'm explicitly arguing against "Networking as a service", and trying to think more in terms of a "URL fetching service" or a "Chromoting service" or a "WebRTC service" or a "GCM service" - which may or may not live in the same process, but certainly will slice layers above and below //net as the logical code layer.

> We are in agreement about this. Splitting different request types, like http vs sockets, is something that we follow for readability of interfaces, implementation sanity, and security. The interfaces of the current mojo network service are an example: https://code.google.com/p/chromium/codesearch#chromium/src/mojo/services/network/public/interfaces/

I don't think we are in agreement, since I was trying to argue against a 'socket service' as Mojo does. That's precisely the sort of API surface that we'd ideally *not* expose (and indeed, the 'server' port of it represents a continued pain point of maintenance and security for //net), and instead use the above examples I gave of pivoting at layers.

Hopefully this will be something to capture in the meeting - how much of a push the "service oriented" nature is, since I am trying to argue that we should think of converting //net's "high level" services, rather than exposing the low-level bits (like sockets).

John Abd-El-Malek

Feb 29, 2016, 12:27:47 PM
to Ryan Sleevi, chromium-mojo, net-dev, Chromium Loading Performance
On Sun, Feb 28, 2016 at 9:33 PM, Ryan Sleevi <rsl...@chromium.org> wrote:


> On Sun, Feb 28, 2016 at 7:27 PM, John Abd-El-Malek <j...@chromium.org> wrote:
>> Thanks for the background. These all seem like properties we can configure when making a request in the mojo api. We would have to be careful when migrating code to ensure that subtle behavior like this doesn't change.

> If experience with URLFetcher teaches us anything, it's that the more options and properties we expose, the harder it becomes to use correctly/safely.

> URLFetcher vs URLRequest already exhibits this behaviour - it's virtually impossible for someone to get URLFetcher 'right' (in that it does the thing people expect for redirects, timeouts, size limits, etc.), and in part that's because we've simultaneously kept adding options while also not introducing certain behaviours, because of how many consumers there already are and the difficulty of measuring.

> Similarly, URLRequest is hard to get right, precisely because it exposes so many options and requires a lot of careful thinking - but it's easier to reason about changes to its behaviour.

> While I can understand and appreciate the desire to harmonize, I want to make sure that we're not deciding that all birds happen to be ducks simply because they fly and have feathers - but that we're also looking at how they sound and look. As I tried to indicate in the previous reply, we've got eagles, egrets, and platypi at play as well.

We will try to be as careful as possible, as mentioned earlier, in keeping the same behavior. Will there be any regressions in a large refactoring? Probably. Will that stop us from trying to clean up the code and simplify it? No.

I invite you to raise any issues when CLs are sent out to define and use these interfaces, as well as when code is switched.
 
 
>> To add some background: a mojo service vends different interfaces. Some, like raw sockets, are not something we would expose to all code. Just as with Chrome IPC, we need extra checks when giving access to privileged APIs. Mojo makes this easier because it can check who the other side is, capabilities can be delegated, and security checks can follow a single path, instead of Chrome IPCs, which usually duplicate these checks.

> I appreciate this perspective, but I suspect we're not on the same page, because I don't believe it's relevant to the concerns I was trying to raise. Hopefully we can meet and document some of this better. Capability-based systems are easy to stuff up in spectacular ways, and the precision of where the security boundary exists - both in terms of "browser<->network" and in terms of "renderer<->network" - is thorny and varies.

> I would say WebRTC serves as an excellent example of this nuance and challenge, and while it's hardly the only one, we can use this as a discussion point to better understand the concerns.

> WebRTC has complex protocol parsing (SRTP/SCTP/RTP/DTLS) - that presently all happens in the renderer process (for security). It interfaces with low-level sockets (UDP) - but the access any given renderer has is mediated on a variety of checks controlled by the browser process, and is not a general socket service API.

Just as Chrome IPCs give access to UDP because the browser does security checks, the mojo service around UDP would have those same security checks.
It's not clear to me why a general-purpose API would be harmful. One of the motivations for the "servicification" effort is to avoid having to add new code paths, including IPCs, each time a new feature comes up.


The parsed messages of the protocol are delivered to media services, such as the GPU.

This is a prime example of a complex service, one where the current Chrome IPC boundaries don't leave people terribly thrilled. At the same time, if we just mirrored this in Mojo, *or* used Mojo as an excuse to bring things into the Network Process (hypothetical), we'd end up making things _less_ secure (at least, based on the Mojo spelunking I did several months ago; perhaps this has changed).

There's no intent to move code from more-sandboxed processes to less-sandboxed ones. Quite the opposite: one of the goals of exposing low-level services such as file or networking is that we can move more logic into the renderer.

Eventually, we also want to move more code from the massive browser process into a sandboxed process that can do networking. This is what's referred to as stage 2 in the doc. We can avoid going into details for that work since it's not going to be started on anytime soon, and we want to finish stage 1 first.
 

An 'ideal' world might be a WebRTC service - one capable of performing the networking and fast-dispatching to audio or video services (in process or not TBD), and separate from any general networking process. Having to mediate all the networking through IPCs from a networking service is arguably *less* performant (whether we're talking Mojo or Chrome, but especially Mojo, given UDP's sensitivities), and it'd be much better if this supposed WebRTC service spoke with //net directly.

So this example goes beyond the scope of what this proposal is intended to do. It's not about splitting features like WebRTC across new processes.
 

In any event, this is just one sketch of the set of concerns that await.
 

 
That, I hope, takes care of why I was trying to raise concerns with a 'networking service' concept: really, we want a number of services, some of which may perform networking. That is, I'm explicitly arguing against "networking as a service", and trying to think more in terms of a "URL fetching service" or a "Chromoting service" or a "WebRTC service" or a "GCM service" - which may or may not live in the same process, but certainly will slice layers above and below //net as the logical code layer.

We are in agreement about this. Splitting different request types, like HTTP vs. sockets, into separate interfaces is something we follow for readability of interfaces, implementation sanity, and security. The current mojo network service's interfaces are an example: https://code.google.com/p/chromium/codesearch#chromium/src/mojo/services/network/public/interfaces/

I don't think we are in agreement, since I was trying to argue against a 'socket service' as Mojo does. That's precisely the sort of API surface that we'd ideally *not* expose (and indeed, the 'server' port of it represents a continued pain point of maintenance and security for //net), and instead use the above examples I gave of pivoting at layers.

I must have misunderstood what you meant, then. I thought you meant that different interfaces would be split up, with access to the more security-sensitive ones protected.

Perhaps some of this is a terminology mismatch. Replace "service" with "IPC interface". As we convert from Chrome IPC to Mojo IPC, we need to update cross-process code. Instead of having different code paths, e.g. for Pepper's sockets and P2P, having one interface that both features are built on seems better than the current status quo of each somewhat duplicating the IPC.
 

Hopefully this will be something to capture in the meeting: how strong the push toward the 'service-oriented' nature is, since I am trying to argue that we should think of converting //net's "high level" services, rather than exposing the low-level bits (like sockets).

Perhaps my previous reply clarified this. If not, another way to look at this is that this proposal is not about wrapping src/net with mojo interfaces. It's about wrapping the content code that consumes net, and which is exposed to other processes and also the browser process.

For URLFetcher specifically, it's something that used to live in content and then was moved to net because other directories needed it which don't depend on content. It seems natural that these directories can also use a mojo interface, if that's what's used by other layers that depend on content. 

John Abd-El-Malek

unread,
Apr 15, 2016, 1:23:44 PM4/15/16
to Chromium Loading Performance, net-dev, chromium-mojo
An update: here's our rough plan, which mirrors what the GPU folks are in the middle of doing. Linked from above is a list of files that we want to move out of content/browser/loader to services/network.

Any feedback would be appreciated!

Thanks