IPC::Message is deprecated


Ken Rockot

Jan 28, 2016, 12:30:26 PM
to Chromium-dev
Greetings chromium-dev!

TL;DR: IPC::Message is deprecated. All new IPC in Chrome should use Mojo.

Mojo has been baking for a long while and is now ready enough for widespread use. In order to avoid having Two Ways to do IPC forever and ever, the Old Way is now deprecated and the New Way is Mojo.

Some compelling features of Mojo which make it better than the legacy alternative:
  • Generated bindings capable of supporting all of Chrome's IPC needs, including passing around file handles, shared buffers, etc.
  • An intuitive abstraction around message routes: where a routing ID is required today, an individual pipe can be created instead.
  • Built-in request-response logic that eliminates the boilerplate of tracking request IDs (see the sketch after this list).
  • Support for generating C++, JS, and Java bindings, ensuring a consistent, write-once messaging protocol everywhere. Services and clients can be written in any of these languages.
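
To make the bindings and request/response points concrete, here's a rough sketch of what a small interface and a request/response call might look like in C++. The interface, the generated types (example::mojom::LoggerPtr), and the callback plumbing are illustrative assumptions rather than a reference for the current bindings API:

  // Hypothetical mojom definition (example.mojom), shown as a comment:
  //
  //   module example.mojom;
  //
  //   interface Logger {
  //     Log(string message);
  //     GetTail() => (string message);  // a request that expects a response
  //   };

  #include "base/bind.h"
  #include "base/logging.h"
  #include "example.mojom.h"  // hypothetical generated header

  namespace {

  // The response arrives as an ordinary callback; there's no request-ID
  // bookkeeping and no switch statement in a message handler.
  void OnGetTail(const std::string& message) {
    LOG(INFO) << "Last logged message: " << message;
  }

  }  // namespace

  void UseLogger(example::mojom::LoggerPtr logger) {
    logger->Log("Hello over a dedicated pipe");
    logger->GetTail(base::Bind(&OnGetTail));
  }

Because each interface can live on its own pipe, routing and reply tracking fall out of the bindings instead of being re-implemented per feature.
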
There are some great opportunities to improve Chrome in light of this transition, and several projects are under way, including:
  • Porting all existing IPC surfaces to Mojo and in the process reconsidering some fundamental architectural choices in the browser to improve security, stability, code health, and user experience.
  • Consuming browser services directly from WebUI using Mojo JS bindings, eliminating all the custom messaging that's done today.
  • Refactoring the Apps & Extensions system to be built on shared services, deleting lots of redundant code in the process.
  • Supporting Mojo clients in the Blink platform layer, also deleting lots of redundant code (and ultimately all of content/renderer and much of content/child) in the process.
There are also a number of meta-efforts under way to ease the transition, including the ability to re-purpose existing IPC::ParamTraits definitions to quickly lift non-mojom types into Mojo messages, and a means of retaining FIFO ordering between certain Mojo interfaces and legacy IPC during the transitional period.

We have a very basic Mojo in Chromium primer available as a light, practical introduction to Mojo, and the chromi...@chromium.org list is a great place to get quick answers to any Mojo-related questions you may have.


Cheers!
Ken

Ben Goodger (Google)

Jan 28, 2016, 2:12:32 PM
to Ken Rockot, Chromium-dev
I, for one, welcome our new IPC overlords!

-Ben

Daniel Cheng

Jan 28, 2016, 2:34:55 PM
to roc...@chromium.org, Chromium-dev
A couple questions:

For security:
- I assume that mojom changes require security reviews, just like IPC message changes already do, but I don't see any mention of this in the Mojo basics doc.
- Are there any security gotchas with IPCs that you won't have to worry about in Mojo? What about the other way around?
- IPC::ParamTraits provides an easy way for common validation of parameters, so individual message handlers don't have to repeat all that validation (or risk forgetting to validate it at all). What is the analogous mechanism in Mojo?

What should developers do when they need to add a new message to an existing IPC message handler? Should they just add a new IPC::Message, or are there guidelines on how to incrementally move things to Mojo?

Finally, I'm curious about the FIFO guarantees of Mojo: in general, if you have two different Mojo interface pointers, there's no guarantee of FIFO ordering between them, right? How will this interact with things that are sensitive to the order of operations, such as cleanup when a frame is detached?

Daniel


Darin Fisher

Jan 28, 2016, 2:36:21 PM
to Ben Goodger (Google), Chromium-dev, Ken Rockot

+1

Ken Rockot

Jan 28, 2016, 3:25:30 PM
to Daniel Cheng, Chromium-dev
On Thu, Jan 28, 2016 at 11:33 AM, Daniel Cheng <dch...@chromium.org> wrote:
A couple questions:

These are great questions. In general we're still putting some of the relevant pieces (and documentation) together, but all of these concerns either have a working solution or a solution in progress at the moment.
 

For security:
- I assume that mojom changes require security reviews, just like IPC message changes already do, but I don't see any mention of this in the Mojo basics doc.

Yes. Documenting the security model and review process for Mojo is a TODO. We have, however, started adding security OWNERs for *.mojom in key places, and as the volume of mojom CLs increases, we'll adjust priorities here as needed.
 
- Are there any security gotchas with IPCs that you won't have to worry about in Mojo? What about the other way around?

The gotchas are largely similar, IMHO. I think where we'll ultimately see security improvements in the long term is in how much less complex it becomes to isolate components.

One thing to be aware of is that, because pipes are so easy to pass around, developers may have to be more mindful of whom they send pipe endpoints to and accept them from. There isn't a good one-size-fits-all rule here, though, as the Right Thing largely depends on context.
 
- IPC::ParamTraits provides an easy way for common validation of parameters, so individual message handlers don't have to repeat all that validation (or risk forgetting to validate it at all). What is the analogous mechanism in Mojo?

Though it's still a work in progress for more complex structures, we have an analogous mojo::StructTraits<MojomType, T> in development. This can be used to provide a view of any type T so the bindings layer knows how to serialize it as the mojom-defined struct MojomType.

Likewise, a deserializer (mojo::StructTraits<MojomType, T>::Read) can be provided that operates on a view of the serialized message buffer to produce a new T (support for move-only or non-default-constructible Ts is also a work in progress here). The deserializer can reject the message, resulting in the bindings signaling an error on the pipe.
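
As a very rough sketch of the direction (the mojom struct, the native type, and the exact signatures below are assumptions for illustration, not the final API), a specialization might look something like this:

  #include <cstdint>

  #include "example.mojom.h"  // hypothetical generated header

  // Hypothetical mojom: struct Rect { int32 x; int32 y; int32 width; int32 height; };
  // Native type we want to send without defining a parallel C++ struct.
  struct MyRect {
    int x = 0;
    int y = 0;
    int width = 0;
    int height = 0;
  };

  namespace mojo {

  template <>
  struct StructTraits<example::mojom::Rect, MyRect> {
    // Field getters give the bindings a view of MyRect for serialization.
    static int32_t x(const MyRect& r) { return r.x; }
    static int32_t y(const MyRect& r) { return r.y; }
    static int32_t width(const MyRect& r) { return r.width; }
    static int32_t height(const MyRect& r) { return r.height; }

    // Deserialization reads from a view of the message buffer. Returning
    // false rejects the message, and the bindings signal an error on the
    // pipe; this is where ParamTraits-style validation would live.
    static bool Read(example::mojom::RectDataView data, MyRect* out) {
      if (data.width() < 0 || data.height() < 0)
        return false;
      out->x = data.x();
      out->y = data.y();
      out->width = data.width();
      out->height = data.height();
      return true;
    }
  };

  }  // namespace mojo

The validation inside Read() is roughly where the checks that live in IPC::ParamTraits<T>::Read today would go.
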
 

What should developers do when they need to add a new message to an existing IPC message handler? Should they just add a new IPC::Message, or are there guidelines on how to incrementally move things to Mojo?

This is something to be discussed and established on a case-by-case basis. In the simple case (i.e. an isolated bundle of message types with few or no ordering dependencies), interface pipes can be teased apart piece by piece with no additional complexity.
 

Finally, I'm curious about the FIFO guarantees of Mojo: in general, if you have two different Mojo interface pointers, there's no guarantee of FIFO ordering between them, right? How will this interact with things that are sensitive to the order of operations, such as cleanup when a frame is detached?

Correct -- no FIFO ordering between independent pipes. The bindings layer does support associated interfaces, which are a way of sharing a single message pipe among multiple binding endpoints that need relative FIFO ordering.

Ben Goodger (Google)

Jan 28, 2016, 3:34:09 PM
to Ken Rockot, Daniel Cheng, Chromium-dev
On Thu, Jan 28, 2016 at 12:24 PM, Ken Rockot <roc...@chromium.org> wrote:

Finally, I'm curious about the FIFO guarantees of Mojo: in general, if you have two different Mojo interface pointers, there's no guarantee of FIFO ordering between them, right? How will this interact with things that are sensitive to the order of operations, such as cleanup when a frame is detached?

Correct -- no FIFO ordering between independent pipes. The bindings layer does support associated interfaces, which are a way of sharing a single message pipe among multiple binding endpoints that need relative FIFO ordering.

I'll add that the team is working on a mechanism by which rote conversion will not affect the ordering of messages sent via Chrome IPC and Mojo IPC during the transitional phase.

-Ben

Will

Jan 28, 2016, 3:36:57 PM
to Chromium-dev, dch...@chromium.org
This is great; I'm enthusiastic about the move to Mojo, and it sounds like it has a lot of benefits.

However, I think dcheng@ raises some really important points about security, in particular the porting of IPC::ParamTraits to Mojo and setting up a robust way of ensuring that security reviews are carried out. Historically, IPC has been one of the major techniques attackers use to escape our sandbox, so it's critical to our security model that this level of assurance is maintained and that developers are given all the facilities they already have in IPC (e.g. traits for validation) to land secure code.

I am concerned that if we move too fast without both of these in place, we might introduce security regressions. Can we perhaps prioritize the above work and mark it as a blocker for actually deprecating IPC::Message?

Will

Ken Rockot

Jan 28, 2016, 4:20:00 PM
to w...@chromium.org, Chromium-dev, Daniel Cheng
On Thu, Jan 28, 2016 at 12:36 PM, Will <w...@chromium.org> wrote:
This is great; I'm enthusiastic about the move to Mojo, and it sounds like it has a lot of benefits.

However, I think dcheng@ raises some really important points about security, in particular the porting of IPC::ParamTraits to Mojo and setting up a robust way of ensuring that security reviews are carried out. Historically, IPC has been one of the major techniques attackers use to escape our sandbox, so it's critical to our security model that this level of assurance is maintained and that developers are given all the facilities they already have in IPC (e.g. traits for validation) to land secure code.

I am concerned that if we move too fast without both of these in place, we might introduce security regressions. Can we perhaps prioritize the above work and mark it as a blocker for actually deprecating IPC::Message?

Of course that's a valid concern, and I didn't mean to convey that security isn't still a very high priority. I think the most significant security risks we're taking here are simply a product of the system being new and developers lacking experience with it. I don't see how deferring does anything but prolong that risk.

Keep in mind that we expect much of the initial conversion work to be done by a smaller group of engineers who are already comfortable building and consuming Mojo services. New IPC messages are not added to Chrome so frequently that we can't manually assist with the development of new features in the short term to make sure things remain secure. As conversion progresses, more of these open questions will have concrete answers, and it will become easier for others to participate in the refactoring.

Deprecation makes the intent unquestionably clear -- something we feel is important to do sooner rather than later -- and we've reached a point where there isn't much more to learn without a little artificial forcing of more widespread adoption.

Egor Pasko

Jan 29, 2016, 9:18:01 AM
to roc...@chromium.org, Chromium-dev
Do you have estimations of the impact on binary size from the new bindings?

Also, it seems you have been studying the performance impact. I could not find detailed descriptions of those studies linked from the proposal; are they somewhere else?




--
Egor Pasko

Erik

Jan 29, 2016, 10:41:50 AM
to Chromium-dev, roc...@chromium.org
Does Mojo support passing Mach ports on Mac? 

The Mac port of Chrome has transitioned to using Mach ports to back all shared memory regions, which means any new, cross-platform IPC messages that include a SharedMemoryHandle parameter may still have to use IPC::Message?

Ken Rockot

Jan 29, 2016, 11:18:28 AM
to Egor Pasko, Chromium-dev
On Fri, Jan 29, 2016 at 6:16 AM, Egor Pasko <pa...@google.com> wrote:
Do you have estimations of the impact on binary size from the new bindings?

No, but it's something we're considering. It's a lot of generated code and we may want to optimize it down. We also anticipate deleting a lot of other code during the refactoring process though, so my baseline expectation is to at least break even on total size.


Also, it seems you have been studying the performance impact. I could not find detailed descriptions of those studies linked from the proposal; are they somewhere else?

So far much of the performance analysis has been pretty ad hoc and focused on microbenchmarks to ensure we're at parity with or better than legacy IPC at the transport layer. There's some work under way this quarter to take a closer look at specific metrics, and to some extent benchmarking performance in isolation won't tell us as much as pushing some more IPCs over Mojo and monitoring the suite of performance metrics we already have in place across Chrome.

My candid expectation is that initially we'll be using more CPU to serialize Mojo messages and we'll need to optimize the bindings layer further. It remains to be seen how much of a practical impact this will have.

Ken Rockot

Jan 29, 2016, 11:20:40 AM
to Erik, Chromium-dev
On Fri, Jan 29, 2016 at 7:41 AM, Erik <erik...@chromium.org> wrote:
Does Mojo support passing Mach ports on Mac? 

Not at the moment.
 

The Mac port of Chrome has transitioned to using Mach ports to back all shared memory regions, which means any new, cross-platform IPC messages that include a SharedMemoryHandle parameter may still have to use IPC::Message?

Right, we currently aren't using Mojo shared buffers for any IPC in Chrome, but Mach port support is definitely something we need in order to do so. We're soon going to be adding a sync broker to the root process, which is needed for sandboxed shared buffer allocation on POSIX systems anyway. We can broker Mach port transfers through the same service.
 

Egor Pasko

Jan 29, 2016, 1:09:36 PM
to Ken Rockot, Chromium-dev

On Fri, Jan 29, 2016 at 5:17 PM, Ken Rockot <roc...@chromium.org> wrote:
On Fri, Jan 29, 2016 at 6:16 AM, Egor Pasko <pa...@google.com> wrote:
Do you have estimations of the impact on binary size from the new bindings?

No, but it's something we're considering. It's a lot of generated code and we may want to optimize it down. We also anticipate deleting a lot of other code during the refactoring process though, so my baseline expectation is to at least break even on total size.


Also, it seems you have been studying the performance impact. I could not find detailed descriptions of those studies linked from the proposal; are they somewhere else?

So far much of the performance analysis has been pretty ad hoc and focused on microbenchmarks to ensure we're at parity with or better than legacy IPC at the transport layer. There's some work under way this quarter to take a closer look at specific metrics, and to some extent benchmarking performance in isolation won't tell us as much as pushing some more IPCs over Mojo and monitoring the suite of performance metrics we already have in place across Chrome.

My candid expectation is that initially we'll be using more CPU to serialize Mojo messages and we'll need to optimize the bindings layer further. It remains to be seen how much of a practical impact this will have.

I guess my main point is that it is extremely hard to catch regressions when they creep in gradually, in small increments. I would prefer a path where the primitives we build on top of are performant enough at all times. How can we make sure that the rate of optimizing Mojo IPC outpaces the rate of migration to Mojo IPC? One approach is to optimize first and migrate strictly later, but I assume that is too constraining for you?

Dmitry Skiba

Jan 29, 2016, 2:14:49 PM
to Ken Rockot, Egor Pasko, Chromium-dev
On Fri, Jan 29, 2016 at 8:17 AM, Ken Rockot <roc...@chromium.org> wrote:
On Fri, Jan 29, 2016 at 6:16 AM, Egor Pasko <pa...@google.com> wrote:
Do you have estimations of the impact on binary size from the new bindings?

No, but it's something we're considering. It's a lot of generated code and we may want to optimize it down. We also anticipate deleting a lot of other code during the refactoring process though, so my baseline expectation is to at least break even on total size.


Also, it seems you have been studying the performance impact. I could not find detailed descriptions of those studies linked from the proposal; are they somewhere else?

So far much of the performance analysis has been pretty ad hoc and focused on microbenchmarks to ensure we're at parity with or better than legacy IPC at the transport layer. There's some work under way this quarter to take a closer look at specific metrics, and to some extent benchmarking performance in isolation won't tell us as much as pushing some more IPCs over Mojo and monitoring the suite of performance metrics we already have in place across Chrome.

Is http://crbug.com/500019 still relevant? Or is Mojo IPC performance better when used only with Mojo types, but not when used as a transport layer for Chrome IPC?

Greg Thompson

Jan 29, 2016, 2:42:01 PM
to dsk...@google.com, Ken Rockot, Egor Pasko, Chromium-dev
Rather than micro-benchmarks, how about measuring the typical message patterns between some Chrome processes (e.g., message size, frequency, etc.) and building a benchmark suite that exercises those patterns using both current IPC and Mojo? This would be an apples-to-apples comparison of real-world messaging. It would be unfortunate if, for example, Mojo wins on certain micro-benchmarks but ends up having higher CPU overhead and latency for real Chrome messages.

Ken Rockot

Jan 29, 2016, 3:10:44 PM
to Greg Thompson, Dmitry Skiba, Egor Pasko, Chromium-dev, Sam McNally
On Fri, Jan 29, 2016 at 11:41 AM, Greg Thompson <g...@chromium.org> wrote:
Rather than micro-benchmarks, how about measuring the typical message patterns between some Chrome processes (e.g., message size, frequency, etc.) and building a benchmark suite that exercises those patterns using both current IPC and Mojo? This would be an apples-to-apples comparison of real-world messaging. It would be unfortunate if, for example, Mojo wins on certain micro-benchmarks but ends up having higher CPU overhead and latency for real Chrome messages.

Yep, that's part of the plan for getting more specific metrics.
 

On Fri, Jan 29, 2016 at 2:14 PM 'Dmitry Skiba' via Chromium-dev <chromi...@chromium.org> wrote:

Is http://crbug.com/500019 still relevant? Or is Mojo IPC performance better when used only with Mojo types, but not when used as a transport layer for Chrome IPC?

sammc@ is investigating using Mojo as the transport for IPC::Message, and I believe the plan is to resurrect the IPC::ChannelMojo implementation, so its performance is still relevant but hasn't been reevaluated in a while. The system layer of Mojo has been completely replaced since we last looked into this.

Ken Rockot

Jan 29, 2016, 3:37:14 PM
to Egor Pasko, Chromium-dev
On Fri, Jan 29, 2016 at 10:07 AM, Egor Pasko <pa...@chromium.org> wrote:
I guess my main point is that it is extremely hard to catch regressions when they creep in gradually, in small increments. I would prefer a path where the primitives we build on top of are performant enough at all times. How can we make sure that the rate of optimizing Mojo IPC outpaces the rate of migration to Mojo IPC? One approach is to optimize first and migrate strictly later, but I assume that is too constraining for you?

It's a balance for sure, but I don't think migrating strictly later is necessary. There's a healthy separation of concerns among the transport layer, the system API layer, and the public bindings API layer, so ideally we'd maintain momentum by parallelizing conversion and optimization. Nothing about the system fundamentally precludes its performance from matching what we have today, so as long as we're diligent in measuring and improving it, the eventual end state should be an acceptable one.

On Fri, Jan 29, 2016 at 11:41 AM, Greg Thompson <g...@chromium.org> wrote:
Rather than micro-benchmarks, how about measuring the typical message patterns between some Chrome processes (e.g., message size, frequency, etc.) and building a benchmark suite that exercises those patterns using both current IPC and Mojo? This would be an apples-to-apples comparison of real-world messaging. It would be unfortunate if, for example, Mojo wins on certain micro-benchmarks but ends up having higher CPU overhead and latency for real Chrome messages.

This is an idea that's been discussed, and it's likely a great direction to go. Even here I suspect there's some risk that it won't be a true apples-to-apples comparison, since in a real Chrome process IPC has to compete with lots of other tasks. Possibly the worst thing to come out of that, though, would be over-optimization, which I suppose is an OK problem to have.

noqlm...@gmail.com

Jan 29, 2016, 11:32:58 PM
to dsk...@google.com, Ken Rockot, Egor Pasko, Chromium-dev
Is there any serialization around shared memory?

If you are using any protobuf-style serialization / deserialization then I think you should be fine. IPC benchmarks / waking threads up leave you in the millisecond range, and serialization is in the 0.6-1.2 microsecond range. If the benchmarks on your perf page are not accurate, are they maybe in microseconds? 100 microseconds on average isn't bad for event-based processing on x86_64.

Now there are tons of optimizations to make at a later date with multiple messages... But yeah, I personally think the sleeping / waking generally constitutes an order of magnitude higher performance impact than a decent / awesome serializer, especially if it's lazy on the client side.

Sent from my iPhone

Ken Rockot

Jan 29, 2016, 11:46:20 PM
to noqlm...@gmail.com, Dmitry Skiba, Egor Pasko, Chromium-dev
On Fri, Jan 29, 2016 at 8:31 PM, <noqlm...@gmail.com> wrote:
Is there any serialization around shared memory?

Yes, there's cross-platform support for serializing handles to shared buffer resources.
 

If you are using any protobuf-style serialization / deserialization then I think you should be fine. IPC benchmarks / waking threads up leave you in the millisecond range, and serialization is in the 0.6-1.2 microsecond range. If the benchmarks on your perf page are not accurate, are they maybe in microseconds? 100 microseconds on average isn't bad for event-based processing on x86_64.

We aren't using protobuf serialization for this effort in any capacity.


Now there are tons of optimizations to make at a later date with multiple messages... But yeah, I personally think the sleeping / waking generally constitutes an order of magnitude higher performance impact than a decent / awesome serializer, especially if it's lazy on the client side.


Right. We suspect that's true as well, but we'll be more comfortable saying it once we have more concrete data. :)

Daniel Bratell

Feb 2, 2016, 4:52:49 AM
to Egor Pasko, Ken Rockot, Chromium-dev
On Fri, 29 Jan 2016 17:17:26 +0100, Ken Rockot <roc...@chromium.org> wrote:

On Fri, Jan 29, 2016 at 6:16 AM, Egor Pasko <pa...@google.com> wrote:
Do you have estimations of the impact on binary size from the new bindings?

No, but it's something we're considering. It's a lot of generated code and we may want to optimize it down. We also anticipate deleting a lot of other code during the refactoring process though, so my baseline expectation is to at least break even on total size.

Generated code, as well as templated [1] code, is sometimes surprisingly large. We have some tools in the Chromium repository to investigate and compare binary size (answering the "why" and not just the "how much"). Those tools can be found at tools/binary_size, and I'd be happy to help if you have any questions.

/Daniel

[1] bind

--
/* Opera Software, Linköping, Sweden: CET (UTC+1) */