Hi chromium-mojo,

I have recently been debugging the IPC of CrOS ml-service's multiprocess setup and trying to understand some Mojo internals, but I am confused about the relationship among the Broker class, the BrokerHost class, and the "broker process". It seems that:

1. Although its name contains the word "broker", BrokerHost is actually created when sending invitations. So, more accurately speaking, it always resides in the "inviter process" (usually the parent process) rather than in the unique "broker process" of the graph, although the two are usually the same in Chrome. Is this correct?
2. If the observation in 1 is correct, it seems the "broker responsibilities" (like creating a shared buffer) are actually handled in the inviter processes (where BrokerHost resides) rather than in the unique "broker process" of the graph. Is this correct?
If the observation and guesses in 1 and 2 above are correct, it seems the brokering work is actually being shared by many nodes in the graph. Then what is the point of still having a unique "broker process"?
Thanks!

Best,
Honglin
The second one should really be done by the broker process; the fact that it isn't is just an artifact from when this was first implemented, when we weren't thinking much about process graphs more complex than Chrome's.
> If the observation and guesses in 1 and 2 above are correct, it seems the brokering work is actually being shared by many nodes in the graph. Then what is the point of still having a unique "broker process"?

The "broker process" is something different altogether, and it must still exist for several important reasons:
- It's responsible for introducing processes to each other: suppose you have processes A (the broker), B, and C. A invites B and C. Over various application-level messages, B sends a pipe endpoint to A, who then forwards it along to some service in C. Now you have a message pipe routing between B and C, but perhaps this is the first such pipe and there's not yet any direct (OS-level) link between the two processes. When B hears C's name and realizes it doesn't know how to reach C, it will ask A (the broker) for an introduction. A will create a new socketpair (on POSIX) and send one end to B in a message that says "this is C", and the other end to C in a message that says "this is B". Now they can talk to each other.
- On Windows, handle transfer between processes happens via a privileged system call that directly manipulates another process's handle table. Because of this, all messages carrying system handles must be relayed through some privileged process that can do the necessary handle duplications. The broker process does this.
- In some edge cases it's necessary to broadcast an event to every connected process, and this is done via a request to the broker process.
In essence, the broker process is the only process in the graph that is guaranteed to know about (and have a direct OS-level link to) every other process in the graph, and it's the only one guaranteed to be privileged enough to globally manage handle duplication on Windows.
Thanks for the quick and informative response, Ken! I understand it now. May I confirm the following (which would be useful for debugging whether we have file descriptor leaks):

1. Each non-broker leaf node in the graph will only have one "sync IPC" platform channel (and this sync IPC channel is to the "inviter process", not the "broker process").
2. There is at most one "async IPC" platform channel between any pair of nodes (i.e. the NodeChannel). That is to say, except for invitations and shared buffers, everything else goes through this channel.