I'm working on https://code.google.com/p/chromium/issues/detail?id=295103 , which is related to Linux SxS (side-by-side) migration, but also affects current Chrome Linux packages in a more subtle way.

The problem is that ResourceBundle lazily loads resources from disk when requested. The resources might have been removed from under the browser (as is the case with the above bug), but even without SxS it's possible that the package manager has done an upgrade and the files on disk are out of sync with the running version of Chrome.

One easy way to deal with this problem is to say it's inevitable. It's indeed non-trivial to fix: even if we load all the resources at startup (which is also not that easy, since the list of resources to load needs to be complete and stay accurate, and it would likely regress startup performance), there'd still be a small window of time between the chrome binary being loaded into memory and the resource files being loaded from disk, during which the resource files on disk may become out of sync with the running binary.

One solution would be to bake the resources into the chrome binary. This might also regress startup performance, but generally it should still allow lazy loading of resources and should guarantee things staying in sync.

What are your opinions, thoughts and recommendations about this? Please let me know if I should explain the problem better.

Pawel
Do you have any advice for making all Chrome subprocesses (like the utility process, service process, and anything else that executes the "chrome" binary) use the zygote on Linux?

This is https://code.google.com/p/chromium/issues/detail?id=22703 and I'd need to fix it before enabling side-by-side packages on Linux. This also affects current Chrome Linux anyway, see below.

Selected quotes from that 2009 bug:

> The more I think about this, the more I'm convinced that this needs to be fixed before launch. (John Abd-El-Malek)

> We don't want to ever go out to disk when looking for data after startup, since they can be changed by an update. (Evan Martin)

> Have the utility process run out of process on Linux again by using the /proc/self/exe trick we use for plugins. Since we don't need any resources from .pak files, this should be safe. (Tony Chang)

And the above seems to no longer be true - which shouldn't be surprising, since the comment is from 2009 and the correct behaviour is pretty hard to test for in an automated manner.

My conclusion is that the current behaviour is definitely not correct, it was considered serious enough to possibly block the launch on Linux, and it would be great to finally fix it.
Some specific questions:

1. So should I add a non-sandboxed zygote host for processes that do not run sandboxed?
1b. How can I easily see which of the current Chrome subprocesses are SUID-sandboxed and which are not?
2. Do we have a good way to prevent future breakages caused by people forgetting to launch a Chrome subprocess through zygote?
Paweł

On Tue, Sep 24, 2013 at 2:00 PM, Paweł Hajdan, Jr. <phajd...@chromium.org> wrote:
> [...]
--
Chromium Developers mailing list: chromi...@chromium.org
View archives, change email options, or unsubscribe:
http://groups.google.com/a/chromium.org/group/chromium-dev
On Tue, Oct 1, 2013 at 3:31 PM, Paweł Hajdan, Jr. <phajd...@chromium.org> wrote:
> 1. So should I add a non-sandboxed zygote host for processes that do not run sandboxed?

The zygote is what currently sets up the setuid sandbox, so you'd need a different one for processes that don't want the setuid sandbox (GPU process, NPAPI plugin processes, others?).

> 1b. How can I easily see which of the current Chrome subprocesses are SUID-sandboxed and which are not?

They should be 1:1 with the ones that use the zygote. So I suppose, audit callers of BrowserChildProcessHost::Launch?

> 2. Do we have a good way to prevent future breakages caused by people forgetting to launch a Chrome subprocess through zygote?

Remove the use_zygote bool from BrowserChildProcessHost::Launch.
On Oct 2, 2013 1:13 AM, "Antoine Labour" <pi...@google.com> wrote:
>>
>> 1. So should I add a non-sandboxed zygote host for processes that do not run sandboxed?
>
>
> The zygote is what currently sets up the setuid sandbox, so you'd need a different one for processes that don't want the setuid sandbox (GPU process, NPAPI plugin processes, others?).
>
I don't know too much about the setuid sandbox details, but what would prevent the zygote from setting it up after the fork() and before handing control to the subprocess-specific code?

That's what happens with the Android Zygote (not the Chromium one, which doesn't exist on that platform). This allows for several levels of sandboxing.
Paweł
[+jln]

IIRC all processes forked from a given zygote share the same ASLR layout. So there would appear to be definite security implications to sharing one zygote across every process type. But I would defer to Julien's take on this.
The fact that the GPU process currently has a different ASLR layout to renderer processes is considered a pretty useful property.
Are we talking about bringing the GPU under the same zygote as renderers or a brand new zygote for process types that do not live under the setuid sandbox?
On Wed, Oct 16, 2013 at 11:53 AM, Chris Evans <cev...@google.com> wrote:
> The fact that the GPU process currently has a different ASLR layout to renderer processes is considered a pretty useful property.
>
> Are we talking about bringing the GPU under the same zygote as renderers or a brand new zygote for process types that do not live under the setuid sandbox?

The latter. As said in one of my posts above, some code that currently doesn't go through the zygote would be non-trivial to sandbox, e.g. PluginLoaderPosix.

I'd like to address the immediate issue first, which is that processes that don't go through the zygote break upgrading Chrome on Linux, by adding a second, unsandboxed zygote.
After that, anyone would be free to work on moving processes from unsandboxed to the sandboxed zygote.
Note that for any concerns you might raise: should we sacrifice correctness (i.e. not breaking during an upgrade) for an increased security benefit?
On Wed, Oct 16, 2013 at 11:58 AM, Paweł Hajdan, Jr. <phajd...@chromium.org> wrote:
On Wed, Oct 16, 2013 at 11:53 AM, Chris Evans <cev...@google.com> wrote:
> > The fact that the GPU process currently has a different ASLR layout to renderer processes is considered a pretty useful property.
> >
> > Are we talking about bringing the GPU under the same zygote as renderers or a brand new zygote for process types that do not live under the setuid sandbox?
>
> The latter. As said in one of my posts above, some code that currently doesn't go through the zygote would be non-trivial to sandbox, e.g. PluginLoaderPosix. I'd like to address the immediate issue first, which is that processes that don't go through the zygote break upgrading Chrome on Linux, by adding a second, unsandboxed zygote.

Well, this is nice, because IMHO it is security positive. Currently the browser and GPU processes share the same ASLR layout, and it sounds like they would not if we had a new unsandboxed zygote.

> After that, anyone would be free to work on moving processes from unsandboxed to the sandboxed zygote.
>
> Note that for any concerns you might raise, should we sacrifice correctness (not breaking during upgrade) for an increased security benefit?

No, we should have both correctness and security.
On Wed, Oct 16, 2013 at 12:25 PM, Chris Evans <cev...@chromium.org> wrote:
On Wed, Oct 16, 2013 at 11:58 AM, Paweł Hajdan, Jr. <phajd...@chromium.org> wrote:
On Wed, Oct 16, 2013 at 11:53 AM, Chris Evans <cev...@google.com> wrote:
> > > The fact that the GPU process currently has a different ASLR layout to renderer processes is considered a pretty useful property.
> > >
> > > Are we talking about bringing the GPU under the same zygote as renderers or a brand new zygote for process types that do not live under the setuid sandbox?
> >
> > The latter. As said in one of my posts above, some code that currently doesn't go through the zygote would be non-trivial to sandbox, e.g. PluginLoaderPosix. I'd like to address the immediate issue first, which is that processes that don't go through the zygote break upgrading Chrome on Linux, by adding a second, unsandboxed zygote.
>
> Well, this is nice, because IMHO it is security positive. Currently the browser and GPU processes share the same ASLR layout and it sounds like they would not if we had a new unsandboxed zygote.

I think that's the opposite. Today we exec the GPU process, but with the unsandboxed zygote, we would only fork.
(Oops, I had missed this thread!)
Both Mark and I have been pondering adding new "sub-Zygotes".
It's something I wanted to consider this quarter, but I would like to
have a good sense of what the changes related to Mojo should be first.
Some of the current well-known problems are:
- Threads: anything that fork()s cannot be threaded without jumping
through hoops. This creates a tremendous amount of complexity and
makes APIs such as "GetTerminationStatus()" roughly impossible to
implement correctly on Linux (since they need to be blocking). To fix
this I've been adding the "known_dead" flag to a bunch of APIs, which
is very awkward.
This is a difficult problem. CLONE_PARENT could be used to solve it
(the thing that actually does the fork() would be a child), but it
feels very hack-ish. See crbug.com/157458 or crbug.com/274827 for some
examples and explanations.
- The weird "ZygoteForkDelegate" interface that is used by NaCl, when
it should really be its own separate Zygote. Currently a fork request
for NaCl goes to the "normal" Zygote, which will route it to the NaCl
Zygote. Since both have to be single-threaded, this is especially
problematic. Cf. crbug.com/133453
- The Zygote was created partly because Chrome is updated "in-place"
on Linux (and executing a new version of Chrome when starting a new
renderer process wouldn't work). However, this property has been lost
since there are too many "non-Zygote" process types. Moreover, a lot
of the code that tries to be careful to re-execute /proc/pid/fd/X is
broken, because base/ added a few readlink() calls in some of the
high-level APIs, which breaks the "same-inode" goal. Example:
crbug.com/257149
When designing a new model for a Zygote, we need to keep a few things in mind:
- It can't be "one Zygote". We need a few "model processes" around. How
many is a matter of trade-offs: the more process types one model
process supports, the less useful the Zygote becomes.