Authors: Charles Reis and Steven D. Gribble
Date: EuroSys 2009
Novel idea: The architecture of web browsers must be modernized. Web applications must be isolated from each other in the browser in order to achieve true fault tolerance and high performance.
Main results: The authors present and justify a multi-process browser architecture and discuss its implementation in Google Chrome.
Impact: If you haven't been using Chrome, you should try it.
Evidence: The authors verified that crashes affected only the rendering-engine process or plug-in process involved. They also measured user-perceived latency as the interval between right-clicking on a page and the display of the corresponding context menu (the multi-process model won hands-down; a toy analogue of this measurement is sketched after this review). The authors did reveal that their current implementation of the multi-process model incurs a large cost in memory overhead.
Prior work: Does anyone else remember the "Launch folder windows in a separate process" option [1] from Windows 2000? If you had the RAM, it was great. Apparently that option is still disabled by default in Windows 7, for some reason which is beyond me.
Competitive work: Apple and Mozilla are still catching up.
Reproducibility: Chromium is open source (BSD-licensed).
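As that toy analogue (the scenario, names, and numbers below are mine, not the authors' harness): a CPU-bound "renderer" runs either as a thread in our own process, standing in for the monolithic model, or as a separate process, standing in for the multi-process model, while we time a cheap event-loop-style "show the context menu" action. CPython's GIL plays the role of the monolithic browser's shared main thread.

# Toy analogue of the paper's latency measurement; the scenario, names,
# and numbers here are mine, not the authors'.
import time
import threading
import multiprocessing

def heavy_render(seconds=2.0):
    # Busy loop standing in for a big page being rendered.
    end = time.perf_counter() + seconds
    while time.perf_counter() < end:
        pass

def menu_latency_ms():
    # Each iteration waits for an "event", then does a little drawing.
    # Waking from sleep must reacquire the GIL, which is where a busy
    # sibling thread (but not a sibling process) gets in the way.
    start = time.perf_counter()
    for _ in range(20):
        time.sleep(0.001)
        sum(range(1000))
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    t = threading.Thread(target=heavy_render)  # renderer in our process
    t.start()
    print("same process:     %.1f ms" % menu_latency_ms())
    t.join()

    p = multiprocessing.Process(target=heavy_render)  # renderer isolated
    p.start()
    print("separate process: %.1f ms" % menu_latency_ms())
    p.join()

The quick action stays quick only when the heavy work lives in another process, which is the paper's point in miniature.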
Isolating Web Programs in Modern Browser Architectures
Authors
Charles Reis, Steven Gribble
Date
EuroSys, April 2009
Novel Idea
Isolating "web programs" in different processes, according to rules
that define how these web programs could interact, improving security
and responsiveness to user input.
Main Results
This paper characterizes web pages as programs and clearly defines the
terms involved in that characterization. Moreover, the authors provide
a browser implementation, Chromium, that actually isolates these
programs in different processes.
Impact
Isolating web programs in different processes improves overall browser
security, since sensitive information is exposed more sparingly; fault
tolerance, since a crash is confined to a single web program; and
responsiveness, since multiple processes naturally benefit from
multiprocessor architectures.
Evidence
The paper first defines the terms used to characterize web pages as
programs (and their instances): sites, browsing instances, and site
instances. It then presents the isolation policies implemented in
Chromium, along with the implementation issues each policy raises.
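To make these definitions concrete, here is a rough Python sketch of how I read them (the two-label domain heuristic is deliberately naive; a real browser would consult the public suffix list, so e.g. .co.uk domains would need more care):

# Rough sketch of the paper's definitions as I read them. A "site" is
# the protocol plus the registered domain name.
from urllib.parse import urlparse

def site_of(url):
    parsed = urlparse(url)
    registered = ".".join(parsed.hostname.split(".")[-2:])
    return (parsed.scheme, registered)

# Subdomains of one registered domain belong to the same site, so they
# may share a renderer process; switching protocols changes the site.
assert site_of("http://mail.google.com/") == site_of("http://docs.google.com/")
assert site_of("http://google.com/") != site_of("https://google.com/")

# A "browsing instance" is a set of script-connected windows and
# frames; a "site instance" is a site within one browsing instance,
# so a renderer-process key looks roughly like:
def site_instance_key(browsing_instance_id, url):
    return (browsing_instance_id, site_of(url))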
They evaluate their implementation on a dual-core machine, discussing
fault tolerance, security, speedup, latency, compatibility, and
overhead. Although the discussion is broad, it lacks depth on some of
these criteria (see Questions + Criticism).
Prior Work + Competitive Work
Browsers in which all web pages run in a single process, such as
Firefox and Safari, were analyzed by the authors (with Bryan Bershad
and Henry Levy) in a previous technical report and found to be
fault-prone.
They also mention the OP and Tahoma browsers, but claim that neither
gives enough detail about how pages are allowed to interact.
IE8 is presented as a similar approach to isolating web pages, but the
authors note that it does not isolate distinct site instances from
each other.
Reproducibility
I believe that the experiments are reproducible: the paper gives a
clear definition of the metrics in question and names the tools used
in the tests (the about:crash feature, Chromium's own task manager,
the Alexa service, and so on).
Questions + Criticism
[Criticism] The merit of the paper is that it discusses the issues
surrounding isolation, a relatively simple concept, very broadly. The
trade-off is that the evaluation section lacks a more involved
technical discussion.
More specifically, in the evaluation section they could have discussed
the following [Questions]:
1) How "big" web programs would impact response delay when using a
multiple-processes approach, and how would that compare to the
response delay in a single-process approach? (They have a simplified
version of this proposed test.)
2) If the number of processor cores and the number of concurrently
open web sessions both increased, how would the speedup scale?
3) How many milliseconds of latency are acceptable to user perception?
(I've been told that 100 ms is the threshold at which users start
noticing a delay; it would have been nice if they had discussed this
and cited sources instead of relying on intuition.)
4) Are there any statistics about how often popular plugins crash?
Ideas for Further Work
Using OS-level virtualization (FreeBSD jails, Linux-VServer) to
increase process isolation in situations that require extra security.
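A hedged sketch of the flavor of mechanism, using plain POSIX calls (Unix-only) rather than jails or VServer themselves; the chroot path and UID/GID are hypothetical:

# Fork a renderer, confine it to a chroot, and drop root privileges
# before it touches untrusted content. Requires running as root;
# /var/renderer-root and UID/GID 1001 are made up.
import os
import sys

def spawn_confined_renderer():
    pid = os.fork()
    if pid == 0:                         # child: the renderer-to-be
        os.chroot("/var/renderer-root")  # filesystem isolation
        os.chdir("/")
        os.setgid(1001)                  # drop group privileges first,
        os.setuid(1001)                  # then user privileges
        # ... exec the rendering engine here ...
        sys.exit(0)
    return pid                           # parent stays the browser kernel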
Paper Title | Isolating Web Programs in Modern Browser Architectures |
Author(s) | Charles Reis, Steven D. Gribble |
Date | 2009 |
Novel Idea | Improve web browser stability by isolating groups of pages from the same domain. |
Main Result(s) | A multi-process architecture that identifies and isolates 'site instances' and browser plug-ins, leading to vastly improved stability and concurrency over the traditional single-process 'monolithic' browser model (a toy sketch of this dispatch follows the review). They seek to provide a browser viable in the current 'marketplace', and to do so they must make a number of compromises: sacrificing full isolation so that many existing web applications keep working, and so that no standards changes or other input from web page owners are required. |
Impact | A great browser that lots of people use! |
Evidence | They provide a bunch of compelling evidence for their claims, including a remarkable table of latencies incurred by loading other pages in the background under each model: the multi-process model exhibits almost no increased latency, while the monolithic model incurs several orders of magnitude more. |
Prior Work | They mention other proposed isolation schemes, but claim to be the first that has made the practicality/purity tradeoff that lets Chrome compete in the wider world of normal internet users. |
Reproducibility | Chromium is open source. |
Question | When do the compromises they make actually hurt? Does the user ever notice odd crash behavior (one site crashing another that seemed unrelated)? They mention the trade-off, but don't quantify how bad it is. |
Criticism | Though Chromium is open source, it would still be nice if they spent a bit more time talking about implementation details. The paper concentrates almost exclusively on making general, theoretical claims, and then on proving that they have implemented them, but it leaves out most of the middle stages. |
Ideas for further work | A more full-featured PDF viewer lol |
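The toy sketch promised above: a browser kernel that keys renderer processes by site instance, so terminating one renderer leaves the others alive. render_loop and the naive site_of helper are stand-ins of mine, not Chromium's actual code.

# One renderer process per site instance; killing one renderer does
# not take down the others, which is the crash containment the
# reviews describe.
import multiprocessing
from urllib.parse import urlparse

def site_of(url):
    # Naive "site" key: scheme plus the last two host labels.
    p = urlparse(url)
    return (p.scheme, ".".join(p.hostname.split(".")[-2:]))

def render_loop(url):
    pass  # a real rendering engine would live here

class BrowserKernel:
    def __init__(self):
        self.renderers = {}  # (browsing_instance_id, site) -> Process

    def navigate(self, browsing_instance_id, url):
        key = (browsing_instance_id, site_of(url))
        proc = self.renderers.get(key)
        if proc is None or not proc.is_alive():
            proc = multiprocessing.Process(target=render_loop, args=(url,))
            proc.start()
            self.renderers[key] = proc
        return proc  # same site instance reuses the same renderer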