Reviews: Chrome


Rodrigo Fonseca

Nov 15, 2010, 7:47:56 PM
to CSCI2950-u Fall 10 - Brown
Hi,

Please post your reviews of Chrome as a reply to this message.

Thanks,
Rodrigo

Sandy Ryza

Nov 16, 2010, 2:16:34 AM
to CSCI2950-u Fall 10 - Brown
Title:
Isolating Web Programs in Modern Browser Architectures

Authors:
Charles Reis, Steven D. Gribble

Date:
EuroSys '09

Novel Idea:
The authors propose a browser architecture that places rendering
components into different OS-level processes. Their approach divides
related pages into browsing instances and further divides pages
within browsing instances into site instances (pages from a single
site). It
attempts to group pages belonging to a single site instance into a
process and isolate them from other processes with the goals of
improving performance, fault tolerance, and security, among others.
They implement their changes in the Chromium browser.
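The grouping described above can be sketched as a toy illustration. This is not Chromium's actual logic: real browsers derive the "site" from the registered domain name (via the Public Suffix List), whereas this sketch naively takes the last two host labels; the URLs are made-up examples.

```python
from urllib.parse import urlparse

def site_key(url):
    """Approximate the paper's notion of a 'site': protocol plus
    registered domain. Naive: takes the last two host labels instead
    of consulting the Public Suffix List."""
    parts = urlparse(url)
    labels = parts.hostname.split(".")
    domain = ".".join(labels[-2:]) if len(labels) >= 2 else parts.hostname
    return (parts.scheme, domain)

def group_site_instances(page_urls):
    """Group the pages of one browsing instance into site instances;
    each group would get its own renderer process."""
    groups = {}
    for url in page_urls:
        groups.setdefault(site_key(url), []).append(url)
    return groups

pages = [
    "https://mail.google.com/inbox",
    "https://docs.google.com/doc",
    "https://example.com/page",
]
instances = group_site_instances(pages)
# The two google.com subdomains share a site key, so they fall into
# the same site instance; example.com gets its own.
print(instances)
```

Under this policy, connected pages from the same site can still script each other (they share a process), while unrelated sites are kept apart.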

Main Result(s):
The architecture improves performance by removing the effect of slow
tabs on other tabs, with only minor overhead caused by using more
processes. It improves fault tolerance by protecting tabs from the
effect of other tabs crashing. However, the improvements come with a
significant increase in memory used, up to 4x for blank tabs, and more
than double for average pages.

Evidence:
The authors implemented their architecture in the open-source Chromium
browser, and ran comparisons against a single-process Chromium
implementation. They ran their experiments on a computer running
Windows XP equipped with a dual-core 2.8 GHz Pentium D processor.
They provide numbers on performance in a few different circumstances,
including how tabs respond when others are lagging and how fast a set
of tabs opened at the same time take to load. They also provide
numbers on memory used.

Impact:
According to Wikipedia, Chrome is the third most widely used browser,
with an 8.47% browser usage share.


Prior Work:
The OP browser uses processes for different web pages and for
different browser components (JavaScript engine, renderer, etc.). In
the Tahoma browser, web programs are isolated in virtual machines, but
boundaries must be explicitly specified.

Competitive Work:
Chrome mainly competes against commercial single process browsers such
as Internet Explorer and Firefox, although these are increasingly
transitioning to multi-process architectures.

Reproducibility:
All the code is open source.

Criticism:
The accountability benefit isn't much and definitely doesn't rank with
the other benefits. Is this an expected benefit to the user? How are
they expected to connect processes in the task manager to tabs in
their browser? They might only want to kill the process taking up the
most memory if it's non-essential.

Question & Idea For Future Work:

Would there be some way for servers to be able to distinguish between
different browsing instances? What protocol changes would this
entail? Does this make any sense at all?

Tom Wall

Nov 15, 2010, 8:04:11 PM
to CSCI2950-u Fall 10 - Brown
Isolating Web Programs in Modern Browser Architectures
Charles Reis, Steven D. Gribble
EuroSys 2009

Novel Idea:
In the early days of OSes there were isolation issues between
different programs. Today, as the web evolves from static content to
dynamic web programs, browsers face a similar challenge: multiple web
programs running in a single browser can adversely affect each other.
The authors define a browser model that helps isolate web programs
while remaining compatible with existing applications.

Main Result:
In order to retain compatibility with existing web applications, they
have to make some compromises to their model. They implement varying
levels of isolation in Google Chrome, using the process as a boundary.

Evidence:
They do a couple of tests on tab startup time and memory usage. While
it uses slightly more memory, pages are usually more responsive and of
course are tolerant of crashes in other processes.

Impact:
This is pretty useful and even with a slight memory overhead it is
definitely worth it. Firefox and the other browsers are beginning to
follow suit.

Reproducibility:
Chromium is open source and most browsers have since implemented
similar functionality.

Their specific tests might be hard to reproduce because they don't go
into too much detail about what sites they visit (or more importantly,
the size and nature of each site), but at the same time they get their
point across and it wouldn't be hard to design a similar test to come
up with the same conclusions.

Similar Work:
[Reis 2007b] and [Cox 2006] both address the same problem, but they do
so in a different way. Instead of implicitly inferring program
boundaries, they call for an explicit definition of each component of
a web program.

Questions:
How prevalent is the cross-site JavaScript API that is not supported?
Have other browsers encountered the same problem/found a solution for
it?

Criticism:
Overall I thought it was a pretty nice paper; however, their claims
about accountability (3.3) seem somewhat over-hyped. You could already
figure out which web program is to blame for poor performance
(usually) due to the nature of the program or by viewing the source if
necessary. And the fact that programs run in different processes
doesn't help too much since they all show up as chrome.exe in the task
manager; you can't (as far as I know) figure out which process is
running what without killing it and seeing which tab died. In the
monolithic model, you could probably close a suspect tab and witness
the drop in memory for the same effect.

Future Work:
Work on getting site instance credentials to be isolated. Possibly a
new, backwards compatible cookie model?


Zikai

Nov 15, 2010, 9:32:12 PM
to CSCI2950-u Fall 10 - Brown
Paper Title: Isolating Web Programs in Modern Browser Architectures

Author(s):
Charles Reis (University of Washington)
Steven Gribble (Google)

Date/Conference: EuroSys 09

Novel Idea: Use the abstractions of web programs and web program
instances to define program boundaries in the browser that isolate
program instances from each other. Use sites, browsing instances,
and site instances to identify the boundaries concretely.

Main Results: (1) Present abstractions of web programs and web program
instances, and show how these abstractions and their concrete
definitions (sites and site instances) clarify how browser components
interact and how appropriate program boundaries are identified.
(2) Identify backwards compatibility tradeoffs that constrain how web
content can be divided into programs without disrupting existing web
sites
(3) Design a multi-process browser architecture that isolates web
program instances from each other. Implement Google Chrome browser
based on the architecture. Evaluate Chrome in terms of robustness and
performance and compare it with monolithic architecture browser.

Evidence: (1) In Part 3.3, the authors perform a theoretical analysis
of how Chrome's multi-process browser architecture addresses various
problems like fault tolerance, accountability, memory management,
performance, and security.
(2) In Part 4, the authors evaluate Chrome's process-per-site-instance
mode on robustness (fault tolerance, accountability, and memory
management) and performance (responsiveness, speedup, and latency)
and compare it with its monolithic mode.

Prior Work: Chromium’s security architecture [Barth 2008], Chromium’s
document [Google 2008]

Competitive Work: IE8, Safari, Firefox, OP browser[Grier 2008],
Tahoma[Cox 2006], SubOS[Ioannidis 2001], Mozilla Prism[Mozilla 2008],
Fluid[Ditchendorf 2008]

Reproducibility: Because Chrome is widely available, it is easy to get
a copy and reproduce the experiments.

Question: Is the architecture designed only for providing isolation
between well-behaved web programs? Can it deal with malicious scripts?

Criticism: Like the Gazelle paper, the evaluation of the robustness
part is not sufficient. The authors only use Chrome's "about:crash"
internal testing mechanism but do not test the browser intensively
against various ill-behaved websites. Therefore, they cannot
guarantee that Chrome's robustness holds for all possible kinds of
websites.




Basil Crow

Nov 15, 2010, 11:39:31 PM
to brown-csci...@googlegroups.com
Title: Isolating Web Programs in Modern Browser Architectures

Authors: Charles Reis and Steven D. Gribble

Date: EuroSys 2009

Novel idea: The architecture of web browsers must be modernized. Web applications must be isolated from each other in the browser in order to achieve true fault tolerance and high performance.

Main results: The authors present and justify a multi-process browser architecture and discuss its implementation in Google Chrome.

Impact: If you haven't been using Chrome, you should try it.

Evidence: The authors verified that crashes only affected their current rendering engine process or plug-in process. They also measured user-perceived latency by measuring the time interval between right-clicking on a page and the corresponding display of the context menu (the multi-process model won hands-down). The authors did reveal that their current implementation of the multi-process model incurs a large cost in terms of memory overhead.

Prior work: Does anyone else remember the "Launch folder windows in a separate process" option [1] from Windows 2000? If you had the RAM, it was great. Apparently that option is still disabled by default in Windows 7, for some reason which is beyond me.

Competitive work: Apple and Mozilla are still catching up.

Reproducibility: Chromium is open source (BSD-licensed).

[1] http://img683.imageshack.us/i/folwin.png/

Shah

Nov 15, 2010, 9:10:34 PM
to CSCI2950-u Fall 10 - Brown
Title:

Isolating Web Programs in Modern Browser Architectures

Authors:

[1] Charles Reis
[2] Steven D. Gribble

Source and Date:

EuroSys '09: Proceedings of the 4th ACM European conference on
Computer systems, Nuremberg, Germany. March 31 - April 3, 2009.

Novel Idea:

The authors present a multi-process browser architecture that
addresses today’s reliability issues seamlessly by providing
isolation.

Main Result:

The scientists present three main results in this paper:

[1] First, they present abstractions of web programs and determine how
their boundaries can be determined.

[2] Second they identify backward compatibility constraints.

[3] Third, they present a multi-process browser - Google Chrome - and
evaluate its performance.

Impact:

This paper has been cited some 40-odd times in just over a year.
Given that the idea of a multi-process browser is novel, it's bound
to only grow in popularity in the years to come. In fact (as
mentioned in the paper), Microsoft's Internet Explorer has already
followed suit with the idea of isolating processes in its browser.

Evidence:

In Section 4, the scientists present evidence to support their claims.
Specifically, they conduct experiments to test all the following
aspects of the browser:

[1] Fault Tolerance

[2] Accountability

[3] Memory Management

[4] Responsiveness

[5] Speedup

[6] Latency

Finally, they discuss the overhead of the multi-process model as well
as whether it remains compatible.

Prior Work:

In Section 5, the authors mention that many popular browsers like
Firefox and Safari run as a single process. They further add that they
are vulnerable to attacks. They also state that these browsers face
robustness and compatibility challenges.

Competitive Work:

In the same section as above the researchers list several other
browsers that have been decomposed to provide modular safety.
Specifically they mention the OP browser, Tahoma and Internet Explorer
8 - all of which offer multi-process architectures or attempt to
isolate programs.

Reproducibility:

Although the source code is available, the authors don't delve into
much detail about the specific setup of each of their experiments.

Questions:

[1] Is the drawback of multi-process browsers that they hog system
resources? Or is this not true?

[2] Why did the authors resort to using Windows XP for their tests?

Criticism:

The evaluation section is not very thorough. The authors fail to
provide enough details or conduct enough experiments. Further, their
use of a somewhat obsolete OS (Windows XP) doesn’t help their case.

Ideas for Further Work:

As the scientists mention at the end of Section 3 they leave the study
of the secure isolation of web principals as possible future work.


Dimitar

Nov 15, 2010, 10:58:19 PM
to CSCI2950-u Fall 10 - Brown
Isolating Web Programs in Modern Browser Architectures

Authors: Charles Reis, Steven D. Gribble

Date: April 1-3, 2009

Novel Idea: Current web browser architectures do not provide
sufficient isolation between concurrently executing programs, which
leads to performance, fault-tolerance, and isolation problems. In
order to prevent some of those problems, the authors try to identify
program boundaries, create a process per site instance, and preserve
compatibility with existing content.

Main Results: The Chromium web browser uses this architecture. The
architecture makes the browser fault tolerant and increases
performance. In their architecture, each instance of a web program
has its own rendering engine that contains the code for rendering
HTML and for parsing and executing web programs. Active plug-ins are
executed in separate processes, which allows Chromium to preserve
compatibility with existing content. The remaining code is put in
the Browser Kernel (one process), which handles cookies, cache, and
history across programs.

Impact: The Chrome browser is considered one of the most robust
browsers. Other browsers are likely to adopt the same architecture
in the future.

Evidence: The authors compare the benefits and the costs of moving
from a monolithic to a multi-process architecture. The results
clearly show that the multi-process browser is better for
interactive programs, that performance can be increased in some
cases, and that it can be more fault tolerant. The test results also
show some overhead, which includes the use of extra memory and a
slower startup time for blank pages due to the creation of a new
process.

Competitive work: Several research proposals have created
multi-process browsers, such as OP, but most have done so at the
cost of compatibility.

Reproducibility: I think the test results are easily reproducible and
implementation of their architecture
is also reproducible since Chromium is open source.

Criticism: The authors could have compared Chromium with other
browsers that have multi-process
architecture.





Duy Nguyen

Nov 15, 2010, 11:42:05 PM
to brown-csci...@googlegroups.com
Paper Title 
Isolating web programs in modern browser architectures

Authors
Reis, Charles and Gribble, Steven D.

Date 
EuroSys 2009 

Novel Idea 
A new browser architecture: a web site is divided into independent
components, and the browser has multiple processes to handle these
components.

Main Result
Despite some memory overhead, Chrome shows robustness in fault
tolerance, memory management, accountability, and performance.

Impact
I think this work has big impact in the context of browser development,
especially when high end PCs are more and more affordable to users and
memory overhead is not a big issue.

Evidence 
The experiments show that the multi-process browser not only has no
side effects on the user experience when surfing the web but also
gains better performance. Single-process Chrome has less memory
overhead, but multi-process Chrome reclaims memory better once it is
no longer needed.

Prior Work
SubOS, Tahoma,..

Competitive Work
Gazelle

Reproducibility
Yes. Chromium is open source.

Criticism 
They should do some direct comparisons with IE, FF, Safari as well.

Visawee

Nov 15, 2010, 11:22:53 PM
to CSCI2950-u Fall 10 - Brown
Paper Title :
Isolating Web Programs in Modern Browser Architectures


Author(s) :
Charles Reis, Steven D. Gribble


Date :
EuroSys’09, April 1–3, 2009, Nuremberg, Germany


Novel Idea :
A multi-process browser architecture that isolates web program
instances from each other, improving fault tolerance, resource
management, and performance.


Main Result(s) :
(1) Fault tolerance: Chromium can isolate a crash of one web program
instance from the others.
(2) Memory management: Chromium can isolate the memory usage of one
web program instance from the others.
(3) Speedup and latency: Multi-process Chromium is significantly
faster than monolithic Chromium, especially when running concurrent
web program instances on a multi-core processor.
(4) Overhead: Multi-process Chromium consumes more memory than
monolithic Chromium, and the consumption grows linearly with the
number of tabs opened.


Impact :
A faster web browser that performs well even when a user opens many
web program instances. It also provides resource management and fault
isolation between web program instances.


Prior Work :
The prior works are mainly monolithic browsers, which are prone to
robustness and performance issues.


Competitive Work :
Internet Explorer 8 also has a multi-process architecture that can
offer some of the same benefits as discussed in the paper. IE 8
separates browser and renderer components, but it doesn’t isolate site
instances from each other.


Evidence :
The authors set up experiments comparing between Multi-process mode
and Monolithic mode of Chromium. The experiments cover Fault
Tolerance, Accountability, Memory Management, Responsiveness, Speedup,
Latency, and Overhead of these two modes.


Reproducibility :
The results are reproducible. The experiments are explained in detail,
and we can also obtain the Chromium browser on the Internet.


Criticism :
- The authors should also compare Chromium with other Monolithic
browsers (e.g., Firefox, Safari).


Abhiram Natarajan

Nov 15, 2010, 7:52:28 PM
to CSCI2950-u Fall 10 - Brown
Paper Title: Isolating Web Programs in Modern Browser Architectures

Author(s): Charles Reis, Steven D. Gribble

Date: 2009, Eurosys

Novel Idea: (1) Abstraction of web programs and program instances (2)
Identification of backward compatibility tradeoffs that constrain how
web content can be divided into programs without disrupting existing
websites (3) Envisioning of a multi-process browser architecture that
isolates these web program instances from each other

Main Result(s): A lightweight browser that exhibits improved fault
tolerance, resource management, and performance.

Impact: A browser that is robust! Clearly Chrome forced Firefox and
IE to look into their robustness.

Evidence: The authors perform quantitative analysis of the benefits
and costs of the architecture.

Prior Work: Previously existing browsers(!), and work such as Grier
2008, Cox 2006, Ioannidis 2001, Zeigler 2008, etc.

Competitive Work: The authors provide thorough evidence of how the
change from a monolithic to a multi-process architecture improves
robustness in terms of fault tolerance, accountability and memory
management. Also, they quantify the memory overhead for the
architecture and discuss how Chromium satisfies backward
compatibility.

Reproducibility: The numbers presented in the paper should be
reproducible given that it is open source.

Criticism: Chrome is the "default browser" on my system, so
personally I like it better than the others. I have heard it is not
a 10/10 w.r.t. stability like Firefox, and I have found that to be
true in my experience. However, it is a phenomenal bit of
innovation. I am a huge Google fan.


Hammurabi Mendes

Nov 15, 2010, 9:21:02 PM
to brown-csci...@googlegroups.com
Paper Title

Isolating Web Programs in Modern Browser Architectures

Authors

Charles Reis, Steven Gribble

Date

EuroSys, April 2009

Novel Idea

Isolating "web programs" in different processes, according to rules
that define how these web programs could interact, improving security
and responsiveness to user input.

Main Results

This paper characterizes web pages as programs and clearly defines
the terms involved in that characterization. Moreover, the authors
provide a browser implementation that actually isolates these
programs in different processes.

Impact

Isolating web programs in different processes increases overall
browser security (sensitive information is exposed more sparingly),
fault tolerance (crashes are confined to a single web program), and
responsiveness (having multiple processes naturally benefits from
multiprocessor architectures).

Evidence

The paper is first concerned with defining the terms used to
characterize web pages as programs (and their instances): sites,
browsing instances, and site instances. Then the authors describe
the isolation policies implemented in Chromium, bringing up
implementation issues among these policies.

They evaluate their implementation on a dual core machine, and discuss
fault-tolerance, security, speedup, latency, compatibility and
overhead. Although the discussion is ample, it lacks some depth for
some evaluation criteria (see Questions+Criticism).

Prior Work + Competitive Work

Browsers in which web pages run in a single process, such as Firefox
and Safari -- they have been analyzed by the authors (+ Bryan Bershad
and Henry Levy) as being fault-prone in a previous technical report.

They mention the OP and Tahoma browsers, but claim that neither
gives enough detail about how pages interact.

IE8 is presented as a similar approach to isolating web pages, but
the authors note that multiple site instances are not isolated from
each other in that browser.

Reproducibility

I believe that the experiments are reproducible. The paper gives a
clear definition of the metrics in question and names the tools used
in the tests (the about:crash feature, the Chromium's own task
manager, the Alexa service, and so on).

Questions + Criticism

[Criticism] The merit of the paper is discussing the issues
involving isolation, a relatively simple concept, in a very broad
manner. The trade-off is that the evaluation section lacks a more
involved technical discussion.

More specifically, in the evaluation section they could have discussed
other things [Questions]:

1) How "big" web programs would impact response delay when using a
multiple-processes approach, and how would that compare to the
response delay in a single-process approach? (They have a simplified
version of this proposed test.)

2) If we increased the number of processor cores and the number of
concurrently opened web sessions, how would this show on the speedup?

3) How many milliseconds of latency are reasonable to user
perception? (I've been told that 100 ms is the threshold at which
users start noticing a delay - it would be nice if they discussed
this topic and provided sources instead of relying on intuition.)

4) Are there any statistics about how often popular plugins crash?

Ideas for Further Work

Using OS-level virtualization (FreeBSD jails, Linux V-Server) to
increase process isolation in situations that require extra security.


Jake Eakle

Nov 16, 2010, 10:02:11 AM
to brown-csci...@googlegroups.com
Paper Title

Isolating Web Programs in Modern Browser Architectures 

Author(s)

Charles Reis, Steven D. Gribble 

Date

2009

Novel Idea

Improve web browser stability by isolating groups of pages from the
same domain.

Main Result(s)

A multi-process architecture that identifies and isolates 'site
instances' and browser plugins, leading to vastly improved stability
and concurrency over the traditional single-process 'monolithic'
browser model.

They seek to provide a browser viable in the current 'marketplace',
and to do so must make a number of compromises, sacrificing full
isolation for the ability to let many existing web applications
function, and to avoid requiring standards changes or other input
from web page owners.

Impact

A great browser that lots of people use!

Evidence

They provide a bunch of compelling evidence for their claims,
including a remarkable table of latencies incurred by loading other
pages in the background under each model - the multi-process model
exhibits almost no increased latency, while the monolithic model
incurs several orders of magnitude more.

Prior Work

They mention other proposed isolation schemes, but claim to be the
first to make the practicality/purity tradeoff that lets Chrome
compete in the wider world of normal internet users.

Reproducibility

Chromium is open source.

Question

When are the compromises they have to make bad? Does the user ever
notice odd crash behavior (one site crashing another that they
thought was unrelated)? They mention the tradeoff, but don't go into
how bad it is.

Criticism

Though Chromium is open source, it would still be nice if they spent
a bit more time talking about implementation details. The paper
concentrates almost exclusively on making general, theoretical
claims, and then on proving that they have implemented them, but it
leaves out most of the middle stages.

Ideas for Further Work

A more full-featured pdf viewer lol

oh whoa i never clicked send!? ack!
--
A warb degombs the brangy. Your gitch zanks and leils the warb.

Siddhartha Jain

Dec 13, 2010, 3:34:38 AM
to brown-csci...@googlegroups.com
Novel Idea:
The main idea is to separate tasks that other browsers perform in
one process, such as rendering different pages, into separate
processes.

Main Results:
The framework is described. Related pages (pages belonging to the
same site) are grouped under the same process. There is a process
limit: after 20 processes, old processes are reused to mitigate
process overhead (I believe the limit is 35 in modern versions,
though I could be wrong).
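The process-limit behavior described above can be modeled as a simple assignment policy. Both the cap of 3 (shrunk here so the rollover is visible; the review mentions 20) and the round-robin reuse rule are illustrative assumptions, not Chromium's exact policy:

```python
class ProcessPool:
    """Toy model of a renderer-process cap: new site instances get
    fresh process slots until the limit, then reuse existing ones."""

    def __init__(self, limit):
        self.limit = limit
        self.processes = []   # each entry: list of site instances
        self.next_reuse = 0   # round-robin cursor (assumed policy)

    def assign(self, site_instance):
        # Below the cap: give the site instance its own process slot.
        if len(self.processes) < self.limit:
            self.processes.append([site_instance])
            return len(self.processes) - 1
        # At the cap: pack the new instance into an old process.
        slot = self.next_reuse
        self.processes[slot].append(site_instance)
        self.next_reuse = (slot + 1) % self.limit
        return slot

pool = ProcessPool(limit=3)
slots = [pool.assign(f"site-{i}") for i in range(5)]
# First 3 instances get fresh slots 0, 1, 2; the remaining two are
# packed back into slots 0 and 1.
print(slots)
```

Once the cap is reached, unrelated sites start sharing a process again, so the isolation guarantees degrade gracefully rather than the process count growing without bound.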

Impact:
A popular browser which has some interesting ideas.

Evidence:
The latency figures for starting up new tabs are given, as is the
memory overhead. Fault tolerance and accountability are discussed
(the latter is a bit specious, since one may not know just by
looking at the process table which tab is responsible).

Reproducibility:
Open source!

Question:
If similar pages were loaded on Chrome vs. say Firefox, how would the latency figures for the browser and the affect
on the performance of other non-browser processes look like.

Ideas for future work:
Maybe it would make more sense to group pages into processes by how
easy they are to render? If a user opens 50 tabs, 40 of which are
easy to render, it might be better for the same rendering process to
be responsible for those 40 pages.



Joost

Dec 14, 2010, 5:16:33 PM
to CSCI2950-u Fall 10 - Brown
Title: Isolating Web Programs in Modern Browser Architectures
Authors: Charles Reis, Steven D. Gribble
Date: EuroSys '09, April 1-3, 2009

Novel Idea: The authors propose building a web browser that seeks to
tackle the challenges of the increasing complexity of the web by
dividing different domains into different processes, similar to the
divide that occurs on an OS level.
Main Results: The authors have successfully prototyped the new
browser (as evidenced by Chrome's existence), and the effect of
having each tab be its own process is a noticeable absence of
performance drop when loading two websites concurrently.
Impact: Once again, a Google product is on its way to establishing a
market share in a new field, and by extension, allowing Google to
more quickly evolve its own web services, since it now controls both
the front end and the back end.
Evidence: The authors compared their browser with multi-processing
enabled and disabled and showed that latency in the multi-process
configuration is significantly lower than in the single-process
setup. However, there was a noticeable increase in memory usage in
the multi-process configuration (about a factor of 2).
Prior Work: The authors drew a lot from work on monolithic browsers
such as Firefox and Safari, and also from the early days of
operating systems, threads, and multiprocessing.
Reproducibility: Running the experimental setup that the authors had
would be easy given that Chrome is free to download.
Question/Criticism: It would have been nice to see a comparison
between Chrome and other browsers currently on the market, not just
a comparison of the monolithic vs. multi-process modes.
