11/17 Reviews - Chrome


Rodrigo

Nov 16, 2009, 6:21:44 PM
to CSCI2950-u Fall 09 - Brown
Please post your reviews here...

Juexin Wang

Nov 17, 2009, 12:01:02 AM
to brown-cs...@googlegroups.com
Paper Title  
Isolating Web Programs in Modern Browser Architectures
 
Author(s)  
Charles Reis and Steven D. Gribble

Date  
April, 2009
 
Novel Idea  
Divide web content into separate web programs, and isolate each web program instance by running it and its components in its own OS process, improving the browser's robustness and performance.

 
Main Result(s) 
- Present abstractions of web programs and program instances, show that these abstractions clarify how browser components interact and how appropriate program boundaries can be identified.
- Identify backwards compatibility tradeoffs that constrain how web content can be divided into programs without disrupting existing web sites.
- Present a multi-process browser architecture that isolates these web program instances from each other, improving fault tolerance, resource management, and performance.
- Discuss how this architecture is implemented in Google Chrome, and provide a quantitative performance evaluation examining its benefits and costs.


Impact  
Considers web pages as programs rather than simple documents, an approach that can improve browser performance. It also separates web pages into instances and isolates them from each other using OS processes. This architecture reduces the impact of failures and isolates memory management, so each instance can run safely.


 
Evidence  
- Define a web program based on the range of origins to which its pages may legally belong.
- Whether two containers are connected can be determined by checking whether their DOM bindings expose references to each other.
- A browsing instance, which matches the notion of a "unit of related browsing contexts," is a connected subset of the browser's page containers.
- New browsing instances are created each time the user opens a fresh browser window, and they grow each time an existing window creates a new connected window or frame.
- A site instance is a set of connected, same-site pages within a browsing instance. Only one site instance per site is allowed within a given browsing instance.
- Pages from the same site have no references to each other when they belong to different site instances in separate browsing instances. In all, pages in separate site instances are independent of each other; whether they can reference each other depends on whether they belong to the same browsing instance.
- The browsing instance and site instance boundaries are orthogonal to the groupings of windows and tabs in the browser.
- Section 2.3 discusses practical issues in the runtime environment that affect how site instances can be implemented and isolated from each other.
- A rendering engine process is created for each instance of a web program, including the components that parse, render, and execute web programs. A second process, known as the browser kernel, contains most of the remaining browser components, such as storage, the network stack, and the UI. A third type of process runs plug-ins.
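The grouping rules above can be sketched as a toy model. This is an illustrative simplification, not Chromium's actual code: the class and method names are made up, and real site identification must handle public suffixes like co.uk rather than just taking the last two host labels.

```python
from urllib.parse import urlparse

def site_of(url):
    """Approximate a 'site' as (scheme, registered domain).

    Naive simplification: keep only the last two labels of the
    hostname, so mail.example.com and www.example.com are one site.
    """
    parts = urlparse(url)
    host = parts.hostname or ""
    domain = ".".join(host.split(".")[-2:])
    return (parts.scheme, domain)

class Browser:
    """Toy process-per-site-instance model: one renderer process per
    (browsing instance, site) pair. A browsing instance is a set of
    connected windows/frames; a site instance is the subset of its
    pages belonging to one site."""

    def __init__(self):
        self._next_pid = 1
        self._renderers = {}  # (browsing_instance_id, site) -> pid

    def renderer_for(self, browsing_instance, url):
        key = (browsing_instance, site_of(url))
        if key not in self._renderers:      # new site instance -> new process
            self._renderers[key] = self._next_pid
            self._next_pid += 1
        return self._renderers[key]
```

Under this sketch, two connected same-site pages share a renderer, while the same site opened in a fresh window (a new browsing instance) gets a separate process.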


 
Prior Work
N/A
 
Competitive work  
N/A
 
Reproducibility  
 N/A
 
Question  & Criticism  
There are three types of processes, but only the rendering engine processes can be said to isolate each instance; the instances still share the other two process types for storage, UI, networking, and plug-ins. So compared to the "safe" browser the authors describe in the first section, some unsafety still exists.

The authors should show us an example browsing session: how many rendering engine processes are really generated, and how. In other words, how well does the isolation work in practice? Because the arguments in sections 2 and 3 are rather strong claims, they should show us that the browser really can divide pages into different instances.

The instance boundaries are not sufficient; the authors also mention this.


--
J.W
Happy to receive ur message

Steve Gomez

Nov 16, 2009, 7:13:08 PM
to CSCI2950-u Fall 09 - Brown
Author: Charles Reis and Steven D. Gribble
Paper Title: "Isolating web programs in modern browser architectures"
Date: In EuroSys '09

This paper presents the Google Chrome browser architecture, built off
of Chromium, and motivates its design by an analysis of web program
abstractions that demonstrate how web content can be divided into web
programs. The authors state a general goal of being able to use these
abstractions to execute web programs in a high-performance and robust
way.

Prior work includes 'monolithic' browsers (single process) which do
not handle web programs with the robustness that Chrome claims. Other
experimental work is being done to improve process isolation in
projects like Tahoma, SubOS, and the multi-process IE8. Site-specific
browsers, like Mozilla Prism and Fluid, also include process isolation
but use a different paradigm for web programs.

This work has made some impact already. On delivering the goal of
'robustness', the paper does a good job detailing how well Chrome
reclaims memory, gracefully fails during process failure/bugs, and
manages resource accounting. Taken with the performance evaluation,
we see that Chrome is a viable architecture, at least for certain web
contents.

The evaluation of Chrome is very straightforward. Load time, memory
overhead, and latency are measured experimentally, comparing the multi-
process version of Chrome versus the monolithic standard
architecture. The results generally show that Multi-process is faster
and can load with less delay in rich content pages than in
Monolithic. For blank pages, Monolithic has less overhead, and
overall memory overhead (tested by measuring physical memory + an
approximation of shared resources) was higher for Multi-process.

There are some simplifications to criticize in the evaluation. For
starters, it would be nice to compare Multi-process against the big
browser players (Firefox, IE, Safari) instead of monolithic Chromium.
The authors argue that switching the flag in Chromium is nice because
it allows them to just test the architecture change, without
implementation details. But if Google wants to push Chrome to users
(and it does) it makes sense to see how it stacks up. Maybe those
implementation details aren't completely trivial.

The explanation of approximating shared memory between processes also
seemed a bit of a hand wave. The authors could try to justify
averaging the total shared bytes, or explain when this leads to a bad
approximation. For future work, is there a better way to monitor
inside the architecture?
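The approximation being criticized here can be written down concretely. The following is a hedged sketch of my reading of the review's description (charging each process its private physical memory plus an equal share of the total shared bytes); the function name and exact formula are illustrative, not code from the paper.

```python
def per_process_footprint(private_bytes, total_shared_bytes):
    """Estimate each process's memory footprint as its private
    (physical) memory plus an equal 1/n share of the total shared
    bytes. This equal-share averaging is what the review calls a
    hand wave: a process that shares little is over-billed, and one
    that shares a lot is under-billed."""
    n = len(private_bytes)
    return [p + total_shared_bytes / n for p in private_bytes]
```

For example, two renderers with 10 MB and 20 MB private memory and 4 MB of shared pages would be charged 12 MB and 22 MB, regardless of which one actually maps the shared pages.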

qiao xie

Nov 17, 2009, 12:11:14 AM
to brown-cs...@googlegroups.com
[Paper Title]

Isolating Web Programs in Modern Browser Architectures
 
 
[Author]

Charles Reis and Steven D. Gribble

[Date]
April, 2009

[Novel Idea]
Isolating web instances from each other by using OS processes to contain failures and memory usage. This mechanism keeps each separated process safe and improves the browser's robustness and performance.

[Main Result]
Present abstractions of web programs and program instances.

Identify backwards compatibility tradeoffs that constrain how web content can be divided into programs without disrupting existing web sites.
Present a multi-process browser architecture that isolates these web program instances from each other.
Discuss how this architecture improving fault tolerance, resource management, and performance.

[Impact]
Isolate web pages into instances by using OS process to provide an instance-safe browser, that means each instance has its own memory management by OS and suffer little impact of failures from others.
 

[Evidence]
- Web program is based on the range of origins.
- DOM bindings can be used to judge if two containers are connected, by seeing references.
- Pages do share some other communication channels in the browser.
- Subdivide a browsing instance into groups of pages from the same web program to form site instances. There can be only one site instance per site within a given browsing instance.
- The same site can have multiple site instances in separate browsing instances. In this case, pages from different site instances have no references to each other.
- A single browsing instance may contain site instances from different sites (e.g., sites A and C). In this case, pages in different site instances do have references to each other.
- Pages in separate site instances are independent of each other.

- The browsing instance and site instance boundaries are orthogonal to the groupings of windows and tabs in the browser.
- There are three types of processes: a rendering engine process for each instance; a browser kernel process serving the other components, such as storage, network, and UI; and plug-in processes.
 
[Question]  
 
The authors say that browsers currently lack a precise program abstraction and that the instance boundaries are not sufficient. So how much would we gain if these were improved? The evaluation doesn't tell us.
 
 
[Criticism]  
If the division into instances is not good, then the isolation is not sufficient. The evaluation doesn't show details about the state of the rendering engine processes; for example, if we visit 10 pages from 5 sites, we don't know how the instances are decided.



Sunil Mallya

Nov 16, 2009, 9:52:38 PM
to CSCI2950-u Fall 09 - Brown
Paper Title
Isolating web programs in modern browser architectures
Author(s)
Reis, Charles and Gribble, Steven D.
Date
EUROSYS 2009

Novel Idea
Showing that web content can be divided into separate web programs and
running separate instances of these programs within the browser.

Main Result(s)
The paper acknowledges the difficulty of isolating program boundaries
in web content by describing the following challenges: finding a way
for browsers to identify program boundaries, addressing complications
when trying to preserve compatibility, and re-architecting the browser
to support the isolation. The authors then discuss these challenges in
detail, such as why it is difficult to find programs in browsers and
the problems faced by monolithic browser architectures. They then
define ideal abstractions and concrete definitions for isolating
browsing instances.
The authors then propose a way to incorporate all of these by
implementing the architecture in the open-source Chromium web browser.
This architecture dedicates one process to each program instance and
one to the browser components. The benefit of doing so is isolating
the memory management and errors that occur in individual instances,
which makes the browser more robust.

Impact
This is certainly a big leap in terms of the evolution of browsers and
we could see a lot of browsers moving this way.
We could have more fancy stuff like this: A 2x Faster Web
(http://blog.chromium.org/2009/11/2x-faster-web.html)

Evidence
The evaluation is mainly aimed at measuring the effectiveness of
moving from a monolithic to a multi-process architecture in Chrome.
To show Chrome is more robust, they simulate page crashes in the
browser and find that in Chrome only the particular browsing instance
crashes.
To evaluate the performance impact of the multi-process architecture,
they also measure responsiveness, speedup, and latency while loading
additional pages (real or blank).

Criticism
Still not perfect; I have personally experienced problems with Ajax
requests in Chrome.

Ideas for further work
Maybe there will be a day when the whole web is controlled by Google,
and browsers and Google could act as client and server in a literal
sense, providing more effective one-to-one services with more
security.

小柯

Nov 16, 2009, 11:14:31 PM
to brown-cs...@googlegroups.com
Paper Title:    Isolating Web Programs in Modern Browser Architectures

Authors:        Charles Reis
                    Steven D. Gribble

Date:            2009

Novel Idea:
    Proposing a method to identify the boundaries of web programs so that different site instances cannot interfere with each other. Because of backward compatibility and other considerations, a program is defined by its domain name. Different web programs run in different processes to guarantee isolation and security.

Main Result:
    The whole idea is implemented in Google Chrome and Chrome is tested to evaluate its fault-tolerance and robustness.

Impact:
    Nowadays, web applications have become more complicated than before; they are no longer just documents but programs. Therefore, there should be some isolation between programs to protect them from affecting each other. After this idea was proposed and implemented in Chrome, the issue became a new field of research.

Evidence:
    The authors define much new terminology and explain why and how they identify and separate different web programs. Some evaluation is then provided to show that the design is not only robust but also does not create much overhead.
  
Prior Work:

Competitive work:
    Gazelle

Reproducibility:
    Yes.

Question:
    Chrome bases its isolation on the process mechanism, which means each process has its own address space and CPU time. Though this guarantees security, wouldn't it cause performance problems if many web programs are running? And because many browser tasks are assigned to different processes, wouldn't the browser be slow to start up?

Criticism:
  
Ideas for further work:
   


joeyp

Nov 16, 2009, 11:49:00 PM
to CSCI2950-u Fall 09 - Brown
Isolating Web Programs in Modern Browser Architectures

C Reis and S Gribble

This paper presents a browser architecture that attempts to model
instances of *web programs* running within it. The goal is to isolate
groups of related sites and pages that represent a cohesive web
program, both functionally and for performance reasons. There is a
lot of emphasis on backwards compatibility with existing web
programming techniques, including use of the window object in
JavaScript, and dealing with same-origin policy (SOP) issues.

The paper mostly makes an argument for Chromium's architecture from a
performance and robustness standpoint. The goal is mainly to take the
parts of the browsing activity that represent entire program-chunks of
behavior, and be able to divide these up into different processes.
This allows them to leverage the performance benefits of being in
totally separate processes, and also to fail separately (a collection
of tabs for one program can fail without the whole browser failing).
This separation, similar to the approach taken in the Gazelle paper,
seems like a good step forward, and an acknowledgement of the type of
computation that modern browsers are expected to handle.

The way that Chromium decides which windows will be part of the same
browsing instance is essentially a heuristic on common usage patterns.
Pages that get opened from an existing tab will use the same
rendering/JavaScript/DOM-bindings process, while user-opened windows
or tabs will start new rendering processes. Each rendering process
will handle all of its script executing and rendering in a *single*
thread.

It is pretty important that this happen in a single thread.
Javascript is definitively single-threaded, so it would not be a good
thing if per site instance there were multiple processes running
Javascript for the same "application." This actually might be a
problem with the process-per-site (as opposed to process-per-site
instance) model, if multiple site instances interacted with one
another.

It is a little bit of a security concern that changing the location of
a window keeps it within the same rendering process. This keeps
attacks that involve, say, putting a malicious link in an email,
viable. The link opens a new tab or window or whatever, but there
still might be exploitable forward and backward references
(window.opener and the like) around. The malicious site could even be
navigated to at a later time. To be clear, this is *always* a problem
in browsers that don't do any separation between windows and tabs, but
it is still something to keep in mind with Chromium's model of
separation. It also doesn't make that much sense logically - users
don't always open a new, fresh tab when they want to do something new
- and is mostly an implementation artifact.

In contrast, Gazelle is not so inhibited about starting new processes
and further isolating in these cases. From comparing the two, it
looks like the tradeoff is mostly in backwards compatibility - Chrome
lets sites leverage all of the inter-window interactions that are
available when they open the new windows themselves. Gazelle, in
contrast, requires everyone to go through its API to communicate
between different pages.


Kevin Tierney

Nov 16, 2009, 7:30:38 PM
to brown-cs...@googlegroups.com
Paper Title: Isolating Web Programs in Modern Browser Architectures
Author(s): Reis and Gribble
Date: EuroSys 09
Novel Idea:
Browsers can be improved through the use of a multi-process
architecture that isolates different websites, allowing only limited
interaction. The authors show how this can be achieved by using
abstractions of web programs to examine browser component
interactions, and present compromises for dealing with backwards
compatibility with the web.

Main Result(s)
Using a coarse-granularity approach in which entire web sites are
segregated into their own processes, the authors show that their
browser is competitive with a monolithic browser approach. They
acknowledge backwards compatibility issues relating to the DOM API,
and manage them by allowing for some shared data between processes.

Impact
This is an important paper in terms of advancing browsers. Browsers
are asked to do more and more, resulting in slow webpages and an
inability to take advantage of multi-core architectures. (On top of
security problems). This approach seems likely to be the dominant form
of web browser in the future.

Evidence + competitive work
The authors do some tests comparing monolithic vs. multi-process
browsers in which the multi-process tends to use more memory, and has
higher latency for several tasks (new tab, navigation), but not so
much that it would be unusable.

Prior Work
This work builds on monolithic browsers (firefox, safari) and browsers
that allow for multiple processes by starting new browser instances.

Reproducibility
I could reproduce something similar.

Question
How are plugins handled by this architecture? Are they sandboxed?

Criticism
Although cross-site scripting attacks should be mitigated by a good
implementation of a multi-process architecture, it seems like browser
plugins are a major security hole- if they are given access to make
system calls, they could potentially access resources from other
processes.

Ideas for further work
No

Dan Rosenberg

Nov 17, 2009, 12:03:48 AM
to brown-cs...@googlegroups.com
Paper Title
Isolating Web Programs in Modern Browser Architectures

Authors
Charles Reis, Steven D. Gribble


Date
April, 2009

Novel Idea
The authors present a model for defining how the browser can be divided into independent components, and demonstrate how Google implements Chrome as a multi-process architecture.

Main Result
The authors demonstrate their claims of robustness in fault tolerance, memory management, and accountability of individual web applications.

Impact
Chrome provides an alternative to existing browsers, trading some overhead in memory footprint for benefits in other areas.

Evidence
To demonstrate robustness and measure performance, the authors measure latency, memory usage, and the effects of a crash in several scenarios.

Prior Work
Chrome seeks to improve on existing browsers, which face difficulties due to a monolithic architecture that is ill-designed for the workloads of most modern web applications.

Reproducibility
The experiments provided could be easily reproduced, especially if the code inserted into the Chrome kernel process to measure time was provided.

Criticism
I appreciated the honesty with which they admit their goals focus on robustness, treating security as a separate issue to be handled elsewhere.  I'm assuming most of the unresolved security concerns have since been addressed in the actual implementation of Chrome.  I understand the reasoning behind comparing Chrome's multi-process implementation to its own monolithic mode, but I still would have liked to see comparisons to other browsers.  It's entirely possible that Chrome's monolithic mode is poorly implemented in comparison to other browsers, and so these experiments might not accurately portray the advantages of a multi-process architecture over a well-implemented monolithic browser.

Questions/Ideas for Further Work
Obviously, addressing the relevant security issues is a natural extension, as the authors admit.  I would be interested in techniques that could be employed to minimize the additional memory footprint associated with using multiple processes.  Finally, imposing a limit on the number of simultaneous rendering engines makes sense for performance reasons, but I wonder whether it may introduce robustness or security problems (for instance, what if a malicious web app is running on the same rendering engine as another app)?

Xiyang Liu

Nov 16, 2009, 11:50:01 PM
to brown-cs...@googlegroups.com
Paper Title           
Isolating Web Programs in Modern Browser Architectures

Author(s)           
Charles Reis, Steven D. Gribble

Date               
EuroSys’09, April 1–3, 2009

Novel Idea           
The paper presents the goals and design of Chrome, a multi-process browser. It isolates web programs by assigning them to different processes. There are four models of process assignment: monolithic, process-per-browsing-instance, process-per-site-instance, and process-per-site. The process-per-site-instance model provides the finest isolation granularity.
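The four assignment models differ only in which key groups pages into a renderer process. A minimal sketch, with hypothetical field names that are not Chromium's own:

```python
def renderer_key(model, page):
    """Return the grouping key for a page under each assignment model;
    pages with equal keys share one renderer process. 'page' is a dict
    with illustrative fields 'site' and 'browsing_instance'."""
    if model == "monolithic":
        return "everything"  # a single process hosts all pages
    if model == "process-per-browsing-instance":
        return page["browsing_instance"]
    if model == "process-per-site":
        return page["site"]
    if model == "process-per-site-instance":
        return (page["browsing_instance"], page["site"])
    raise ValueError(f"unknown model: {model}")
```

For instance, two Gmail pages in unrelated windows share a process under process-per-site but get separate processes under process-per-site-instance, which is why the latter gives the finest granularity.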

Main Result(s)           
The paper shows that by using a multi-process browser and carefully controlling the isolation granularity, the robustness of the browser is greatly improved. It also adds value to security and memory management. Although this introduces memory and latency overhead, the overhead is quite acceptable compared to the benefits.

Impact               
The idea of a multi-process browser will greatly impact the future design of browsers and operating systems. With the heavy use of JavaScript and Flash in active web pages, it is desirable to isolate web programs and manage them separately.

Evidence           
The paper explains how the multi-process architecture addresses vulnerability issues raised by monolithic browsers. The architecture was also examined in simulated and real cases.

Prior Work           
NA

Competitive work       
Chrome was compared to several experimental architectures, but none of the compared browsers has as well-defined and backwards-compatible an isolation model as Chrome. The current version of IE8 also uses a multi-process architecture but only isolates trust levels of pages. The research project Gazelle, introduced in the other paper, extends the browser into a multi-principal OS and tries to provide exclusive protection of resources.

Reproducibility           
Yes. The application and source code are downloadable from the Internet.

Criticism           
Chrome OS will be released this week. It would be interesting to explore how Chrome extends to an operating system.




yash

Nov 16, 2009, 11:48:35 PM
to CSCI2950-u Fall 09 - Brown
Paper: Isolating Web Programs in Modern Browser Architectures

Authors: Charles Reis and Steven D. Gribble

Novel Idea: The paper introduces a new multi-process architecture for
web browsers, making them more robust and efficient at handling modern
websites.

Main Result: The paper proposes a new multi-process architecture to
solve the reliability problems in current generation browsers. In the
new model every web program runs as a separate process leveraging
support from the underlying OS to reduce the impact of failures,
isolate memory management and improve performance.

Impact: This architecture presents a robust environment for accessing
modern websites which are more like applications rather than web-
pages.

Evidence: The paper provides evidence to show that the proposed
architecture is superior to traditional monolithic browsers. Results
of comparison tests against the traditional monolithic architecture
are also included (e.g., loading-time comparisons).

Reproducibility: Yes, it can be reproduced. The open-source
implementation of this architecture (Chromium) is available.

Criticism: The multi-process architecture has a much larger memory
footprint compared to monolithic browsers, posing an efficiency
question.

Spiros E.

Nov 16, 2009, 8:38:43 PM
to CSCI2950-u Fall 09 - Brown
The paper presents Chromium, a multi-process browser architecture that
claims to provide enhanced resource management, performance, and fault-
tolerance. The paper develops an abstraction of web application
interaction and uses this to guide the isolation of pages into
processes, for the purpose of limiting negative interactions between
independent web pages, and to limit fate-sharing in the event of a
page error.

The issue of memory management is raised by the paper. It claims that
the address space isolation offered by separate processes would isolate the
effects of memory leaks that would lead to slow performance and
perhaps crashes. However, according to a Mozilla engineer, the real
concern is fragmentation. Process isolation would limit the effects of
this as well, but it doesn't get to the heart of the problem: it's the
21st century and these applications should be garbage collected.
(http://blog.pavlov.net/2007/11/10/memory-fragmentation/)

Another sort of isolation that the paper addresses is performance
isolation. One example the paper provides is synchronous
XmlHttpRequests locking the entire browser up. This is a legitimate
point, except nobody makes synchronous XmlHttpRequests. If they do,
their web application either 1. has fundamental design flaws; or 2. is
not a web application you want to use.

The performance evaluation of the paper does not compare the
performance of Chrome to any other browsers. Instead the multi-process
architecture's
performance is pitted against the same architecture limited to a
single process. It would have been nice to see how Chrome stacked up
against competing monolithic browsers.

The paper raises an issue in which sub-resources of a page may
unintentionally leak user credentials. If a user is logged into site
A, and then the user loads site B, which requests resources from site
A, the request for those resources will carry with it the user's
credentials, which the paper claims should not be the case. In the
interest of compatibility, Chrome does not impose this restriction.
However, it seems like an easy thing to offer as an option.

Marcelo Martins

Nov 16, 2009, 6:30:10 PM
to brown-cs...@googlegroups.com
Paper Title "Isolating Web Programs in Modern Browser Architectures"

Author(s) Charles Reis and Steven D. Gribble

Date Eurosys'09, April 2009

Novel Idea

Reis and Gribble propose the separation of web content into programs
that are isolated via OS processes. Such an approach is based on the
argument that web pages are growing in complexity and demand for
resources, which may lead to information leakage, browser session
crashes and slow rendering of multiple pages.

Main Result(s)

Experiments comparing monolithic sessions with the multi-process browser
architecture provided by Chromium show that the latter presents better
responsiveness when dealing with multiple-page loading and can take
advantage of parallel processing. On the other hand, multi-process
sessions incur higher memory requirements for process creation.

Impact

Running site instances on different processes can address a number of
robustness issues found in monolithic browsers. These include fault
tolerance against entire-browser crashes when a site misbehaves,
finer-grained accountability of resources, better memory management and
performance improvements through parallel loading. In addition, the
Chromium model can be used to support a higher level of security.

Evidence

Section 4 reports a series of experiments concerning the performance and
requirements of multi-process Chromium and a monolithic-based browsing
session. These experiments show that, overall, multi-process browsing
improves navigation performance. Additionally, a series of anecdotes
regarding competitive browsers, such as Opera and MS Internet Explorer,
provide insights on the decisions for Chromium.

Prior Work

Apart from the monolithic browsers, which have existed since the
earliest days of the Web, related work that inspired Chromium includes
modular navigating architectures such as the OP browser and Tahoma and
site-specific browsers, such as Mozilla Prism and Fluid. Finally, the
Webkit rendering engine is used in the implementation of Google's web
browser.

Competitive work

The main target of the article is the current trend of monolithic
browsers; in particular, MS Internet Explorer, Mozilla Firefox and Opera.

Reproducibility

Chromium is available as a binary application and open source code from
Google's website. The experiments are easy to reproduce and are
well-described in Section 4.

Criticism

1.) The authors claim that many typical machines have upwards of 1GB of
physical memory to dedicate to tabs. This is not necessarily true if we
consider that such memory also needs to be shared among the operating
system and other applications. Netbooks and cell phones, which are also
popular, cannot provide such memory requirements.

2.) Using Alexa top pages to select samples of Web access for some of
the experiments is not a wise decision, especially when measuring the
multi-process overhead. The average site instance footprint reported in
Section 4.3 is based on a really small sample number that cannot be
generalized. The authors could have used a better set of network
requests. One possibility would be testing different categories of sites
based on the size of their footprint (light, medium, heavy) and see how
multi-process and monolithic Chromium perform.

3.) Another criticism is the lack of consistency throughout the
experiments. While some of them use heavy-based web apps such as Google
Maps and GMail, others use a different number of popular pages from Alexa.

4.) Why not a direct comparison with other brands of monolithic
browsers? Measuring the fault tolerance, responsiveness and memory
management of IE, Firefox, Safari and others is definitely not
capturing "irrelevant implementation differences", but actually a really
relevant point to be made, especially if Google "seek[s] adoption by
real web users".

5.) Sections 3.3 and 4.1 are basically the same. Why repeat the same text
twice?

6.) The failure of one plugin process results in all the web pages not
functioning properly. Although this is better than a full-browser
crash, it is far from perfect. How will a user realize that his YouTube
page is not working anymore because of a crash from an unrelated
Flash-based ad on another page?

Ideas for future work

1.) Currently, Chromium places a fixed limit on the number of renderer
processes it creates. An extension could update this number based on the
available memory of a user's machine and on the history of memory
consumption.

Andrew Ferguson

Nov 16, 2009, 11:49:09 PM
to brown-cs...@googlegroups.com
Paper Title
"Isolating web programs in modern browser architectures"

Authors
Charlie Reis and Steve Gribble

Date
EuroSys 2009

Novel Idea / Main Results
The authors propose splitting the browser into multiple OS processes
in order to enforce better security and performance isolation between
sites. The paper describes a "process-per-site-instance" model which
isolates not only gmail.com from nytimes.com, but also separate Gmail
or Pandora instances from one another. However, to ensure backwards
compatibility, the authors do not propose separating content from
different subdomains, or content from different origins embedded in
the same page. The ideas
are implemented in the Chromium/Chrome web browser.
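The grouping rule can be illustrated with a small sketch. Note the simplification: the naive two-label heuristic for the registered domain stands in for the public suffix list a real browser would consult.

```python
from urllib.parse import urlsplit

def site_key(url):
    """Approximate the paper's notion of a 'site': the
    <protocol, registered domain, port> tuple. Subdomains
    collapse onto the registered domain, so mail.google.com and
    www.google.com belong to the same site and may share a
    process. Naive two-label heuristic; a real browser uses the
    public suffix list instead.
    """
    parts = urlsplit(url)
    host = parts.hostname or ""
    labels = host.split(".")
    registered = ".".join(labels[-2:]) if len(labels) >= 2 else host
    port = parts.port or {"http": 80, "https": 443}.get(parts.scheme)
    return (parts.scheme, registered, port)
```

Under this rule `site_key("https://mail.google.com/")` and `site_key("https://www.google.com/")` are equal, which is exactly the backwards-compatibility concession around subdomains described above.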

Impact
High. The benefits of moving faulty browser components to their own OS
process so they can fail independently are great (at a minimum, plug-
ins should be separated). All of the major browsers are looking to
implement this research, or a variant.

Evidence
As evidence of the feasibility and practicality of this idea, the
authors show that multiple OS processes have little to no negative
effect on the web browsing experience visually or interactively. In
fact, on multi-core architectures, the web browsing experience is
improved due to the simultaneous streams of computation. The
experiments do show, however, that monolithic browsers have lower
memory footprints because they can better exploit code sharing between
components. On the other hand, the multi-process browser does a better
job of releasing memory with which it is finished.

Prior Work
Key pieces of prior work include the research browsers OP, Tahoma, and
SubOS.

Competitive Work
This work is competitive with the Gazelle prototype as both are
designed for wide-spread use.

Reproducibility
The Chromium source is open so the experiments should be reproducible.
The Firefox team is attempting something similar with their browser.

Question
None.

Criticism
In section 3.2, the authors explain that Chromium places a limit on
the number of processes it will create and re-uses processes if
needed. Does this create a possible attack vector? (by generating too
many processes through another means)

Ideas for further work
As the browser moves closer to being an OS, what other OS techniques
will need to be re-invented? What OS-specific security attacks can be
applied to such a browser?

Rodrigo Fonseca

Nov 17, 2009, 8:34:34 AM11/17/09
to brown-cs...@googlegroups.com
---------- Forwarded message ----------
From: James Tavares <james....@gmail.com>
Date: Mon, Nov 16, 2009 at 10:37 PM
Subject: Re: [csci2950-u] 11/17 Reviews - Chrome
To: Rodrigo <rodrigo...@gmail.com>


*Chrome*

Paper Title: Isolating Web Programs in Modern Browser Architectures

Author(s): Charles Reis, Steven D. Gribble

Date: EuroSys ’09. April 1-3, 2009.

Novel Idea: The paper seeks to automatically identify and isolate
instances of *web programs* so that they can be run in distinct
operating system processes. Recognizing that these boundaries are
impossible to decipher in the current architecture of the web, the
authors offer the less precise, but more concrete definitions of
/site/ (all pages in the same <protocol, domain, port> tuple, to
approximate a *web program*), /browsing instance/ (the set of
interconnected tabs and windows, even if they are not on the same
site), and /site instance/ (a /site/ within a /browsing instance/, to
approximate *web program instance*). Finally, in their implementation
of Chrome, the authors allow the user to select from the following
isolation models: monolithic, process-per-browsing-instance,
process-per-site-instance, or process-per-site.
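The four models differ only in which attribute of a page serves as the process-grouping key; a toy dispatcher makes the distinction concrete (the identifier names here are mine, not Chromium's):

```python
def process_group(model, browsing_instance_id, site, site_instance_id):
    """Return the key under which a page's process is shared.
    Pages that map to equal keys land in the same renderer
    process under the given isolation model.
    """
    if model == "monolithic":
        return "everything"                  # one process for all pages
    if model == "process-per-browsing-instance":
        return ("bi", browsing_instance_id)  # connected tabs/windows
    if model == "process-per-site-instance":
        return ("si", site_instance_id)      # a site within a browsing instance
    if model == "process-per-site":
        return ("site", site)                # all instances of one site
    raise ValueError(f"unknown model: {model}")
```

So under process-per-site, two independent Gmail windows share one process, while under process-per-site-instance they do not.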

Main Result(s)/Evidence: The Chrome architecture is presented (see
above), and in the evaluation the authors attempt to “confirm” their
design choices. From these experiments the authors make claims that
multi-process browsing has: higher fault tolerance (a crash on one
page only crashes one browsing instance), greater accountability (OS
task manager shows resource usage per browsing instance), better
memory management, quicker responsiveness (user-perceived latency
interacting with one page while another is loading), and faster
simultaneous rendering of multiple web pages. The increased latency
for creating a new process for each browsing instance is offset by the
speed-up inherent in parallelizing page loading & rendering, but
memory overhead is high compared to the monolithic approach and
increases linearly (with a greater slope than monolithic) as the
number of browsing instances increases.

Impact: Remains to be seen – a year after its release, Chrome’s market
share is at 5% but is indeed trending higher
(http://thenextweb.com/2009/10/25/chrome-nears-5-market-share/). Given
that multi-core processors are here to stay, it would seem that
monolithic browsers will eventually need to be replaced with browsers
that have greater and greater opportunities for parallelism. Chrome is
a small first step in this direction.

Prior/Competitive Work: Chrome is preceded by monolithic browsers such
as Firefox, IE and Konqueror. Site-specific browsers use user-created
desktop shortcuts to run new instances of a web browser such as
Firefox for a particular site. Chrome differs from the latter in that
it is able to automatically infer web program boundaries.

Reproducibility: Chrome is available as a free download and Chromium
source is available for download. The experiments would be fairly
trivial to reproduce.

Question: I think that there are good arguments for isolating
different web apps besides speed improvements (such as for increased
security), but if we’re only looking for speed, couldn’t many of the
speed improvements mentioned in this paper be attained by
multi-threading the rendering engine? A simple per-site-instance lock
could be used to ensure serialization on DOMs located within the same
site-instance (as recommended in section 2.3.1.).
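A minimal sketch of the lock table that question imagines (purely illustrative of the reviewer's alternative, not anything Chromium implements):

```python
import threading

class SiteInstanceLocks:
    """One reentrant lock per site instance: DOM mutations within
    a site instance serialize on that instance's lock, while
    unrelated site instances can render concurrently on separate
    threads of a single (monolithic) renderer.
    """
    def __init__(self):
        self._guard = threading.Lock()  # protects the lock table itself
        self._locks = {}

    def lock_for(self, site_instance_id):
        with self._guard:
            return self._locks.setdefault(site_instance_id,
                                          threading.RLock())
```

A rendering thread would then wrap each DOM mutation in `with locks.lock_for(si_id): ...`, getting serialization without per-instance OS processes.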

Criticism: The authors claim that they did not evaluate against other
browsers to “avoid capturing irrelevant implementation differences in
our results”. They should have at least compared monolithic Chrome to
some other modern browser. Without this, nearly all of their
performance-based measurements are useless for comparison – how do we
know that monolithic Chrome is not unusually slow? Maybe there are
monolithic architectures (Firefox, etc.) that outperform Chrome’s
multi-process approach!

Future Work: Perhaps there is a transparent way to include web
principal data (cookies, etc.) in the /browsing instance/ as well. I
frequently find myself needing to log into the same website under two
aliases at the same time, which almost always means opening sessions
under alternate browsers (IE, Firefox, etc).