Hi everyone,
I met with Pam Greene and Adam Barth on Wednesday to discuss Web
Request and content scripts and their use for privacy and security
enhancing extensions. I offered to write up the major discussion
points of our meeting for this mailing list; this post is part of that
write-up.
I believe that the Chrome APIs are maturing nicely, and that with the
addition of the blocking Web Request APIs, a powerful set of privacy
and security enhancing extensions will soon become possible to write.
In particular, I believe that it should soon be possible to write a
fingerprinting-resistance extension that enhances Incognito mode with
protections against a network adversary:
https://wiki.mozilla.org/Fingerprinting
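To make the Web Request side of this concrete, here is a rough sketch
of how such an extension might use the proposed blocking API. The
chrome.webRequest names, the listener signature, and the "blocking"
flag reflect my understanding of the proposal, and the blocklist
pattern is purely illustrative; a stub chrome object stands in so the
sketch runs outside a browser.

```javascript
// Stub chrome object so the sketch is self-contained; in a real
// extension the browser provides this and invokes the listener for
// every matching request.
const chrome = globalThis.chrome || {
  webRequest: {
    onBeforeRequest: {
      addListener: function (fn, filter, extraInfo) {},
    },
  },
};

// Blocking listener: cancel requests to URLs matching an illustrative
// blocklist before any bytes hit the network -- the strict assurance
// a privacy extension needs from this API.
function onBeforeRequest(details) {
  if (/evil-tracker\.example/.test(details.url)) {
    return { cancel: true }; // blocking response: drop the request
  }
  return {}; // allow everything else
}

chrome.webRequest.onBeforeRequest.addListener(
  onBeforeRequest,
  { urls: ["<all_urls>"] },
  ["blocking"]
);
```

The key property is that the listener runs synchronously before the
request leaves the browser, which is exactly why any request the API
is not informed of becomes a security issue.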
In my view, the best way to write this sort of extension is to use a
combination of content scripts and the proposed blocking versions of
the Web Request APIs. Using content scripts to wrap various Javascript
objects allows for rapid development of defenses against evolving
fingerprinting techniques without requiring modification of core
browser code.
I have a prototype of the content script mechanisms for this at:
https://trac.torproject.org/projects/tor/ticket/1816#comment:4
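For readers who do not want to dig through the ticket, the core of the
wrapping technique looks roughly like this. This is a minimal sketch,
not the prototype itself: the property name and spoofed value are just
examples, and a plain object stands in for window.navigator so it runs
outside a page.

```javascript
// Replace a readable property with a getter that returns a sanitized
// value. The real value survives only inside this closure, so page
// script cannot recover it by introspection, and configurable: false
// keeps the page from redefining or deleting the property.
function wrapProperty(obj, name, spoofedValue) {
  const realValue = obj[name]; // hidden in the closure, never exposed
  Object.defineProperty(obj, name, {
    get: function () { return spoofedValue; },
    configurable: false,
  });
  // Escape hatch for the extension's own use only.
  return function realGetter() { return realValue; };
}

// Stand-in for window.navigator so the sketch is self-contained.
const nav = { userAgent: "RealBrowser/7.0 (rare build string)" };
wrapProperty(nav, "userAgent", "Mozilla/5.0 (generic)");
```

This is the kind of hook that must survive the closure-busting and
introspection tricks discussed below for the approach to be sound.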
However, this and other privacy and security enhancing extensions will
need strict assurances from the APIs they use, as they will be
providing additional security properties for their users through these
APIs. Any "special cases" where either Web Request does not get
informed of a request the browser makes, or where content scripts fail
to apply, are potential security issues.
So far, it does appear as though the V8 Javascript interpreter is much
more resilient against many of the closure-busting and introspection
techniques that make the content script approach insecure on Firefox.
Unfortunately, it appears as though content scripts are not always
applied in cases where they need to be when used from a security
perspective. Adam demonstrated that they do not apply to "about:blank"
urls, which are used as an initial document whenever a user-generated
popup is launched. I've also confirmed that they are not applied to
"data:" urls, which can be used to load html and javascript to obtain
a fingerprint without the hooks having been applied.
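To make the data: hole concrete, here is the kind of thing a hostile
page could do (illustrative only): build a data: url whose embedded
script reads navigator properties and reports them back. Because no
content script is injected into the data: document, none of the
wrapping hooks apply and the unspoofed value leaks.

```javascript
// Construct a data: url carrying html plus script. The "<" + "/script>"
// split just keeps the literal from terminating any enclosing script.
const payload =
  "<script>opener.postMessage(navigator.userAgent, '*')<" + "/script>";
const fingerprintUrl = "data:text/html," + encodeURIComponent(payload);

// A page would then call window.open(fingerprintUrl) and listen for
// the message event to collect the value the hooks never touched.
```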
There is also a similar problem with special urls that are exempt from
content script manipulation. Adam and Pam tell me this includes
several other "about:" urls, as well as "chrome.google.com" and that
the latter is done over concerns about abuse. My brother tells me that
it is primarily because the gallery interaction is changing in future
Chrome releases to allow for one-click installs, and that content
scripts running on the gallery could thus enable extensions to
escalate privileges via clickjacking.
However, security and privacy enhancing addons do need some way to
also handle these urls. It is too easy to redirect a user to an
exempted url to induce their browser to leak privacy-sensitive
information onto the network or to load malicious plugin objects.
One option is to create a special gallery_script permission that comes
with the appropriate warnings, and/or disables the one-click install and
restores the old confirmation dialog behavior.
The other option is to give the Web Request and Web Navigation APIs
the ability to present a user with a blocking confirmation dialog
before they navigate (or are forced to navigate) to these pages,
warning them that the security properties they expect from their
privacy and security enhancing extensions do not apply on such pages.
Apart from Javascript, plugins also provide a very large
vulnerability surface of concern to privacy and security enhancing
extensions. For Tor use, we outright disable plugins in Firefox, both
because of plugins' ability to bypass proxy settings, and because of
the tremendous amount of data most plugins are able to gather on a
user's computer, even under normal operating conditions. Adam pointed
out that the HTML5 @sandbox attribute could be used for this purpose,
if it could be made available to content scripts to apply to a main
frame:
http://blog.chromium.org/2010/05/security-in-depth-html5s-sandbox.html
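A content script can already set @sandbox on subframes it finds in a
page; the missing piece is applying it to the main frame. A sketch of
the subframe case follows -- the stub element just lets it run outside
a real DOM; in a content script the frame would come from something
like document.getElementsByTagName("iframe").

```javascript
// Apply the HTML5 sandbox attribute with an empty value, which denies
// the framed document scripts, plugins, forms, and same-origin access.
function sandboxFrame(frame) {
  frame.setAttribute("sandbox", "");
}

// Stub element standing in for an <iframe> so the sketch is
// self-contained.
const frame = {
  attributes: {},
  setAttribute: function (name, value) { this.attributes[name] = value; },
};
sandboxFrame(frame);
```

Note that in a real page the attribute generally needs to be in place
before the frame's document loads for the restrictions to take effect.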
However, ideally extensions would be able to pick and choose which
plugins were allowed to run, especially if some plugins are known to
be rigorously sandboxed via NativeClient, or if settings were
available to force sandboxing of plugin objects even when this is not
done by default for functionality reasons.
*Phew*. This should cover the general issues that security and privacy
enhancing addons will face. I will also be writing a second mail with
some Tor-specific features that will be needed for adding a Tor mode
to Chrome, based on Incognito mode.
Are the above issues something I should create tasks for on the
chromium bug tracker? Or does additional discussion need to happen
with respect to abuse potential for some of these changes?
Also, are there other vectors, url schemes, or javascript quirks that
I need to be aware of before it is safe or possible to use these APIs
for privacy and security enhancement?