Hiya,
I finally got a chance to read the SPDY spec today - it all makes
total sense to me and seems like a great upgrade to the web. With the
exception of the SSL requirement.
Bram Cohen has already made some strong arguments against tying SPDY
to SSL. I'd like to make some more and tie them all together.
The first thing is to figure out what the goal is. The SPDY spec
doesn't discuss this beyond saying "we believe that the long-term
future of the web depends on a secure network connection". In the
discussion on chromium-discuss Mike Belshe elaborated:
"When we think of protocols of the future, we think it is intolerable
that you could connect to your bank and not have the site actually be
your bank .... The amount of money lost annually due to failing to
protect communications is absurd."
So there's both encryption and authentication. I'm not sure lack of
encryption is actually a problem in practice. Anecdotally, I read a
lot of reports of people being defrauded, but these reports always
boil down to phishing, malware infections or server-side security
breaches. I don't remember the last time I read about a security
incident where the problem was "cc details sniffed in transit". In
many ways, I think SSL is to wire security what UNIX/DAC is to desktop
security .... a protocol designed for the challenges of a different
time and, for the majority of people, no longer as useful.
In particular SSL has little to say about the first part of that
problem statement, connecting to what you think is your bank but it
actually isn't. Reliably communicating site identity isn't a protocol
problem - SSL + EV certs has solved that for good, for the sites that
need it. The problem is a user interface design issue. As SPDY is a
protocol there isn't much it can do here .... it can't stop people
confusing bank0famerica.com with the real thing. Enabling SSL by
default won't help - some phishing campaigns already proactively
enable SSL to get the lock icon anyway.
SSL doesn't cause any problems for script kiddies - they already have
point and click solutions for hijacking people's internet connections.
So SSL is primarily useful for stopping systematic snooping by the big
guys - telcos and governments. Whether you think the risk of this is
great enough to encrypt all traffic depends on your personal
politics.
But if SSL was free, none of that would matter. Why not do it?
SSL is not free. As already noted it breaks caching. Edge networks
aren't a replacement for HTTP level transparent caching.
They can be
extremely expensive and there are only a few major players, placing
them out of reach for most providers. They take control out of those
who need it (people with restricted connections) and move it to those
who have better things to think about (the content providers). And
they don't solve the "last mile" problem. For instance Google Earth
has had problems with schools that can't use it: the traffic is
not cacheable, and a classroom full of children loading it
simultaneously crushes their uplinks. This is especially a problem
outside of North America and Europe, exactly the places SPDY should be
helping most. We have data on edge caching vs ISP proxy caching from
experiments with Google Maps, follow up with me internally if you want
pointers to this.
SSL can push users who are already CPU saturated over the edge and
make their experience unusable. I believe that was the conclusion from
studying the issue on one large Google product (again ask me on an
internal list for more details if you want them).
SSL is redundant in many cases. For instance if you access a website
over a 3G connection, that traffic is already encrypted to the base
station. To intercept it you'd need to tap the tower backhaul or
peering points .... so we're back to government being the threat
model. Ditto for cable or DSL.
SSL is a battery pig. Forcing SSL on for all traffic seriously
degrades battery life on smartphones because it is implemented on the
CPU (in contrast to the already-present 3G/KASUMI ciphering, which is
done in hardware).
Finally, SSL is often unnecessary. Many sites (e.g. Wikipedia) serve
the same pages to all logged-out users, and most users are logged
out. Encrypting that data achieves nothing.
I'm echoing Bram here but I'll say it again. Let's not tie a useful
upgrade to HTTP to something with significant cost that actually works
against the latency goals.
SSL has also been put forward as a backwards compatibility hack. I'm
not sure if the costs of SSL are really worth supporting users behind
buggy proxies - in particular if those buggy proxies are performing
content caching. It's possible that SPDY might make connections slower
for those users rather than faster! But hard data on the effectiveness
of upgrade mechanisms is needed.
thanks
-mike
-- egnor
- fetch 10K over HTTP: 0.45 mAH, tx=892B, rx=10960B, cpu=0.12s
- fetch 10K over HTTPS: 0.60 mAH, tx=1378B, rx=12036B, cpu=2.81s
- fetch 100K over HTTP: 0.66 mAH, tx=2945B, rx=106136B, cpu=0.15s
- fetch 100K over HTTPS: 0.83 mAH, tx=6047B, rx=107267B, cpu=2.34s
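To put egnor's numbers in relative terms, this small sketch just restates the measurements above as ratios (no new data, only arithmetic on the figures already given):

```python
# Relative cost of HTTPS vs HTTP, computed from egnor's measurements
# above (energy in mAh, CPU time in seconds).
measurements = {
    "10K":  {"http": (0.45, 0.12), "https": (0.60, 2.81)},
    "100K": {"http": (0.66, 0.15), "https": (0.83, 2.34)},
}

def overhead(size):
    """Return (extra energy fraction, CPU-time multiplier) of HTTPS."""
    (e0, c0) = measurements[size]["http"]
    (e1, c1) = measurements[size]["https"]
    return (e1 / e0 - 1, c1 / c0)

for size in measurements:
    extra_energy, cpu_mult = overhead(size)
    print(f"{size}: {extra_energy:.0%} more energy, {cpu_mult:.0f}x the CPU time")
```

So on this hardware a small HTTPS fetch cost roughly a third more battery and over twenty times the CPU time of the plain-HTTP equivalent.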
Hi guys,

I am closely watching the "SPDY story" and personally I believe it is
something revolutionary. I was just wondering what you think about the
fact that SSL is not free - do you consider having to pay for a
certificate an obstacle? I live in Croatia, Europe, and spending a few
hundred dollars here is considered a big investment. Most people have
a monthly wage lower than the price of an SSL certificate (I compared
the prices at Verisign and Thawte). I'm not sure about other countries
but I believe Croatia is not the only one out there... And if you want
high market penetration, you need to make the technology available to
the masses, not reserve it for the rich ones... I hope you get my
point...
Note also that we might be enabling SPDY use over self-signed certs,
although this is still in flux.
> So, if SSL connections are taking 2 seconds, the problem is above
> OpenSSL. Unfortunately, I don't know anything about the upper layers
> of Android.

I wrote an Android app that happened to browse to an SSL protected
page as part of its operation. There was a lot of garbage collection
going on, I think the Java part of the stack is pretty inefficient.
I'm sure it could be improved, and though G1 era hardware is not
exactly "old" I guess you could write it off in the SPDY timeframe.
I still think looking at 3G is misleading. We can say, "we don't trust
carrier proxies" but they are there for policy reasons and I suspect
operators will not be impressed at attempts to forcibly take them
away. This is especially true as LTE/4G is expected to shift the
bottleneck from the radio interface to the tower backhauls, giving
operators big incentives to add multi-layer caching directly to tower
sites as a way to reduce deployment costs.
As operators are an unavoidable MITM I don't see a way to prevent
downgrades on SPDY sites. And the hop after the operator is the
backbone, to which the only real threat is governments. And government
isn't really a part of the SSL threat model anyway. That's why I think
SSL over 3G makes little sense.
On Wed, Dec 16, 2009 at 6:46 AM, Mike Hearn <he...@google.com> wrote:
>> So, if SSL connections are taking 2 seconds, the problem is above
>> OpenSSL. Unfortunately, I don't know anything about the upper layers
>> of Android.
>
> I wrote an Android app that happened to browse to an SSL protected
> page as part of its operation. There was a lot of garbage collection
> going on, I think the Java part of the stack is pretty inefficient.
> I'm sure it could be improved, and though G1 era hardware is not
> exactly "old" I guess you could write it off in the SPDY timeframe.

It's not a matter of writing it off - it's about understanding if
there are implementation issues or protocol issues. If the problem is
implementation, then we shouldn't worry about it while designing the
protocol.
SPDY doesn't prevent all proxies - it only prevents transparent proxies. Network operators can still use explicit proxies (configured on the device). So carriers can do transcoding of images, caching, etc. If the origin servers move to SSL, however, they won't be able to do this (just like they can't do it today).
Mike
Why would you want to do that? There is no shared authority
relation between http and https -- they are entirely
different services on different ports that may be
handled by different machines, perhaps on different continents.
....Roy
Could be. This was early in 2009 so pre-Cupcake even. My knowledge is
probably out of date by now.
Transparent proxies are generally used to simplify the user's life -
most users won't know how to explicitly configure a proxy, and asking
people to run some binary from the ISP is tricky. If SPDY proxies
could be pushed via DHCP that'd probably help. But it's getting more
and more complicated.
I don't fully understand why DNSSEC isn't equivalent to the level of
SSL being proposed, actually. If SPDY won't require EV certs then it
boils down to an assertion about the integrity of DNS, right? DNSSEC
would seem to solve that problem without the overhead of encrypting
all traffic.
That leaves backwards compatibility. The standard way to introduce a
new network protocol is a new port. Just using a different port is
going to work on any network that doesn't implement port-based default
deny rules. The ones that do probably will also have proxies that
won't support SPDY. If the admins decide to upgrade to a SPDY
compatible proxy, they may as well open up the SPDY port at the
same time.
I think we can find out how many networks are blocking some newly
chosen port by running a google.com experiment. This should get good
coverage across network types, companies, residential networks etc.
It'd be interesting to correlate this with browser use. I'd wager that
networks which block some newly chosen port are dominated by old
versions of IE.
It is tempting to use SSL to try and evade conservative sysadmins, and
it has precedent - Skype tried the same thing. However, that just made
sysadmins around the world hate Skype. Here is an example of what one
admin did:
http://lists.grok.org.uk/pipermail/full-disclosure/2005-November/038646.html
Given that admins conservative enough to block all outbound non-web
traffic are certainly conservative enough to mandate a given browser
version, I'm skeptical real SPDY deployments would encounter too much
blocking.
> Or remembering the cert on the first connection, like SSH does.

I think Gerv Markham's document says it better than I could. The SSH
model has UI issues that make it undeployable; see section 3:
http://www.gerv.net/security/self-signed-certs/
> I guess SIP is a good indication of how much fun requiring firewall changes
> is :-)
Home NAT/firewalls can be reconfigured automatically by the browser
using UPnP. Corp firewalls yes, not much fun, but trying to bypass
them with SSL will just lead to Skype-like arms races.
> A different port will still require 2 TCP connection
Yeah, but they can be done in parallel. If the SPDY connection comes
back within 50ms of HTTP, use that. Setting up a TCP connection then
throwing it away with no content written is really cheap.
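That racing logic can be sketched with asyncio. This is a hypothetical helper of my own (the name `race_with_grace`, the generic awaitables, and the 50ms window taken from the text are all assumptions, not anything SPDY specifies):

```python
import asyncio

async def race_with_grace(preferred, fallback, grace=0.05):
    """Start both connection attempts in parallel. Use the preferred one
    (e.g. SPDY on its own port) if it completes within `grace` seconds
    of the fallback (e.g. plain HTTP); otherwise keep the fallback."""
    p = asyncio.ensure_future(preferred)
    f = asyncio.ensure_future(fallback)
    done, _ = await asyncio.wait({p, f}, return_when=asyncio.FIRST_COMPLETED)
    if p in done and p.exception() is None:
        f.cancel()
        return "preferred", p.result()
    try:
        # Fallback finished first (or preferred failed): give the
        # preferred attempt a short grace period before abandoning it.
        result = await asyncio.wait_for(asyncio.shield(p), grace)
        f.cancel()
        return "preferred", result
    except Exception:
        p.cancel()
        return "fallback", await f
```

The abandoned connection is simply cancelled, which is cheap when no content has been written on it, matching the argument above.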
It is called the SRP protocol [1]. There is a draft [2] which
describes how to implement it in SSL/TLS.
It is already integrated into Firefox in some development branch,
and GnuTLS has implemented it for the server side.
It isn't good for everything, but it has many advantages:
- it is quite fast
- no public keys/certificates are needed (beyond the initial setting
of the password)
- phishing is not possible if you know that the connection is using it
This is because SRP protects against both passive and active MITM
attacks.
Unfortunately it is only useful on webpages (or other services)
where you need to log in. Good for your bank, Facebook, Gmail,
Google Wave or a forum, but not necessarily for a site from which you
download executables to run on your computer.
Nevertheless it is extremely useful and secure. When I found
it I first thought it was impossible to design a protocol
obeying all the restrictions they wanted. In fact they did it.
It is similar to TLS-PSK, but PSK is symmetric:
both sides have the secret.
SRP is more resistant to brute force,
and the server doesn't really hold the secret (the password).
(Which is quite important if someone steals the database of
passwords - similar to why we hash passwords on the server
side. But SRP is much more secure than just adding a random salt :D ).
===============
0. http://srp.stanford.edu/
1. http://en.wikipedia.org/wiki/Secure_Remote_Password_protocol
1a. RFC 2945
2. RFC 5054
3. http://tools.ietf.org/html/draft-ietf-pppext-eap-srp-03
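The enrollment step described above - the server storing a verifier v instead of the password - looks roughly like this. It is a simplified sketch: the x derivation uses SHA-256 over salt and credentials rather than the exact RFC 2945 formula, and the group constant is the well-known 1024-bit SRP group (copied here for illustration; real deployments should prefer larger groups):

```python
import hashlib
import secrets

# 1024-bit SRP group (N, g); large enough to illustrate the math.
N = int(
    "EEAF0AB9ADB38DD69C33F80AFA8FC5E86072618775FF3C0B9EA2314C9C256576"
    "D674DF7496EA81D3383B4813D692C6E0E0D5D8E250B98BE48E495C1D6089DAD1"
    "5DC7D7B46154D6B6CE8EF4AD69B15D4982559B297BCF1885C529F566660E57EC"
    "68EDBC3C05726CC02FD4CBF4976EAA9AFD5138FE8376435B9FC61D2FC0EB06E3",
    16,
)
g = 2

def H(*parts):
    """Hash byte strings down to an integer (simplified; not RFC 2945)."""
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big")

def make_verifier(username, password, salt=None):
    """Client-side enrollment: derive the private key x from the salted
    credentials and send (salt, v) to the server. The server stores v,
    from which the password cannot feasibly be recovered."""
    if salt is None:
        salt = secrets.token_bytes(16)
    x = H(salt, f"{username}:{password}".encode())
    v = pow(g, x, N)
    return salt, v
```

This is why a stolen server database yields only verifiers, not password-equivalent secrets, which is the property the email above contrasts with plain salted hashing.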
OK, that's a good point. I don't know what level would be acceptable.
I'll think about this some more today.
> Unfortunately, you're proposing a protocol which leaves the user hung out to
> dry - he knows nothing about http/spdy/ports/whatever, and yet his
> connection just fails.
I don't think it has to be quite that dire. If you type in an address
for the first time, like google.com, the browser could use a "half
SPDY" via HTTP whilst it tries to bring up the new multiplexed
protocol in the background. For instance the first request might look
like a regular GET / HTTP/1.1 with "application/spdy" in the accept
encodings list. When that is present, the server can send back not
just index.html but the other resources too in a simple
<length><content><length><content> bundle.
As the server knows what resources the user will require after seeing
their first request, this should be almost equivalent in speed to
going straight to SPDY (no header compression though). By the time the
webpage or user then makes extra less predictable requests the new
stream type is either up and running or timed out. It also has the
advantage that you don't need to fudge around with DNS to enable SPDY.
For instance hosting providers could enable SPDY for all their
websites in one go even if the DNS records for those sites aren't
under their control.
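The bundle format sketched above can be made concrete in a few lines. Note the thread never pins down an encoding, so the 4-byte big-endian length prefix here is my assumption:

```python
import struct

def pack_bundle(resources):
    """Concatenate resources as <length><content> pairs so the server
    can push index.html plus its subresources in one response body."""
    return b"".join(struct.pack(">I", len(r)) + r for r in resources)

def unpack_bundle(data):
    """Split a <length><content><length><content> bundle back apart."""
    resources, offset = [], 0
    while offset < len(data):
        (n,) = struct.unpack_from(">I", data, offset)
        offset += 4
        resources.append(data[offset:offset + n])
        offset += n
    return resources
```

A browser receiving such a body would unpack it and treat each item as a prefetched resource, with no header compression or prioritization - hence "half SPDY".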
I'm not sure sending two SYN packets simultaneously would cause
scalability problems. Browsers have already bumped the number of
simultaneous HTTP connections they allow quite significantly, those
are much more expensive and the web did not collapse. Servers already
have to be able to handle lots of SYNs to deal with synflood attacks.
And the double SYN would only occur when
a) the user types in an address directly
b) the SPDYness of the site is unknown
c) and/or the SPDYness of the current network is unknown
For (a), many users get to sites via bookmarks or search engines
anyway. Those can embed the SPDYness of a site into the URL with a
spdy:// scheme. Yes, the browser will have to notice that its IP
address or wifi MAC changed for (c), but browsers are already
reading this data for geolocation so the modification shouldn't be too
hard.
> Is any of the CA roots in a country where authorities may get the
> master key?

Yes, of course. Government is not really a part of the SSL threat
model. The moment you consider the government to be your foe, things
get a LOT harder and less consumer friendly.
> Are all CA roots absolutely secure - how many people have
> or had access to them ? Is the physical access strictly controlled ?
There are quite complex rules CAs must follow; I think some of them
are about physical security, yes.
> At least SSH prevents MITM after the first request, and when combined with
> DNS it may make things a bit better than they are today.
I don't think you answered Gerv's concerns about the usability
problems with the SSH model. Any system that produces scary crypto
warnings as part of its normal operation is undeployable to the
masses.
Requiring a spdy:// scheme on hrefs (from search engines or arbitrary
pages) will never happen. The site serving the page would need to know
that both the browser in use and the server they are linking to
support SPDY. You might be able to pull that off for something like
the major search engines, as they re-crawl pages fairly frequently
and everything is dynamic anyway, but that's an awfully big ask to put
out there for all sites.
For it to be seamless it's basically going to have to be able to take
http:// requests and negotiate/discover the upgrade transparently.
Assuming or requiring anything else will seriously hold back its
deployment.
- I'd love to see more thought about the societal upgrade process,
  including the UIs for showing SPDY "protected" pages. It's not quite
  like SSL, with the model of "if you're entering a password or credit
  card, then look for the yellow HTTPS" -- here it would have to start
  with "if you see the gold star, then it's an extra protected site",
  and then over the course of N years the gold stars become
  commonplace, then we hit a tipping point where pro site owners start
  to worry about getting rejections if they don't support SPDY, and
  finally the late adopters take a decade to upgrade.
- yes, forget changing http: to spdy: -- way too much app code that
  assumes http/https, and too much user retraining. Plus, too little
  benefit.
- don't underestimate the pain of SSL:
  - wildcard certs are needed for subdomains, which are used in many
    apps. Wildcard certs are not free/cheap, and in fact this thread
    just saved my company >$1000/yr vs. a design we were going to use
    (thanks!).
  - for sites that don't already use it, it can be a giant PITA to
    get started.
  - for sites that already use it, there can be serious complications
    with mixed content warnings (IE), bad caching effects, CPU cost,
    etc.

Personally, I'd be sad if SPDY's latency wins don't get to market
because it requires SSL. 30+% is too good to pass up.

adam
(recent ex-googler, gadgets)
The problem is we already know that kind of UI doesn't work. Bad guys
can simply put a gold star in the web page content itself. Many users
don't seem to understand that some parts of the screen are trustable
and others aren't (not surprising really).
We don't know what kind of UI does work, it's still an open research question.
> - yes, forget changing http: to spdy: -- way too much app code that
> assumes http/https, and too much user retraining.
Sorry, I was unclear about this. I meant spdy:// as in a hint to the
browser to try SPDY first. Users would never see it. It'd be useful
only for links between sites ... for instance Google/Bing/Yahoo/etc
could discover SPDY supporting sites during the crawl and serve
spdy:// links for those, so bypassing any potentially slow discovery
process. If users type http:// directly then discovery would take
place as normal (whatever normal is).
On 20/12/2009, at 10:53 PM, Mike Hearn wrote:
--
Mark Nottingham http://www.mnot.net/
--
GMail Engineering
Google Switzerland GmbH