SPDY and the SSL requirement


Mike Hearn

Nov 22, 2009, 9:49:11 AM
to spdy-dev, br...@bitconjurer.org
Hiya,

I finally got a chance to read the SPDY spec today - it all makes
total sense to me and seems like a great upgrade to the web. With the
exception of the SSL requirement.

Bram Cohen has already made some strong arguments against tying SPDY
to SSL. I'd like to make some more and tie them all together.

The first thing is to figure out what the goal is. The SPDY spec
doesn't discuss this beyond saying "we believe that the long-term
future of the web depends on a secure network connection". In the
discussion on chromium-discuss Mike Belshe elaborated:

"When we think of protocols of the future, we think it is intolerable
that you could connect to your bank and not have the site actually be
your bank .... The amount of money lost annually due to failing to
protect
communications is absurd."

So there's both encryption and authentication. I'm not sure lack of
encryption is actually a problem in practice. Anecdotally, I read a
lot of reports of people being defrauded, but these reports always
boil down to phishing, malware infections or server-side security
breaches. I don't remember the last time I read about a security
incident where the problem was "cc details sniffed in transit". In
many ways, I think SSL is to wire security what UNIX/DAC is to desktop
security .... a protocol designed for the challenges of a different
time and for the majority of people no longer as useful.

In particular SSL has little to say about the first part of that
problem statement, connecting to what you think is your bank but it
actually isn't. Reliably communicating site identity isn't a protocol
problem - SSL + EV certs have solved that for good, for the sites that
need it. The problem is a user interface design issue. As SPDY is a
protocol there isn't much it can do here .... it can't stop people
confusing bank0famerica.com with the real thing. Enabling SSL by
default won't help - some phishing campaigns already proactively
enable SSL to get the lock icon anyway.

SSL doesn't cause any problems for script kiddies - they already have
point and click solutions for hijacking people's internet connections.
So SSL is primarily useful for stopping systematic snooping by the big
guys - telcos and governments. Whether you think the risk of this is
great enough to encrypt all traffic depends on your personal
politics.

But if SSL was free, none of that would matter. Why not do it?

SSL is not free. As already noted it breaks caching. Edge networks
aren't a replacement for HTTP level transparent caching. They can be
extremely expensive and there are only a few major players, placing
them out of reach for most providers. They take control out of those
who need it (people with restricted connections) and move it to those
who have better things to think about (the content providers). And
they don't solve the "last mile" problem. For instance Google Earth
has had problems with schools who can't use it because the traffic is
not cacheable, and a classroom full of children loading it
simultaneously crushed their uplinks. This is especially a problem
outside of North America and Europe, exactly the places SPDY should be
helping most. We have data on edge caching vs ISP proxy caching from
experiments with Google Maps, follow up with me internally if you want
pointers to this.

SSL can push users who are already CPU saturated over the edge and
make their experience unusable. I believe that was the conclusion from
studying the issue on one large Google product (again ask me on an
internal list for more details if you want them).

SSL is redundant in many cases. For instance if you access a website
over a 3G connection, that traffic is already encrypted to the base
station. To intercept it you'd need to tap the tower backhaul or
peering points .... so we're back to government being the threat
model. Ditto for cable or DSL.

SSL is a battery pig. Forcing SSL on for all traffic seriously
degrades battery life on smartphones because it is implemented on the
CPU (in contrast to the already present 3G/kasumi ciphering which is
done in hardware).

Finally, SSL is often unnecessary. Many sites (e.g. Wikipedia) serve
the same pages to all users who are logged out, and most users are
logged out. Encrypting that data achieves nothing.

I'm echoing Bram here but I'll say it again. Let's not tie a useful
upgrade to HTTP to something with significant cost that actually works
against the latency goals.

SSL has also been put forward as a backwards compatibility hack. I'm
not sure if the costs of SSL are really worth supporting users behind
buggy proxies - in particular if those buggy proxies are performing
content caching. It's possible that SPDY might make connections slower
for those users rather than faster! But hard data on the effectiveness
of upgrade mechanisms is needed.

thanks
-mike

Mike Belshe

Nov 22, 2009, 1:44:03 PM
to spdy...@googlegroups.com, br...@bitconjurer.org
Hey Mike,

Thanks for the thoughtful comments.  A few comments below.

The good news is that it is a lot easier to back away from SSL than it is to inject it later.  The big feature of SSL is not encryption; the big feature of SSL is server authentication.  When you're connecting to a site, you want to know you are connecting to the site you thought you were connecting to.  Why do we put the burden of understanding the security on the user?  How many usability studies do we need to do before we realize that users are not capable of discerning secure from insecure communications?

Also - keep in mind we're designing a protocol for the next 20 years.  If Moore's law holds (let's not debate this here though :-) we'll have more than 1000x more powerful computers by then.
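
That "more than 1000x" figure checks out under the usual rule of thumb of one doubling every two years (the two-year period is an assumption, not something stated in the thread):

```python
# Sanity-check the "1000x in 20 years" figure, assuming one doubling
# every two years (a common Moore's law rule of thumb).
years = 20
doubling_period_years = 2  # assumption
speedup = 2 ** (years / doubling_period_years)
print(speedup)  # 1024.0 -- i.e. "more than 1000x"
```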

But lastly, there is a real deployment problem for any new protocol if it isn't over SSL - see comments below.


On Sun, Nov 22, 2009 at 6:49 AM, Mike Hearn <he...@google.com> wrote:
Hiya,

I finally got a chance to read the SPDY spec today - it all makes
total sense to me and seems  like a great upgrade to the web. With the
exception of the SSL requirement.

That is good to hear :-)
 

Bram Cohen has already made some strong arguments against tying SPDY
to SSL. I'd like to make some more and tie them all together.

The first thing is to figure out what the goal is. The SPDY spec
doesn't discuss this beyond saying "we believe that the long-term
future of the web depends on a secure network connection".  In the
discussion on chromium-discuss Mike Belshe elaborated:

"When we think of protocols of the future, we think it is intolerable
that you could connect to your bank and not have the site actually be
your bank .... The amount of money lost annually due to failing to
protect
communications is absurd."

So there's both encryption and authentication. I'm not sure lack of
encryption is actually a problem in practice. Anecdotally, I read a
lot of reports of people being defrauded, but these reports always
boil down to phishing, malware infections or server-side security
breaches. I don't remember the last time I read about a security
incident where the problem was "cc details sniffed in transit". In
many ways, I think SSL is to wire security what UNIX/DAC is to desktop
security .... a protocol designed for the challenges of a different
time and for the majority of people no longer as useful.

There are many problems, indeed.

 

In particular SSL has little to say about the first part of that
problem statement, connecting to what you think is your bank but it
actually isn't. Reliably communicating site identity isn't a protocol
problem - SSL + EV certs have solved that for good, for the sites that
need it. The problem is a user interface design issue. As SPDY is a
protocol there isn't much it can do here .... it can't stop people
confusing bank0famerica.com with the real thing. Enabling SSL by
default won't help - some phishing campaigns already proactively
enable SSL to get the lock icon anyway.

Why should connecting to any site - even something like the NYTimes - not be authenticated?  Because it is too expensive?

 

SSL doesn't cause any problems for script kiddies - they already have
point and click solutions for hijacking people's internet connections.
So SSL is primarily useful for stopping systematic snooping by the big
guys - telcos and governments. Whether you think the risk of this is
great enough to encrypt all traffic depends on your personal
politics.

Nobody claimed this was one-stop-shopping to fix all security issues.
 

But if SSL was free, none of that would matter. Why not do it?

SSL is not free. As already noted it breaks caching. Edge networks
aren't a replacement for HTTP level transparent caching.

Edge networks and transparent proxies solve different problems.  Edge networks solve latency problems for content providers.  Transparent proxies can be used for all sorts of purposes (legit or not!).  
 
They can be
extremely expensive and there are only a few major players, placing
them out of reach for most providers. They take control out of those
who need it (people with restricted connections) and move it to those
who have better things to think about (the content providers). And
they don't solve the "last mile" problem. For instance Google Earth
has had problems with schools who can't use it because the traffic is
not cacheable, and a classroom full of children loading it
simultaneously crushed their uplinks. This is especially a problem
outside of North America and Europe, exactly the places SPDY should be
helping most. We have data on edge caching vs ISP proxy caching from
experiments with Google Maps, follow up with me internally if you want
pointers to this.

SSL can push users who are already CPU saturated over the edge and
make their experience unusable. I believe that was the conclusion from
studying the issue on one large Google product (again ask me on an
internal list for more details if you want them).

I don't believe this is a real problem.  If you've got real data on this, I'd love to see it.

 

SSL is redundant in many cases. For instance if you access a website
over a 3G connection, that traffic is already encrypted to the base
station. To intercept it you'd need to tap the tower backhaul or
peering points .... so we're back to government being the threat
model. Ditto for cable or DSL.

For encryption yes, server-auth no.
 

SSL is a battery pig. Forcing SSL on for all traffic seriously
degrades battery life on smartphones because it is implemented on the
CPU   (in contrast to the already present 3G/kasumi ciphering which is
done in hardware).

Data would be good; would a 1% reduction in battery life make it a "pig"?
 

Finally, SSL is often unnecessary. Many sites (e.g. Wikipedia) serve
the same pages to all users who are logged out, and most users are
logged out. Encrypting that data achieves nothing.

Server-auth.
 

I'm echoing Bram here but I'll say it again. Let's not tie a useful
upgrade to HTTP to something with significant cost that actually works
against the latency goals.

SSL has also been put forward as a backwards compatibility hack. I'm
not sure if the costs of SSL are really worth supporting users behind
buggy proxies - in particular if those buggy proxies are performing
content caching. It's possible that SPDY might make connections slower
for those users rather than faster! But hard data on the effectiveness
of upgrade mechanisms is needed.

It's unclear if you can deploy *any* protocol changes safely over port 80.  The reason pipelining isn't deployed in any major browser (IE, Firefox, Safari, Webkit) is because it breaks through certain proxies.  Trying to layer a new protocol over port 80 where existing transparent proxies will try to interpret the bits *will* fail.  We need more concrete data on how much it will fail; the websockets folks are doing some experimentation with upgrade headers now, but more data is needed.

But I agree on the last point, and we will get that data.

Mike
 

thanks
-mike

Bram Cohen

Nov 23, 2009, 4:25:28 PM
to spdy-dev
Some basic thoughts -

SSL can do very little against phishing, and one should have no
delusions that it can. Thankfully there seems to mostly be agreement
here that SPDY shouldn't check certs, or should only report certs as
extra security, without warning users that unsigned certs are
insecure, which is the batty practice web browsers follow for https
today.

SSL is, however, rather useful for getting around broken transparent
proxies. The fact that the major web browsers currently don't do
pipelining for exactly that reason is an extremely alarming data point
which makes me think anything other than an absolutely foolproof
method of avoiding it is going to fail.

The general issue of performance should be taken seriously. I suspect
that if you stick to AES-128 and totally reasonable ECC then that's a
non-issue, although I don't know the current status of their support
in SSL. Hopefully the bullshit FUD around ECC has mostly faded now
that the RSA patent has run out and no one's got any reason to supply
it any more.

It's also important to be clear on how much encryption really is
happening. The current transition plan, as I understand it, involves
the very first request to a site being sent in the clear, which is a
completely reasonable practice which leaks a little but not a lot of
information. In practice I'm not sure how much more that leaks than
the IP addresses you're connecting to though, which can't be hidden
anyway.

Mark Nottingham

Nov 23, 2009, 4:51:36 PM
to spdy...@googlegroups.com
Broken transparent proxies are just one thing in a laundry list of reasons why pipelining can't be used on the open Internet; no need to attribute *all* of the Web's evils to them :)

Cheers,


On 24/11/2009, at 8:25 AM, Bram Cohen wrote:

> SSL is, however, rather useful for getting around broken transparent
> proxies. The fact that the major web browsers currently don't do
> pipelining for exactly that reason is an extremely alarming data point
> which makes me think anything other than an absolutely foolproof
> method of avoiding it is going to fail.


--
Mark Nottingham http://www.mnot.net/

Bram Cohen

Nov 23, 2009, 5:45:50 PM
to spdy-dev
On Nov 23, 1:51 pm, Mark Nottingham <m...@mnot.net> wrote:
> Broken transparent proxies are just one thing in a laundry list of reasons why pipelining can't be used on the open Internet; no need to attribute *all* of the Web's evils to them :)

I would like to hear what the other problems are. I once upon a time
created my own TCP-based protocol which uses pipelining, and it
transfers a fair bit of traffic quite successfully. (Although I
suppose you could be referring to the problem of not being able to
make low-latency requests if you already shoved a whole bunch of them
down a pipe, which is fair enough.)

Mark Nottingham

Nov 23, 2009, 10:23:02 PM
to spdy...@googlegroups.com
Putting aside the question of "SSL or not" (I have concerns as well), the protocol draft currently doesn't mention SSL at all, and while I've heard Mike and Roberto say that SPDY runs over it repeatedly, I'll admit to being confused when the bytes on the wire didn't match up to my expectations.

Maybe explain the use of SSL in the "Connections" section?

Cheers,

Witold Baryluk

Nov 24, 2009, 11:19:37 AM
to spdy...@googlegroups.com
Hi, is there a way to negotiate an SSL connection with authentication
of all packets, but no encryption? This way server replies will be
publicly known (and can be cached), but they will all be signed,
so it will be safe to assume they are trusted (even if they
are served from some cache/proxy).

I don't know of any such usage of SSL.

I know we could add some special fields to SPDY which
would carry signed hashes, but as we probably all know,
it is quite error prone to design, build and implement
cryptographic systems, even for experts. Especially
since encryption and authentication schemes need to be
extensible, primarily by negotiation between server
and client.

So I'm mainly concerned with authentication of all packets.
It shouldn't have a big performance impact.
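
A minimal sketch of that idea in Python (illustration only: a shared-key HMAC stands in for the signature, but a real design would need a public-key signature tied to the server's certificate, since caches and proxies must not be able to forge the tag):

```python
import hashlib
import hmac

# Hypothetical "authenticated but unencrypted" response: the body
# travels in the clear (so any proxy can cache it), accompanied by a
# tag the client verifies. A shared demo key is used purely for
# illustration; intermediaries must not hold real signing keys.
SERVER_KEY = b"demo-key"  # stand-in for real key material

def sign_response(body: bytes) -> tuple[bytes, str]:
    tag = hmac.new(SERVER_KEY, body, hashlib.sha256).hexdigest()
    return body, tag  # body stays plaintext and cacheable

def verify_response(body: bytes, tag: str) -> bool:
    expected = hmac.new(SERVER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

body, tag = sign_response(b"<html>public page</html>")
print(verify_response(body, tag))          # True
print(verify_response(b"tampered", tag))   # False
```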

About the performance of encryption: of course for the next 10 years
we need to use AES-128 as the primary encryption scheme, probably
with DH and RSA for session key negotiation, but other
schemes like ECC are also getting attention because of their
high performance in asymmetric cryptography.

AES is implemented in hardware on many CPUs, notably ARM, Via C7, Nano,
and Intel Core i9. And it brings a really big improvement. It is also
possible that many newer NICs will have embedded encryption
units, just like TCP offload, etc.

James A. Morrison

Nov 24, 2009, 11:59:35 PM
to spdy...@googlegroups.com
2009/11/24 Witold Baryluk <witold....@gmail.com>:
> Hi, is there a way to negotiate SSL connection with authentification
> of all packets, but no encryption?

I haven't looked at TLS enough to say no for sure, but I don't believe
so. I know with IPsec you can create a tunnel that only has
authentication or integrity.
--
Thanks,
Jim
http://phython.blogspot.com

Mark Nottingham

Nov 25, 2009, 4:49:49 AM
to spdy...@googlegroups.com
You can use a null cipher with SSL, although some implementations don't allow this for security reasons IIRC.

That said, I'd like to explore in-channel authentication and integrity, mostly to avoid the setup costs of SSL.

Cheers,

Witold Baryluk

Nov 27, 2009, 1:04:36 PM
to spdy...@googlegroups.com
2009/11/25 Mark Nottingham <mn...@mnot.net>:
> You can use a null cipher with SSL, although some implementations don't allow this for security reasons IIRC.
>
> That said, I'd like to explore in-channel authentication and integrity, mostly to avoid the setup costs of SSL.
>

Probably this will be helpful : http://dnscurve.org/crypto.html

And yes, in IPsec you can perform authentication without
encryption. Quite useful I think (e.g. for file server operations
with public data, where we don't want a malicious user to delete
any content or spoof server replies and deliver wrong content), where
the encryption overhead is too big, but we still need some integrity
and safety (but not necessarily confidentiality).

Just like in most HTTP traffic.

Mike Hearn

Dec 2, 2009, 7:04:16 PM
to spdy-dev
> The good news is that it is a lot easier to back away from SSL than it is to
> inject it later.

If SSL is mandated as a requirement of a SPDY connection, how would it
be backed off? If the protocol allows the server to opt out of SSL
then an attacker can always downgrade it.

> The big feature of SSL is not encryption; the big feature
> of SSL is server authentication.  When you're connecting to a site, you want
> to know you are connecting to the site you thought you were connecting to.

Absolutely, but plain old SSL only works for people who understand
URLs and domain names, i.e., not most people.

Why would anybody try and intercept raw HTTP requests when they can
instead just send you a mail with a link like

https://www.facebook.com.system.cn/login.php

and people click on it?
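
A quick illustration of why that link fools people - the hostname the browser actually resolves belongs to system.cn, not facebook.com:

```python
from urllib.parse import urlparse

# The deceptive link reads like facebook.com to a casual user, but the
# browser connects to a host under the attacker's domain, system.cn.
host = urlparse("https://www.facebook.com.system.cn/login.php").hostname
print(host)                           # www.facebook.com.system.cn
print(host.endswith(".system.cn"))    # True -- the actual domain
print(host.endswith("facebook.com"))  # False -- not Facebook's domain
```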

The problem of knowing who you are connected to is really a UI
problem. Hijacking unencrypted connections on the wire is just really
hard; for one, it requires physical proximity to your victim.
Exploiting the poor usability of URLs is a much preferred approach.

But let's say that these problems are indeed solved in the next few
years. SSL would help establish site identity ... but only if paired
with EV certs. Gerv Markham has made some very strong arguments as to
why the SSH-style key change model has problems that make it
undeployable. EV certs mean you can confidently present some kind of
human-readable string as the site's "identity" to the user, as long as
you can figure out a good UI for it.

I think most of my objections would disappear if the mandate was for
SSL with a null cipher by default, and some kind of robust upgrade
process. If the authentication process was asynchronous that'd be even
better, because humans make trust decisions rather slowly so it's not
really useful to put authentication on the critical path. But removing
encryption and making authentication async sounds a lot like something
which isn't SSL at all.

> Also - keep in mind we're designing a protocol for the next 20 years.  If
> Moore's law holds (lets not debate this here though :-)  we'll have more
> than 1000x more powerful computers by then.

Well, I have infinite faith in Intel and AMD :) Sadly my faith in
battery designers is less strong. If only they followed Moore's law
too ....

> Why should connecting to any site - even something like the NYTimes not be
> authenticated?  Because it is too expensive?

What does authentication even mean to the user? Having the URL bar
display "New York Times" instead of nytimes.com? Then sure, it is
expensive. You need infrastructure to ensure that only the
organization most people would recognize as the "New York Times" is
able to get a certificate containing that string, and that whoever is
applying for it really does have the authority to buy certificates on
that organization's behalf, and that the people verifying these facts
are audited to ensure they are actually doing these things.

Maybe one day EV certs will be ten a penny but today they cost
hundreds of dollars, because making those guarantees is expensive.

It also only has a point for sites where the user already has some
trust in the organization behind it. That's true for the NYT but not
for many pages. If I search for "monty python quotes" on Google and
click the first link, then having a cryptographic guarantee that I'm
talking to montyquotes.com is useless. I'm no better off than before.
Are they really quotes from Monty Python? Who knows! An assertion
about the integrity of DNS is just noise to me. I don't know the
people behind the site, so I have to find some other way to
authenticate what I'm seeing (like asking a friend).

I realize my argument seems like a case of "perfect is the enemy of
the good". But mandating SSL has costs and it's not clear to me that
the benefits outweigh them.

> I don't believe this is a real problem.  If you've got real data on this,
> I'd love to see it.
> Data would be good; would a 1% reduction in battery life make it a "pig"?

I'll try and get you the data I'm thinking of. The Android team have
hard stats on battery impact.

Adam Langley

Dec 2, 2009, 8:30:35 PM
to spdy...@googlegroups.com
On Wed, Dec 2, 2009 at 4:04 PM, Mike Hearn <he...@google.com> wrote:
> If SSL is mandated as a requirement of a SPDY connection, how would it
> be backed off? If the protocol allows the server to opt out of SSL
> then an attacker can always downgrade it.

I think Mike is talking about a change to the protocol at a future date.

> I think most of my objections would disappear if the mandate was for
> SSL with a null cipher by default, and some kind of robust upgrade
> process.

If you type "http://..." as the URL, and we use SPDY to fetch it, the
certificate will be ignored in the current plans, although it will
still be encrypted with a random key.

> I'll try and get you the data I'm thinking of. The Android team have
> hard stats on battery impact.

I don't believe the battery impact is important on a device powerful
enough to render web pages like Gmail etc. If the Android team have
numbers that suggest otherwise, I'd love to see them. If I turn out to
be wrong, I'm sure there are things that we can do to address the
problem.


AGL

egnor

Dec 9, 2009, 11:24:14 AM
to spdy-dev
On Dec 2, 5:30 pm, Adam Langley <a...@chromium.org> wrote:
> I don't believe the battery impact is important on a device powerful
> enough to render web pages like Gmail etc. If the Android team have
> numbers that suggest otherwise, I'd love to see them. If I turn out to
> be wrong, I'm sure there are things that we can do to address the
> problem.

SSL has a nontrivial impact, and we've been reluctant to use SSL for
all services as a result.

In the most recent power-meter test I did on the subject (a few months
ago now):

- fetch 10K over HTTP: 0.45 mAH, tx=892B, rx=10960B, cpu=0.12s
- fetch 10K over HTTPS: 0.60 mAH, tx=1378B, rx=12036B, cpu=2.81s
- fetch 100K over HTTP: 0.66 mAH, tx=2945B, rx=106136B, cpu=0.15s
- fetch 100K over HTTPS: 0.83 mAH, tx=6047B, rx=107267B, cpu=2.34s

Byte counts are measured at the interface, so include TCP ACK traffic
and so on. I believe this is *with* aggressive session caching. You
see it takes *seconds* of CPU time to set up SSL. This was on older
hardware (G1, I think), I think we've improved this since, and I'm
sure we could improve it some more, but this is a seriously nontrivial
impact. (I should re-run these power tests.)
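
Working those measurements into relative overheads (just arithmetic on the figures above):

```python
# (http_mAH, https_mAH, http_cpu_s, https_cpu_s) from the power-meter
# test quoted above.
measurements = {
    "10K":  (0.45, 0.60, 0.12, 2.81),
    "100K": (0.66, 0.83, 0.15, 2.34),
}

for size, (e_http, e_https, c_http, c_https) in measurements.items():
    energy_overhead = (e_https / e_http - 1) * 100
    cpu_ratio = c_https / c_http
    print(f"{size}: +{energy_overhead:.0f}% energy, {cpu_ratio:.0f}x CPU")
# 10K: +33% energy, 23x CPU
# 100K: +26% energy, 16x CPU
```

A third more battery per small fetch is exactly the cost being debated here, and the fact that the HTTPS CPU time is roughly constant across transfer sizes suggests connection setup, not the bulk cipher, dominates.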

The Android team is very interested in SPDY -- you've already heard
from our browser team, and you'll be hearing more from the services
team -- but we can't neglect the overhead of things like SSL.

That said, it's possible that if we tighten up our code stack (it's a
pile of Java teetering on top of a pile of OpenSSL, neither of which
is particularly well-written) and pay a lot of attention to
eliminating round trips and cipher computations in the common case,
then maybe we can get to the point where SSL is very cheap. I would
very much *like* to get there. Sure, 3G over-the-air traffic is
encrypted, but frankly we don't trust the carriers and their proxies,
and in any case devices use WiFi too, and random WiFi hotspots are
extremely untrustworthy. (Active malice is relatively uncommon
compared to weird "transparent" proxies and payment systems.)

SPDY also offers the possibility of much more aggressive connection
caching than HTTP or HTTPS, which can amortize the cost of connection
setup.

I'm not sure who I'm agreeing with here, but mostly I would say that I
would like SPDY to use SSL, but for a lot of attention to be paid to
optimizing the SSL negotiation process. Transparent proxies are
basically evil anyway. *Non-transparent* proxies (e.g. the school in
Elbonia) ought to be possible with SPDY, no? In fact, SPDY could be
used selectively on either side of a proxy...

-- egnor

Roberto Peon

Dec 9, 2009, 12:56:23 PM
to spdy...@googlegroups.com
That's the idea. Only "authorized" proxies, heavily optimized SSL with lots of connection resharing.
-=R

Adam Langley

Dec 9, 2009, 1:09:00 PM
to spdy...@googlegroups.com
On Wed, Dec 9, 2009 at 8:24 AM, egnor <eg...@google.com> wrote:
> In the most recent power-meter test I did on the subject (a few months
> ago now):
>
> - fetch 10K over HTTP: 0.45 mAH, tx=892B, rx=10960B, cpu=0.12s
> - fetch 10K over HTTPS: 0.60 mAH, tx=1378B, rx=12036B, cpu=2.81s
> - fetch 100K over HTTP: 0.66 mAH, tx=2945B, rx=106136B, cpu=0.15s
> - fetch 100K over HTTPS: 0.83 mAH, tx=6047B, rx=107267B, cpu=2.34s

Thanks so much for real numbers! It's clear that Android has a serious
problem with SSL. I can't promise anything anytime soon, but this is
obviously something that needs to be fixed.

(Note that, according to the EBACS benchmarks[1], a PII at 333MHz can
perform a 1024-bit RSA encryption (the public key operation needed to
setup a TLS connection) in under a millisecond. So I don't believe
that even a G1 should have any issues with TLS.)

[1] http://bench.cr.yp.to
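
For scale, the client-side public-key operation AGL refers to is a single modular exponentiation with a small exponent, which is easy to demonstrate directly (the modulus here is a random 1024-bit odd number standing in for a real RSA modulus - illustration only, not a valid key):

```python
import secrets
import time

n = secrets.randbits(1024) | (1 << 1023) | 1  # force 1024 bits, odd
e = 65537                                     # common RSA public exponent
m = secrets.randbelow(n)                      # stand-in for the premaster secret

start = time.perf_counter()
c = pow(m, e, n)  # the RSA "encrypt" step of the handshake
elapsed = time.perf_counter() - start

print(f"1024-bit modexp: {elapsed * 1000:.3f} ms")
```

Even in interpreted Python this completes far faster than the multi-second figures above, which supports the point that the G1's cost must come from overhead elsewhere in the stack.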


AGL

Marko Vuksanovic

Dec 9, 2009, 1:58:21 PM
to spdy...@googlegroups.com
Hi guys,

I am closely watching the "SPDY story" and personally I believe that it is something revolutionary. I was just wondering what you think about the fact that SSL is not free - do you consider the fact that you have to pay for a certificate an obstacle? I live in Croatia, Europe, and spending a few hundred dollars here is considered a big investment. Most people have a monthly wage which is lower than the price of an SSL certificate (I compared to the prices at Verisign and Thawte).  I'm not sure about other countries but I believe Croatia is not the only one out there... And if you want to have high penetration in the market - you need to make the technology available to the masses, not keep it reserved for the rich ones... I hope you get my point...

The other thing is about the numbers ...

- fetch 10K over HTTP: 0.45 mAH, tx=892B, rx=10960B, cpu=0.12s
- fetch 10K over HTTPS: 0.60 mAH, tx=1378B, rx=12036B, cpu=2.81s
- fetch 100K over HTTP: 0.66 mAH, tx=2945B, rx=106136B, cpu=0.15s
- fetch 100K over HTTPS: 0.83 mAH, tx=6047B, rx=107267B, cpu=2.34s

What are "tx" and "rx"? What do those numbers represent?
--
-Marko Vuksanovic

Mike Belshe

Dec 9, 2009, 2:09:16 PM
to spdy...@googlegroups.com
On Wed, Dec 9, 2009 at 10:58 AM, Marko Vuksanovic
<markovu...@gmail.com> wrote:
> Hi guys,
> I am closely watching the "SPDY story" and personally I believe that it is
> something revolutionary. I was just wondering what do you think about the
> fact that SSL is not free - do you consider the fact that you have to pay
> for a certificate an obstacle. I live in Croatia, Europe, and spending a few
> hundreds of dollars here is considered to be a big investment. Most people
> have a monthly wage which is lower than the price of an SSL certificate (I
> compared to the price at Verisign and Thawte).  I'm not sure about other
> countries but I believe Croatia is not the only one out there... And if you
> want to have a high penetration to the market - you need to make the
> technology available to the masses, not just to make it booked for the rich
> ones... I hope you get my point...
> The other thing is about the numbers ...

I see costs going down over time. And, if this takes off, I would see
costs going down even faster, with new CAs springing up as well.

But more importantly, for the big sites, this is just not an issue.
They can afford certificates and most already have them. So for 80+%
(?) of the content, there is no problem.


>>
>> - fetch 10K over HTTP: 0.45 mAH, tx=892B, rx=10960B, cpu=0.12s
>> - fetch 10K over HTTPS: 0.60 mAH, tx=1378B, rx=12036B, cpu=2.81s
>> - fetch 100K over HTTP: 0.66 mAH, tx=2945B, rx=106136B, cpu=0.15s
>> - fetch 100K over HTTPS: 0.83 mAH, tx=6047B, rx=107267B, cpu=2.34s
>
> What are "tx" and "rx"? What do those numbers represent?

transmit and receive.

Mike

Vitaliy Lvin

Dec 9, 2009, 2:13:35 PM
to spdy-dev
On Wed, Dec 9, 2009 at 1:58 PM, Marko Vuksanovic <markovu...@gmail.com> wrote:
Hi guys,

I am closely watching the "SPDY story" and personally I believe that it is something revolutionary. I was just wondering what do you think about the fact that SSL is not free - do you consider the fact that you have to pay for a certificate an obstacle. I live in Croatia, Europe, and spending a few hundreds of dollars here is considered to be a big investment. Most people have a monthly wage which is lower than the price of an SSL certificate (I compared to the price at Verisign and Thawte).  I'm not sure about other countries but I believe Croatia is not the only one out there... And if you want to have a high penetration to the market - you need to make the technology available to the masses, not just to make it booked for the rich ones... I hope you get my point...

I think I get your point (I also grew up in a country with monthly wages lower than the cost of a cert). But it's been my experience that in Eastern Europe and the developing world in general the costs of internet traffic are a lot higher than in the US. So hosting anything more than a personal website with a few pages out of Croatia is expensive. And for a small personal website serving static pages HTTP is generally pretty adequate.

The other thing to remember is economy of scale. Once certs become virtually mandatory (whether because of SPDY, or just plain HTTPS), the prices will go down. It's already happening. I think I got some sort of cert for free with domain name registration for my personal site.

Adam Langley

Dec 9, 2009, 2:26:28 PM
to spdy...@googlegroups.com
On Wed, Dec 9, 2009 at 10:58 AM, Marko Vuksanovic
<markovu...@gmail.com> wrote:
SSL is not free - do you consider the fact that you have to pay for a
certificate an obstacle. I live in Croatia, Europe, and spending a few
hundreds of dollars here is considered to be a big investment.

Current CA prices are more like $20-$50 (see GoDaddy or SSLMatic).

This is still too high and reflects the failures of the CA system as
currently formed.

Note also that we might be enabling SPDY use over self-signed certs,
although this is still in flux.



AGL

Mike Hearn

Dec 9, 2009, 4:43:20 PM
to spdy-dev
You can get a free SSL cert from StartSSL. It's recently become
trusted by IE, so as far as I can tell there's now a "no catch" way of
getting an SSL cert.

StartSSL charge more for certs that represent higher trust levels - as
I said earlier, EV certs say a whole lot more about identity than a
"class 1" cert does and cost more as a result. But a basic SSL setup
that asserts nothing more than the integrity of DNS should be free or
cheap - at least, cheap relative to the cost of running the machine
that hosts the content.

That said, for sites that just serve static content to whoever
requests it, encryption buys you nothing and authentication opens up a
giant can of UI/trust model worms (look at the ssh vs ssl vs wot
debates that rage on even today). So it can still be seen as a
needless cost.

I agree with the idea that in the future, SSL should be so cheap, CPU-
and dollar-wise, that it's basically free. But there are two big
caveats:

1) SPDY will start appearing on people's radars soon (next year?), not
in 10 years. So it has to provide real benefit today. Mobile latency
is a huge pain point and people will want the benefits there first. It
appears that on high-end devices deployed in the wild today, the
overhead of SSL could cripple the latency benefits of SPDY (I wonder
how the iPhone compares?)

2) One of the key ideas for making SSL faster involves changes to the
protocol that "should" be backwards compatible. Won't that just run
smack into the same deployment issues that raw SPDY faces?

Mike Hearn

Dec 9, 2009, 5:00:02 PM
to spdy-dev
> Thats the idea. Only "authorized" proxies, heavily optimized SSL with lots
> of connection resharing.

Hmm, that reminds me - how does one implement a for-pay wifi hotspot
with SPDY? A new protocol? Written instructions to visit a certain
page somewhere nearby?

Roberto Peon

Dec 9, 2009, 5:07:09 PM
to spdy...@googlegroups.com
Good question. I don't believe anything changes substantially. For-pay wifi hotspots already have to deal with HTTPS, which as far as they're concerned is the same bag of worms. At worst, you'll fall back to HTTP for the initial page, then get directed to the appropriate HTTPS/SPDY site for auth, etc.
-=R

Marko Vuksanovic

Dec 9, 2009, 6:09:34 PM
to spdy...@googlegroups.com
> Note also that we might be enabling SPDY use over self-signed certs,
> although this is still in flux.

Ok, self-signed certs might be a way to go. But what about users who are behind strict firewalls and proxies that do not allow access to web sites with self-signed certs? Ericsson (at least the department in Croatia) is an example of such a network.
--
-Marko Vuksanovic

Adam Langley

Dec 9, 2009, 6:11:45 PM
to spdy...@googlegroups.com
On Wed, Dec 9, 2009 at 3:09 PM, Marko Vuksanovic
<markovu...@gmail.com> wrote:
> Ok, self signed certs might be a way to go. But what about the users who are
> behind strict firewalls and proxies that do not allow access to web sites
> with self signed certs - Ericsson (at least the department in croatia) is an
> example of such a network.

There are some networks that are so damaged that one cannot do
anything about them.


AGL

Roberto Peon

Dec 9, 2009, 6:14:54 PM
to spdy...@googlegroups.com
The intention is to allow for "authorized" proxies. If the Ericsson proxy decides not to access web sites with self-signed certs, then so be it.
I imagine that schools, etc, may have similar issues with the same resolution.

-=R

Adam Langley

Dec 15, 2009, 7:51:40 PM
to spdy...@googlegroups.com
On Wed, Dec 9, 2009 at 10:09 AM, Adam Langley <a...@chromium.org> wrote:
> Thanks so much for real numbers! It's clear that Android has a serious
> problem with SSL. I can't promise anything anytime soon, but this is
> obviously something that needs to be fixed.

I benchmarked an Android phone performing an SSL setup and teardown with:

% time ./adb shell '/sqlite_stmt_journals/openssl s_client -connect
mail.google.com:443 < /dev/null >> /dev/null'
real 0m0.354s

Which is much more reasonable. `openssl speed rsa` suggests that the
phone can do >2000 1024-bit RSA public ops/sec as well.

So, if SSL connections are taking 2 seconds, the problem is above
OpenSSL. Unfortunately, I don't know anything about the upper layers
of Android.


AGL

Costin Manolache

Dec 15, 2009, 8:14:46 PM
to spdy...@googlegroups.com
Did you try over Wifi or 3G ? G1 or Droid ? 
I believe the 2 sec is for the 10k http download over SSL, not just connect.

Costin
 

Adam Langley

Dec 15, 2009, 8:34:34 PM
to spdy...@googlegroups.com
On Tue, Dec 15, 2009 at 5:14 PM, Costin Manolache <cos...@gmail.com> wrote:
> Did you try over Wifi or 3G ? G1 or Droid ?
> I believe the 2 sec is for the 10k http download over SSL, not just connect.

Wifi and Droid.

The difference between the 10K and 100K (non-SSL) numbers that egnor
gave suggests that bandwidth was not a major limiting factor. The
absolute value of the 10K non-SSL number (0.12 seconds) suggests that
the server was quite close (RTT < 60 milliseconds), thus it wasn't 3G.

mail.google.com is 50ms RTT from my test device.

Given all that, I think I matched the setup reasonably well.
Downloading an extra 10K wouldn't have significantly affected the
result. The numbers I got are accounted for almost entirely by
network overhead:

There are four round trips:

SYN + SYNACK: 50ms
ClientHello etc + Server Hello etc: 50ms
CKS + CCS + Finished (both ways): 50ms
FIN + FINACK: 50ms

... totaling 200ms, plus the 150ms of local overhead[1], giving 350ms.
This supports my guess that computation is not a major factor for the
client.


[1] overhead measured by: time ./adb shell
'/sqlite_stmt_journals/openssl s_client -connect 127.0.0.1:443 <
/dev/null >> /dev/null'

AGL

Costin Manolache

Dec 15, 2009, 9:25:29 PM
to spdy...@googlegroups.com
Over T-mobile on a touch, 3 times:

time adb shell "/data/openssl s_client -connect  www.google.com:443 < /dev/null >> /dev/null"
depth=1 /C=US/O=Google Inc/CN=Google Internet Authority
verify error:num=20:unable to get local issuer certificate
verify return:0
DONE

real 0m0.979s
user 0m0.004s
sys 0m0.004s

time adb shell "/data/openssl s_client -connect  www.google.com:443 < /dev/null >> /dev/null"
depth=1 /C=US/O=Google Inc/CN=Google Internet Authority
verify error:num=20:unable to get local issuer certificate
verify return:0
DONE

real 0m2.708s
user 0m0.000s
sys 0m0.004s

 time adb shell "/data/openssl s_client -connect  www.google.com:443 < /dev/null >> /dev/null"
depth=1 /C=US/O=Google Inc/CN=Google Internet Authority
verify error:num=20:unable to get local issuer certificate
verify return:0
DONE

real 0m4.889s
user 0m0.004s
sys 0m0.000s

Costin

Costin Manolache

Dec 15, 2009, 9:30:15 PM
to spdy...@googlegroups.com
For reference - 
adb shell "cat /data/a | /data/nc  mail.google.com 80" ( where 'a' contains a GET /\nHost:mail.google.com\n\n ) - 
real 0m0.441s
user 0m0.004s
sys 0m0.000s

real 0m0.430s
user 0m0.000s
sys 0m0.008s

real 0m0.571s
user 0m0.004s
sys 0m0.000s

I have 2 'bars' out of 4, but it seems to be 3G.

Costin

Mike Hearn

Dec 16, 2009, 9:39:00 AM
to spdy-dev
> Good question. I don't believe anything changes substantially. For-pay wifi
> hotspots already have to deal with HTTPS, which as far as they're concerned,
> is the same bag of worms. At worst, you'll fall-back to HTTP for the initial
> page, then get directed to the appropriate HTTPS/SPDY site for auth, etc.

The difference being, users don't type https:// in directly, so just
giving the user an error page if they try to use SSL before paying is
good enough.

I forgot what the bootstrapping mechanism is for SPDY, and now can't
find it in the specs. I think there was discussion of something in
DNS, or some ability to remember SPDY capable servers? At any rate,
it'll have to be transparent to the user. Which means that if
google.com starts supporting SPDY and the user opens their browser on
such a wifi network, they will be greeted with a cert error rather
than a payment page.

Mike Hearn

Dec 16, 2009, 9:46:47 AM
to spdy-dev
> So, if SSL connections are taking 2 seconds, the problem is above
> OpenSSL. Unfortunately, I don't know anything about the upper layers
> of Android.

I wrote an Android app that happened to browse to an SSL protected
page as part of its operation. There was a lot of garbage collection
going on, I think the Java part of the stack is pretty inefficient.
I'm sure it could be improved, and though G1 era hardware is not
exactly "old" I guess you could write it off in the SPDY timeframe.

I still think looking at 3G is misleading. We can say, "we don't trust
carrier proxies" but they are there for policy reasons and I suspect
operators will not be impressed at attempts to forcibly take them
away. This is especially true as LTE/4G is expected to shift the
bottleneck from the radio interface to the tower backhauls, giving
operators big incentives to add multi-layer caching directly to tower
sites as a way to reduce deployment costs.

As operators are an unavoidable MITM I don't see a way to prevent
downgrades on SPDY sites. And the hop after the operator is the
backbone, to which the only real threat is governments. And government
isn't really a part of the SSL threat model anyway. That's why I think
SSL over 3G makes little sense.

Mike Belshe

Dec 16, 2009, 2:19:57 PM
to spdy...@googlegroups.com
I'm not quite following you, but this is not true.

Currently we don't auto-convert http to https.  I'd like to find a good way to do it though :-)

Mike
 

Mike Belshe

Dec 16, 2009, 2:22:51 PM
to spdy...@googlegroups.com
On Wed, Dec 16, 2009 at 6:46 AM, Mike Hearn <he...@google.com> wrote:
> So, if SSL connections are taking 2 seconds, the problem is above
> OpenSSL. Unfortunately, I don't know anything about the upper layers
> of Android.

I wrote an Android app that happened to browse to an SSL protected
page as part of its operation. There was a lot of garbage collection
going on, I think the Java part of the stack is pretty inefficient.
I'm sure it could be improved, and though G1 era hardware is not
exactly "old" I guess you could write it off in the SPDY timeframe.

It's not a matter of writing it off - it's about understanding if there are implementation issues or protocol issues.

If the problem is implementation, then we shouldn't worry about it while designing the protocol.
 

I still think looking at 3G is misleading. We can say, "we don't trust
carrier proxies" but they are there for policy reasons and I suspect
operators will not be impressed at attempts to forcibly take them
away. This is especially true as LTE/4G is expected to shift the
bottleneck from the radio interface to the tower backhauls, giving
operators big incentives to add multi-layer caching directly to tower
sites as a way to reduce deployment costs.

As operators are an unavoidable MITM I don't see a way to prevent
downgrades on SPDY sites. And the hop after the operator is the
backbone, to which the only real threat is governments. And government
isn't really a part of the SSL threat model anyway. That's why I think
SSL over 3G makes little sense.


SPDY doesn't prevent all proxies - it only prevents transparent proxies.  Network operators can still use explicit proxies (configured on the device).  So carriers can do transcoding of images, caching, etc.  If the origin servers move to SSL, however, they won't be able to do this (just like they can't do it today).

Mike
 

Costin Manolache

Dec 16, 2009, 4:27:35 PM
to spdy...@googlegroups.com
On Wed, Dec 16, 2009 at 11:22 AM, Mike Belshe <mbe...@google.com> wrote:


On Wed, Dec 16, 2009 at 6:46 AM, Mike Hearn <he...@google.com> wrote:
> So, if SSL connections are taking 2 seconds, the problem is above
> OpenSSL. Unfortunately, I don't know anything about the upper layers
> of Android.

I wrote an Android app that happened to browse to an SSL protected
page as part of its operation. There was a lot of garbage collection
going on, I think the Java part of the stack is pretty inefficient.
I'm sure it could be improved, and though G1 era hardware is not
exactly "old" I guess you could write it off in the SPDY timeframe.

It's not a matter of writing it off - it's about understanding if there are implementation issues or protocol issues.

If the problem is implementation, then we shouldn't worry about it while designing the protocol.

The Java part is mostly a thin JNI layer on top of OpenSSL (for Android). The GC you see
is probably due to a bunch of new classes and libraries being loaded - the second connection you make will
be much faster. There are also some extra file writes (a cache for session reuse - a second connection to the
same host will try to reuse the session and skip a round trip).

From what I've seen, a lot of the time is due to network latencies and the extra roundtrips and data,
in particular if you're on EDGE or 3G with a bad signal.
 

 
 

I still think looking at 3G is misleading. We can say, "we don't trust
carrier proxies" but they are there for policy reasons and I suspect
operators will not be impressed at attempts to forcibly take them
away. This is especially true as LTE/4G is expected to shift the
bottleneck from the radio interface to the tower backhauls, giving
operators big incentives to add multi-layer caching directly to tower
sites as a way to reduce deployment costs.

As operators are an unavoidable MITM I don't see a way to prevent
downgrades on SPDY sites. And the hop after the operator is the
backbone, to which the only real threat is governments. And government
isn't really a part of the SSL threat model anyway. That's why I think
SSL over 3G makes little sense.


SPDY doesn't prevent all proxies - it only prevents transparent proxies.  Network operators can still use explicit proxies (configured on the device).  So carriers can do transcoding of images, caching, etc.  If the origin servers move to SSL, however, they won't be able to do this (just like they can't do it today).

If the network operator specifies a proxy - where would SPDY be used? I assume the proxy set on the phone will be HTTP, and will
talk HTTP with the server.

Assuming operators deploy SPDY proxies - it's not clear what certificate they would present to the phone or how this would work. If you don't
care about certs, you may still have encryption between user and proxy and proxy to server, which may be a good thing.


Maybe you could still implement the encryption and verification, but at a higher level. The SSL handshake seems to be quite broken at the moment.
BTW - if SPDY will require SSL, i.e. all communication will happen using SSL records - maybe you could just reuse the
TLS record protocol. Or maybe you could use the SSH protocol, which is quite similar to SPDY in allowing multiple channels, and has alternatives to signed certs.


Costin





Roy T. Fielding

Dec 16, 2009, 6:39:45 PM
to spdy...@googlegroups.com
On Dec 16, 2009, at 11:19 AM, Mike Belshe wrote:
> Currently we don't auto-convert http to https. I'd like to find a good way to do it though :-)

Why would you want to do that? There is no shared authority
relationship between http and https -- they are entirely
different services on different ports that may be
handled by different machines, perhaps on different continents.

....Roy

Mike Belshe

Dec 16, 2009, 6:46:36 PM
to spdy...@googlegroups.com
They would present a real cert, just like always.  It's just like having SSL to your proxy.  If you're doing SSL to the origin server, then you're tunneling SSL over SSL.  Cute :-)

Mike

Mike Hearn

Dec 17, 2009, 4:32:51 AM
to spdy...@googlegroups.com
> The java part is mostly thin JNI layer on top of openssl ( for android ).
> The GC you see  is probably due to a bunch of new classes and
> libraries loaded

Could be. This was early in 2009 so pre-Cupcake even. My knowledge is
probably out of date by now.

Transparent proxies are generally used to simplify the user's life -
most users won't know how to explicitly configure a proxy, and asking
people to run some binary from the ISP is tricky. If SPDY proxies
could be pushed via DHCP that'd probably help. But it's getting more
and more complicated.

Costin Manolache

Dec 17, 2009, 2:31:32 PM
to spdy...@googlegroups.com
Yes, to simplify users' lives and to enforce policies (restrict/control/monitor access, force
caching, etc). Replacing the HTTP protocol is a big task - but forcing companies, ISPs
(and countries) to change their policies is far harder.

If the initial negotiation for the protocol switch is done correctly - for example using standard
HTTP headers that some proxies may recognize (Upgrade: SPDY + Connection: Upgrade) - then existing
transparent proxies will at least be able to work by using HTTP (if they indeed strip the Upgrade header), and new
proxies may be able to take advantage of it and use SPDY between user and proxy or proxy and server.

Using proto=HTTP/2.0 to initiate SPDY is pretty certain to be rejected by existing proxies and result in an extra
roundtrip and lots of confusion about which sites can be used with SPDY, since the browser won't be able to
remember whether a site supports SPDY. I very much doubt that many proxies support the Upgrade header
and remove the headers specified in Connection: - probably SPDY servers should fall back to HTTP if there is
a Via: header.

Looking forward to more details on the negotiation - it will be quite relevant in the SSL discussion as well.
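The Upgrade-based negotiation described here could be sketched on the server side roughly as follows. This is a hypothetical sketch, not SPDY's actual mechanism; the header names follow the proposal above, and real code would also handle header case-insensitivity:

```python
def should_switch_to_spdy(headers):
    """Decide whether a server should answer '101 Switching Protocols'.

    Sketch of the negotiation proposed above: switch only when the client
    sent 'Upgrade: SPDY' with 'Connection: Upgrade', and fall back to
    plain HTTP when a proxy announced itself with a Via: header.
    `headers` is a plain dict; case handling is omitted for brevity.
    """
    upgrade_tokens = [t.strip() for t in headers.get("Upgrade", "").split(",")]
    connection_tokens = [t.strip() for t in headers.get("Connection", "").split(",")]

    wants_spdy = "SPDY" in upgrade_tokens
    upgradable = "Upgrade" in connection_tokens
    behind_proxy = "Via" in headers  # proxy in the path: stay on HTTP

    return wants_spdy and upgradable and not behind_proxy
```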

If you really want to force end-to-end SSL (and limit adoption of SPDY) - using port 443 would probably work with transparent
proxies (but result in bans on browsers that support it).

IMHO separating the server verification/encryption from SSL would be really great - you could add a few headers to
challenge/validate the SSL cert. This can be done for both HTTP and HTTPS requests, and you could do it without
requiring signed certs.
Then you could have the HTTPS frames encrypted with an end-to-end key, and HTTP frames in the clear or encrypted with
a server-proxy key, or optionally end-to-end if both ends and the proxies in between are OK with it, etc.

BTW - I'm working on a SPDY implementation for Tomcat. Right now I'm just looking at the first bytes of the connection: if it
starts with 0x80 I assume it's SPDY, if it's 0x3 I assume it's SSL, and if it's ASCII, normal HTTP.
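That first-byte dispatch can be sketched as below. One caveat: a TLS record actually begins with a content-type byte (0x16 for a handshake); 0x03 is the major-version byte that follows, so the 0x3 check above presumably looks at the second byte. This is an illustrative sketch, not Costin's actual Tomcat code:

```python
def sniff_protocol(first_byte):
    """Guess the protocol of an incoming connection from its first byte."""
    if first_byte == 0x80:
        return "spdy"     # SPDY control frames set the top bit of byte 0
    if first_byte == 0x16:
        return "ssl"      # TLS handshake record type (0x03 is byte 1, version)
    if 0x20 <= first_byte < 0x7F or first_byte in (0x0D, 0x0A):
        return "http"     # printable ASCII: "GET ", "POST ", ...
    return "unknown"
```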

Costin


Mike Hearn

Dec 17, 2009, 3:08:52 PM
to spdy...@googlegroups.com
I don't fully understand why DNSSEC isn't equivalent to the level of
SSL being proposed actually. If SPDY won't require EV certs then it
boils down to an assertion about the integrity of DNS, right?

DNSSEC would seem to solve that problem without the overhead of
encrypting all traffic.

That leaves backwards compatibility. The standard way to introduce a
new network protocol is a new port. Just using a different port is
going to work on any network that doesn't implement port-based default
deny rules. The ones that do probably will also have proxies that
won't support SPDY. If the admins decide to upgrade to a SPDY
compatible proxy, they can as well just open up the SPDY port at the
same time.

I think we can find out how many networks are blocking some newly
chosen port by running a google.com experiment. This should get good
coverage across network types, companies, residential networks etc.
It'd be interesting to correlate this with browser use. I'd wager that
networks which block some newly chosen port are dominated by old
versions of IE.

It is tempting to use SSL to try to evade conservative sysadmins, and
it has precedent - Skype tried the same thing. However, that just made
sysadmins around the world hate Skype. Here is an example of what one
admin did:

http://lists.grok.org.uk/pipermail/full-disclosure/2005-November/038646.html

Given that admins conservative enough to block all outbound non-web
traffic are certainly conservative enough to mandate a given browser
version, I'm skeptical real SPDY deployments would encounter too much
blocking.

Costin Manolache

Dec 17, 2009, 3:34:44 PM
to spdy...@googlegroups.com
On Thu, Dec 17, 2009 at 12:08 PM, Mike Hearn <he...@google.com> wrote:
I don't fully understand why DNSSEC isn't equivalent to the level of
SSL being proposed actually. If SPDY won't require EV certs then it
boils down to an assertion about the integrity of DNS, right?

Or remembering the cert on the first connection, like SSH does. 

 

DNSSEC would seem to solve that problem without the overhead of
encrypting all traffic.

Solve server validation problems - yes. 
 

That leaves backwards compatibility. The standard way to introduce a
new network protocol is a new port. Just using a different port is
going to work on any network that doesn't implement port-based default
deny rules. The ones that do probably will also have proxies that
won't support SPDY. If the admins decide to upgrade to a SPDY
compatible proxy, they can as well just open up the SPDY port at the
same time.
 

I think we can find out how many networks are blocking some newly
chosen port by running a google.com experiment. This should get good
coverage across network types, companies, residential networks etc.
It'd be interesting to correlate this with browser use. I'd wager that
networks which block some newly chosen port are dominated by old
versions of IE.


I guess SIP is a good indication of how much fun requiring firewall changes is :-)
At least you're not using UDP...


HTTP does have a standard mechanism for switching the protocol - what is the
problem with using it (besides the fact that it's not clear how many proxies
implement HTTP/1.1 correctly)? A different port will still require 2 TCP connections, maybe a
long timeout depending on how blocking is done, and you still can't remember whether a server
supports SPDY (it depends on where you access it from). And the main
SSL problem - the policy - doesn't go away. If the goal is to fight the evil
states/corporations/law enforcement/censorship - then yes, requiring SSL may help, but
expect them to fight back :-)


 
It is tempting to use SSL to try and evade conservative sysadmins, and
it has precedent - Skype tried the same thing. However that just made
sysadmins around the world hate skype. Here is an example of what one
admin did:

  http://lists.grok.org.uk/pipermail/full-disclosure/2005-November/038646.html

Given that admins conservative enough to block all outbound non-web
traffic are certainly conservative enough to mandate a given browser
version, I'm skeptical real SPDY deployments would encounter too much
blocking.

AFAIK most companies want to monitor web traffic - it's not just 'conservative admins'. HTTPS
is allowed - but I think the expectation is still that most traffic will
be HTTP and monitor-able, and a lot of HTTPS traffic will probably trigger some alarms. There are
also a few big countries, and probably schools, that have similar requirements - and this has nothing to do with
transparent proxies and caching. I'm not even sure SSL is legal in all countries..
 
 
Costin
 






Mike Hearn

Dec 17, 2009, 3:49:37 PM
to spdy...@googlegroups.com
> Or remembering the cert on the first connection, like SSH does.

I think Gerv Markham's document says it better than I could. The SSH
model has UI issues that make it undeployable; see section 3:

> I guess SIP is a good indication of how much fun requiring firewall changes
> is :-)

Home NAT/firewalls can be reconfigured automatically by the browser
using UPnP. Corp firewalls, yes, not much fun, but trying to bypass
them with SSL will just lead to Skype-like arms races.

> A different port will still require 2 TCP connection

Yeah, but they can be done in parallel. If the SPDY connection comes
back within 50ms of the HTTP one, use that. Setting up a TCP connection
and then throwing it away with no content written is really cheap.
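The race described here can be sketched with two parallel connection attempts. A simplified sketch under stated assumptions: the 50ms grace window is replaced by simply waiting for both attempts and discarding the loser, and the SPDY port itself is hypothetical:

```python
import socket
import threading

def racing_connect(host, http_port, spdy_port, timeout=2.0):
    """Try the HTTP and (hypothetical) SPDY ports in parallel.

    Prefer the SPDY connection when it succeeds; the losing TCP
    connection is closed with no bytes written, which is cheap.
    """
    results = {}

    def attempt(name, port):
        try:
            results[name] = socket.create_connection((host, port),
                                                     timeout=timeout)
        except OSError:
            results[name] = None

    threads = [threading.Thread(target=attempt, args=args)
               for args in (("http", http_port), ("spdy", spdy_port))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    if results.get("spdy"):
        if results.get("http"):
            results["http"].close()  # discard the loser
        return "spdy", results["spdy"]
    return "http", results.get("http")
```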

Mike Belshe

Dec 17, 2009, 4:49:20 PM
to spdy...@googlegroups.com
On Thu, Dec 17, 2009 at 12:08 PM, Mike Hearn <he...@google.com> wrote:
I don't fully understand why DNSSEC isn't equivalent to the level of
SSL being proposed actually. If SPDY won't require EV certs then it
boils down to an assertion about the integrity of DNS, right?

DNSSEC would seem to solve that problem without the overhead of
encrypting all traffic.

That leaves backwards compatibility. The standard way to introduce a
new network protocol is a new port. Just using a different port is
going to work on any network that doesn't implement port-based default
deny rules. The ones that do probably will also have proxies that
won't support SPDY. If the admins decide to upgrade to a SPDY
compatible proxy, they can as well just open up the SPDY port at the
same time.

I think we can find out how many networks are blocking some newly
chosen port by running a google.com experiment. This should get good
coverage across network types, companies, residential networks etc.
It'd be interesting to correlate this with browser use. I'd wager that
networks which block some newly chosen port are dominated by old
versions of IE.

OK - let's say we did this experiment.  Would the result change your argument at all?  If 1% of users can't use alternate ports, is that enough to convince you that we should look for a way to reside in the existing port namespace?  {80, 443}  What if the failure rate for non-port-80-ports were 2%? 4%?

Unfortunately, you're proposing a protocol which leaves the user hung out to dry - he knows nothing about http/spdy/ports/whatever, and yet his connection just fails.  If the port is blocked on the user's router, a simple startup test could verify whether an alternate port works, and we could persist that information.  But if ports are optionally blocked, depending on his traffic flows, then the user keeps bumping into timeouts as the browser learns where it can't go.  And when the user switches networks, the browser has to relearn this information, again leaving the user with lots of timeouts.

You could argue that the browser must do this conservatively, and only use the new protocol after an in-the-background check that the port is available.  But this is not very cool at scale - every browser on the planet would now be constantly pinging.  And it leaves a lot of webpages without optimization when the browser visits a site for the first time.

Mike

Costin Manolache

Dec 17, 2009, 6:05:38 PM
to spdy...@googlegroups.com
On Thu, Dec 17, 2009 at 12:49 PM, Mike Hearn <he...@google.com> wrote:
> Or remembering the cert on the first connection, like SSH does.

I think Gerv Markhams document says it better than I could. The SSH
model has UI issues that make it undeployable, see section 3:

http://www.gerv.net/security/self-signed-certs/


What he forgets to mention is that if someone gets a root CA key, or installs
a fake CA authority on your machine, he can do far more harm than with individual
self-signed certs. Are any of the CA roots in a country where the authorities may
get the master key? Are all CA roots absolutely secure - how many people have,
or have had, access to them? Is physical access strictly controlled?
The trust problem is IMO as bad or worse in the central-authority case.

At least SSH prevents MITM after the first request, and when combined with
DNS it may make things a bit better than they are today.


 
> I guess SIP is a good indication of how much fun requiring firewall changes
> is :-)

Home NAT/firewalls can be reconfigured automatically by the browser
using UPnP. Corp firewalls yes, not much fun, but trying to bypass
them with SSL will just lead to skype-like arms races.

SPDY without required SSL is not doing anything different from plain HTTP - no need
for an arms race.

When you change the goal from 'faster web' to 'fighting censorship/monitoring' you start the arms
race, and a different port won't help.




> A different port will still require 2 TCP connection

Yeah but they can be done in parallel. If the SPDY connection comes
back within 50ms of HTTP use that. Setting up a TCP connection then
throwing it away with no content written is really cheap.

So every time you connect to a web site you send 2 packets instead of 1? And in the positive cases
both will return ACK - and you send another packet to close one of them?
You still can't remember whether a site supports SPDY - you may be on a different network - so this
extra traffic has to go on every connection.
BTW - how do you specify the port if a server doesn't run on the default one (after all, the browser
will see http://host:8080)? I guess an SRV record could help in this case - but it's more
complexity.

Anyway - the main issue here is what your goals are. A faster web, or fighting the bad
corporations/governments?

Keep in mind that proxies may be your best help in making the web faster and getting
adoption for SPDY. If you get a few mainstream proxies to implement SPDY you'll get a
lot of the benefits of the muxed connection. You'll still have HTTP on the fast network between
old browsers and the proxy, but you'll save the TCP roundtrips and compress on the slower
link to the web servers.

So far I haven't heard any argument against using a Connection: header (which a well-behaved proxy
should remove if it doesn't support it), combined with using Via: to detect a proxy on the server side.
It only adds one extra header to the first HTTP request on a connection - and you do what the
HTTP spec recommends for changing the protocol.

Another approach to dealing with a lot of proxies is just to hide SPDY frames in HTTP chunks -
use any of the techniques in use today to keep an HTTP connection open for two-directional
communication, and use SPDY as the payload. It may even work in other browsers - and server side
it's easier to implement too. I don't think you'll lose any of the benefits of SPDY - just a few extra bytes.
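Wrapping a frame as an HTTP/1.1 chunk is mechanically simple; a minimal sketch of the idea (the framing is standard chunked transfer encoding, but carrying SPDY frames this way is Costin's hypothetical, not part of the SPDY spec):

```python
def encode_chunk(frame):
    """Wrap one opaque frame as an HTTP/1.1 chunk: hex length, CRLF, data, CRLF."""
    return b"%x\r\n" % len(frame) + frame + b"\r\n"

def decode_chunk(stream):
    """Return (frame, remaining_bytes) for the first chunk in `stream`."""
    header, _, remainder = stream.partition(b"\r\n")
    size = int(header, 16)
    # skip the payload plus its trailing CRLF
    return remainder[:size], remainder[size + 2:]
```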


Costin

Witold Baryluk

Dec 17, 2009, 6:30:28 PM
to spdy-dev
There is a good way of ensuring that you log in to the right site.

It is called the SRP protocol [1]. There is a draft [2] which describes how
to implement it in SSL/TLS.
It is already integrated into Firefox in a development branch,
and GnuTLS has implemented it on the server side.

It isn't good for everything, but it has many advantages:
- it is quite fast
- no public keys/certificates are needed (beyond the initial setting of
the password)
- phishing is not possible if you know the connection is using it

This is because SRP protects against both passive and active MITM
attacks.

Unfortunately it is only useful on webpages (or other services)
where you need to log in. Good for your bank, Facebook, Gmail,
Google Wave, or a forum, but not necessarily for a site from which you
download executables to run on your computer.

Nevertheless it is extremely useful and secure. When I first found
it I thought it was impossible to design a protocol
obeying all the restrictions they wanted. In fact they did it.

It is similar to TLS-PSK, but PSK is symmetric:
both sides have the secret.
SRP is more resistant to brute force,
and the server doesn't hold the real secret (the password).
(Which is quite important if someone steals the database of
passwords - similar to why we hash passwords
on the server side. But SRP is much more
secure than just adding a random salt :D )
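For concreteness, here is roughly what the server-side setup step looks like: per RFC 2945, the server stores a salt and a verifier v = g^x mod N, never the password. The group parameters below are deliberately tiny toys for illustration; a real deployment uses the large safe-prime groups from RFC 5054:

```python
import hashlib
import os

# TOY parameters for illustration only. Real SRP uses the 1024-bit and
# larger groups from RFC 5054; a 5-bit modulus has no security at all.
N, g = 23, 5

def make_verifier(username, password, salt=None):
    # RFC 2945: x = H(salt | H(username ":" password)), v = g^x mod N.
    salt = salt if salt is not None else os.urandom(16)
    inner = hashlib.sha1(f"{username}:{password}".encode()).digest()
    x = int.from_bytes(hashlib.sha1(salt + inner).digest(), "big")
    v = pow(g, x, N)
    # The server stores (salt, v). Stealing the database yields only v,
    # which cannot be used directly to impersonate the client - the
    # property described in the parenthetical above.
    return salt, v
```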

===============
0. http://srp.stanford.edu/
1. http://en.wikipedia.org/wiki/Secure_Remote_Password_protocol
1a. RFC 2945
2. RFC 5054
3. http://tools.ietf.org/html/draft-ietf-pppext-eap-srp-03

Mike Hearn

Dec 18, 2009, 4:38:36 AM12/18/09
to spdy...@googlegroups.com
> OK - let's say we did this experiment.  Would the result change your
> argument at all?  If 1% of users can't use alternate ports, is that enough
> to convince you that we should look for a way to reside in the existing port
> namespace?  {80, 443}  What if the failure rate for non-port-80-ports were
> 2%? 4%?

OK, that's a good point. I don't know what level would be acceptable.
I'll think about this some more today.

> Unfortunately, you're proposing a protocol which leaves the user hung out to
> dry - he knows nothing about http/spdy/ports/whatever, and yet his
> connection just fails.

I don't think it has to be quite that dire. If you type in an address
for the first time, like google.com, the browser could use a "half SPDY" via
HTTP whilst it tries to bring up the new multiplexed protocol in the
background. For instance the first request might look like a regular
GET / HTTP/1.1 with "application/spdy" in the accept encodings list.
When that is present, the server can send back not just index.html
but the other resources too in a simple
<length><content><length><content> bundle.
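That <length><content> bundle could be framed like this. It's only a sketch: the post doesn't pin down the integer width, so the 4-byte big-endian length prefix is an assumption:

```python
import struct

def bundle(resources):
    # Concatenate each resource body behind a 4-byte big-endian length
    # prefix (width is an illustrative choice, not from the post).
    out = b""
    for body in resources:
        out += struct.pack(">I", len(body)) + body
    return out

def unbundle(data):
    # Walk the buffer, peeling off one length-prefixed resource at a time.
    resources, pos = [], 0
    while pos < len(data):
        (size,) = struct.unpack_from(">I", data, pos)
        pos += 4
        resources.append(data[pos:pos + size])
        pos += size
    return resources
```

Note what this framing deliberately lacks compared to full SPDY: no stream IDs, no interleaving, no header compression - it only buys the "push everything after the first request" win.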

As the server knows what resources the user will require after seeing
their first request, this should be almost equivalent in speed to
going straight to SPDY (no header compression though). By the time the
webpage or user makes extra, less predictable requests, the new
stream type is either up and running or has timed out. It also has the
advantage that you don't need to fudge around with DNS to enable SPDY.
For instance hosting providers could enable SPDY for all their
websites in one go, even if the DNS records for those sites aren't
under their control.

I'm not sure sending two SYN packets simultaneously would cause
scalability problems. Browsers have already bumped the number of
simultaneous HTTP connections they allow quite significantly, those
are much more expensive and the web did not collapse. Servers already
have to be able to handle lots of SYNs to deal with synflood attacks.
And the double SYN would only occur when:

a) the user types in an address directly
b) the SPDYness of the site is unknown, and/or
c) the SPDYness of the current network is unknown

For (a), many users get to sites via bookmarks or search engines
anyway. Those can embed the SPDYness of a site into the URL with a
spdy:// scheme. Yes, the browser will have to notice that its IP
address or wifi MAC changed for (c), but browsers are already
reading this data for geolocation, so the modification shouldn't be too
hard.
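The double-SYN race could look roughly like this. It's a sketch with injected connect callables rather than real sockets, and the tie-breaking preference for SPDY is my assumption:

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def race(connect_spdy, connect_http, timeout=3.0):
    # Fire both connection attempts at once (the "double SYN") and use
    # whichever handshake finishes first, preferring SPDY when it wins.
    # connect_* are callables returning a connected transport; they are
    # injected so the sketch stays independent of real sockets.
    with ThreadPoolExecutor(max_workers=2) as pool:
        spdy = pool.submit(connect_spdy)
        http = pool.submit(connect_http)
        wait([spdy, http], timeout=timeout, return_when=FIRST_COMPLETED)
        if spdy.done() and spdy.exception() is None:
            http.cancel()  # no-op if already running; harmless
            return "spdy", spdy.result()
        # SPDY failed or is still pending past the race: fall back.
        return "http", http.result()
```

This is the same shape as the "happy eyeballs" trick later standardized for IPv6/IPv4 racing: the losing connection is simply discarded, so the only cost is the extra SYN.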

Mike Hearn

Dec 18, 2009, 6:03:47 AM12/18/09
to spdy...@googlegroups.com
> Is any of the CA roots in a country where authorities may  get the master key?

Yes, of course. Government is not really a part of the SSL threat
model. The moment you consider the government to be your foe, things
get a LOT harder and less consumer friendly.

> Are all CA roots absolutely secure - how many people have
> or had access to them ? Is the physical access strictly controlled ?

There are quite complex rules CAs must follow, I think some of them
are about physical security yes.

> At least SSH prevents MITM after the first request, and when combined with
> DNS it may make things a bit better than they are today.

I don't think you answered Gerv's concerns about the usability problems
with the SSH model. Any system that produces scary crypto warnings as
part of its normal operation is undeployable to the masses.

Costin Manolache

Dec 18, 2009, 1:58:43 PM12/18/09
to spdy...@googlegroups.com
On Fri, Dec 18, 2009 at 3:03 AM, Mike Hearn <he...@google.com> wrote:
>> Is any of the CA roots in a country where authorities may get the master key?
>
> Yes, of course. Government is not really a part of the SSL threat
> model. The moment you consider the government to be your foe, things
> get a LOT harder and less consumer friendly.

EV relies on trusting several governments - getting any of the keys allows you to MITM
everywhere - plus a dozen corporations and probably hundreds of people who have
or had access to the keys. You must also trust that the physical security and
all the people are better than any of the organizations with the capability to break
in. A lot of foes, I think - since there are a lot of benefits for someone who manages to steal
a CA key.

>> Are all CA roots absolutely secure - how many people have
>> or had access to them? Is the physical access strictly controlled?
>
> There are quite complex rules CAs must follow, I think some of them
> are about physical security yes.

So you must also trust that all those organizations do follow the rules. What are the
penalties for not following them? They'll go out of business - if anyone finds
out. So there are extra incentives to keep quiet even if they know someone has broken in.

And for those who can't get the key - incentives to just block SPDY and force HTTP.
Are you going to show scary warnings every time the browser can't initiate a SPDY
connection?

Sorry - I don't think you can convince me that EV is the perfect solution for security, but
that's not the point of this thread.

The main argument is against SPDY forcing everyone to use EV, and doing it
using SSL. The fact that EV has flaws, like all other schemes, is just one reason not to do it; there
are quite a few others. And even if you do it, it won't stop hackers - it will only stop SPDY adoption.

>> At least SSH prevents MITM after the first request, and when combined with
>> DNS it may make things a bit better than they are today.
>
> I don't think you answered Gerv's concerns about the usability problems
> with the SSH model. Any system that produces scary crypto warnings as
> part of its normal operation is undeployable to the masses.

You mean like all the browsers and OSes and phones on the market? Yes, nobody would
deploy such a thing to the masses...

Costin

Patrick Meenan

Dec 18, 2009, 4:51:30 PM12/18/09
to spdy-dev

> For (a) many users get to sites via bookmarks or search engines
> anyway. Those can embed the SPDYness of a site into the URL with a
> spdy:// scheme. Yes the browser will have to notice that its IP
> address or wifi MAC changed for c, but then browsers are already
> reading this data for geolocation so the modification shouldn't be too
> hard.

Requiring a spdy:// scheme on hrefs (from search engines or arbitrary
pages) will never happen. The site serving the page would need to know
that both the browser in use and the server being linked to
support SPDY. You might be able to pull that off for something like
the major search engines, as they re-crawl pages fairly frequently
and everything is dynamic anyway, but that's an awfully big ask to put
out there for all sites.

For it to be seamless it's basically going to have to be able to take
http:// requests and negotiate/discover the upgrade transparently.
Assuming or requiring anything else will seriously hold back its
deployment.

Adam Sah

Dec 20, 2009, 12:25:34 AM12/20/09
to spdy-dev
misc comments on the thread:

- I'd love to see more thought about the societal upgrade process,
including the UIs for showing SPDY "protected" pages. It's not quite like
SSL, with the model of "if you're entering a password or credit card,
then look for the yellow HTTPS" -- here it would have to start with
"if you see the gold star, then it's an extra protected site", then over
the course of N years the gold stars become commonplace, then we hit a
tipping point where pro site owners start to worry about getting
rejections if they don't support SPDY, and finally the late adopters
take a decade to upgrade.

- yes, forget changing http: to spdy: -- way too much app code that
assumes http/https, and too much user retraining. Plus, too little
benefit.

- don't underestimate the pain of SSL:
  - wildcard certs are needed for subdomains, which are used in many
apps. Wildcard certs are not free/cheap, and in fact this thread just
saved my company >$1000/yr vs. a design we were going to use (thanks!).
  - for sites that don't already use it, it can be a giant PITA to
get started.
  - for sites that already use it, there can be serious complications
with mixed content warnings (IE), bad caching effects, CPU cost, etc.

Personally, I'd be sad if SPDY's latency wins don't get to market
because it requires SSL. 30+% is too good to pass up.

adam
(recent ex-googler, gadgets)

Mike Hearn

Dec 20, 2009, 6:53:24 AM12/20/09
to spdy...@googlegroups.com
> "if you see the gold star, then it's an extra protected site"

The problem is we already know that kind of UI doesn't work. Bad guys
can simply put a gold star in the web page content itself. Many users
don't seem to understand that some parts of the screen are trustable
and others aren't (not surprising really).

We don't know what kind of UI does work, it's still an open research question.

>  - yes, forget changing http: to spdy: -- way too much app code that
> assumes http/https, and too much user retraining.

Sorry, I was unclear about this. I meant spdy:// as a hint to the
browser to try SPDY first. Users would never see it. It'd be useful
only for links between sites ... for instance Google/Bing/Yahoo/etc
could discover SPDY-supporting sites during the crawl and serve
spdy:// links for those, bypassing any potentially slow discovery
process. If users type http:// directly then discovery would take
place as normal (whatever normal is).

Mark Nottingham

Dec 20, 2009, 6:06:43 PM12/20/09
to spdy...@googlegroups.com
Sure, but that metadata can be put in places other than the URL...


On 20/12/2009, at 10:53 PM, Mike Hearn wrote:

>>
>> - yes, forget changing http: to spdy: -- way too much app code that
>> assumes http/https, and too much user retraining.
>
> Sorry, I was unclear about this. I meant spdy:// as in a hint to the
> browser to try SPDY first. Users would never see it. It'd be useful
> only for links between sites ... for instance Google/Bing/Yahoo/etc
> could discover SPDY supporting sites during the crawl and serve
> spdy:// links for those, so bypassing any potentially slow discovery
> process. If users type http:// directly then discovery would take
> place as normal (whatever normal is)


--
Mark Nottingham http://www.mnot.net/

Mike Hearn

Dec 21, 2009, 7:36:12 AM12/21/09
to spdy...@googlegroups.com
Yes, but why? SPDY is a protocol. The first part of the URL specifies
the protocol. If it's not in the URL, the SPDYness of the target site
won't survive copy/paste and other operations.

--
GMail Engineering
Google Switzerland GmbH
