Compression contexts and privacy considerations


William Chan (陈智昌)

Aug 11, 2011, 2:11:31 AM
to spdy...@googlegroups.com
http://mbelshe.github.com/SPDY-Specification/draft-mbelshe-spdy-00.xml#CompressionContexts
discusses this.

I didn't notice it in detail before, but I thought about it a bit more
now because Jim Roskind talked to me about it today. I find the
following line a bit ambiguous: "Whenever an endpoint sends data on a
session which targets a new domain, it MUST use this flag to reset the
compression state. This avoids all possibility of privacy leakage."

What does this mean for the following sequence of SYN_STREAMs?
1) a.foo.com
2) b.foo.com
3) a.foo.com
4) c.foo.com

Do 2, 3, and 4 all have FLAG_RESET_COMPRESSION set? Or just 2 and 4?
If the former, then I'm worried that we'll have too many
FLAG_RESET_COMPRESSIONs and this will negate lots of the advantage of
the header compression. But given the explanation in "If we use the
same stateful compression for requests destined to different security
origins, it is possible for an attacker to learn about the contents
sent to an origin for which it should not.", it seems to me like the
former is indeed the intended interpretation. What's the intended
interpretation here?

Simone Bordet

Aug 11, 2011, 5:01:57 AM
to spdy...@googlegroups.com
Hi,

Also, I am a bit confused by section 3.1 that states: "Clients SHOULD
NOT open more than one SPDY session to a given origin".

So, a.foo.com and b.foo.com are two different origins and should stay on
two different SPDY sessions, right?
That would make RESET_COMPRESSION unnecessary, or am I missing something?

Thanks,

Simon
--
http://bordet.blogspot.com
---
Finally, no matter how good the architecture and design are,
to deliver bug-free software with optimal performance and reliability,
the implementation technique must be flawless.   Victoria Livschitz

William Chan (陈智昌)

Aug 11, 2011, 11:15:34 AM
to spdy...@googlegroups.com

Normally speaking, yes. But let's say the server presents a wildcard
certificate for *.foo.com, and both a.foo.com and b.foo.com map to
1.1.1.1. Then it is reasonable to reuse the authenticated SPDY session
that was originally opened for a.foo.com for b.foo.com as well. Chrome
already does this.

Jim Roskind

Aug 11, 2011, 5:50:10 PM
to spdy-dev
A careful read of the quote you cited shows that it says:

Clients SHOULD NOT open "more than one," but the spec allows the one
that they open to be shared!

Once either a.foo or b.foo has a SPDY connection (shared or otherwise)
from a single client, it shouldn't get a second SPDY connection (shared
or otherwise) from the same client.

As Will mentioned, the current Chromium implementation supports this only
in a specific security situation. He explains by example that the
requirement is that the SSL cert that was used to establish the SPDY
SSL connection MUST match both a.foo and b.foo, in order for the
connection to be sharable (between them). In addition, connections
are only sharable if DNS suggested a common IP address for both
a.foo.com and b.foo.com, and THAT address was what was used for
establishing the SPDY connection.

Simply put, if a server has a private cert that can be used to
complete an SSL handshake for both domains, then bundling both
connections together in SPDY over SSL is safe and reasonable, from a
security perspective. If in addition DNS tells the client to use the
same IP address, then sharing can be performed (and is done
currently), as the server can (as advertised in DNS) handle traffic
for both domains.
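To make that concrete, here is a rough sketch of the check (Python, purely illustrative; the types and helper names are mine, not Chromium's actual code):

from dataclasses import dataclass
from typing import Set

@dataclass
class SpdySession:
    peer_ip: str
    cert_hosts: Set[str]   # hostnames/wildcards the server's cert is valid for

def cert_matches(cert_hosts: Set[str], host: str) -> bool:
    if host in cert_hosts:
        return True
    # crude wildcard check: "*.foo.com" covers a.foo.com and b.foo.com
    return any(w.startswith("*.") and host.endswith(w[1:]) for w in cert_hosts)

def can_reuse(session: SpdySession, host: str, resolved_ips: Set[str]) -> bool:
    # Both conditions above must hold: the cert covers the new host,
    # and DNS maps the new host to the IP the session is connected to.
    return cert_matches(session.cert_hosts, host) and session.peer_ip in resolved_ips

session = SpdySession(peer_ip="1.1.1.1", cert_hosts={"*.foo.com"})
print(can_reuse(session, "b.foo.com", {"1.1.1.1"}))   # True: share the session
print(can_reuse(session, "c.bar.com", {"1.1.1.1"}))   # False: cert doesn't cover it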



Jim Roskind

Aug 11, 2011, 8:13:31 PM
to spdy-dev
I didn't think the current draft was ambiguous. I think (in your
example) that it calls for sending the FLAG_RESET_COMPRESSION with the
headers for 2, 3, and 4. If there were any other interpretation, then
the motivation to avoid "privacy leakage" by sending any RESET flags
would not be satisfied.
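In code form, that reading amounts to something like this toy sketch (Python, illustrative only; the helper is hypothetical, not spec text):

def needs_reset(prev_origin, origin):
    # Under this reading: reset the shared compression state whenever a
    # stream targets a different domain than the stream that last used it.
    return prev_origin is not None and prev_origin != origin

prev = None
for i, origin in enumerate(["a.foo.com", "b.foo.com", "a.foo.com", "c.foo.com"], 1):
    flag = "FLAG_RESET_COMPRESSION" if needs_reset(prev, origin) else "(no flag)"
    print(f"SYN_STREAM {i} -> {origin}: {flag}")
    prev = origin
# Streams 2, 3 and 4 all carry the flag; only stream 1 does not.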

I'm not really that convinced of the security issue. I'm guessing
there is a timing attack that is vaguely plausible, but as noted on
another thread by Adam Langley, SPDY had already put a lot of trust
into allowing both streams to reach a singular server on one
connection.

If we posit that the security issue is significant, then I'm concerned
about the loss of compression that we'd see when we shared a
connection between multiple hosts.

PLAUSIBLE ALTERNATIVES

If we can't agree the security issue is insignificant (and reuse a
compression context), it would seem more than reasonable to have
distinct compression contexts, one per domain within a connection.
That would certainly provide equivalent security isolation (and
probably better compression, since cookies for each site would be
distinct).

Multiple concurrent compression contexts would carry some cost
(client and server side), but considering that we're replacing
multiple distinct connection groups (groups of would-be HTTP
connections to each domain), I would have expected the cost to be
"reasonable." The one downside is that these contexts may be long-
lived, as a context may need to be maintained (at the receiving end)
until the (shared) connection is dropped. There is currently no
apparent notion of "unsharing" or ceasing to use a connection for
streams to a domain... but that is because there was previously no
apparent state centered on multiple domains.

It would be easy for a sender to discard contexts, as it can do so at
any time, and just use the RESET_COMPRESSION flag if ever it went back
to the domain. This would make it easy to keep the sender-side
compression context usage well bounded. I could also imagine a
SETTINGS value that would somehow indicate to the sender that the
decoding side does not want to maintain "too many" contexts, or
sending a message that the decoder wants the sender to discard a
specific context. One odd issue is that SETTINGS are per-domain, and
hence there is no nicely defined place (I think) to set a global limit
on total connection contexts.
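As a strawman, the sender side of this could look roughly like the sketch below (Python; the class, the context bound, and the flag handling are all hypothetical, just to show the bookkeeping):

import zlib
from collections import OrderedDict

class HeaderCompressor:
    """Sketch of per-origin compression contexts on one connection, with a
    sender-side bound: least-recently-used contexts are discarded, and the
    (hypothetical) reset flag is sent the next time that origin is used."""

    def __init__(self, max_contexts=8):
        self.max_contexts = max_contexts
        self.contexts = OrderedDict()   # origin -> zlib compress object

    def compress_headers(self, origin, headers: bytes):
        reset_flag = origin not in self.contexts
        if reset_flag:
            if len(self.contexts) >= self.max_contexts:
                self.contexts.popitem(last=False)      # drop the LRU context
            self.contexts[origin] = zlib.compressobj(9)
        self.contexts.move_to_end(origin)
        c = self.contexts[origin]
        data = c.compress(headers) + c.flush(zlib.Z_SYNC_FLUSH)
        return reset_flag, data

hc = HeaderCompressor(max_contexts=2)
for origin in ["a.foo.com", "b.foo.com", "a.foo.com", "c.foo.com"]:
    flag, frame = hc.compress_headers(origin, b"host: " + origin.encode() + b"\r\n")
    print(origin, "reset" if flag else "reuse", len(frame))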



William Chan (陈智昌)

Aug 11, 2011, 9:05:59 PM
to spdy...@googlegroups.com
On Thu, Aug 11, 2011 at 5:13 PM, Jim Roskind <j...@google.com> wrote:
> I didn't think the current draft was ambiguous.  I think (in your
> example) that it calls for sending the FLAG_RESET_COMPRESSION with the
> headers for 2, 3, and 4.  If there were any other interpretation, then
> the motivation to avoid "privacy leakage" by sending any RESET flags
> would not be satisfied.
>
> I'm not really that convinced of the security issue.  I'm guessing
> there is a timing attack that is vaguely plausible, but as noted on
> another thread by Adam Langley, SPDY had already put a lot of trust
> into allowing both streams to reach a singular server on one
> connection.

Completely agreed.

>
> If we posit that the security issue is significant, then I'm concerned
> about the loss of compression that we'd see when we shared a
> connection between multiple hosts.
>
> PLAUSIBLE ALTERNATIVES
>
> If we can't agree the security issue is insignificant (and reuse a
> compression context), it would seem more than reasonable to have
> distinct compression contexts, one per domain within a connection.
> That would certainly provide equivalent security isolation (and
> probably better compression, since cookies for each site would be
> distinct).

Let's agree the security issue is insignificant since it requires
compromising SSL.

Jim Roskind

Aug 12, 2011, 2:34:37 PM
to spdy-dev
On Aug 11, 6:05 pm, William Chan (陈智昌) <willc...@chromium.org> wrote:
>
> Let's agree the security issue is insignificant since it requires
> compromising SSL.

I'd like to agree the security issue is insignificant, but not based
on "compromising SSL." Based on the (very questionable) analysis
below, I think it is insignificant because SSL already induces a
shared compression context.

I had asked Roberto or Mike to include more details on the plausible
attack, but all I could guess is that there is some vague hint of a
timing attack. My presumption is that SPDY is correctly implemented,
and not leaking raw bits, as otherwise sharing or not-sharing stream
compression contexts would be inconsequential. I'll try to throw up
the strawman that I thought could be the (hint of the) attack. I'll
then knock it down, but I really don't know if this is the attack :-/.

My strawman is that the "leaked" information allows one
user-agent-to-first-host stream to learn a little bit about another
user-agent-to-second-host stream. In a browser, great care is taken
to make it hard for content from one site (evil.com) to deduce that
you are also connected to a specific site (such as victimbank.com),
including any information about data flow etc., and especially about
data content! The question is: does a shared compression context
provide such cross-site info leakage?

In this mythical attack we can assume that both victimbank.com and
evil.com have chosen to have virtual hosting at some shared IP
address. We can further assume that the virtual host has acquired a
certificate that will support SSL connections to either
victimbank.com or evil.com, so SPDY might put both streams on one
connection. Nothing so far is due to an SSL compromise. This is all
just run-of-the-mill shared hosting at a virtual host. Specifically,
some CA issued the cert because Victim Bank has *paid* the virtual
host money, and delegated trust to carry their traffic. Similarly,
Evil Incorporated, the owner of domain evil.com, has cleverly sought
out and paid to get hosted there as well, and also authorized the
issuance of the SSL cert for the evil.com domain.

With the above slightly hokey setup, the question is: what can evil.com
learn about a user by carefully observing timing stats for a
connection that arrives over SPDY? If the compression contexts are
shared, then there might be a slight detectable change in
serialization latency (compression-induced latency) when a shared
connection was used (yes, this is a stretch) and when evil.com chose
headers that matched (to various degrees) headers used by
victimbank.com. For example, evil.com might look to match parts of
cookies, such as authentication cookies. Perhaps it could then deduce
sections of such items, and probabilistically reduce the attack space
needed to guess an authentication cookie.

For those not familiar with "timing" attacks, I'll point out that
really clever attackers have looked at using timing to detect
incredibly subtle stuff. For example, if an attacker gets to run on a
machine that is just running a software SSL server, it used to be
possible with some implementations (and was demonstrated I believe)
for an attacker to learn about the private key by watching the timing
of cache line evictions <geeze!>. Historically, there were attacks
that tried to deduce SSL keys remotely by measuring how quickly a
message (connection) was authenticated, as there used to (in some
implementations) be short-cuts taken in some decodes when certain bit
patterns matched. For example: When multiplying A by B by C, if you
notice A is zero, you can skip the rest of the processing... but that
subtle change in time can help reveal that this intermediate value,
which may perhaps hint that a masked section of a key was zero!.

Anyway... these timing attacks really stretch the imagination... but
they have surfaced, and proved viable in the past... so folks have
very vivid and paranoid imaginations these days.

To knock down the strawman (in case it is *the* plausible attack):

First off, *if* the SSL channel containing the SPDY connection used
compression, then that would constitute a shared compression context,
and probably still be visible when SPDY tried to not share stream
compression contexts. The replicated (RESET) compression contexts
would indeed be compressed with a single global context. Simply put:
SSL induces a shared compression context!?! If subtle timing reveals
similarity in content, then we have trouble based on SSL and any
shared connection already.

Secondly, even if we didn't have shared compression contexts, but did
have a shared SPDY connection, I'd expect some detectable change in
the underlying TCP/IP congestion window when both sites are accessed
sequentially. This might be an even bigger timing channel than shared
stream compression contexts, but it would only tend to reveal "is
connecting" and not "what data is in the connections."

Lastly, even without SPDY, if evil.com and victimbank.com are co-
hosted, then the connection traffic could interfere with one another
in some visible way, as it would tend to take the same paths. Traffic
to one, contemporaneously, would congest the path to the other
domain. This suggests some slight timing correlation, again leaking
only "is probably connecting" but not "what is in the traffic."

Bottom line: My imagination was stretched too thin when trying to
justify a security problem here.... but there are always more
imaginative folks out there. The realization that SSL provides
compression, which is shared, really makes it hard for me to
understand why sharing stream compression contexts is problematic.

I'd really like to understand the nature of the feared attack: Mike?
Roberto?

William Chan (陈智昌)

Aug 12, 2011, 6:09:20 PM
to spdy...@googlegroups.com
tl;dr

I chatted with Adam Langley yesterday and neither of us understands the
motivation behind this. I think we should remove this change.

William Chan (陈智昌)

Aug 15, 2011, 7:24:24 PM
to spdy...@googlegroups.com
Ok, I chatted with Adam Barth to understand his privacy leakage
concerns. The conceived scenario is one where we use IP connection
pooling with a reverse proxy server that serves both a.foo.com and
b.foo.com, where both hostnames resolve to the same IP address and
the server presents a cert for *.foo.com.

Under this scenario, an attacker who is able to determine how much his
request got compressed to a.foo.com can learn things about requests to
the other hosts (e.g. cookies). It's a bit tricky to conceive of a way
for the attacker to determine how much of his request was compressed,
since the SPDY reverse proxy has already decompressed it by the time
a.foo.com receives it. It's possible there's a timing channel that
could reveal this information.

It's not clear to me how concerned to be about this scenario. If it's
truly an issue, then I would rather see us use per-origin compression
contexts, rather than FLAG_RESET_COMPRESSION.

Simone Bordet

Aug 15, 2011, 7:58:40 PM
to spdy...@googlegroups.com
Hi,

On Tue, Aug 16, 2011 at 01:24, William Chan (陈智昌) <will...@chromium.org> wrote:
> It's not clear to me how concerned to be about this scenario. If it's
> truly an issue, then I would rather see us use per-origin compression
> contexts, rather than FLAG_RESET_COMPRESSION.

So, wouldn't this be the same as one session per origin with one
compression context per session?
Is it really such a problem to open one session per origin, compared to
reusing the same session for different origins but complicating other
parts such as compression contexts?
I feel it would be much simpler to just reinforce one session per origin,
as the spec suggests.
But perhaps there are gains I do not see?

Simon
--

William Chan (陈智昌)

Aug 15, 2011, 9:09:43 PM
to spdy...@googlegroups.com
On Mon, Aug 15, 2011 at 4:58 PM, Simone Bordet <sbo...@intalio.com> wrote:
> Hi,
>
> On Tue, Aug 16, 2011 at 01:24, William Chan (陈智昌) <will...@chromium.org> wrote:
>> It's not clear to me how concerned to be about this scenario. If it's
>> truly an issue, then I would rather see us use per-origin compression
>> contexts, rather than FLAG_RESET_COMPRESSION.
>
> So, would not this be the same of a session per origin and a
> compression context per session ?
> Is it so much a problem to open one session per origin, compared to
> reuse the same session for different origins, but complicate other
> parts such as compression contexts ?
> I feel it'll be much simpler to just reinforce one session per origin
> like the spec suggests.
> But perhaps there are gains I do not see ?

See SPDY draft 3 spec section 1 for reasons to reduce the number of connections.

Simone Bordet

Aug 16, 2011, 4:00:17 AM
to spdy...@googlegroups.com
Hi,

On Tue, Aug 16, 2011 at 03:09, William Chan (陈智昌) <will...@chromium.org> wrote:
> On Mon, Aug 15, 2011 at 4:58 PM, Simone Bordet <sbo...@intalio.com> wrote:
>> Hi,
>>
>> On Tue, Aug 16, 2011 at 01:24, William Chan (陈智昌) <will...@chromium.org> wrote:
>>> It's not clear to me how concerned to be about this scenario. If it's
>>> truly an issue, then I would rather see us use per-origin compression
>>> contexts, rather than FLAG_RESET_COMPRESSION.
>>
>> So, would not this be the same of a session per origin and a
>> compression context per session ?
>> Is it so much a problem to open one session per origin, compared to
>> reuse the same session for different origins, but complicate other
>> parts such as compression contexts ?
>> I feel it'll be much simpler to just reinforce one session per origin
>> like the spec suggests.
>> But perhaps there are gains I do not see ?
>
> See SPDY draft 3 spec section 1 for reasons to reduce the number of connections.

I had interpreted that as reducing the number of connections per
origin, even if the wording in the spec is "per server".
While I can see the benefits of a single connection per origin, I was
wondering whether it is being pushed too far.

Thanks,

Adam Langley

Aug 16, 2011, 1:34:12 PM
to spdy...@googlegroups.com
On Mon, Aug 15, 2011 at 7:24 PM, William Chan (陈智昌)
<will...@chromium.org> wrote:
> Under this scenario, an attacker who is able to determine how much his
> request got compressed to a.foo.com can learn things about requests to
> the other hosts (i.e. cookies). It's a bit tricky to conceive of a way
> for the attacker to determine how much of his request was compressed,
> since the SPDY reverse proxy has already decompressed it by the time
> a.foo.com receives it. It's possible there's a timing channel that
> could reveal this information.

This is both perfectly reasonable and completely silly. It's
reasonable because it works, but silly because it works even when only
a single origin is in play and is an old (although unaddressed) issue
with compressing data from mixed sources.

Here's the (old, standard) attack:

The attacker is running script in evil.com. Concurrently, the same
client has a compressed connection open to victim.com and is logged
in, with a secret cookie. evil.com can induce requests to victim.com
by, say, adding <img> tags with a src pointing to victim.com. (The
compressed connection can either be SPDY or TLS+zlib.)

Each of those requests has a secret part (the cookie), a public part
(the browser's typical headers) and an attacker controlled part (the
URL). The problem is that compression ties all those parts together.
The attacker can watch the wire and measure the size of the requests
that are sent. By altering the URL, the attacker could attempt to
minimise the request size: i.e. when the URL matches the cookie.

I've just tried this with an HTTP request for fun and it's pretty easy
to get the first 5 characters in a base64 encoded cookie. The problems
are that the Huffman encoding makes 'e' very short and you have to be
careful not to compress against yourself (i.e. a URL of xyxyxy...).
With a good model of zlib, I think you could extract a ~40 byte cookie
with ~13K requests. That's a practical attack and would make a great
paper if someone has the time.
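For anyone who wants to see the shape of the measurement, here is a toy version (Python; the host name and cookie value are made up, and a real extraction needs many requests plus a model of zlib, exactly as described above):

import zlib

SECRET = "Cookie: session=dGhlc2VjcmV0"          # the victim's secret cookie
BASE = "GET /{path} HTTP/1.1\r\nHost: victim.example\r\nAccept: */*\r\n"

def wire_size(attacker_path: str) -> int:
    # Compressed size of a request whose URL the attacker controls but
    # which also carries the victim's secret cookie.
    request = BASE.format(path=attacker_path) + SECRET + "\r\n\r\n"
    c = zlib.compressobj(9)
    return len(c.compress(request.encode()) + c.flush())

ALPHABET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789+/="

def guess_next(known: str) -> str:
    # The guess that compresses best most likely extends the secret,
    # because it creates the longest match against the cookie.
    return min(ALPHABET, key=lambda ch: wire_size("Cookie: session=" + known + ch))

recovered = ""
for _ in range(4):
    recovered += guess_next(recovered)
print(recovered)   # ideally "dGhl", the start of the secret above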

However, we've never tried to defeat traffic analysis in HTTPS or
SPDY. Maybe cookie extraction is a more interesting result than
telling what page you're reading and so we should try harder?

One option is to compress attacker-controlled data with a different
context. Certainly the URL is attacker controlled. In the case of a
shared connection, headers from other requests may be attacker
controlled too, although I'm much less worried about this (to the point
where I don't believe it's worth any complexity). We have to be
careful not to increase the memory overhead if we do this.

Another is to pad to a standard granularity. I don't believe that will
actually be effective. An attacker will be able to get right up to a
padding boundary and still get a single bit of data out.

We can pad randomly (preferably inside the compression). This doesn't
stop the attack, but it slows it down by an arbitrary amount.
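A crude sketch of the random-padding idea (Python; this pads with a throwaway header before compression rather than inside the compressor, and the header name is hypothetical):

import os
import random
import zlib

def compress_with_random_padding(block: bytes) -> bytes:
    # Append a random-length, poorly compressible padding header before
    # compressing, so the on-wire size no longer maps cleanly to content.
    pad = os.urandom(random.randint(0, 32)).hex().encode()
    c = zlib.compressobj(9)
    return c.compress(block + b"x-padding: " + pad + b"\r\n") + c.flush()

print(len(compress_with_random_padding(b"GET / HTTP/1.1\r\nHost: example.com\r\n")))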


Cheers

AGL

William Chan (陈智昌)

Aug 16, 2011, 2:04:02 PM
to spdy...@googlegroups.com
Thanks Adam. Unless I hear more comments from others, I'm going to
move ahead with the deletion of the relevant section from the spec.

commit: https://github.com/willchan/SPDY-Specification/commit/cd2f05d34861836d8d0a2dd46d21b8919688e95e
updated spec: http://willchan.github.com/SPDY-Specification/draft-mbelshe-spdy-00.xml

I'll push this commit to Mike's repo tomorrow.

Roberto Peon

Aug 16, 2011, 2:09:13 PM
to spdy...@googlegroups.com
Will-
That sounds fine.
We'd talked in the past about using NOOP (no longer existing), SETTINGS, or other frames to add padding and/or fool timing-based attacks, but we've not worked on it actively.
If anyone out there has time to work on it, however, the protocol is amenable to such things, even without changes.

-=R

William Chan (陈智昌)

Aug 17, 2011, 2:10:56 PM
to spdy...@googlegroups.com
I've pushed the commit to mbelshe's repo. The relevant section has
been removed from the spec.

kumar.a

Aug 22, 2011, 1:40:06 AM
to spdy...@googlegroups.com
As of today, browsers open, say, two to four connections to the server. With SPDY, a browser will have one session and the required number of streams within that session. The server will continue to see as many unique clients as with HTTP; the number of connections is reduced, but not the number of clients.

Now, with header compression, particularly 'stateful' compression, aren't we increasing the per-client memory required on the server side multi-fold? Won't this become a bottleneck eventually?

We should have the option of resetting the compression context. Without it, this could become a scalability issue on the server side.

Think of a SPDY proxy/gateway: it has to manage 'n' times more SPDY sessions than a given backend server. With stateful compression, this could limit the proxy's capacity.

The 'preset dictionary' based zlib is an awesome idea for header compression. That gives the best compression for the HTTP headers, except perhaps for the Cookie/Referer headers. If you leave out the Cookie/Referer headers, one could get good, cheap compression with 'stateless' zlib using the preset dictionary.
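Roughly, the stateless-with-dictionary idea looks like this (Python sketch; the dictionary below is a tiny stand-in for SPDY's real header dictionary):

import zlib

# A tiny stand-in for the SPDY header dictionary (the real one is much larger).
SPDY_DICT = (b"optionsgetheadpostputdeletetraceacceptaccept-charsetaccept-encoding"
             b"accept-languageauthorizationexpectfromhostif-modified-since"
             b"user-agentcookiereferercontent-lengthcontent-type")

headers = b"host: a.foo.com\r\nuser-agent: test\r\naccept-encoding: gzip\r\n"

def stateless_compress(block: bytes) -> bytes:
    # Fresh context per header block, seeded with the preset dictionary:
    # no state carried between requests, so nothing to reset or share.
    c = zlib.compressobj(9, zlib.DEFLATED, 15, 8, zlib.Z_DEFAULT_STRATEGY, SPDY_DICT)
    return c.compress(block) + c.flush()

def stateful_compressor():
    # One long-lived context per connection: better ratios for repeated
    # values like cookies, but memory is held as long as the connection lives.
    return zlib.compressobj(9, zlib.DEFLATED, 15, 8, zlib.Z_DEFAULT_STRATEGY, SPDY_DICT)

print(len(stateless_compress(headers)))
c = stateful_compressor()
print(len(c.compress(headers) + c.flush(zlib.Z_SYNC_FLUSH)))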
 
Surely with stateful compression, the Cookie/Referer header values compress very well. But this comes at the cost of keeping the compression state. Could this be addressed through a simple protocol change, similar to the request/response line changes already made in SPDY?

Cookie/Referer headers add a good amount of size to the request, and anything one could do to reduce their size would surely help. We should see whether there is any other solution. Using stateful compression is OK in most cases on the client side, but it could be a big burden on the server side and on any intermediate proxies/gateways.

If we can't come up with alternatives, we should at least have the option to reset the compression context.
 
-thanks, Kumar.a

Pratik Mohanty

Aug 24, 2011, 3:22:29 AM
to spdy...@googlegroups.com
Hi,
I am new to SPDY and I want to create my own SPDY flip server and SPDY client. I have all the source. Could anyone help me with the implementation?

Thanks
Pratik

kumar a

Aug 30, 2011, 1:36:17 AM
to spdy...@googlegroups.com
Any comments or thoughts on this?
-thanks, Kumar.a

On Mon, Aug 22, 2011 at 11:10 AM, kumar.a <kumara...@gmail.com> wrote:

William Chan (陈智昌)

Aug 30, 2011, 2:25:14 AM
to spdy...@googlegroups.com
On Mon, Aug 22, 2011 at 11:10 AM, kumar.a <kumara...@gmail.com> wrote:
> As of today, browers opens say two/four connection to the server.  With

More like 6 connections for modern browsers
(http://www.stevesouders.com/blog/2008/03/20/roundup-on-parallel-connections/)

IIUC, since zlib uses a fixed size sliding window for the compression
context, resetting the compression context doesn't reduce the memory
footprint. I don't think a reset compression context frame is useful.
See http://www.gzip.org/zlib/zlib_tech.html for a discussion of memory
use wrt zlib.


Ashok Kumar

Sep 13, 2011, 3:27:32 AM
to spdy...@googlegroups.com
> footprint. I don't think a reset compression context frame is useful.
> See http://www.gzip.org/zlib/zlib_tech.html for a discussion of memory
> use wrt zlib.

Sorry again for reviving old threads. 

Since Kumar.a brought proxies into the picture, I was wondering what a typical number of parallel connections would be for any of the SPDY proxy implementers. 3,000 parallel connections need around a GB of memory just for stateful header compression contexts! (Assuming ~300 KB plus some overhead per connection for the compression/decompression contexts.) Will such a proxy scale in production deployments?

-Ashok

kumar.a

Sep 25, 2011, 1:42:49 PM
to spdy...@googlegroups.com
Hello William, thanks for the response.
 
> See http://www.gzip.org/zlib/zlib_tech.html for a discussion of memory
> use wrt zlib.
 
From the above link,
 
deflate memory usage (bytes) = (1 << (windowBits+2)) + (1 << (memLevel+9))
 
with the defaults of 15 and 8 for these parameters in zlib, this comes to 256 KB.
 
inflate memory usage (bytes) = (1 << windowBits) + 1440*2*sizeof(int)
 
For windowBits=15, this value comes to around 44 KB.
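For reference, plugging the defaults into those formulas (a quick Python check, assuming 4-byte ints):

# Per-direction zlib memory cost with the defaults (windowBits=15, memLevel=8).
window_bits, mem_level = 15, 8
deflate_bytes = (1 << (window_bits + 2)) + (1 << (mem_level + 9))
inflate_bytes = (1 << window_bits) + 1440 * 2 * 4   # sizeof(int) == 4 assumed
print(deflate_bytes, inflate_bytes)                 # 262144 (~256 KB), 44288 (~44 KB)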
 
For response header compression, holding a compression context of size > 256 KB for every connection would be very costly, and the savings from it would be small. In most cases, response headers are small.

With SPDY, we are reducing the number of connections, which is awesome. But the memory cost per connection has increased, say 200 times, and from a CPU point of view there is significant overhead too.

Everyone in the path (client/proxies/L7 firewall/IDS/.../server) would end up keeping this much resource in the backend.
 
-thanks, kumar.a

William Chan (陈智昌)

Sep 25, 2011, 2:57:42 PM
to spdy...@googlegroups.com
Yes, there are indeed costs with compression, which primarily impact servers. We're willing to make those tradeoffs to reduce latency of web page loads, which is the primary motivation behind SPDY (See http://www.chromium.org/spdy/spdy-whitepaper). For a more concrete discussion of scalability in production, I defer to the Google server folks.

Roberto Peon

Sep 26, 2011, 1:25:58 PM
to spdy...@googlegroups.com
As the number of sessions shared goes up on a SPDY connection, the amortized cost of a gzip context goes down.
With HTTP/HTTPS, the cost increases linearly, including system (i.e. TCP) buffers, which most people forget about.

The same goes for CPU vs HTTP. Connects are quite expensive and chew up a decent amount of kernel resources. The more streams you have over a single SPDY connection, the cheaper the SPDY connection becomes in comparison to HTTP. 

It isn't clear whether the memory or CPU cost of SPDY ends up higher or lower at this point; it will depend on usage.

-=R

Mike Belshe

Sep 28, 2011, 11:22:39 PM
to spdy...@googlegroups.com
On Thu, Aug 11, 2011 at 6:05 PM, William Chan (陈智昌) <will...@chromium.org> wrote:
> On Thu, Aug 11, 2011 at 5:13 PM, Jim Roskind <j...@google.com> wrote:
> > I didn't think the current draft was ambiguous.  I think (in your
> > example) that it calls for sending the FLAG_RESET_COMPRESSION with the
> > headers for 2, 3, and 4.  If there were any other interpretation, then
> > the motivation to avoid "privacy leakage" by sending any RESET flags
> > would not be satisfied.
> >
> > I'm not really that convinced of the security issue.  I'm guessing
> > there is a timing attack that is vaguely plausible, but as noted on
> > another thread by Adam Langley, SPDY had already put a lot of trust
> > into allowing both streams to reach a singular server on one
> > connection.
>
> Completely agreed.

I wrote this section a long time ago, and I am in agreement.  The concern that came up was that someone could infer previous requests based on the sizes of requests to crafted URLs.  For sites concerned about this, "don't share your SPDY sessions" is a reasonable answer.

The flag seems pretty unambiguous to me - it simply means when you see it or send it, you're resetting the context, synchronously, at that point in the stream.  So I didn't think it was horrible to add.

Mike
 

Mike Belshe

Sep 28, 2011, 11:23:45 PM
to spdy...@googlegroups.com
On Mon, Aug 15, 2011 at 4:24 PM, William Chan (陈智昌) <will...@chromium.org> wrote:
> Ok, I chatted with Adam Barth to understand his privacy leakage
> concerns. The conceived scenario is one where we use IP connection
> pooling to have a reverse proxy server that serves both a.foo.com and
> b.foo.com and both a.foo.com and b.foo.com resolve to the same IP
> address and present certs for *.foo.com.
>
> Under this scenario, an attacker who is able to determine how much his
> request got compressed to a.foo.com can learn things about requests to
> the other hosts (i.e. cookies). It's a bit tricky to conceive of a way
> for the attacker to determine how much of his request was compressed,
> since the SPDY reverse proxy has already decompressed it by the time
> a.foo.com receives it. It's possible there's a timing channel that
> could reveal this information.
>
> It's not clear to me how concerned to be about this scenario. If it's
> truly an issue, then I would rather see us use per-origin compression
> contexts, rather than FLAG_RESET_COMPRESSION.

I don't have a strong feeling, but that could be a lot of contexts in some environments.

Mike

 