SPDY is not enough!


loopback

Apr 11, 2011, 10:46:16 AM
to spdy-dev
Hi.

Maybe I don't understand the issue in detail, but IMHO we should move
towards protocol extensions.

Now that Google has decided to replace HTTP, it needs to go all the way.

A very large amount of time is spent on client requests for additional
information. And in fact, it does not matter how the transfer is
initiated: by a new TCP connection or by a new SCTP stream.

Likewise, to ensure maximum transmission efficiency, it is necessary to
reduce the number of transmitted information blocks and to use bulk
transfer.

For example, HTML could be translated into ASN.1, while JS, CSS, etc.
could be translated into bytecode. All of this should be combined into
a bundle and, as far as possible, sent as a single volume of data.

Or at least, the data should be grouped into logical blocks.

The client request already transmits (or should transmit) a full
description of what the client supports. Hence the server can
immediately generate the complete response package, rather than wait
until the client asks for something.

This should speed up the web several times over.

Adam Langley

Apr 11, 2011, 10:52:53 AM
to spdy...@googlegroups.com
On Mon, Apr 11, 2011 at 10:46 AM, loopback <2000lo...@gmail.com> wrote:
> For example, HTML could be translated into ASN.1, while JS, CSS, etc.
> could be translated into bytecode. All of this should be combined into
> a bundle and, as far as possible, sent as a single volume of data.

SPDY requires that the client support gzip compression of payloads.
The hope is that gzip quickly, simply and automatically gets pretty
good compression of the payload.

For more complex transformations of the HTML, CSS etc we have
http://code.google.com/speed/page-speed/docs/module.html
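For a rough sense of what gzip buys on typical markup, here is a small Python sketch. This is plain gzip applied to a repetitive HTML payload, purely as an illustration of the "quick, simple, automatic" compression described above, not the SPDY framing itself:

```python
import gzip

# A small, repetitive HTML payload, the kind gzip handles well.
# (Illustration only; this is plain gzip, not SPDY framing.)
html = b"<ul>" + b"".join(
    b'<li class="item"><a href="/page/%d">Item %d</a></li>' % (i, i)
    for i in range(50)
) + b"</ul>"

compressed = gzip.compress(html)
print(len(html), "->", len(compressed))  # markup compresses several-fold
```

Because HTML of this kind is dominated by repeated tags and attribute names, the compressed form comes out several times smaller with no change to the content encoding at all.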


Cheers

AGL

Mike Belshe

Apr 11, 2011, 11:31:37 AM
to spdy...@googlegroups.com, loopback
I'm not entirely sure what you are asking for - new content encodings?  I am skeptical that this will have an impact if you don't reduce the bytes transferred, and I'm not seeing how you reduce the number of bytes just by changing the encoding.

But, if you can prove it - i.e. implement a benchmark to show a real improvement, I'm easily convinced.

Mike
 

loopback

Apr 11, 2011, 5:39:25 PM
to spdy-dev
The problem with gzip is that it requires a temporary storage area for
the result.

HTML requires a lot of resources for parsing and interpretation.

All of this also affects the speed of the browser.

But no, the main idea is not a method of encoding or compression; I
gave ASN.1 only as an example of an encoding.

What I meant was batching of information.

For example, if a page contains 5 frames and 12 images, the classic
browser works as follows:
1) Download frame 1
2) ...
3) Download frame 5
4) Download image 1
5) ...
6) Download image 12

Of course, using SPDY (SCTP) the browser can do this over several
streams. But in order to download the images referenced from frame 5,
it first needs to download frame 5 itself.
And still, to get every single object, the browser must make a
separate request.


I propose the following approach:

1) The browser makes only one request to the server.
2) The server returns a single package that contains all 5 frames and
all 12 images at once.
The result can easily be predicted and compiled once on the server.

Thus, when there are many loaded objects with a complex hierarchy,
this method can reduce download and interpretation time several times
over.

And to make this approach work, the right encoding is necessary: an
encoding whose result can be displayed immediately (unlike gzip).
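A minimal sketch of this kind of single-package transfer, assuming a hypothetical length-prefixed bundle format (the framing and function names here are invented for illustration, not any real SPDY or HTTP encoding):

```python
import struct

def pack_bundle(resources):
    """Concatenate (path, body) pairs into one length-prefixed blob."""
    out = bytearray()
    for path, body in resources:
        p = path.encode("utf-8")
        out += struct.pack("!HI", len(p), len(body))  # path len, body len
        out += p + body
    return bytes(out)

def unpack_bundle(blob):
    """Recover the (path, body) pairs without any further requests."""
    resources, i = [], 0
    while i < len(blob):
        plen, blen = struct.unpack_from("!HI", blob, i)
        i += struct.calcsize("!HI")
        path = blob[i:i + plen].decode("utf-8"); i += plen
        body = blob[i:i + blen]; i += blen
        resources.append((path, body))
    return resources

# One request, one response carrying every frame and image of the page.
page = [("/frame1.html", b"<html>frame 1</html>"),
        ("/frame5.html", b"<html>frame 5</html>"),
        ("/img/logo.png", b"\x89PNG fake bytes")]
assert unpack_bundle(pack_bundle(page)) == page
```

The client can begin interpreting each resource as soon as its length-prefixed record arrives, which is the "display the result immediately" property the proposal asks for.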

Mark Watson

Apr 11, 2011, 5:48:33 PM
to spdy...@googlegroups.com

Mike Belshe

Apr 11, 2011, 5:51:58 PM
to spdy...@googlegroups.com, loopback
Ah - you might be interested in this:  http://limi.net/articles/resource-packages-spec-ready-for-prototyping/    It is a similar proposal.

I have two objections to that approach.
a) It breaks web cache controls by creating two naming schemes for each resource.  In addition to "http://www.foo.com/foo.jpg", there is "http://www.foo.com/bundle.pkg/foo.jpg".  As long as you are grabbing *everything* together, it might work okay.
b) The bundler needs to get the resources in the right order to not delay on-paint.  This sounds simple, but is surprisingly complex.  

With SPDY, you can keep naming consistent - every resource has one name - so cache controls all work.  Priorities are completely dynamic, so the server doesn't have to worry about it.

Mike

loopback

Apr 11, 2011, 7:14:54 PM
to spdy-dev
Thanks for the link.

Yes, it is very similar to what I'm saying.

Limi's proposal has one disadvantage, which is what generates your
objections: it is called "backward compatibility".

However, if a new protocol does not provide that compatibility in the
first place, no duplicate resources are needed, so there will be no
parallel naming scheme.

> The bundler needs to get the resources in the right order to not
> delay on-paint. This sounds simple, but is surprisingly complex.

The browser handles this task easily; I do not think it will be any
harder for the server.
Dynamic content of course complicates the task somewhat, but truly
dynamic content is either non-existent or very rare.

In essence, this is a compilation of results. Static and quasi-dynamic
content can be cached (precompiled).

The problem with external links, by the way, can be solved by proxying
requests (in some cases, for example, though not in all).


By the way, I do not propose to throw SPDY away. I propose to extend it.

Mike Belshe

Apr 11, 2011, 7:42:20 PM
to spdy...@googlegroups.com, loopback
On Mon, Apr 11, 2011 at 4:14 PM, loopback <2000lo...@gmail.com> wrote:
> Thanks for the link.
>
> Yes, it is very similar to what I'm saying.
>
> Limi's proposal has one disadvantage, which is what generates your
> objections: it is called "backward compatibility".
>
> However, if a new protocol does not provide that compatibility in the
> first place, no duplicate resources are needed, so there will be no
> parallel naming scheme.

I'm not following - are you proposing to restructure the nature of how we write HTML?  HTML uses URIs.  I don't really want to rewrite the entire browser :-)
 

> > The bundler needs to get the resources in the right order to not
> > delay on-paint. This sounds simple, but is surprisingly complex.
>
> The browser handles this task easily; I do not think it will be any
> harder for the server.
> Dynamic content of course complicates the task somewhat, but truly
> dynamic content is either non-existent or very rare.
>
> In essence, this is a compilation of results. Static and quasi-dynamic
> content can be cached (precompiled).
>
> The problem with external links, by the way, can be solved by proxying
> requests (in some cases, for example, though not in all).
>
> By the way, I do not propose to throw SPDY away. I propose to extend it.

The SPDY mechanics for this would be to use server push.  You request resource A, and the server knows to 'push' many resources in response to that request.  So you get all of the data flowing down, without the round trips from the browser.  The problem with this is that the browser may not need a resource, and pushing it would be wasteful.

I'm not sure how your proposal would deal with that - how does the server know when it needs to send a resource or when the client already has it?

Mike
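The push trade-off Mike describes can be sketched as follows. The push map and handler are hypothetical illustrations, not the SPDY API; the point is that a naive server pushes blindly, which is exactly where the wasted bytes come from:

```python
# Hypothetical server-side push map: requesting a page triggers a push
# of its sub-resources (a sketch, not the SPDY wire protocol).
PUSH_MAP = {
    "/index.html": ["/style.css", "/app.js", "/logo.png"],
}

def handle_request(path):
    """Serve `path` and push its associated resources without waiting
    for the client to ask. The server cannot tell which of these the
    client already has cached, so some pushes may be wasted bytes."""
    pushed = PUSH_MAP.get(path, [])
    return path, pushed

served, pushed = handle_request("/index.html")
print(served, pushed)
```

Every round trip saved by a push of a needed resource is offset by the bandwidth lost on a push of a resource the client already had, and nothing in the request tells the server which case it is in.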
 

loopback

Apr 12, 2011, 5:03:36 PM
to spdy-dev
> I'm not following - are you proposing to restructure the nature of
> how we write HTML? HTML uses URIs. I don't really want to rewrite the
> entire browser :-)

:)
I propose a method of encoding (which may imply a restructuring of
HTML). Examples of such attempts can be seen in the WAP and MMS
specifications.
They also contain examples of URL encoding, given that the resource is
already in the packet.


> The SPDY mechanics for this would be to use server push. You request
> resource A, and the server knows to 'push' many resources in response
> to that request. So you get all of the data flowing down, without the
> round trips from the browser.

Indeed, SPDY push does provide the bulk transfer.
True, SPDY push also opens up the possibility of spoofing resources,
since the transfer is made over different streams.
This can be a very serious security hole.


> The problem with this is that the browser may not
> need a resource, and pushing it would be wasteful.
> I'm not sure how your proposal would deal with that - how does the server
> know when it needs to send a resource or when the client already has it?

As a solution to the problems you describe, I propose to include in
the request a list of the resources available on the client side,
together with the time they were received (and/or a checksum).
Resources that have already expired would be cut off by the client
itself.
The server can then decide what it should send to the client and what
the client already has.

Just in case, I looked at the cache on several workstations.
The list is a few tens of entries long. Not representative, of course.
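The manifest idea might look like the sketch below. All of the names, the manifest shape, and the choice of SHA-256 as the checksum are hypothetical; SPDY itself defines no such manifest:

```python
import hashlib

# Assumed server state: path -> current body of each resource.
SERVER_RESOURCES = {
    "/index.html": b"<html>...</html>",
    "/style.css":  b"body { color: black }",
    "/app.js":     b"console.log('hi')",
}

def checksum(body):
    return hashlib.sha256(body).hexdigest()

def resources_to_push(client_manifest):
    """client_manifest: {path: checksum} of the client's still-fresh
    cache entries (expired ones already cut off client-side). Push only
    resources the client lacks or holds a stale copy of."""
    return sorted(path for path, body in SERVER_RESOURCES.items()
                  if client_manifest.get(path) != checksum(body))

# Client already holds an up-to-date /style.css; everything else is pushed.
manifest = {"/style.css": checksum(SERVER_RESOURCES["/style.css"])}
print(resources_to_push(manifest))  # ['/app.js', '/index.html']
```

A manifest of a few tens of entries, as observed above, would add only a few kilobytes to the request while letting the server skip every push the client does not need.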
