Looking at this, it seems that a server-based OSML implementation could push via the tag and achieve a faster 'first render', which is what the question appeared to be about.
You are correct that doing everything at the client gives you a different level of flexibility, but you pay for that extra flexibility by reaching 'first render' a bit later due to a few more round trips to the server. The cost of getting the data out depends on a lot of factors, so I can neither agree nor disagree with your point about scale. Scaling at the server vs. the client has a lot to do with the architecture in use. For example, Shindig appears to be built to scale at the client. (I might be misreading the code and architectural docs, though...)
I think from the reactions we can conclude that everyone sees proposal #2 as the best solution; to quote: the content is fetched from the dev's server, and cached on the container while respecting TTLs.
"2. Fetch/Cache/Invalidate Model: In this model, data are fetched and cached. TTLs are respected in determining the life of the cache record. The container is also required to provide an API to allow developers to invalidate certain cache records. On top of this, the container can decide whether and when to do a pre-fetch."
Since we already use standard HTTP TTL headers, that seems to be the only consistent way for the dev to define the TTL.
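As an illustration of "standard HTTP TTL headers", a minimal sketch of how a container might derive a cache lifetime from `Cache-Control: max-age` (falling back to `Expires`); this is my own sketch, not anything from a spec:

```python
# Sketch: derive a cache TTL (in seconds) from standard HTTP caching headers.
# Cache-Control: max-age takes precedence over Expires; names are illustrative.
import re
import time
from email.utils import parsedate_to_datetime

def ttl_from_headers(headers, now=None):
    """Return the TTL in seconds implied by standard HTTP caching headers."""
    now = now if now is not None else time.time()
    match = re.search(r"max-age=(\d+)", headers.get("Cache-Control", ""))
    if match:
        return int(match.group(1))
    expires = headers.get("Expires")
    if expires:
        return max(0, int(parsedate_to_datetime(expires).timestamp() - now))
    return 0  # no explicit TTL: treat the response as immediately stale

print(ttl_from_headers({"Cache-Control": "public, max-age=300"}))  # 300
```

A real implementation would also honour directives like `no-store` and `s-maxage`, but the point is that the dev already controls the TTL through headers they serve today.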
Defining the fetch part is part of the data pipelining / proxied content discussion, so I don't think we have to make that part of this discussion.
Invalidation does need a spec, though. Since we already have a REST API that is a MUST in the OpenSocial specification, I would suggest we add an endpoint to it for this purpose.
I guess the one open issue is to come up with what goes into the DELETE action URL. Looking at things from just the gadget's point of view: an app ID, a mod ID, and optionally a group/user ID (or is this action by definition for 'all'?).
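To make the open question concrete, here is a hypothetical sketch of building such a DELETE URL; the path scheme and parameter names (`/cache/{appId}`, `modId`, `userId`) are purely illustrative, not a proposal from the spec:

```python
# Hypothetical sketch of an invalidation end-point URL on the REST API.
# All path segments and parameter names are invented for illustration.
from urllib.parse import urlencode, quote

def invalidation_url(base, app_id, mod_id=None, user_id=None):
    """Build a DELETE URL for a cache-invalidation end-point.

    Omitting mod_id / user_id could mean 'invalidate for all modules/users'.
    """
    url = f"{base}/cache/{quote(app_id)}"
    params = {}
    if mod_id is not None:
        params["modId"] = mod_id
    if user_id is not None:
        params["userId"] = user_id
    return url + ("?" + urlencode(params) if params else "")

print(invalidation_url("http://container.example.com/rest", "12345", mod_id="67"))
```

Whether the optional IDs go in the path or the query string is exactly the kind of detail the write-up John mentions would need to pin down.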
John@Yahoo has a detailed write-up about the invalidation protocol. It will be posted in the next couple of days.
-Charlie
From: opensocial-an...@googlegroups.com [mailto:opensocial-an...@googlegroups.com] On Behalf Of Kevin Brown
Sent: Sunday, November 02, 2008 11:31 PM
To: opensocial-an...@googlegroups.com
Subject: [opensocial-and-gadgets-spec] Re: Push vs. Fetch/Cache/Invalidate
On Sun, Nov 2, 2008 at 6:12 AM, Chris Chabot <cha...@google.com> wrote:
http://ietfreport.isoc.org/idref/draft-nottingham-http-cache-channels/
http://www.mnot.net/cache_channels/
Another data point, which isn't called a caching technology but is structurally similar, is FriendFeed's Simple Update Protocol:
I just want to clarify that I am not proposing cache channels as the invalidation mechanism; they are just an illustrative example of a proposed standard in this space. I'm also quite concerned with the overhead that might be introduced with a large and distributed cache, as we'll be encountering this in our implementation. Whether keys are developer- or container-defined, the implementation strategy works out the same.
First, some of the assumptions I'm working from:
1. When a key is invalidated, the developer is expressing the possibility that every resource in that set has changed and none of the resources outside of the set have changed.
2. After an invalidation, a cache must not serve its current version of at least the set of resources identified by the key.
3. Most of the contents of a cache will not be accessed in the future, so as long as false invalidations aren't clustered they are unlikely to have an impact.
I apologize for being pedantic, but the important conclusion is that it isn't necessary to maintain a mapping from a key to a resource.
It is necessary to maintain a mapping from a resource to a key, but only an approximation of a key.
Let's open up the use case for a system that could only invalidate on the basis of [gadget URL + OpenSocial ID + view]. A reasonable implementation strategy would be:
* On receiving a cacheable document via proxy, makeRequest or data pipeline:
- Calculate the hash of the gadget URL + OpenSocial ID + view.
- Get or store that hash with a version number.
- Add the hash and the version to the resource headers (like an ETag).
* On receiving an invalidation notification:
- Store a changed version number with the hash of the invalidation key.
* On validating a resource in cache:
- Look up the version number of the hash stored with the document; if it matches, the document is valid.
The storage space required for tracking validation state grows linearly with the number of documents (not the document size) and sub-linearly to track key versions.
A container would probably have further keys for administrative purposes, as a developer or a container might wish to flush by: URL, user, user + URL, gadget URL, view, view + user, io.makeRequest domain, language, static resources (images or CSS), app ID, developer ID, gadget URL + user, or access token. Data pipelining introduces indirect dependencies: if a gadget server posts the result of a people request to a proxy view, then a change in that people request should also invalidate the proxy view itself.
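The strategy above can be sketched in a few lines. This is only an illustration of the hash-and-version idea under the [gadget URL + OpenSocial ID + view] key scheme from the text; class and method names are invented:

```python
# Sketch of lazy invalidation via key hashes and version numbers:
# a resource stores its key's version at cache time, an invalidation bumps
# the key's version, and stale entries simply fail validation on next access.
import hashlib

class InvalidationIndex:
    def __init__(self):
        self.key_versions = {}  # key hash -> current version number

    @staticmethod
    def key_hash(gadget_url, opensocial_id, view):
        raw = f"{gadget_url}|{opensocial_id}|{view}".encode()
        return hashlib.sha1(raw).hexdigest()

    def stamp(self, gadget_url, opensocial_id, view):
        """On caching a resource: return (hash, version) to store with it,
        e.g. in an ETag-like header."""
        h = self.key_hash(gadget_url, opensocial_id, view)
        return h, self.key_versions.setdefault(h, 1)

    def invalidate(self, gadget_url, opensocial_id, view):
        """On an invalidation notification: bump the key's version."""
        h = self.key_hash(gadget_url, opensocial_id, view)
        self.key_versions[h] = self.key_versions.get(h, 1) + 1

    def is_valid(self, stamped_hash, stamped_version):
        """On validating a cached resource: compare stored vs. current version."""
        return self.key_versions.get(stamped_hash) == stamped_version

idx = InvalidationIndex()
stamp = idx.stamp("http://example.com/gadget.xml", "user:42", "canvas")
idx.invalidate("http://example.com/gadget.xml", "user:42", "canvas")
print(idx.is_valid(*stamp))  # False
```

Note the storage properties claimed in the text fall out directly: one (hash, version) pair per cached document, plus one counter per distinct key, with no key-to-resource mapping anywhere.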