[0.9 Proposal] Lightweight JS APIs


Evan Gilbert

Nov 7, 2008, 9:18:49 PM
to OpenSocial - OpenSocial and Gadgets Specification Discussion
I wasn't sure I should submit this proposal for 0.9 as it doesn't introduce new functionality (although it is fairly easy to implement).

However I think our current JS APIs could benefit from a simplification, and if we don't add the new version for 0.9 we may not be able to deprecate the old version for 1.0. Even if we can't get agreement in the 0.9 time frame I felt the discussion would be productive and possibly pave the way for a 1.0 enhancement.

Detailed proposal here: http://docs.google.com/Doc?id=dg6csr2s_7fmp67rd3&hl=en, and summary below.

Evan

Summary -

Now that we have the REST APIs, the strongly typed JS data access APIs in OpenSocial don't provide as clear a value.
This proposal is to add a lightweight JS API that maps to REST equivalents using JSON as input and output.

For compatibility purposes, we would leave existing JS APIs in place, and consider deprecating them in the 1.0 timeframe if developers move to these new APIs.

Changes
  • Introduces standard method signature for data requests - JSON in, JSON out
  • Input and output parameters match REST calls 1:1 and ideally would share documentation. Enums would be provided optionally for documentation purposes but would not be used in code.
  • Data requests look like functions on "service" objects, instead of being created via opensocial.DataRequest.newXXXRequest()
  • Data requests can be called individually or as part of a batch.
  • Minor syntax changes - moving to opensocial.data namespace, having the batch object be called a batch + a few more

Before/After Example

Before

var req = opensocial.newDataRequest();
var params = {};
params[opensocial.DataRequest.PeopleRequestFields.PROFILE_DETAILS] =
    [opensocial.Person.Field.NAME, opensocial.Person.Field.PROFILE_URL];

req.add(req.newFetchPersonRequest(opensocial.IdSpec.PersonId.VIEWER,
    params), 'viewer');


var viewerFriends = opensocial.newIdSpec({ "userId" : "VIEWER", "groupId" : "FRIENDS" });
var params = {};
params[opensocial.DataRequest.PeopleRequestFields.MAX] = 100;
params[opensocial.DataRequest.PeopleRequestFields.FILTER] =
    opensocial.DataRequest.FilterType.HAS_APP;


req.add(req.newFetchPeopleRequest(viewerFriends, params), 'viewerFriends');

var viewer = opensocial.newIdSpec({ "userId" : opensocial.IdSpec.PersonId.VIEWER});
req.add(req.newFetchPersonAppDataRequest(viewer, 'gifts'), 'data');

req.add(req.newFetchPersonAppDataRequest(viewerFriends, 'gifts'), 'viewerFriendData');
req.send(onLoadFriends);


After
var batch = opensocial.data.newBatch();
opensocial.data.getPeopleService().getPerson(
    {userid: '@viewer', fields: ['name', 'profileUrl']}, batch, 'viewer');

opensocial.data.getPeopleService().getPeople({userid: '@viewer', groupid: '@friends',
    max: 100, filter: 'hasApp'}, batch, 'viewerFriends');

opensocial.data.getAppDataService().getData({userid: '@viewer', keys: ['gifts']}, batch, 'data');
batch.execute(onLoadFriends);

Kevin Brown

Nov 8, 2008, 2:10:00 AM
to opensocial-an...@googlegroups.com
Given that we have at least 2 JSON-based formats for social data, why not just use one more or less directly? Why do I need opensocial.data.getPeopleService().getPeople when I can just use the entire defined request structure?

Really the only thing stopping using the data APIs directly from, say, makeRequest, is the URL pattern and authentication scheme. To me that means you can sum up the entire API as:

var request = [
  {first request},
  {second request},
  {and so on and so forth}
];

opensocial.getData(request, callback);

Keeping something like getPeopleService still involves lots of strong typing. What if I want to extend the data API that I support? Do I now have to create a new JS API as well?

Arne Roomann-Kurrik

Nov 9, 2008, 5:06:34 PM
to opensocial-an...@googlegroups.com
I really like the idea of giving a simplified JS library, but agree with Kevin's feedback.  Might this proposal be simplified to the addition of the opensocial.getData function?  (or opensocial.makeRPC ?)

~Arne
--
OpenSocial IRC - irc://irc.freenode.net/opensocial

Evan Gilbert

Nov 10, 2008, 1:30:34 AM
to opensocial-an...@googlegroups.com
Thanks for the feedback:

Definitely need a version that calls to an arbitrary URL or JSON-RPC endpoint. Added opensocial.data.makeRequest() to support both these cases.

Still feel that providing a batch syntax with named functions is a cleaner API for developers than formatting a big block of JSON with all of the requests. The goal of the API is to make it look like the developer is just calling functions, and then add the minimum overhead necessary to support batching semantics. So my preference would be to still have these ~6-10 convenience functions on top of the raw data layer APIs.

I also had a couple of discussions with people about the "service" layer. I think it is valuable, but it also seems to be extra typing in the common use cases. Some ideas:
- We could simplify by leaving out the services for common calls, i.e. opensocial.getPeople() vs. opensocial.getPeopleService().getPeople(). This is not extensible to all calls, as we end up with too many functions in a top-level namespace, but we can capture the 90% case.
- For pluggability, I think the service layer is valuable. Although these are meant to be lightweight APIs on top of requests that look like RESTful calls, we want to make it easy for containers to provide custom implementations. For example, you might always be able to handle a person request from the other page via gadgets.rpc, so you could replace the people service with a custom handler that just sends the data to the parent page.
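To make the pluggability idea concrete, here is a minimal sketch of what a swappable service registry could look like. Everything here - registerService, the services_ map, and the synchronous canned responses - is a hypothetical illustration, not proposed spec surface:

```javascript
// Hypothetical service registry; none of these names are from the spec.
var opensocial = { data: {} };

opensocial.data.services_ = {};

// Containers (or gadgets) can swap in a custom implementation.
opensocial.data.registerService = function(name, impl) {
  opensocial.data.services_[name] = impl;
};

opensocial.data.getPeopleService = function() {
  return opensocial.data.services_['people'];
};

// Default implementation: a real container would go over the wire here.
opensocial.data.registerService('people', {
  getPerson: function(params, callback) {
    callback({ id: params.userid, name: 'Default ' + params.userid });
  }
});

// A container could replace it, e.g. to answer from data already on the
// parent page via gadgets.rpc, without touching any calling code.
opensocial.data.registerService('people', {
  getPerson: function(params, callback) {
    callback({ id: params.userid, name: 'Cached ' + params.userid });
  }
});
```

With this shape, overriding the people service is one registerService call; the rest of the gadget keeps calling opensocial.data.getPeopleService() unchanged.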

Evan

Lane LiaBraaten (Google)

Nov 10, 2008, 11:10:00 AM
to OpenSocial - OpenSocial and Gadgets Specification Discussion
Taking into account the (approved) proposal[1] to integrate person and
app data, the 'after' case might look something like this:

var batch = opensocial.data.newBatch();
var service = opensocial.data.getPeopleService();
service.getPerson(
    { userid: '@viewer', fields: ['name', 'profileUrl'], keys: ['gifts'] },
    batch, 'viewer');
service.getPeople(
    { userid: '@viewer', groupid: '@friends', max: 100, filter: 'hasApp' },
    batch, 'viewerFriends');
batch.execute(onLoadFriends);

I like where this is headed.

Another note...how does this affect the supportsField method[2]?

/*before*/
var env = opensocial.getEnvironment();
var supportsEmail = env.supportsField(
    opensocial.Environment.ObjectType.PERSON,
    opensocial.Person.Field.EMAILS);
var supportsMedia = env.supportsField(
    opensocial.Environment.ObjectType.ACTIVITY,
    opensocial.Activity.Field.MEDIA_ITEMS);

/*after*/
var supportsEmail =
    opensocial.data.getPeopleService().supportsField('email');
var supportsMedia =
    opensocial.data.getActivityService().supportsField('mediaItems');

Does this proposal mean we could deprecate the opensocial.Environment.ObjectType classes? I've always thought those made the supportsField method kinda clunky.

-Lane

[1] http://groups.google.com/group/opensocial-and-gadgets-spec/browse_thread/thread/a15eedd63ca31879?hl=en
[2] http://wiki.opensocial.org/index.php?title=Opensocial.Environment_%28v0.8%29#opensocial.Environment.supportsField

John Hayes

Nov 10, 2008, 6:09:00 PM
to opensocial-an...@googlegroups.com
I'd like to suggest a rearrangement of the batch API to separate JS->Container batching (a performance mechanism) from JS->Developer batching (a convenience mechanism).
 
1. All data APIs take a callback function and are guaranteed to never synchronously enter that function. If the data is already available, it must set a timeout to wait for the JS to be idle.
 
Example:
 
function hasViewer(v) { ... }

var service = opensocial.data.getPeopleService();
service.getPerson(
    { userid: '@viewer', fields: ['name', 'profileUrl'], keys: ['gifts'] }, hasViewer);

2. A user can organize several calls into a batch - it will defer the callback until all callbacks have completed and assemble the callback results into an Object.
 
function hasAllData(data) { data.viewer, data.viewerFriends ... }
 
var batch = opensocial.util.makeBatch(hasAllData);

var service = opensocial.data.getPeopleService();
service.getPerson(
    { userid: '@viewer', fields: ['name', 'profileUrl'], keys: ['gifts'] }, batch('viewer'));
service.getPerson(
    { userid: '@viewer', fields: ['name', 'profileUrl'], keys: ['gifts'] }, batch('viewerFriends'));
 
Consequences:
 
The decision about which operations should be batched into a single HTTP call lives inside the opensocial library, but it doesn't need to track client batches. The client batching can be independent of the request generator (and cajoled as a result).
John

Evan Gilbert

Nov 10, 2008, 7:58:57 PM
to opensocial-an...@googlegroups.com
This syntax is definitely more elegant - it means that the second parameter to calls can always be a function. I have a slight preference for this suggestion, but would love to hear from others.

On Mon, Nov 10, 2008 at 3:09 PM, John Hayes <john.mar...@gmail.com> wrote:
I'd like to suggest a rearrangement of the batch API to separate JS->Container batching (an performance mechanism) from JS->Developer batching (a convenience mechanism).
 
1. All data APIs take a callback function and are guaranteed to never synchronously enter than function. If the data is already available, it must set a timeout to wait for JS to be idle.

The timeout is a latency issue, and I think we need the option for synchronous processing, as even setTimeout(0) can end up waiting a long time while the browser does other work. With preloading, this data is often already available, so we are introducing latency with the requirement for async.

If it's problematic to support in all call contexts, we can state that we only support synchronous callback while processing initial load handlers.
 
 
Example:
 
function hasViewer( var v ) { ... }
 
var service = opensocial.data.getPeopleservice();
service.getPerson(
   { userid: '@viewer', fields: ['name', 'profileUrl'], keys: ['gifts'] }, hasViewer );

2. A user can organize several calls into a batch - it will defer callback until all callbacks have completed and assign the callback results into Object.
 
function hasAllData( var data ) { data.viewer, data.viewerFriends ... }
 
var batch = opensocial.util.makeBatch( hasAllData );

var service = opensocial.data.getPeopleservice();
service.getPerson(
   { userid: '@viewer', fields: ['name', 'profileUrl'], keys: ['gifts'] }, batch( 'viewer' ) );
service.getPerson(
   { userid: '@viewer', fields: ['name', 'profileUrl'], keys: ['gifts'] }, batch( 'viewerFriends' ) );
 
Consequences:
 
The decision for which operations should be batched into a single HTTP call is inside the opensocial library, but it doesn't need to track client batches. The client batching can be independent of the request generator (and cajoled as a result).

It wasn't clear to me why this syntax makes a functional difference.

It seems that

service.getPerson({userid: '@viewer'}, batch('viewer')); and
service.getPerson({userid: '@viewer'}, batch, 'viewer');

can be implemented almost equivalently.

Is this a convenience so that service implementers don't have to check whether the 2nd param is a batch?
 

John Hayes

Nov 10, 2008, 8:41:15 PM
to opensocial-an...@googlegroups.com
Comments inline:

On Mon, Nov 10, 2008 at 4:58 PM, Evan Gilbert <uid...@google.com> wrote:
This syntax is definitely more elegant - it means that the second parameter to calls can always be a function. I have a slight preference for this suggestion, but would love to hear from others.

On Mon, Nov 10, 2008 at 3:09 PM, John Hayes <john.mar...@gmail.com> wrote:
I'd like to suggest a rearrangement of the batch API to separate JS->Container batching (an performance mechanism) from JS->Developer batching (a convenience mechanism).
 
1. All data APIs take a callback function and are guaranteed to never synchronously enter than function. If the data is already available, it must set a timeout to wait for JS to be idle.

The timeout is a latency issue and I think we need the option for synchronous processing, as even setTimeout(0) can can end up waiting for a long time as the browser does other work. Wiith preloading, this data often is already available and so we are introducing latency with the requirement for async.

If it's problematic to support in all call contexts, we can state that we only support synchronous callback while processing initial load handlers
 
I think this is an important issue for behavioral consistency. I've found with other APIs that spurious synchronous callbacks made the code generally less stable (since developers often didn't test one of the cases) and more complex to unit test, since the number of combinations of callbacks can be quite large.
 
If you're concerned about making it faster without a timer, I'd recommend generating code at strategic points. After a gadget's source, make a call to immediately complete events. Caja-transformed code provides a flushing opportunity at each event handler. All of these are implementation specific and probably shouldn't be mentioned in the spec except as implementation guidance.
 

It wasn't clear to me why this syntax makes a functional difference.

Seems that
service.getPerson({userid: '@viewer'}, batch('viewer')); and
service.getPerson({userid: '@viewer'}, batch, 'viewer')?

can be implemented almost equivalently.

Is this a convenience so that service implementers don't have to check whether the 2nd param is a batch?
 
The parity I'm seeking is between:
 
service.getPerson({ .. }, gotMyViewer); and
service.getPerson({ ... }, batch( 'viewerInBatch' ) );
 
The service provider is not required to know whether a client-batch is being used, how that batch is assembled, or when the downstream callback may occur. To make it more concrete, here's a sample implementation of gadget.util.makeBatch:
 
gadget.util.makeBatch = function(callback) {
  var params = {};
  var count = 0;

  return function(paramName) {
    count = count + 1;
    return function(result) {
      params[paramName] = result;
      count = count - 1;
      if (count == 0) {
        callback(params);
      }
    };
  };
};
 
Inside, the service always just calls callback(result). The client only writes an extra line to create a batch if they want to delay delivery until all of the requests are complete. The client could also create deliberate delays for delivery - for example, preload data and wait for a click:
 
var clickShow = gadget.util.makeBatch(function(result) {
  document.getElementById('page2').innerHTML = result.pageContents;
});

service.dataRequest({ url: 'http://myapp.com/pages/2' }, clickShow('pageContents'));
document.getElementById('tab2').addEventListener('click', clickShow('clicked'));
 
The download starts immediately, but the document is not rendered until the page has been selected. If the page is selected early it shows nothing, then the content when it's available.
 
John
 

Kevin Brown

Nov 10, 2008, 9:11:25 PM
to opensocial-an...@googlegroups.com
On Mon, Nov 10, 2008 at 5:41 PM, John Hayes <john.mar...@gmail.com> wrote:
I think this is an important issue for behavioral consistency. I've found with other APIs that spurious synchronous callbacks made the code generally less stable (since developers often didn't test one of the cases) and more complex to unit test since the number of combinations of callbacks can be quite large.

See also Louis Ryan's proposal to require callbacks to be async. Sync callbacks have caused a lot of problems in real deployments.
 

Evan Gilbert

Nov 11, 2008, 1:13:17 AM
to opensocial-an...@googlegroups.com
Still liking this enhancement - a couple of notes below.

I'll update the proposal with these changes soon.

On Mon, Nov 10, 2008 at 5:41 PM, John Hayes <john.mar...@gmail.com> wrote:
Comments inline:

I think this is an important issue for behavioral consistency. I've found with other APIs that spurious synchronous callbacks made the code generally less stable (since developers often didn't test one of the cases) and more complex to unit test since the number of combinations of callbacks can be quite large.
 
If you're concerned about making it faster without a timer, I'd recommend generating code at strategic points. After a gadget's source, make a call to immediately complete events. Caja-transformed code provides a flushing opportunity at each event handler. All of these are implementation specific and probably shouldn't be mentioned in the spec except as implementation guidance.

Yep, this sounds like a great idea. We can set it up so that the initial batch can be called immediately after gadget load handlers via standard code, so that we don't have to set a timeout in JavaScript. This is the only use case where I think the latency matters, and the goal isn't to be synchronous, it's just to avoid calling setTimeout().
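A minimal sketch of that idea, with hypothetical names throughout: completed results are queued until the container performs one explicit flush right after the gadget's load handlers run, and only calls made after that point fall back to setTimeout(0):

```javascript
// Hypothetical flush-after-load delivery; nothing here is spec surface.
var pendingCallbacks = [];
var flushed = false;

function deliverAsync(callback, data) {
  if (flushed) {
    // Past initial load: fall back to a zero timeout.
    setTimeout(function() { callback(data); }, 0);
  } else {
    // Before the flush: queue, so the callback is never synchronous
    // but also never pays timer latency for preloaded data.
    pendingCallbacks.push(function() { callback(data); });
  }
}

// The container calls this once, immediately after the gadget's load
// handlers, so preloaded data incurs no setTimeout latency.
function flushPendingCallbacks() {
  flushed = true;
  var toRun = pendingCallbacks;
  pendingCallbacks = [];
  for (var i = 0; i < toRun.length; i++) {
    toRun[i]();
  }
}
```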

 
 

It wasn't clear to me why this syntax makes a functional difference.

Seems that
service.getPerson({userid: '@viewer'}, batch('viewer')); and
service.getPerson({userid: '@viewer'}, batch, 'viewer')?

can be implemented almost equivalently.

Is this a convenience so that service implementers don't have to check whether the 2nd param is a batch?
 
The parity I'm seeking is between:
 
service.getPerson({ .. }, gotMyViewer); and
service.getPerson({ ... }, batch( 'viewerInBatch' ) );

I think it might be good to have the batch be an object instead of a function - having the only call be batch(string) might be limiting. Some possible functions:
- batch.send(opt_priority, opt_timeout). Developers might want finer-grained control over sending batches - this would allow for controlling request flow for multiple batches.
- batch.key('viewerInBatch') would then be the parameter. This could still be batch('viewerInBatch') if we really liked the syntax.
- I could also see some control over caching behavior, etc.

Note that the result of batch.key('viewerInBatch') would still be a function.
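A sketch of this batch-as-object variant, building on the makeBatch closure idea; newBatch, key, and the no-op send are all hypothetical names:

```javascript
// Hypothetical batch object: batch.key(name) hands out a callback, and
// the user callback fires once every handed-out key has reported back.
function newBatch(callback) {
  var results = {};
  var outstanding = 0;
  return {
    key: function(name) {
      outstanding++;
      return function(result) {
        results[name] = result;
        outstanding--;
        if (outstanding === 0) {
          callback(results);
        }
      };
    },
    send: function(opt_priority, opt_timeout) {
      // Placeholder: a real container could use this for finer-grained
      // control over when and how the batched HTTP request goes out.
    }
  };
}
```

The result of batch.key(name) is still a plain function, so services keep the simple "second parameter is a callback" contract.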
 
 
The service provider is not required to know whether a client-batch is being used, how that batch is assembled, or when the downstream callback may occur. To make it more concrete, here's a sample implementation of gadget.util.makeBatch:
 
gadget.util.makeBatch = function(callback) {
  var params = {};
  var count = 0;

  return function(paramName) {
    count = count + 1;
    return function(result) {
      params[paramName] = result;
      count = count - 1;
      if (count == 0) {
        callback(params);
      }
    };
  };
};
 
Inside, the service always just calls callback(result).

Many services will have to know about batches so that we can combine multiple calls into a single HTTP request.

I still like this, as it is nice to allow services to be batch-unaware - for example, if a call can be handled with local data, there's no reason it needs to know about batching.

A service that takes advantage of batching would do something like:

PeopleService.getPeople = function(params, callbackOrBatch) {
  if (callbackOrBatch.add) {
    // This is a batch
    callbackOrBatch.add(new Request(PeopleService.baseUrl, params));
  } else {
    // Do single call
    makePeopleRequest(params, callbackOrBatch);
  }
}

and a service that returns local data would do something like:

QuestionService.getAnswer = function(params, callback) {
  doCallbackAsync(callback, {result: 42});
};
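For illustration, here is a runnable version of the dual-mode dispatch sketched above. Request, the canned response, and the fake batch are stand-ins for a real container's transport:

```javascript
// Stand-in request object; a real container would serialize this
// into part of a combined HTTP call.
function Request(url, params) {
  this.url = url;
  this.params = params;
}

var PeopleService = { baseUrl: '/people' };

PeopleService.getPeople = function(params, callbackOrBatch) {
  if (callbackOrBatch.add) {
    // A batch: just queue the request; the batch decides when to send.
    callbackOrBatch.add(new Request(PeopleService.baseUrl, params));
  } else {
    // A plain callback: answer directly (a real service would make an
    // HTTP call here; this canned response is illustration only).
    callbackOrBatch([{ id: params.userid }]);
  }
};

// Batch path: the service never learns how the batch is assembled.
var queued = [];
var fakeBatch = { add: function(req) { queued.push(req); } };
PeopleService.getPeople({ userid: '@viewer' }, fakeBatch);
```

Since a function has no .add property, passing a plain callback falls through to the single-call path with no extra flags.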


 

Louis Ryan

Nov 11, 2008, 3:09:52 AM
to opensocial-an...@googlegroups.com
I like where this proposal is heading. A couple of points...

- While having the ability to either batch or not batch is potentially more elegant, the mere fact that batching is forced on developers today encourages them to think about their data fetching in a latency-sensitive way. Without HTTP batching, the latency of many applications on Orkut would be intolerable in regions where high network latency is common. Making it conceptually optional increases the likelihood that some developers will batch and some won't. I'd prefer to always use a batching interface and allow containers to implement it how they see fit underneath.

- I think we can eliminate the distinction between getPerson and getPeople

- If these APIs were to support something like updating a person entry, what would the parameter list look like? There's a big difference between the opensocial.Person interface and the protocol-level JSON encoding of a person.

- Callbacks must be async even if we have to eat the latency overhead of setTimeout(0) on occasion. This has already bitten us.

- Do we want to go the last step and drop opensocial.Person, opensocial.Activity ... and just expose the JSON encoded protocol types.  I really don't know how much these strong types are helping anyone. Eliminating them would help containers focus on making their protocol implementations more compatible. This is ultimately what I would like to see be the direction of opensocial in-browser. High quality easily consumable protocols and a thin JS API so developers don't have to do per-container setup to establish connections.

How about something like

var batch = opensocial.newBatch();
batch.people.get({ userid: '@viewer', fields: ['name', 'profileUrl'], keys: ['gifts'] }, "viewer");
batch.people.get({ userid: '@viewer', groupId: '@friends', fields: ['name', 'profileUrl'], keys: ['gifts'] }, "viewerFriends");
batch.http.get("http://www.example.org/somefeed", "myFeed");
batch.execute(function callback(data) {
     data.viewer...
     data.viewerFriends...
     data.myFeed...
   }
);

- batch is explicit and required
- batch.<type>, where <type> matches the type names defined in the REST spec [people, activities, data, messages]
- batch.<type>.<operation>, where <operation> matches get|put/update|post/create|delete (includes CRUD synonyms for REST)
- I threw in an 'http' service to simulate makeRequest calls
- Developers can introspect the set of available services via if (batch.http) {} etc.
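A sketch of how that shape might be wired up; everything here is illustrative, with only a people service and a canned local response in place of a real container round trip:

```javascript
// Hypothetical implementation of the batch.<type>.<operation> shape.
function newBatch() {
  var ops = [];
  var batch = {
    execute: function(callback) {
      // A real container would coalesce ops into one HTTP request;
      // here we just run each queued op and collect keyed results.
      var data = {};
      for (var i = 0; i < ops.length; i++) {
        data[ops[i].key] = ops[i].run();
      }
      callback(data);
    }
  };
  // One sub-object per REST service type; a gadget introspects support
  // via `if (batch.people) ...` -- unwired types are simply absent.
  batch.people = {
    get: function(params, key) {
      ops.push({ key: key, run: function() {
        return { id: params.userid };  // canned local response
      }});
    }
  };
  return batch;
}
```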



Evan Gilbert

Nov 11, 2008, 8:41:13 AM
to opensocial-an...@googlegroups.com
On Tue, Nov 11, 2008 at 12:09 AM, Louis Ryan <lr...@google.com> wrote:
I like where this proposal is heading. A couple of points...

- While having the ability to either batch or not batch is potentially more elegant the mere fact that batching is forced on developers today encourages them to think about their data fetching in a latency sensitive way. Without HTTP batching the latency of many applications on Orkut would be intolerable in regions where high network latency is common. Making it conceptually optional is increasing the likelihood that some developers will batch and some wont. Id prefer to always use a batching interface and allow containers to implement how they see fit underneath.

I'm in favor of the calls that look like individual functions, for a few reasons:
1. It's a nicer API for developers if we can swing it.
2. They work much more cleanly for the update use cases.
3. Nothing is preventing us from batching underneath this API. We can keep a list of pending requests and call setTimeout(0, flushPendingRequestsInBatch). Note that you may need to do this even with explicit batching, as you would want to combine requests from multiple gadgets.
4. It's not clear that the right strategy will always be to actually make an HTTP batch. For example, you might want to make 2 requests - one to the home server for social calls, and the other to arbitrary sites - so that you can take advantage of multiple threads in the browser.
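Point 3 can be sketched as follows; queueRequest, flushPending, and the fake sendBatch transport are hypothetical names:

```javascript
// Implicit batching under individual-looking calls: each call queues
// its request, and a zero timeout coalesces everything queued in the
// same tick into one combined "HTTP" request.
var pending = [];
var flushScheduled = false;

function sendBatch(requests) {
  // Stand-in for one combined HTTP request; answers with canned data.
  for (var i = 0; i < requests.length; i++) {
    requests[i].callback({ id: requests[i].params.userid });
  }
}

function flushPending() {
  flushScheduled = false;
  var batch = pending;
  pending = [];
  sendBatch(batch);
}

function queueRequest(params, callback) {
  pending.push({ params: params, callback: callback });
  if (!flushScheduled) {
    flushScheduled = true;
    setTimeout(flushPending, 0);
  }
}
```

Two back-to-back queueRequest calls in the same tick end up in a single sendBatch invocation, even though each call site looks like an individual request.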

 

- I think we can eliminate the distinction between getPerson and getPeople

I've never liked calls that return an array when you ask for one item by ID, especially for the common use cases, but I could be swayed either way on this one.

One possible variation:
getPeople()
getViewer()
getOwner()

This has the benefit of having explicit functions for the three most common cases, and getViewer() / getOwner() would often not need parameters (only if you need to choose fields).
 

- If these apis were to support something like updating a person entry what would the parameter list look like. Theres a big difference between the opensocial.Person interface and the protocol level JSON encoding of person

Definitely the JSON version. All input and output parameters are JSON
 

- Callbacks must be async even if we have to eat the latency overhead of setTimeout(0) on occasion. This has already bitten us.

Yep. Per the notes above, I'm in agreement as long as we can execute immediately after the load handlers. I just added this to the proposal.
 

- Do we want to go the last step and drop opensocial.Person, opensocial.Activity ... and just expose the JSON encoded protocol types.  I really don't know how much these strong types are helping anyone. Eliminating them would help containers focus on making their protocol implementations more compatible. This is ultimately what I would like to see be the direction of opensocial in-browser. High quality easily consumable protocols and a thin JS API so developers don't have to do per-container setup to establish connections.

 Yes yes yes.


How about something like

var batch = opensocial.newBatch();
batch.people.get({ userid: '@viewer', fields: ['name', 'profileUrl'], keys: ['gifts'] }, "viewer");
batch.people.get({ userid: '@viewer', groupId: @friends, fields: ['name', 'profileUrl'], keys: ['gifts'] }, "viewerFriends");
batch.http.get("http://www.example.org/somefeed", "myFeed");
batch.execute(function callback(data) {
     data.viewer...
     data.viewerFriends...
     data.myFeed....
   }
);

- batch is explicit and required
- batch.<type> where <type> matches the type names defined in the REST spec [people, activities, data, messages]
- batch.<type>.<operation> where operation matches get|put/update|post/create|delete (including CRUD synonyms for REST)
- I threw in an 'http' service to simulate makeRequest calls
- Developers can introspect the set of available services via if (batch.http) {} etc..

I'm not sure I like that all functions are on a single class in this case. This class gets arbitrarily complex as we add services, similar to the current DataRequest in OpenSocial. Also, I think that the object that does batching should be different from the object(s) that create requests. Combining these makes it harder to override specific services or the batching mechanism.

Per notes above, would prefer to keep the callback option. Is there a reason why we couldn't execute the following in a batch?

var services = opensocial.getServices();  // Returns an object, accessed via .foo or ['foo']
services.people.get({ userid: '@viewer', fields: ['name', 'profileUrl'], keys: ['gifts'] }, function(viewer) { app.viewer = viewer[0]; });  // Note the [0] because we don't have getPerson() or getViewer()
services['people'].get({ userid: '@viewer', groupId: '@friends', fields: ['name', 'profileUrl'], keys: ['gifts'] }, function(viewerFriends) { app.viewerFriends = viewerFriends; });
services.http.get("http://www.example.org/somefeed", function(feed) { app.feed = feed; });


Chris Chabot

unread,
Nov 11, 2008, 11:23:13 AM11/11/08
to opensocial-an...@googlegroups.com
One point that might be important to keep in mind during this (quite promising) discussion: the JSON-RPC interface is currently *optional* in the 0.8.1 spec, so anything that depends on it would by extension also have to be optional..

For instance, I believe MySpace hasn't implemented it, and the last time I talked to them about this they had no intent to either (but maybe that has changed?)

Louis Ryan

unread,
Nov 11, 2008, 12:01:38 PM11/11/08
to opensocial-an...@googlegroups.com
On Tue, Nov 11, 2008 at 5:41 AM, Evan Gilbert <uid...@google.com> wrote:


On Tue, Nov 11, 2008 at 12:09 AM, Louis Ryan <lr...@google.com> wrote:
I like where this proposal is heading. A couple of points...

- While having the ability to either batch or not batch is potentially more elegant, the mere fact that batching is forced on developers today encourages them to think about their data fetching in a latency-sensitive way. Without HTTP batching, the latency of many applications on Orkut would be intolerable in regions where high network latency is common. Making it conceptually optional increases the likelihood that some developers will batch and some won't. I'd prefer to always use a batching interface and allow containers to implement it how they see fit underneath.

I'm in favor of the calls that look like individual functions, for a few reasons:
1. It's a nicer API for developers if we can swing it.

See comments below...
 

2. They work much more cleanly for the update use cases.
3. Nothing is preventing us from batching underneath this API. We can keep a list of pending requests and call setTimeout(flushPendingRequestsInBatch, 0). Note that you may need to do this even with explicit batching, as you would want to combine requests from multiple gadgets.

The example below still makes batching a container implementation detail as opposed to an explicit and mandatory construct of the API. This is the part I have a problem with. It's entirely optional in your example, and so people will write code that does not leverage batching and will have a hard time refactoring their code later to use it. More on this below. As for combining requests from multiple gadgets, that is not impacted if we have explicit batching.
 

4. It's not clear that the right strategy will always be to actually make an HTTP batch. For example, you might want to make 2 requests - one to the home server for social calls, and the other to arbitrary sites, so that you can take advantage of multiple threads in the browser.

Again this is an internal implementation detail in the container. A batch construct can still accommodate this.
 


 

- I think we can eliminate the distinction between getPerson and getPeople

I've never liked calls that return an array when you ask for one item by ID, especially for the common use cases, but I could be swayed either way on this one.

Conveniences are fine, though they might mess with the API a bit here. Consider

people.get({ userId: '@viewer', fields: [a, b, c], ... }, ...) and

people.getViewer({ fields: [a, b, c], ... })

Not sure this gives you much bang for the buck...
Actually I thought people.<service> took care of this? 
 
Also, I think that the object that does batching should be different from the object(s) that create requests. Combining these makes it harder to override specific services or the batching mechanism.

This was very intentional: no calling without batching. I don't think it hampers containers overriding services (there are lots of ways to make that work in JS), nor does it specifically constrain how the container actually implements the batching.


Per notes above, would prefer to keep the callback option. Is there a reason why we couldn't execute the following in a batch?

That Im fine with. To alter my example above how about something like

batch.<service>.<operation>( <data>, < function | identifier > )

then batch.execute( <optional callback> )

Calls which have individual callbacks on them are invoked in the order they were added to the batch, and then the batch callback function is called with the aggregate data response containing only the parts that were associated with data identifiers. This way you can use a callback-per-operation model or a callback-per-batch model.
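A sketch of this mixed model, using hypothetical names (newBatch, add) just to make the semantics concrete: the second argument is either a per-operation callback or a string key in the aggregate response, and per-operation callbacks fire in add order before the batch callback.

```javascript
// Illustrative only: op.run() stands in for issuing the real request.
function newBatch() {
  var ops = [];
  return {
    add: function (run, callbackOrId) {
      ops.push({ run: run, handler: callbackOrId });
    },
    execute: function (batchCallback) {
      var aggregate = {};
      ops.forEach(function (op) {
        var result = op.run();
        if (typeof op.handler === 'function') {
          op.handler(result);              // callback-per-operation, in add order
        } else {
          aggregate[op.handler] = result;  // keyed part of the aggregate response
        }
      });
      if (batchCallback) batchCallback(aggregate);
    }
  };
}
```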

Scott Seely

unread,
Nov 11, 2008, 2:49:53 PM11/11/08
to opensocial-an...@googlegroups.com

Instead of calling the mechanism ‘batch’ we could also call it ‘queue’. I think the proposal is already veering towards queue semantics. This would offer containers the ability to send one request or many, depending on their processing model and how they want to service requests.

 

FWIW, MySpace has stayed away from the batch processing mechanisms because we believe that the Data Pipelining/OSML mechanisms offer superior performance capabilities for what I’ll call the 56K modem use cases.

Chris Chabot

unread,
Nov 11, 2008, 2:59:47 PM11/11/08
to opensocial-an...@googlegroups.com
there is one really big difference between a queue and a batch

with a batch you send off a collection of requests, and expect them to return as one collection of responses as well

with a queue, it might be a collection of requests, but with multiple responses, possibly out of order, coming back

From a usability, predictability, efficiency, latency and sanity point of view, batching offers a lot of benefits here, right?

Evan Gilbert

unread,
Nov 11, 2008, 3:10:30 PM11/11/08
to opensocial-an...@googlegroups.com
I really liked John's take on this issue. There are actually two batches/queues involved in these discussions:
1. Developer batches. A developer is making N separate data requests, and can't do anything useful until all of them come back.
2. Container batching/queuing. Underneath this, the container may choose to execute the requests in one big batch, as separate requests, by making calls to Gears for local data, by providing preloaded data in the container, etc.

The API should speak to #1. I'm open to any naming of this - "batch" makes sense to me, but there are probably better names.

How #2 works is an implementation detail, albeit an important one. We only need to expose it to developers if we can't handle batching/queueing otherwise.

Louis Ryan

unread,
Nov 11, 2008, 3:20:03 PM11/11/08
to opensocial-an...@googlegroups.com
I don't think Data Pipelining is anything other than a declarative batch? It seems odd to restrict the utility of batching to a subset of valid use cases.

As for queue vs. batch, I think the semantics of queue are a subset of batch (i.e. ordered per-request callbacks in the example I gave above) so I would prefer to use batch as it makes more sense to have batch.execute(callback) than queue.execute(callback) ?

On Tue, Nov 11, 2008 at 11:49 AM, Scott Seely <sSe...@myspace.com> wrote:

Scott Seely

unread,
Nov 11, 2008, 3:22:03 PM11/11/08
to opensocial-an...@googlegroups.com

As interesting as the batching/queuing discussion is, I’d prefer to not worry about this part.

 

With the current set of proposals, we can approximate the feature and get all the benefits for low bandwidth. DataPipelining and named views (another pair of proposals) make this possible. In a world where

 

requestNavigateTo("Canvas.PersonalItemsView")

 

can be executed, with Data Pipelining tags available, do we need to define another batching/queuing mechanism?

 

I like everything about this proposal except for the batching piece. Is it possible to drop the ideas around batching until v.Next?

 

FWIW, I’m not saying that batching is never going to be helpful. I’d just prefer to get some experience with using Data Pipelining and the extended view name item we are pushing in 0.9. If, in v.Next, batching looks valuable, let’s add it in then. Right now, batching is derailing the conversation and all the good we could create with a simplified API.

Louis Ryan

unread,
Nov 11, 2008, 3:38:55 PM11/11/08
to opensocial-an...@googlegroups.com
Sorry, I don't want to overload the conversation here. My point is simply that Orkut has found batching to be a big help from a performance standpoint, and that taking it out and adding it back later in v.Next is a performance net loss from my perspective. I don't think anything in any of the proposed APIs makes a queue-based implementation impossible; I would just like to see batching semantics accommodated for containers that rely on them today.

Scott Seely

unread,
Nov 11, 2008, 3:46:41 PM11/11/08
to opensocial-an...@googlegroups.com

The lightweight API doesn’t ‘exist’ yet—there is nothing to remove.

 

If we must incorporate batching into the spec, it needs to be in a separate, optional section?

Adam Winer

unread,
Nov 11, 2008, 9:56:08 PM11/11/08
to opensocial-an...@googlegroups.com
On Tue, Nov 11, 2008 at 12:46 PM, Scott Seely <sSe...@myspace.com> wrote:

The lightweight API doesn't 'exist' yet—there is nothing to remove.


I disagree completely.  The lightweight API is being proposed as an easier-to-use replacement for the existing DataRequest API.  Batching is a feature Orkut has found critical - we've gone to developers who weren't batching properly and had them fix their apps to use fewer batches, which improved both user latency and our resource utilization.

From Orkut's perspective, batching is a required feature for any data access API, and v.Next doesn't cut it.

-- Adam Winer

John Hayes

unread,
Nov 11, 2008, 10:15:38 PM11/11/08
to opensocial-an...@googlegroups.com
I think it's worth clarifying that batching provides good performance in the implementation. The API only requires that there exist a method to detect batches - and I propose that the implementation can do a fine job detecting batches no matter what the developer does.
 
When a developer or set of unrelated developers submits requests to load, the implementation needs to figure out the right time to send all queued requests that may be reasonably combined. The simplest method is to watch for idle (setTimeout(0)), other methods which might help reduce latency are queuing (having a maximum of two outstanding requests), or deliberately flushing the batch during startup or after delivering results.
 
The best part is that developers don't have to organize their requests into batches to get optimal performance and the API can remain simple. For pages with multiple unrelated applications, the system can combine all of their requests. If batching is a required feature, implement batching.
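The flush-on-idle mechanism described above can be sketched roughly as follows (function names are illustrative; a real container would issue one combined HTTP or JSON-RPC call in the flush):

```javascript
var pending = [];
var flushScheduled = false;

// Individual calls just queue; the implementation decides when to send.
function queueRequest(request, callback) {
  pending.push({ request: request, callback: callback });
  if (!flushScheduled) {
    flushScheduled = true;
    setTimeout(flushPendingRequests, 0);  // "watch for idle"
  }
}

// Send everything queued in the same tick as one combined request.
function flushPendingRequests() {
  var toSend = pending;
  pending = [];
  flushScheduled = false;
  sendCombined(toSend);
}

// Stand-in for the combined HTTP/JSON-RPC round trip.
function sendCombined(entries) {
  entries.forEach(function (entry) {
    entry.callback({ id: entry.request.id, data: 'stub response' });
  });
}
```

The same flushPendingRequests() can also be called deliberately during startup or after delivering results, as the text suggests.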
John

Louis Ryan

unread,
Nov 12, 2008, 2:08:26 AM11/12/08
to opensocial-an...@googlegroups.com
I think there are some concrete differences between explicit batching and inferred batching

- Using setTimeout(0) to initiate the XHR request can cause additional latency. Consider code such as

function() {
   ...create a bunch of requests...
   ...do some slow DOM manipulation
}

in this case the XHR won't be initiated until after the slow DOM manipulation. Interposing a batch.execute call allows these to be pipelined. Variance in the inferred dispatching mechanism (queue, timeout, ...) between containers will cause variance in gadget performance characteristics.

- Explicit batching allows for the single-callback-per-batch model. Most developers already have their code set up this way because of how opensocial.DataRequest works. I don't think you can reasonably have a callback on an inferred batch and have clear semantics.

- It makes developers think about what is coalesceable. Inferred batching cannot and should not merge two different requests for the same user but with different fields, as the failure mode can vary between containers that merge and those that do not. A gadget developer, on the other hand, can and should merge these requests. Our model should encourage developers to keep code that forms and executes requests centralized. Inferred batching does not encourage that behavior.

I don't really care too much about calling it a 'batch', 'queue' or 'DataRequest' (yuk!). I'm using X from now on...

To be clear, I have no objection to an API that supports callbacks per-request in the X. I do think those callbacks should be called in the order the requests were added to the X, even if in some implementations the responses are actually available out of order.
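That ordering guarantee can be decoupled from how responses actually arrive. A hypothetical sketch: each add() returns a completion function the transport calls whenever its response lands, and callbacks still fire strictly in add order.

```javascript
function newOrderedBatch() {
  var entries = [];
  // Fire callbacks strictly in add order, stopping at the first
  // entry whose response has not arrived yet.
  function drain() {
    while (entries.length && entries[0].done) {
      var e = entries.shift();
      e.callback(e.result);
    }
  }
  return {
    add: function (callback) {
      var entry = { callback: callback, done: false, result: undefined };
      entries.push(entry);
      return function (result) {  // may be invoked out of order by the transport
        entry.done = true;
        entry.result = result;
        drain();
      };
    }
  };
}
```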


Unrelated to X, Evan and I chatted and I think we can take things a little further
- All the supported fields stuff can be dropped from the delivered JS. If developers care about it they can preload the results of the equivalent REST call though it may need to be enhanced a little
- Consider calling this "oslite" and allow it to be "required" in gadgets separately
- Need to find a home and nomenclature for the requestXXX methods, probably on the services

Syntactic sugar questions:

1. Which do people like more, functions or properties?
oslite.people.get(....) vs oslite.people().get(....)

2. If X were to be an instance based mechanism should the services live on X or on oslite?
var x = oslite.newX();
x.person.get(...., id)

vs.

var x = oslite.newX();
oslite.person.get(...., x, id);


My votes:
1. I'd choose properties, because I don't think functions add anything; the erasure is the same and dynamic binding works equally well.
2. Marginal preference for having them on X, as it would be the primary point of interaction in the API anyway, but I wouldn't consider this blocking in any way.

Evan Gilbert

unread,
Nov 12, 2008, 1:38:58 PM11/12/08
to opensocial-an...@googlegroups.com
I moved the proposal to http://wiki.opensocial.org/index.php?title=Lightweight_JS_APIs and incorporated a number of suggestions.

Changes:
  • Removed getService() calls. All services are now static - i.e. opensocial.data.people.get()
  • Changed so that second parameter to service calls is always a callback function, and batches use the batch('key') notation as suggested by John
  • Removed getPerson(), but added getViewer() and getOwner() (Louis - not sure if you'll like this more or less). Without this, you will always have data.Viewer[0].Name or have to move the viewer to a different variable (and same with owner).
  • Removed appdata.get(), as a different proposal has put app data onto the person object.

Open Issues:
  • To batch or not to batch...
    • Developers are still allowed to batch requests so that they can have a callback that only executes after all the requests return.
    • This is a useful developer convenience, and also has the benefit that batch.execute() is a signal to an HTTP batching container that it can send pending requests.
    • "Batch" seems like a good name for this, but open to alternatives. It does not imply that the requests are sent in an HTTP batch, just that the developer gets called back with all the results.
    • From an implementation perspective, this can easily be implemented as a queue using John's example code.
  • Migrate request*() APIs
    • It's fairly simple to translate all of the request* APIs (requestShareApp(), etc.) to this new format, but I wanted to check with the group before making a change this substantial.
  • New top level namespace
    • This new format is fairly independent of existing opensocial JS, and it would be easy to support a new namespace "oslite" or similar.
    • This has the added benefit of even less typing - oslite.people.get() instead of opensocial.data.people.get() (or the original proposal opensocial.data.getPeopleService().getPeople())
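A hedged sketch of how the revised convention might hang together (the service is a synchronous stub and all internals are guesses; only the opensocial.data.people.get(params, callback) shape and the batch('key') notation come from the proposal):

```javascript
var opensocial = { data: {} };

// batch is itself a function: batch('key') returns a callback that files
// the result under that key; execute() fires once all keyed calls return.
opensocial.data.newBatch = function () {
  var results = {}, outstanding = 0, onDone = null;
  function batch(key) {
    outstanding++;
    return function (result) {
      results[key] = result;
      if (--outstanding === 0 && onDone) onDone(results);
    };
  }
  batch.execute = function (callback) {
    onDone = callback;
    if (outstanding === 0) callback(results);
  };
  return batch;
};

// Stub service: a real container would hit the REST endpoint instead.
opensocial.data.people = {
  get: function (params, callback) {
    callback([{ id: params.userId || '@viewer', name: 'Stub' }]);
  }
};

// Usage: the same call takes either a plain function or batch('key').
var batch = opensocial.data.newBatch();
opensocial.data.people.get({ userId: '@viewer' }, batch('viewer'));
batch.execute(function (data) { /* data.viewer is the result array */ });
```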
Evan