It seems to me that there's a real tension between wanting not to
touch the "natural" JSON that people are using to represent their
data, and wanting to provide metadata about that JSON, such as URIs
and links and so on. This is particularly the case with collections,
which are "naturally" arrays, but need to be objects in order to
represent metadata.
So what about using content negotiation to provide both in parallel? A
request with Accept: application/json gets the "natural" JSON for the
resource/collection (an array in the case of a collection) whereas a
request with Accept: application/metadata+json gets something like the
web collection format that John Cowan suggested, with pointers to
related URIs. An application could request whichever was needed, or
switch between the two as required.
An example follows that isn't particularly fleshed out. Before I spend
time doing that, is this an idea worth pursuing?
Jeni
---
Say that I have a calendar application, in which I have multiple
calendars, which have multiple events, and I want to provide
collections that filter the events based on their date, as well as
paging through those events.
Some URI templates are:
/calendar - a collection of calendars
/calendar/{label} - a particular calendar
/calendar/{label}/event - a collection of events
/calendar/{label}/event/{event} - a particular event
/calendar/{label}/from/{date} - a collection of events starting
from that day
All collections of events also take a ?page={number} query.
If I do:
GET /calendar/personal HTTP/1.1
Accept: application/json
then I get:
Content-Location: /calendar/personal.json
{"label": "personal",
"events": [{"label": "do something",
"date": "2008-09-19",
...},
{"label": "do something else",
"date": "2008-09-20",
...},
...]}
The 'events' property holds an array; there's no way here of telling
what this collection is or how to add events to it.
If I do:
GET /calendar/personal HTTP/1.1
Accept: application/metadata+json
then I get:
Content-Location: /calendar/personal.meta.json
{"href": "/calendar/personal.json",
"id": "/calendar/personal",
"version": "22361dd2a7147732b1b8f042d8dbcd2953c28652",
"schema": "/schema/calendar",
"precis": {"label", "personal"},
"properties": {"events": {"id": "/calendar/personal/from/2008-09-19?
page=1"}}}
This tells me the version of the *resource* (as opposed to the version
of the entity, which is what the ETag on the response would give me).
It also provides a schema so that I know the kind of thing I need to
PUT here, and a precis (as this might be the initial request an
application makes), and describes the collections found in the
properties of the "natural" JSON. Here we find out that the "events"
property of this particular version of the resource actually holds the
first page of today's events.
Similarly, if I do:
GET /calendar/personal/from/2008-09-19?page=1 HTTP/1.1
Accept: application/json
then I get the array:
Content-Location: /calendar/personal/from/2008-09-19.json?page=1
[{"label": "do something",
"date": "2008-09-19",
...},
{"label": "do something else",
"date": "2008-09-20",
...},
...]}
which is exactly what I got from the original request on the calendar
(though it needn't be). And if I do:
GET /calendar/personal/from/2008-09-19?page=1 HTTP/1.1
Accept: application/metadata+json
then I get:
Content-Location: /calendar/personal/from/2008-09-19.meta.json?page=1
{"href": "/calendar/personal/from/2008-09-19.json?page=1",
"id": "/calendar/personal/from/2008-09-19?page=1",
"version": "e48f2c216bc0dd68823ed8808f98df6303114f42",
"members": [{"id": "/calendar/personal/event/432",
"precis": {"label": "do something"}},
{"id": "/calendar/personal/event/943",
"precis": {"label": "do something else"}},
...],
"schema": "/schema/event",
"first": "/calendar/personal/from/2008-09-19?page=1",
"next": "/calendar/personal/from/2008-09-19?page=2",
"last": "/calendar/personal/from/2008-09-19?page=4",
"edit": "/calendar/personal/event"}
In this metadata, I can find out the URIs of the events that are
listed in the collection, and links to other pages. I can also find
out where to POST new events (the 'edit' link) and the schema they
need to comply with.
--
Jeni Tennison
http://www.jenitennison.com
I had the same idea, but with the reverse default.
The main reason that someone wants an envelope with meta-data is in
environments where it is impossible or just inconvenient to get/set
headers; for example, quickly browsing around with "curl". It's
possible to set/get headers there, but my concern is that in cases
where the data is being limited by the server, the user may not notice
and think this is all the data, leading to much frustration.
So, my idea was that when no Accept header is passed, or "Accept:
application/json" is given, the server sends the data in an object
envelope with the meta-data. But if the user gave "Accept:
application/json; no_envelope=1", it would send the natural JSON. The
idea being that if the client can prove they can frob HTTP request
headers, they don't need any stinkin' envelope! Otherwise, send it out.
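Roughly, the server-side check might look like this (a Python sketch;
the no_envelope parameter is the made-up one above, nothing standard):

def wants_envelope(accept_header):
    """Envelope by default; plain JSON only if the client explicitly
    asks with the made-up no_envelope=1 media type parameter."""
    if not accept_header:
        return True  # no Accept header at all: send the envelope
    for part in accept_header.split(","):
        pieces = [p.strip() for p in part.split(";")]
        media_type, params = pieces[0], pieces[1:]
        if media_type == "application/json" and "no_envelope=1" in params:
            return False
    return True

# wants_envelope("application/json")                -> True
# wants_envelope("application/json; no_envelope=1") -> False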
Happy Hacking!
David Snopek.
--
Open Source Hacker and Language Learner
http://www.hackyourlife.org/
> So, my idea was that when no Accept header is passed, or "Accept:
> application/json" is given, the server sends the data in an object
> envelope with the meta-data. But if the user gave "Accept:
> application/json; no_envelope=1", it would send the natural JSON. The
> idea being that if the client can prove they can frob HTTP request
> headers, they don't need any stinkin' envelope! Otherwise, send it out.
Sounds better to me.
--
GMail doesn't have rotating .sigs, but you can see mine at
http://www.ccil.org/~cowan/signatures
I wasn't intending to propose one or another as the default; I just
wanted to raise the possibility of using content negotiation.
Having said that...
> The main reason that someone wants an envelope with meta-data is in
> environments where it is impossible or just inconvenient to get/set
> headers; for example, quickly browsing around with "curl". It's
> possible to set/get headers there, but my concern is that in cases
> where the data is being limited by the server, the user may not notice
> and think this is all the data, leading to much frustration.
>
> So, my idea was that when no Accept header is passed, or "Accept:
> application/json" is given, the server sends the data in an object
> envelope with the meta-data. But if the user gave "Accept:
> application/json; no_envelope=1", it would send the natural JSON. The
> idea being that if the client can prove they can frob HTTP request
> headers, they don't need any stinkin' envelope! Otherwise, send it out.
There are two issues:
1. Should a missing Accept header mean "natural JSON" or "JSON
metadata"?
2. Should application/json mean "natural JSON" or "JSON metadata"?
On the first, I was imagining a not unrealistic world in which
/calendar/personal would return HTML by default; getting JSON at all
would require an Accept header of some kind. The default provided when
the Accept header is missing is really down to the individual
application rather than any standard that we might develop. We
certainly can't dictate that all URIs must return application/json by
default.
On the second, my guess would be that people already using
application/json would prefer that their apps don't radically change,
whereas anyone adopting a new standard would use whatever mime type
that standard required (eg application/metadata+json). As such,
pragmatically it would seem better to have the natural JSON use
application/json, even if technically some of us think that getting
the envelope would be the more usual thing to do.
Cheers,
Jeni
I believe that there are still a large number of RESTful semantics that
could be shared between two or three different format types. For
example, paging via Range, method/verb behavior, and partial updates
could probably have very similar mechanisms for the different formats.
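To make the Range idea concrete, a paging request might be handled
something like this (a Python sketch; the "items" range unit and header
shape are assumptions, not anything this group has agreed on):

import re

def parse_items_range(range_header):
    """Parse a non-standard 'Range: items=start-end' header used for
    collection paging (not HTTP's byte-range unit)."""
    match = re.match(r"items=(\d+)-(\d+)$", range_header or "")
    if match is None:
        return None
    return int(match.group(1)), int(match.group(2))

# parse_items_range("items=0-24") -> (0, 24); the server would answer
# with that slice of the collection and something like
# "Content-Range: items 0-24/66".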
Obviously I am most interested in #1 (although #2 is a decent
alternative), as it is certainly the most appropriate for the
information structures that I am working with, although I think there
is certainly value in having #3, which would be useful for many folks
in this group who are coming from an AtomPub background.
Anyway, great suggestion Jeni, I agree! And I think content negotiation via
Accept is certainly good as well.
Thanks,
Kris
As soon as a proposal misses the gap I move on. I'm asked to solve real
world problems with real world resources. End of story. If I can't hire from
a broad pool I've made an epic fail.
My natural tension fails for those creating the latest marketing buzzword
and raft for services provided by a "sticky" few.
I look for a cross-platform serialization object graph that conforms to
ROA/REST. Anything less and at best I'll ignore it in the marketplace of
ideas and almost certainly obstruct it during standards deliberations.
The markup I've seen to date is laughable, at best a return to pre-XML with
no observable efficiency in payload. The world does not need another custom
SGML.
--------------------------------------------------
From: "Jeni Tennison" <je...@jenitennison.com>
Sent: Friday, September 19, 2008 4:14 PM
To: <restfu...@googlegroups.com>
Subject: [restful-json] Content negotiation for JSON metadata
With an embedded XSD we don't need the overhead. Once I'm in for X+2 bytes
I'm fully invested but without readability where byte compression is not
handled by the transport.
I'm still waiting for a compelling story for why non-XML SGML has a
place in the marketplace. SGML goes back forty years - XML took from
those two generations of object lessons.
--------------------------------------------------
From: "Kris Zyp" <kri...@gmail.com>
Sent: Friday, September 19, 2008 6:16 PM
To: <restfu...@googlegroups.com>
Subject: [restful-json] Re: Content negotiation for JSON metadata
Kris, is anyone besides you using this stuff? By which I mean other
developers writing compatible independent implementations. Every time
I've googled these proposals, it just looks like a big promotional
echo chamber.
--Pete
I heartily agree. I definitely think that most of this stuff is too early
for any formal standardization. However, I was under the impression
(certainly could be wrong, please correct me if I am) that currently this
group was simply discussing possible techniques to be used for interoperable
RESTful JSON, attempting to learn from current implementations, with
standardization something further down the road after more successful proven
implementations. My ideas have been put forth under such a belief.
> Kris, is anyone besides you using this stuff? By which I mean other
> developers writing compatible independent implementations.
Well the overall proposal I made (JSON resources) was based on various
implementations (certainly not all mine) that I have enumerated before, and
HTTP specifications applied to JSON. Certainly there was a mix of mechanisms
that are very widely implemented and proven (like HTTP verb usage), and some
that are more off the top of my head just to put an idea in front of the
group based on some of the group's goals (like partial property sets based
on Range, no one has ever implemented that, AFAIK). Some of this I have
implemented, some I haven't. I certainly don't expect this to be a last call
for standardization, but I thought there was value in community discussions
on these topics whether or not they have been implemented. Were you
wanting to know how proven (vs experimental) each of the various
proposed elements is, and the extent of its implementation?
Thanks,
Kris
I hope it didn't sound like I was disagreeing with you! I agree and
think this is the right solution to this problem.
> Having said that...
>
>> The main reason that someone wants an envelope with meta-data is in
>> environments where it is impossible or just inconvenient to get/set
>> headers; for example, quickly browsing around with "curl". It's
>> possible to set/get headers there, but my concern is that in cases
>> where the data is being limited by the server, the user may not notice
>> and think this is all the data, leading to much frustration.
>>
>> So, my idea was that when no Accept header is passed, or "Accept:
>> application/json" is given, the server sends the data in an object
>> envelope with the meta-data. But if the user gave "Accept:
>> application/json; no_envelope=1", it would send the natural JSON. The
>> idea being that if the client can prove they can frob HTTP request
>> headers, they don't need any stinkin' envelope! Otherwise, send it out.
>
> There are two issues:
>
> 1. Should a missing Accept header mean "natural JSON" or "JSON metadata"?
> 2. Should application/json mean "natural JSON" or "JSON metadata"?
>
> On the first, I was imagining a not unrealistic world in which
> /calendar/personal would return HTML by default; getting JSON at all would
> require an Accept header of some kind. The default provided when the Accept
> header is missing is really down to the individual application rather than
> any standard that we might develop. We certainly can't dictate that all URIs
> must return application/json by default.
Right, and this is where we can go back and forth about details. ;-)
All web-browsers can be relied upon to send an Accept header that
includes text/html. So, if the web-browser is the use-case, even with
"application/json" as the default, the user will get HTML.
The reason I want application/json (and with the envelope) as the
default is an attempt to make it possible to consume the RESTful
resources in situations where it's impossible to get/set headers, and
to do so without a proxy. For example, as Kris brings up, with JSONP.
This might not save you much, because implementing JSONP requires the
server to perform a special dance as well, so it wouldn't be hard to
just make that endpoint always return JSON with an envelope. So this
may prove to be a non-issue in practice. :-)
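For anyone who hasn't implemented it, the dance is roughly this (a
Python sketch; the callback parameter name is just convention):

import json

def jsonp_or_json(data, callback=None):
    """If the request carried a ?callback= parameter, wrap the JSON in
    a function call so a <script> tag can load it (script tags cannot
    set Accept headers at all); otherwise return plain JSON."""
    body = json.dumps(data)
    if callback:
        return "application/javascript", "%s(%s);" % (callback, body)
    return "application/json", body

# jsonp_or_json({"label": "personal"}, callback="handle")
# -> ('application/javascript', 'handle({"label": "personal"});')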
> On the second, my guess would be that people already using application/json
> would prefer that their apps don't radically change, whereas anyone adopting
> a new standard would use whatever mime type that standard required (eg
> application/metadata+json). As such, pragmatically it would seem better to
> have the natural JSON use application/json, even if technically some of us
> think that getting the envelope would be the more usual thing to do.
Ah, yes, if we are considering an upgrade path to the standard, this
is a very valid concern!
Happy Hacking!
David.
Three points that I've been thinking about, re content negotiation:
Pro: There is the possibility of interop with existing clients that
expect "natural" JSON.
Con: If the web collection entry has a value field (as in the Web
Collections proposal), then clients that want the natural JSON object
could simply GET the entry and then access entry.value.
Con: Requiring support for content negotiation places an additional
burden on service implementations, which may lead to fewer (or
nonconformingly implemented) services.
Manuel
Unfortunately this is not correct -- IE sends Accept headers with no
html mime type in sight. Sometimes all it sends is */*.
For web services that can deliver either JSON or HTML depending on the
client's Accept header, you can deal with IE by also checking the
User-Agent and acting accordingly.
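Something along those lines (a Python sketch; the sniffing rule is an
assumption and deliberately crude):

def serve_html(accept, user_agent):
    """Browsers that advertise text/html get HTML; because some IE
    versions send Accept headers with no HTML type in them (sometimes
    just */*), fall back to sniffing the User-Agent."""
    accept = accept or ""
    if "text/html" in accept or "application/xhtml+xml" in accept:
        return True
    if "*/*" in accept and "MSIE" in (user_agent or ""):
        return True
    return False

# serve_html("*/*", "Mozilla/4.0 (compatible; MSIE 7.0)") -> True
# serve_html("application/json", "curl/7.19.0")           -> False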
--steve
On 21 Sep 2008, at 17:12, Manuel Simoni wrote:
> Three points that I've been thinking about, re content negotiation:
>
> Pro: There is the possibility of interop with existing clients that
> expect "natural" JSON.
>
> Con: If the web collection entry has a value field (as in the Web
> Collections proposal), then clients that want the natural JSON object
> could simply GET the entry and then access entry.value.
That's true, but I'm not sure what makes it a 'con'? It just means
that clients that understand application/metadata+json (or whatever)
could use that and not bother with the "natural" JSON, if they're
prepared to live with the rather larger representations that it entails.
I imagined that clients would mostly request natural JSON and go to
the augmented version when they needed particular information (such
as what URL to POST to in order to add to a collection, or the next
page in a paged collection), but that doesn't preclude other patterns
of use.
> Con: Requiring support for content negotiation places an additional
> burden on service implementations, which may lead to fewer (or
> nonconformingly implemented) services.
Yes. It requires registering an extra mime type, but in my (admittedly
limited) experience, that isn't usually a hard thing to do. There's
certainly more support for that baked into frameworks that I've used
than there is for supporting things like the additional range
specifiers which Kris suggested for paging.
It also requires support for generating two parallel representations
of the same set of resources, which might seem like double the work. I
simply don't know whether the benefits you get from not having to
touch the original JSON are worth this extra cost.
Another con from the client side is:
Con: If the data *isn't* available in the augmented JSON, clients may
find it hard to integrate the two parallel sets of information. (This
is true of any approach in which the natural JSON is left untouched,
with metadata provided from elsewhere.)
Cheers,
Jeni
I think we need to think about JSON the way we think about XML - as a
grammar. IOW, application/json is about as useful as application/xml,
and each format based on JSON defined here should get a media type, the
way Atom and HTML did.
> Con: If the web collection entry has a value field (as in the Web
> Collections proposal), then clients that want the natural JSON object
> could simply GET the entry and then access entry.value.
So I think when you say "natural JSON", you have a specific media type
in mind, and not just JSON.
> Con: Requiring support for content negotiation places an additional
> burden on service implementations, which may lead to fewer (or
> nonconformingly implemented) services.
We don't have to require support for conneg. We have to define the media
types, what they mean, and any associated protocol semantics.
Bill
I think this will be too subtle a distinction, and semantically it
might be wrong - is the metadata a separate resource? If so, conneg is
the wrong tool for the job.
IMO, any solution that depends on conneg needs to define a processing
ladder of alternatives (eg, depending solely on Accept: won't fly in the
part of the mobile web I currently work in, unfortunately).
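By way of illustration only, such a ladder might look like this (a
Python sketch; the precedence order, the format=meta parameter and the
.meta.json suffix are assumptions rather than a proposal):

def negotiated_type(query, path, accept):
    """Pick a representation from several signals, checked in order of
    how hard each is for an intermediary to drop: explicit query
    parameter, then a URI suffix, then the Accept header, then a
    default."""
    if query.get("format") == "meta":
        return "application/metadata+json"
    if path.endswith(".meta.json"):
        return "application/metadata+json"
    if "application/metadata+json" in (accept or ""):
        return "application/metadata+json"
    return "application/json"

# negotiated_type({}, "/calendar/personal", "application/metadata+json")
# -> "application/metadata+json"
# negotiated_type({}, "/calendar/personal", None) -> "application/json"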
Bill
The value-add here is in linking media types to JSON semantics.
On 21 Sep 2008, at 20:39, Bill de hOra wrote:
> Jeni Tennison wrote:
>> It seems to me that there's a real tension between wanting not to
>> touch the "natural" JSON that people are using to represent their
>> data, and wanting to provide metadata about that JSON, such as URIs
>> and links and so on. This is particularly the case with
>> collections, which are "naturally" arrays, but need to be objects
>> in order to represent metadata.
>> So what about using content negotiation to provide both in
>> parallel? A request with Accept: application/json gets the
>> "natural" JSON for the resource/collection (an array in the case of
>> a collection) whereas a request with Accept: application/metadata+json
>> gets something like the web collection format that John Cowan
>> suggested, with pointers to related URIs. An application could
>> request whichever was needed, or switch between the two as required.
> >
>> An example follows that isn't particularly fleshed-out. Before I
>> spend time doing so, is it an idea that's worth pursuing?
>
> I think this will be too subtle a distinction, and semantically it
> might be wrong - is the metadata a separate resource? If so, conneg
> is the wrong tool for the job.
I imagined that the representation-including-metadata would be a
representation of the same resource, just augmented with metadata (and
including a "summary" version of the resource). So I *think* that
conneg fits.
I agree that if it were *just* supplying metadata then the semantics
would be wrong, and you'd need some kind of explicit link (either in
the JSON or in an extension HTTP header) to point from the
representation to the metadata.
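For instance, something like this (a Python sketch; whether the link
lives in a Link-style header or some X- extension header, and the
"meta" relation name, are assumptions):

def natural_json_headers(resource_path):
    """Headers for the plain JSON representation, pointing at its
    metadata representation via a link header."""
    return {
        "Content-Type": "application/json",
        "Link": '<%s.meta.json>; rel="meta"' % resource_path,
    }

# natural_json_headers("/calendar/personal")["Link"]
# -> '</calendar/personal.meta.json>; rel="meta"'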
> IMO, any solution that depends on conneg needs to define a
> processing ladder of alternatives (eg, depending solely on Accept:
> won't fly in the part of the mobile web I currently work in,
> unfortunately).
Can you describe the issues there in more detail?
Jeni
On 21 Sep 2008, at 20:31, Bill de hOra wrote:
> Manuel Simoni wrote:
>> Three points that I've been thinking about, re content negotiation:
>> Pro: There is the possibility of interop with existing clients that
>> expect "natural" JSON.
>
> I think we need to think about JSON the way we think about XML - as
> a grammar. IOW, application/json is about as useful as application/xml,
> and each format based on JSON defined here should get a media
> type, the way Atom and HTML did.
>
>> Con: If the web collection entry has a value field (as in the Web
>> Collections proposal), then clients that want the natural JSON object
>> could simply GET the entry and then access entry.value.
>
> So I think when you say "natural JSON", you have a specific media
> type in mind, and not just JSON.
When I say "natural JSON", I mean the grammar that is appropriate for
a particular application. We don't define new media types for *every*
XML-based markup language we use to exchange information. Similarly,
we can't expect new media types for *every* possible JSON-based format
that people might use. "Natural JSON" means the application-specific
JSON that it's not worth defining a media type for. Hence
application/json.
>> Con: Requiring support for content negotiation places an additional
>> burden on service implementations, which may lead to fewer (or
>> nonconformingly implemented) services.
>
> We don't have to require support for conneg. We have to define the
> media types, what they mean, and any associated protocol semantics.
Right, but part of the protocol might be that whenever a resource has
a "natural JSON" representation, servers should also provide a
representation in some defined media type (eg
application/metadata+json, or whatever).
(This was a particular reaction to Kris's suggestion that URIs *with*
a / at the end would denote a collection while those *without* the /
at the end would return the schema for that collection. Behaviour
that's based on URI mangling makes me very uncomfortable; behaviour
based on conneg seems more reasonable; but something based purely on
linking would make me happier still.)
Jeni
Absolutely. This is my situation: I want to build a web app with an
API that other web apps, and my own AJAX code, can use. I would use
Atom/APP, but (a) JSON is lighter-weight than XML and a better fit for
AJAX and (b) my web app's not based on posting articles, which means
that Atom isn't really a good fit.
What I'm really after are some guidelines about how to construct a
resource-oriented, RESTful API for JSON. APP is great for Atom, and I
can see it being adapted for JSON, but I still have lots of questions,
like what metadata the JSON should contain and how it should be
served; which headers should be used and when; whether there are any
constraints on the URIs and so on.
I don't know where to go for the answers. The APIs I see around either
aren't very RESTful (such as Twitter's, or Flickr's "REST" API, which
has been copied elsewhere such as by Remember the Milk) or are highly
bound to the particular application (such as Amazon S3 or CouchDB).
It seems to me that there's quite a bit of experience around, from
sites that have used APP for data exchange (such as the Google Data
APIs) and sites that use JSON already, to begin to identify some
requirements and good, bad, and common practice. I'd hope that we'd be
able to brainstorm and discuss possible approaches here, in order to
help work out what it's worth even trying to implement, without being
accused of premature standardisation.
Jeni
Ah, sorry, I wasn't aware of that! Thanks for the info.
I listed it as a 'con' because I think the size of the "envelope" will
usually be negligible. If that's the case (I'm not totally sure), then
conneg seems unneeded, because clients can always use the enveloped
representation, and quickly access entry.value. But see below...
> I imagined that clients would mostly request natural JSON and go to the
> augmented version when they needed particular information (such as what
> URL to POST to in order to add to a collection, or the next page in a
> paged collection), but that doesn't preclude other patterns of use.
OK, then we're coming from 180 degree different directions :)
I imagined that clients would mostly deal with a hierarchic structure
of collections and entries. The entries contain metadata and the
"natural JSON" as content (or AtomPub-like pointers to the content, if
the resource is e.g. binary).
(I think because JSON is so convenient and close to programming
languages' built-in data structures, the inconvenience caused by
envelopes is _very_ small, and in fact, it may be convenient to always
have the envelopes around.)
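In that model an entry might look something like this (shown as a
Python literal just to give the shape; the member names are borrowed
from Jeni's earlier example plus the "value" field from the Web
Collections proposal, and none of it is an agreed format):

entry = {
    "id": "/calendar/personal/event/432",
    "href": "/calendar/personal/event/432.json",
    "precis": {"label": "do something"},
    "value": {"label": "do something", "date": "2008-09-19"},
}

natural = entry["value"]  # clients wanting the plain object just unwrap it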
Manuel
> I imagined that the representation-including-metadata would be a
> representation of the same resource, just augmented with metadata (and
> including a "summary" version of the resource). So I *think* that conneg
> fits.
Works for me; I just think there needs to be an alternate mechanism to
ask for the data.
> Bill:
>> IMO, any solution that depends on conneg needs to define a processing
>> ladder of alternatives (eg, depending solely on Accept: won't fly in
>> the part of the mobile web I currently work in, unfortunately).
>
>
> Can you describe the issues there in more detail?
The origin server never gets the Accept header. Granted it's a corner
case. In mobile I tend to think about torture/robustness tests around
dropped or mangled headers - can the protocol survive without a
particular header? In this case, without an Accept header the protocol
won't function (the logic is similar to Google and others using
X-HttpMethodOverride).
Bill
Thanks for that, Jeni; I completely agree on all points.
As an example of sharing common practice, I just ported a feature from
our Old Way Of Doing Things, which involved Users (class) and Sellers
(subclass), and found that whoever wrote this feature decided to pass
the Seller.id around, instead of User.id like every other feature does.
My intuition tells me that the Web interface should hide that
implementation detail (that we're using a separate DB table for seller
info), and that the new URIs I'm minting should expose only User.id.
But I'm struggling to justify that intuition to our team without
community discussion or (eventually) something more formal.
HTTP, for example, is not highly interoperable just because everyone
uses it, but because it obeys architectural constraints which enable
that. I'd like to see us discover and expose more of those kinds of
constraints, in a JSON context, that will make my app more robust.
Robert Brewer
fuma...@aminus.org