--
You received this message because you are subscribed to the Google Groups "API Craft" group.
To unsubscribe from this group and stop receiving emails from it, send an email to api-craft+...@googlegroups.com.
Visit this group at http://groups.google.com/group/api-craft?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.
"form": {
"uri": "/search/all",
"method": "GET",
"fields": {
"text": {
"type": "search"
},
"page_size": {
"type": "integer",
"default": 60,
"max": 100
},
"page": {
"type": "integer",
"default": 1
}
}
},
We're in the middle of also adding in max & min for page numbers, as well as some client-side validation rules.
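To make that concrete, here is a minimal sketch (Python, purely illustrative) of how a generic client might consume such a form. It applies the declared defaults and clamps integers to the declared "max"; the clamping rule is my assumption, not something the form above specifies.

```python
from urllib.parse import urlencode

def build_request(form, values):
    """Resolve a form description like the one above into (method, url).

    Applies per-field defaults and clamps integers to a declared "max".
    The clamping behavior is an assumption for illustration.
    """
    query = {}
    for name, rules in form["fields"].items():
        value = values.get(name, rules.get("default"))
        if value is None:
            continue  # no value supplied and no default declared
        if rules.get("type") == "integer":
            value = int(value)
            if "max" in rules and value > rules["max"]:
                value = rules["max"]  # simple client-side validation
        query[name] = value
    return form["method"], form["uri"] + "?" + urlencode(query)

form = {
    "uri": "/search/all",
    "method": "GET",
    "fields": {
        "text": {"type": "search"},
        "page_size": {"type": "integer", "default": 60, "max": 100},
        "page": {"type": "integer", "default": 1},
    },
}
method, url = build_request(form, {"text": "widgets", "page_size": 500})
```

A client like this never hardcodes /search/all or its parameters; if the server renames the URI or tightens "max", the next response's form carries the change.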
HTH, Cameron
@Cameron: That's certainly very interesting! I have never embedded forms in my responses at all. Could you perhaps post a sample response containing the form, so I can see how this is done? :)
--
Regards,
Felipe
If the templated link's relation is a URL that exposes docs that describe the variables then the knowledge is not out of band
Yes, exactly. Is that controversial?
Cheers,
M
On 6 Feb 2013 01:33, "Mike Schinkel" <mi...@newclarity.net> wrote:
On Feb 5, 2013, at 3:07 AM, Mike Kelly <mikeke...@gmail.com> wrote:
> If the templated link's relation is a URL that exposes docs that describe the variables then the knowledge is not out of band
Trying to understand, are you saying if the representation contains a link to docs then the knowledge in the docs is not out-of-band?
On Feb 6, 2013, at 2:07 AM, Mike Kelly <mikeke...@gmail.com> wrote:
Yes, exactly. Is that controversial?
--
Remember that hypermedia/REST "is software design on the scale of decades" ... Some of the benefits you get are not obvious in the short term.
I understand that's the theory. But I have strong questions about whether it's really true given our current state of practice.
Hypermedia adds "one level of indirection".
Fair point.
It reduces the amount of the knowledge a client needs. If all you document is a link relation identifier then the server is free to change its URL structure over time without breaking clients. That is a well known benefit.
True. But from what I've seen the complexity required of clients to support hypermedia is at least an order of magnitude greater than with URL construction, so every time I consider the question it seems that the benefits are a Pyrrhic victory.

The analogy that comes to mind is the difference between offering a ferry to cross a wide river vs. building a bridge that spans 1/2 the river's width. Sure, the latter gets you 1/2 the way there, but the rest of the way is a lot more effort than just taking the ferry from the start. If you are going to promote the use of a bridge, be sure to build it all the way across.
So hypermedia is also an important part of making distributed systems on a global scale work together.
Many people say that, and everyone points to the web as proof. For the web itself that is clear and certain. But what evidence is there that any API has achieved global scale because of its use of hypermedia? I am not aware of any. Maybe you can address my ignorance on this matter?

I strongly believe in the theory but I don't think that the hypermedia theory currently has much practical value. To be clear, I do think the remaining parts of the REST theory are highly practical today.

I think for hypermedia to have real value we'll need many layers of new standards for APIs, but I see no evidence that there is any tangible movement in that direction. I see proponents for representation formats, but it seems every potential standard format is represented primarily by one individual (or Microsoft), and that does not for global standards make. So far I don't see coalitions forming with the goal to get past personal preference and form standards.

Besides formats I think we need standards for:
- Naming GET and POST variables
- Methods of assigning data values to variables
- Naming resources and actions
- Defining and applying workflow rules
- Methods for overriding variable values and rules when a declarative approach just isn't good enough.
But before we can get to hypermedia nirvana I think we'll have to see a lot of sub-standards emerge that are industry specific to validate the technical approaches and then later more generic standards could emerge.
It is only after some time you will discover that, duh, it could be nice to be able to change the URL for some resources - and discover that you cannot do that without creating a whole new version of the API.
I think you conflate two concerns. More on that below.
Later on you may want to "out source" some of the work to a different service - and cannot do that because the clients are hard coded with relative URL templates and a fixed host name.
Can't HTTP 301 and 302 status codes address this concern as a practical matter? New URL construction rules can be published, with redirects put in place for older API clients. Why is that not viable?
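A sketch of that migration story (Python; the fetch function and URLs are stand-ins, not a real HTTP client): the old URL answers 301 with a Location header, so an older client that honors redirects keeps working after the move.

```python
def follow_redirects(fetch, url, limit=5):
    """Follow 301/302 responses until a non-redirect response is reached.

    `fetch(url)` returns (status, headers, body); here it is simulated.
    """
    for _ in range(limit):
        status, headers, body = fetch(url)
        if status in (301, 302):
            url = headers["Location"]  # server-published new location
            continue
        return url, status, body
    raise RuntimeError("too many redirects")

# Simulated server: the old URL scheme permanently redirects to the new one.
responses = {
    "http://example.com/orders/42":
        (301, {"Location": "http://api.commerce.example.com/orders?ordnum=42"}, b""),
    "http://api.commerce.example.com/orders?ordnum=42":
        (200, {}, b"order 42"),
}
final_url, status, body = follow_redirects(
    lambda u: responses[u], "http://example.com/orders/42")
```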
My pet example is Flickr's API (as of a few years ago) - it contained image IDs and the developer docs specified how to create the image URLs from those image IDs. Flickr would have made it much easier for everybody, including themselves, if they had included the actual URLs in their responses.
On this I agree very much. There are definitely strong reasons why a full URL in a representation makes great sense, but that is the opposite end of the hypermedia constraint. This is much like how it is great to have a vehicle that can run off an electric motor, but given today's fueling station infrastructure, having a vehicle that can only run on electricity can create more problems than it solves for the driver.

With Flickr they could have included full URLs but still have URL construction rules. If it were following the hypermedia constraint, the Flickr API would not be allowed to publish URL construction rules, so those for whom it would be much easier to construct URLs would be forced to use hypermedia instead.
You can apply the same kind of thinking to "in bound" URL templates. If the clients consume those at runtime, then you, as the API owner, are free to change those templates. You can change the path or host name and add other stuff to the template. For instance going from "http://example.com/orders/{ordernumber}" to "http://api.commerce.example.com/orders?customer-id=1234&ordnum={ordernumber}".
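For illustration, a bare-bones {var} expander (real RFC 6570 template processing covers far more): because the client expands whichever template the server currently publishes, a template change like the one above needs no client update.

```python
import re

def expand(template, variables):
    """Minimal {var} substitution - an illustrative subset of URI Templates."""
    return re.sub(r"\{(\w+)\}", lambda m: str(variables[m.group(1)]), template)

# The same client code works before and after the server changes its template.
old = "http://example.com/orders/{ordernumber}"
new = "http://api.commerce.example.com/orders?customer-id=1234&ordnum={ordernumber}"
before = expand(old, {"ordernumber": 42})
after = expand(new, {"ordernumber": 42})
```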
Again, agreed. But we are still left with the fact that it takes OOB knowledge to find the data for {ordernumber}. That's a great example of what I meant when I said a bridge 1/2 way across the river.
So I question if those promoting hypermedia as a solution for today's problems are not doing a disservice to those who might choose to implement hypermedia. Every time I ponder that question I arrive at the same answer.

I think I'm mostly disheartened by promotion of hypermedia as a panacea when there is such a lack of tangible efforts to create standards for response formats, variables, variable data access, resource and action naming, workflow rules and exception handling. Of course back in 2006 I said pretty much the same thing, but I digress.

I know on this list I'm being the contrarian. But I feel like if nobody else will then maybe it needs to be me.
--
I have to disagree with you on this. Especially since you agree with this example:
Acknowledged.
Do you really mean that the complexity of following an image link is magnitudes greater than constructing that URL in code on the client side?
No, that mischaracterizes what I meant. To build an API client that correctly manages navigating a complex hypermedia API, which can result in several HTTP requests to get to the ultimate "prize", is in my experience an order of magnitude more difficult than the few lines of code it can take to access a single service on an API using URL construction, especially when the service uses API key-based authentication over SSL.
If you like that example then there is more to come. Twitter's "REST" API is full of it. The response you get from looking up a tweet contains the ID of the user that created it - why not include a link to said user? And the list goes on ...
I totally agree it is great to include URLs in many cases (except, I assume, when coding for low-bandwidth mobile devices, but that's only conjecture on my part as I don't work with mobile.)

Instead the issue is if hypermedia is the *only* way and URL construction is *not* allowed; that's the point I tried to make clear earlier, but it seems I failed in that effort.
One of the problems lies with the so-called "REST" client libraries out there. On my job we are currently working on an (internal) API for an iPad application. That API is hypermedia based - we return lots of links to other parts of the API. But it turns out that the popular iPad client libraries MUST be hard coded with relative URL templates and a fixed host name. Sick! It turned out to be more troublesome to follow an absolute URL than to follow a hard coded URL template. That is truly a sad state of affairs for client side libs.
Interesting. But ironically I think that shouldn't be so hard to code around; if you know the root, just subtract it from your absolute URLs. Even so, that's not the use-case we are discussing, as it's clear you understood.
In my personal crusade for creating the best possible REST client library for C# I started Ramone (https://github.com/JornWildt/Ramone) - it supports both link templates (absolute and relative) as well as following hyper media in links as well as forms (see http://soabits.blogspot.com/2012/04/ramone-consuming-hyper-media-rest.html).
C# is over my head without a lot of time to study it. I decided to give up on C# a long time ago and spend my days with purely dynamic languages because I personally don't aspire to be a world-class programmer. I want to use the most productive language possible, hence probably why I push this issue. Yes, I'm a bit of an odd-bird to hang out on lists like this.

Speaking of personal crusades, I've been working on two PHP libraries that work together but can be used separately: 1.) RESTian and 2.) Sidecar; the former is used to map URLs to create clients for REST-ish APIs and the latter is a framework for building WordPress plugins quickly and robustly (yes, I know some people on this list may find that latter phrase oxymoronic, but I digress. :)
Basically RESTian allows a user to subclass the main RESTian class and then map a client API's resources, actions and variables so that the PHP programmer doesn't even have to think about authentication or HTTP. But more importantly I'm trying to identify all the patterns that are used by REST-ish APIs in the wild so that I can potentially describe an API declaratively. Because if I can do that, then hypermedia-based API clients might be easier to implement. My plan is to start publishing "RESTian Clients" for popular APIs, i.e. API clients for PHP based on RESTian.

Back to hypermedia in general: the problem I've found is that many who advocate the hypermedia constraint as a panacea hand-wave over all those other issues I mentioned. (I think) the net result is that many people who consider hypermedia get caught in a quagmire trying to implement it and say "screw this" and go about it the easy way.

The upshot is the true believers still believe and the pragmatists just keep on keeping on, and very little forward progress is made. What would I think is forward progress? A collective recognition of the issues and a concerted effort by everyone in the "community" to come together with a goal of creating standards, so that standardized open-source tooling could be created to support the nirvana that is hypermedia in a perfect world. But again, I digress. :)
What do you mean by "opposite end of the hyper media constraint"? What is at the other end?
But at least you agree on full URLs in representations. So why not start with that and let's all have a hypermedia feast :-) Something as simple as that makes a lot of API consumption easier - and allows for a lot of flexibility for the server.
One end is:
"It's great to have (useful) URLs in representations, but when it's easier let's just compose URLs."
The other end is:
"You must have URLs in representations and you must not ever, ever, ever compose URLs."
Again, I tried really hard to explain that before but obviously I failed.
> I think for hypermedia to have real value we'll need many layers of new standards for APIs

Yes. Maybe not many layers - but certainly some standards (like HAL and Siren are trying).
By many layers I was referring to standards that don't cover too much. So each of those things I mentioned I would see as a separate initiative resulting in separate standards.
> Besides formats I think we need standards for:

This is one of the places where you lose me ... why? What is your scenario?
> - Naming GET and POST variables
If I request a representation and it has URI templates, then there are variables in those templates. Is there any relationship between those variables and the variables in the GET that retrieved the representation? After reflecting, this may actually be the easiest problem to solve of all I listed, but it is still something we all need to agree on if we are to experience global interoperability. More on this later.

BTW, back in 2006 when I first started looking at this there were many people who were adamantly opposed to the idea of URI templates. At least we have progress there, but unfortunately Roy and maybe others preferred the spec to be cryptic rather than easy to understand. They wanted to keep the bits over the wire to a minimum. My personal opinion is that because of those choices URI templates will see much less adoption than they otherwise would have. Looking at CMSes and frameworks, not many of the popular ones I've seen are embracing URI templates.

As an aside, once I heard of someone who had a problem and he thought to himself "Hey, I could use URI templates to solve my problem!" Soon after he realized he now had two problems... ;-)

Frankly I'd love to see a "URI Templates Lite" follow-up spec to address 80% of use-cases but with much more obvious syntax. But I doubt that will happen.
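As a thought experiment, here is roughly what such a "lite" subset might look like - plain {var} substitution plus one query form. This is a sketch of a hypothetical spec, not anything that exists:

```python
import re
from urllib.parse import urlencode

def expand_lite(template, variables):
    """Hypothetical "URI Templates Lite": {var} substitution plus a
    {?a,b} query form - roughly an 80%-of-use-cases subset."""
    def query(match):
        names = match.group(1).split(",")
        pairs = {n: variables[n] for n in names if n in variables}
        return "?" + urlencode(pairs) if pairs else ""
    template = re.sub(r"\{\?([\w,]+)\}", query, template)
    return re.sub(r"\{(\w+)\}", lambda m: str(variables[m.group(1)]), template)

url = expand_lite("/search/all{?text,page}", {"text": "api", "page": 2})
```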
> What you cannot do is to redirect based on the state of the previous resource.
Aren't HTTP and REST supposed to be stateless?
Had the client followed a link, then the representation containing that link could serve different links to different clients depending on whatever rules you might think of. That is not possible with redirects. Using hypermedia adds flexibility you won't get otherwise.
True, but hypermedia also adds costs in terms of complexity on the client end. If I need something simple from an API it's major overkill. I'm reminded of the title of this book:
Allow me to repeat myself - REST is programming for the decades - you don't get all the benefits here and now. You can do lots of hand waving and workarounds - but in the end I am betting on hyper media as the winner ;-)
I'll agree that it's for decades, but only if we can address those other issues I mentioned; otherwise we've got just a bridge that extends over only 1/2 the river. That bridge might be good for decades, but it's not usable from a practical standpoint. :)

And also, many people who advocate it don't make the decades part clear; they just say it must be hypermedia for all use-cases.
> > You can apply the same kind of thinking to "in bound" URL templates

> Again, agreed. But we are still left with the fact that it takes OOB knowledge to find the data for {ordernumber}
Please elaborate a bit on this. What kind of OOB knowledge are you talking about? As I said earlier - the client and server have to agree on something: they agree on link relation identifiers for finding links - do you consider such identifiers as OOB? In the same way they agree on variable names for data - like agreeing that, yes, the variable "ordernumber" exists and means such and such ...
What I'm starting with is HTTP 1.1, GET/POST/PUT/DELETE and maybe HEAD & PATCH, URLs, URI templates, and JSON. All that is standard and defined, and works globally, right?

Next we have Hal, Siren, Collection+JSON, and probably 5-10 others. Those are all trying to address the representation format, but for my tastes they are all competing too much with each other rather than coming together to create one main agreed-upon standard. We are not using more than one standard for HTTP or URIs, are we? (versioning doesn't count.) If we want global interoperability we need to collectively find a way to get past personal preferences and try to find the format that can handle ~80 percentile use-cases or better, with ways to address the remaining ~20.

Next we need to address how data that comes from returned representations gets applied to variables in URI templates and/or the equivalent for POST variables in subsequent GETs and POSTs.

Beyond that we need to explore standards for resource names and actions. On the web we have "prev" and "next", which often mean the previous document and the next document, but there hasn't really been anything that enforced standard usage. Still, those are two of the best examples I have.

What I envision for action and resource names is something like media-types; a well-known list of resource types and actions that API clients can be programmed to understand. As I write this my first issue is becoming clearer to me; I think we need some way to declare/define "entities" and give them names. For example, consider this hypothetical JSON:
{"_entities": ["order" : "http://api-standards.org/orders/v1.0","label" : "text","price" : "http://api-standards.org/currency/usd/v1.0",]}
With that, now we might have standards that say that the following "actions" refer to actions on entities:

"new-{entity}"
"update-{entity}"
"delete-{entity}"

With these "standards" we'd then be able to write generic client APIs that would know what to do with this (assuming our standard format supports these):

{
  "_actions": {
    "add-order": {
      "href": "http://api.example.com/orders/new",
      "method": "POST"
    },
    "update-order": {
      "href": "http://api.example.com/orders/{order_id}",
      "method": "PUT"
    },
    "delete-order": {
      "href": "http://api.example.com/orders/{order_id}",
      "method": "DELETE"
    }
  }
}

I'll leave the concept of declarative workflow and workflow exceptions to some other time.

So if we could define entities, and how they get their values, in a standard way, and actions in a standard way, then we could start building infrastructure to make it easy to work with those. We could build software that makes assumptions about these things, reducing the effort to use hypermedia and making it effectively as easy as URL construction with an HTTP GET. Browser vendors could even build hypermedia API processing into their browsers!

Remember, 10+ years ago doing an HTTP GET wasn't exactly easy on all platforms. You couldn't do it using Microsoft's ASP and VBScript (which is what I used for ~10 years) and I'm sure that wasn't the only platform with said issues. I'm anxious to see things improve over the next 10 years, but I'm disappointed because I don't really see the people currently involved being interested in driving towards single standards, and I don't see the major internet players involved either.
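A sketch of the kind of generic client such an "_actions" convention would enable (Python; the send callable stands in for the HTTP layer, and the action map mirrors the hypothetical JSON above):

```python
def perform(actions, name, variables, send):
    """Dispatch a named action from an _actions map like the one above.

    `send(method, url)` is a stand-in for the real HTTP call.
    """
    action = actions[name]
    url = action["href"]
    for var, value in variables.items():
        url = url.replace("{" + var + "}", str(value))
    return send(action["method"], url)

actions = {
    "add-order":    {"href": "http://api.example.com/orders/new",        "method": "POST"},
    "update-order": {"href": "http://api.example.com/orders/{order_id}", "method": "PUT"},
    "delete-order": {"href": "http://api.example.com/orders/{order_id}", "method": "DELETE"},
}
calls = []
perform(actions, "update-order", {"order_id": 7}, lambda m, u: calls.append((m, u)))
```

The point is that `perform` knows nothing about orders; it could drive any API that advertised its actions this way.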
I don't see you as such. I believe it is important to discuss these matters - maybe one day we will look back on these kinds of discussions and think, oh yes, those were the times when we didn't grok hypermedia for APIs :-)
Thanks for the support. Sincerely.
> Besides formats I think we need standards for:

I am trying to understand your thinking here ... First of all, take a look at Mike Amundsen's new article on InfoQ: http://www.infoq.com/articles/hypermedia-api-tutorial-part-one. What Mike is doing here is breaking apart the domain definition and the actual web API implementation - first he discusses the domain elements (Students, Teachers, Schedules and so on) and possible actions (state transitions).

Let's say you want to create a new Student - such a student has a property "studentName" in the domain definition. But that doesn't say anything about the actual POST variable name to use "on the wire" when interacting with the actual concrete API implementation. It could be that "studentName" is mapped to the POST variable "NameOfStudent".
> - Naming GET and POST variables
> - Methods of assigning data values to variables
Here is another example: for e-procurement we have domain knowledge of concepts like credit card numbers, product ID (SKU) and quantity. In one API these concepts map to variable names like "CreditCardNumber", "SKU" and "Q" whereas another implementation maps them to "CCN", "ProductID" and "Quantity".
Is this mapping between the domain language and the actual implementation what you are looking for?
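If I follow, the mapping layer might be as simple as a per-API name table (Python sketch; both wire vocabularies come from the hypothetical e-procurement example above):

```python
def to_wire(domain_values, name_map):
    """Translate domain-level names into one API's wire-level variable names."""
    return {name_map[key]: value for key, value in domain_values.items()}

# One shared domain vocabulary, two different wire vocabularies.
order = {"credit_card_number": "xxxx", "product_id": "A-17", "quantity": 3}
api_one = {"credit_card_number": "CreditCardNumber", "product_id": "SKU", "quantity": "Q"}
api_two = {"credit_card_number": "CCN", "product_id": "ProductID", "quantity": "Quantity"}
```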
Thanks for the follow-up and the link; I'll read it later. Too tired to process this one constructively, but reading your email (though not yet his article) I think this is the type of direction I'm talking about.

However, it's one thing for a single developer or team to do it for their app. It's another to establish a standard so we can build open-source and platform embedded tooling to do most of the work for us. (Again, I haven't read his article yet; maybe that's what he's proposing.)
Back to hypermedia in general: the problem I've found is that many who advocate the hypermedia constraint as a panacea hand-wave over all those other issues I mentioned. (I think) the net result is that many people who consider hypermedia get caught in a quagmire trying to implement it and say "screw this" and go about it the easy way.
Aren't HTTP and REST supposed to be stateless?
True, but hypermedia also adds costs in terms of complexity on the client end.
...
BTW, that article from Mike Amundsen is pretty good! I like the breakdown and clear descriptions. Thanks Mike!
Cameron
On Thursday, February 7, 2013 11:17:39 PM UTC+13, mikeschinkel wrote:

Back to hypermedia in general: the problem I've found is that many who advocate the hypermedia constraint as a panacea hand-wave over all those other issues I mentioned. (I think) the net result is that many people who consider hypermedia get caught in a quagmire trying to implement it and say "screw this" and go about it the easy way.
Your last sentence pretty closely matches the experience my team & I are going through, to some extent. We're developing a new API and after some research decided that REST with HATEOAS using JSON was the best approach. We've viewed videos, read blogs, read specs & plenty of other stuff, and still have issues trying to understand how our approach is "better" for the environment.
Thanks for following up on my comments. The more I discuss this, the more insight I have into my own perspective and the more I'm able to envision what a solution to these issues might be.

Right now some of the most active people on this list are promoting Hal, Siren and/or to a lesser extent Collection+JSON. In the past there have been others on this list and/or rest-discuss promoting their approach: Atom, The Object Network, oData Light and probably several others I can't think of at the moment. These are all mostly operating as islands, each following the preferences of their promoters. And there's nothing wrong with islands per se, but a collection of tribal regions does not a nation make.

And none of the above are attempting to address building the bridge all the way across the river; they mostly only deal with how to represent links in a JSON representation (forgive me in advance if one or more of them is actually attempting to do more.)

How about instead all the current players get together and form an "Interoperability Group for Web APIs" (IGWA) with a goal to define ONE global standard for web API interoperability? It would emphasize addressing 80% of the problem today rather than 100% of the problem at some arbitrary point in the future, much like how HTML 4.x has happily handled 80 percentile use-cases for years. This would mean minimizing options, i.e.
- Do we really need to support every conceivable data format, or can we stick with JSON?
- Do we really need to support every conceivable URL format, or can we stick with a common subset?
- Do we really need to support every auth method, or can we just use OAuth 2.0?
- And so on...
This could be a group working in the W3C or it could be its own independent organization backed by companies like Apigee, Mashery, 3Scale, Layer 7, etc. One thing that would be important to avoid would be the pitfalls that occurred with OAuth 2.0, i.e. it should not become a "standard" designed merely for companies to leverage to sell consulting, nor should it attempt to enable every possible enterprise use-case:
Instead it should be a standard designed to empower open-source tooling to simplify and standardize implementations, with the result being to encourage more implementations to be similar rather than to have many arbitrary differences.

I'm thinking all of the Zen of Python would be a good guide:
But especially:
There should be one-- and preferably only one --obvious way to do it.
-Mike

As I envision it, this would also mean building both client and server reference implementations in Javascript to fully implement the stack, and the Javascript versions would be maintained by this group, published on GitHub of course. This step would be critical to ensuring that working with hypermedia-based Web APIs is as easy as doing HTTP GETs currently is. And other languages and platforms could derive from the Javascript reference implementations (I'd happily write the PHP one.)

This would also mean setting a goal of building the entire bridge to cross the river, not just 1/2 of the bridge. What does that mean in real terms? The working group would set goals to tackle all requirements to enable 100% in-band web API interaction for the 80 percentile use-cases, and that means addressing the things I mentioned before:

- Naming of entities and how they relate to GET and POST variables
- Methods of declaratively assigning data values to variables
- Naming standards for resources and actions
- How to declaratively define and apply workflow rules
- Methods for overriding things when a declarative approach just isn't good enough.

I'm not suggesting one standard for all of these, but a collection of standards where we take one bite of the apple at a time, and we are not "finished" until we get to being able to handle 100% in-band interaction for the 80 percentile use-case. Three to five years later?

If this were to come to pass then everyone who is currently promoting a different web API representation would be somewhat unhappy with the result, by definition, because everyone has conflicting personal preferences. But the result would be something we could all build tooling for, and the standards would beget extensions, both open source and commercial, more usage and more overall value.

Anyway, that (or something like it) is what I think needs to happen for hypermedia's promise to become reality rather than continue to be the heavy burden it is today.

Any takers?
--
That is an interesting idea. My first reaction is, well, there are different needs for different hyper media aware formats depending on your use case, so I think it may be a bit hard to come up with ONE format (to rule them all ...).
You might be right. But I'd really want to validate that it is indeed not viable before discounting the approach because if viable it would make so many things easier.
In the mean time - think about the audience for this format you are envisioning.
To be clear, this is not just one format; it would need to be a collection of standards needed to truly enable hypermedia, formats just being one aspect.
Is it for mobile apps developers, browser apps, autonomous background services, ... other? It is obviously not a domain specific format, which is another possibility :-)
I really am not thinking about this as being domain specific. Maybe it needs to be, but hopefully not. One approach would be to make some features optional, for example to reduce latency and/or minimize response size, but doing so adds complexity, so maybe it's better to let that be yet another layer added as a later optimization.
I'm sceptical that the solution to media type or protocol proliferation is to create yet another media type or protocol. On top of that, I'm not even sure "proliferation" is a real problem. I think it's better to have these best practices evolve of their own accord.
Cheers,
M
--
I'm sceptical that the solution to media type or protocol proliferation is to create yet another media type or protocol. On top of that, I'm not even sure "proliferation" is a real problem. I think it's better to have these best practices evolve of their own accord.
I wouldn't use the term "proliferation"; it's not like there's an explosion of formats.
And the exact "how" is not yet clear; what I proposed was a strawman; something to hopefully get a discussion rolling, not the final word.
Fundamentally it seems as if there are several small ecosystems all happy to share the faith in one god (hypermedia) but with no interest in establishing a common language or common currency. So we have effort (development) put into Hal and effort put into Siren etc., but almost none across ecosystems. We have several small "marketplaces" but no ability to create what could become a much larger marketplace, by many orders of magnitude.
One possible future is that we continue on the current path. That will likely see a slow pace of progress with little disruptive change for any one ecosystem for the near term. And I could see that being less risky, because the alternative is unknown.
But I'm hoping that we'd collectively like to see more rapid progress where development can benefit the "marketplace" as a whole instead of smaller segments. I'm reminded of Metcalfe's law, which I think perfectly applies. If the benefit of hypermedia is to create large scale systems that are resilient to change and that can last decades, why choose paths with hypermedia that by definition minimize the benefits it claims?
-Mike
On Feb 7, 2013, at 5:15 PM, CJunge <camero...@sella.co.nz> wrote:

Right now some of the most active people on this list are promoting Hal, Siren and/or to a lesser extent Collection+JSON. In the past there have been others on this list and/or rest-discuss promoting their approach: Atom, The Object Network, oData Light and probably several others I can't think of at the moment. These are all mostly operating as islands, each following the preferences of their promoters. And there's nothing wrong with islands per se, but a collection of tribal regions does not a nation make.
And none of the above are attempting to address building the bridge all the way across the river; they mostly only deal with how to represent links in a JSON representation (forgive me in advance if one or more of them is actually attempting to do more).
How about instead that all the current players get together and form an "Interoperability Group for Web APIs" (IGWA) with a goal to define ONE global standard for web API interoperability? It would emphasize addressing 80% of the problem today rather than 100% of the problem at some arbitrary point in the future, much like how HTML 4.x has happily handled 80th-percentile use-cases for years.
Anyway, that (or something like it) is what I think needs to happen for hypermedia's promise to become reality rather than continue to be the heavy burden it is today.

Any takers?
> a better approach is a "universal translator."

Babelfish.
"I am not sure about that universal translator… we are not doing this with HTML so why do we need this for JSON?"it's possible i was a bit flip in my use of metaphors.
I am not advocating for a client app that converts everything into one language. I am advocating for a client app that can speak more than one language. Web browsers today speak HTML. HALTalk explorer speaks HAL. I suggest it would not at all be difficult for Web browsers to speak HTML and HAL and Cj and Siren and Atom.

And if the vendor community does not see value in doing so (ample evidence to date), forking an open source browser and adding support for these other hypermedia formats is a do-able project. Hard work, but not complicated. Not any harder or less assured of success than would be establishing a single shared JSON-based API language/format for everyone to use.
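To make the "client that speaks more than one language" idea concrete, here is a minimal sketch. The registry shape and function names are my own invention, not from any existing library; only the link locations (HAL's `_links`, Siren's top-level `links` array, Cj's `collection.links`) follow the respective format drafts.

```javascript
// Sketch of a polyglot hypermedia client: one small handler per media
// type, dispatched by Content-Type. Names here are illustrative only.
const handlers = {
  "application/hal+json": (doc) =>
    Object.keys(doc._links || {}),                  // HAL: links live in _links
  "application/vnd.siren+json": (doc) =>
    (doc.links || []).map((l) => l.rel.join(",")),  // Siren: top-level links array
  "application/vnd.collection+json": (doc) =>
    (doc.collection.links || []).map((l) => l.rel), // Cj: links under collection
};

// Given a response body and its Content-Type, extract the link
// relations using whichever "language" the client speaks.
function linkRels(contentType, body) {
  const handler = handlers[contentType];
  if (!handler) throw new Error(`unsupported media type: ${contentType}`);
  return handler(body);
}
```

A browser fork that speaks several formats would amount to a richer version of this dispatch table, with rendering and form handling per format rather than just link extraction.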
To be clear, I do not see HAL as a competitor to HTML in any way. HAL is for machine actors (APIs); HTML is for human actors (apps).
I do not think any more effort should be spent on GUI-oriented hypermedia types intended as alternatives to HTML. I would rather see that energy go into the HTML5 spec itself, or be spent on media types that are actually focused on m2m.
HTML is not perfect, but if you want GUI stuff like forms, buttons, images, text... I'm not aware of any significant reasons to avoid it.
Cheers,
M
--
1.) I was NOT proposing a new media type per se. FULL STOP.
2.) I AM proposing structured collaboration with a goal to achieve web API interoperability. That MIGHT result in a new media type but it also might not; it might be an extension of an existing media type or it might be a way to allow a few different media types to co-exist.
3.) The proposed collaboration would ALSO include recognizing and standardizing other aspects needed to achieve interoperability. Weaving the web wasn't about defining HTML; it was about defining HTML, HTTP, URLs, media types and all the other technologies and ensuring they had what they needed to work together.
4.) NOTHING about this would LIMIT options for people who want to do otherwise. The fact HTTP standards exist does not stop people from building new ad-hoc protocols using TCP/IP. And the HTML standard's existence does not stop people serving JSON, XML, PDF, or any other media types over HTTP.
5.) I'm not proposing something heavy here. 80th percentile. The Zen of Python. Special cases are ignored as much as possible. "As simple as possible, but no simpler."
6.) What I'm proposing is as much a marketing campaign as technology: the coming together of everyone so that the tech press pays attention, and the thought process in the minds of people who otherwise don't pay any attention becomes "Hey, we really should probably implement our APIs, and we should do it that way."
7.) And for those who are interested in the business angle, this could create a large "ground floor opportunity" for most everyone involved. That's not my motivation, but I'm sure it is something that might interest others.
- "Achieving a level of interoperability among implementing web APIs such that a single machine client could surf the entire web of APIs using a set of declarative workflow rules without requiring custom programming (per se) and without requiring human involvement during the surfing session."
WS-Star was tunneling over HTTP which makes what I'm proposing very different.
The web is about freedom to choose, but we still have a single primary media type (HTML); a single primary protocol (HTTP); a single specification for URLs; a registry for well-known media types; and a single standard for media type identifiers. Nothing about any of those keeps you from serving non-HTML media types, using other protocols besides HTTP, or attempting to define your own resource identifiers. But the fact that they exist as standards makes the value of your freedom to choose worth many orders of magnitude more than if they did not exist.

The analogy that comes to mind is that I'm proposing a controlled-access highway from one end of the country to the other, and you are saying we can't because we need the freedom to take the scenic route. But a highway doesn't stop you from taking the scenic route when you want to; it just allows you to cross the country cheaply and more efficiently when that's your primary objective.
True, but his dissertation did not say, explicitly or implicitly that his architecture by definition could not be used as the basis for a standard. The most successful architectures have birthed standards; that's kind of the point.
That's an assertion, yes, but on what basis?
Note that the proposal was not about a single format but instead about establishing structured collaboration so we can achieve interoperability. That may or may not mean a single format, but even if it did mean a single format, it wouldn't keep you from doing your own format if you really felt the need, or extensions to the one format. Choosing your freedom would just mean you'd not benefit from the off-the-shelf tooling and standardization that the single approach would result in.

Asked another way: if a single format is not desirable, why is the web primarily based on the single format of HTML? Is the web really not "free"?
Thanks for taking the effort. Really appreciate having your input and involvement.
Exactly.
I'm not proposing Esperanto PER SE even though that might be the outcome. I'm proposing establishing a structured collaboration process with a goal to achieve interoperability. If the best minds come up with a single format then that's because it's the best answer that the best minds can achieve. Or not.
Addressing that specific comment, I disagree. I think that a better analogy is to create a set of standards that collectively enable a web or APIs for machines.
As you know I've periodically been the kid who's been saying that "The Emperor Has No Clothes" with respect to the benefits of hypermedia. And when I've asked for examples of hypermedia systems that actually work and that are easy to implement, the only answer I ever hear is "The web is hypermedia, and look how successful it has been." That of course ignores the fact that the web requires an intelligent agent to navigate it, i.e. a human.

But therein lies the irony: the web is fundamentally based on Esperanto, i.e. HTML. There is ONE MAIN media type for the web: HTML. And that's in large part why it works as a globally interoperable system that has lasted for decades and will persist for many more. If I had enough Shakespeare monkeys they could surf the ENTIRE web using a single web browser. The further irony is that not only can we not surf the entire web of APIs, we can't even surf from one API to a second API in the wild without custom code.

Back to my advocacy against hypermedia. Actually, with my theoretical hat on I think "Hypermedia is awesome and solves so many problems!" On the other hand, with my pragmatic hat on I think "Hypermedia is just not practical nor is it cost effective, in most cases." And yet people keep advocating hypermedia as if it were (almost?) always practical. Rather than be a continual naysayer, though, I would prefer to see us bridge the gap to the point where hypermedia is not only practical but also the path of least resistance because of how easy it is. For that to happen, we need collective effort to achieve interoperability, and tooling to be developed so only a few people have to do the hard parts.
Such an effort is definitely part of what's needed to achieve interoperability. FYI though, your H-Factors came across to me as a "lessons learned" blog post (albeit a very good one), it didn't appear to be positioned as a collaboration process.
I don't really follow what you meant by "to beef up "programability""; care to elaborate with examples?
Right, if I understand what you mean.

The fact that HAL has a browser is a leg up. The question is, what is HAL missing? I've read enough on this list to know many people feel HAL does not meet their needs. Why? Unfortunately I haven't chronicled exactly what all those needs are; I only know the issues I myself run into. So in what I envision, it might be that HAL becomes the HTML for web APIs, but with enhancements to address what is missing.
By "they" do you mean if the clients for these other media types existed then they would represent "ready-made" clients for hypermedia-enabled media types that work *today*, right?That said, doesn't having to build a browser for each of Cj, Siren, Atom, OData, et al. seems like a tremendous duplication of effort; why advocate that? Why suggest that are have ~10 GitHub repos with an average of 1/10th of the collective effort going into each one? Why not advocate one GitHub repo where all effort towards interoperability is channeled?
Is that really the best approach? That would mean we'd need 5-10 different libraries for processing hypermedia logic in each client, even on newer feature phones and the ever-smaller devices that are being added to the web. And we need each of them to have feature parity so our API client can seamlessly move from one API to another.

And even so, imagine all the error handling code required for when one doesn't line up with the others. You really want to see a world like that? That'd be worse than IE's incompatibilities have been for the web. Imagine the literally billions of dollars wasted over a decade or two dealing with all the inconsistency.

And we know that one of them will become the most popular while the rest will have declining popularity, and thus fewer resources and less ongoing maturity, resulting in APIs that "die" and have to be replaced with another format simply because they bet on the losing horse.
To me that statement seems ironic given our discussion. Can you not see the irony I see?

I remember from reading "Weaving the Web" [1] that TBL proposed a "Universal Resource Locator" and a single HTML format, and many of the people in the IETF told him he was arrogant to propose something universal, and that it was unrealistic that people (would be able to) work with a single HTML format, especially one that didn't have all the richness of SGML.

Although I am by no means a visionary as was TBL -- I'm merely standing on his shoulders right now -- I still feel a sense of deja vu. I believe that I'm proposing a web of APIs that would effectively be the same thing that Tim proposed for the original web. See how successful his vision of a universal, dare-I-say "Esperanto", set of standards became, with many of the same types of criticisms I'm hearing to a "universal" approach. Is it really true that no one else sees the parallels between the W3C's coordination of an HTTP+URL+HTML web and what I'm proposing?

Most of the differences that we have seen in web API formats have been arbitrary and based on individual proponents' personal preferences. Why not coalesce into a group dedicated to enabling computers to surf a global web of APIs using a declaratively defined workflow? Why can't we strive to create standards such that a web API agent can start at Salesforce.com's future API, generate an invoice in Xero's future API, start a project and assign tasks in Clarizen's future API, send email notices via SendGrid, and orchestrate anything else that is needed? And all with a single API client based on a declarative workflow? Think of the tooling we could build if we had this. Think of the economic value we could create.
One approach is to start with HAL and bring the other stakeholders in to identify what HAL is missing to be able to achieve global web API interoperability. I know Kevin Swiber would like it to be Siren, and looking at the two I see that Siren addresses issues that HAL does not, but I think we need to start somewhere. If we don't start somewhere we'll continue with our Balkanized landscape and forgo what I think can be an incredible value-creation process on the scale of the value that the commercial web brought to us over the past 2 decades.
Thanks too for your response.

Sorry if the term "promoted" had a negative connotation you'd prefer to avoid; it wasn't my intention to imply a negative. Would "advocated" be a better word to use?
That's fair. OTOH it's really the fact that so many are working on independent implementations, and probably nothing yet addresses all necessary aspects.
I hope I didn't imply that you were ignoring each other. Frankly, this is a very congenial community as mailing lists go.
URLs and HTTP and HTML were experimental in 1993-1995 time frame. But TBL understood that one web was infinitely more powerful than many. I think one web of APIs would be a lot more powerful than a collection of different approaches.
Truth be told, that might be why we have continued Balkanization! :)

My personal experience is that I'm very opinionated about my coding styles and formats etc. However, over my ~25 year career I have found myself adopting conventions I previously hated simply because I moved to a different platform, and now I hate the conventions I used to use. More importantly, the conventions that I come to accept and adopt that fly in the face of my own preference are the ones that a standard (or de facto standard) dictates. If I had designed HTML or PHP or Javascript or JSON they would each look very different. But I didn't design them, and so I happily use them every day.

I think that the same would occur with a web API standard that attempts to address the 80th-percentile use-case. Some people want their JSON data in the root and want their control values to be underscore-prefixed well-known keys. Others want their JSON data as sub-objects and their root values to represent metadata. And a whole bunch of other approaches. And I have my preferences. But you know what, if Mike and Mike and you (Kevin) and Kin and Jørn and Glenn and Brian and Andrei and Felipe and Luke and Ted and Pat and Peter and Sam and Steve and Mark and anyone else I missed all came together and decided on the approach that I most hated, I'd still gleefully use it moving forward, because I'd know that most efforts towards interoperability would be implemented with it in mind.

So my life experience tells me that maybe personal preferences are not as fundamental as we might feel they are on any given topic.
I agree 100%.
I'd love to, but I have personal situations that might make taking the leadership role difficult. I also have a small consulting practice that currently depends on my billing to supply me with enough income, so I can't really take on the role without sponsors.

Honestly, I didn't email with a goal of establishing myself in the role, but if that's what it takes and there are some companies who would be interested in sponsoring the effort, I could consider cutting back on the consulting and focusing more on evangelizing and coordinating this. Even so, that's not my first choice.
Glad to hear it.

Any other takers?
I've been reading this thread with interest.
@mike schinkel - are you trying to achieve pure machine automation?
Given a generic client that only understands this tbd web api standard, you could declaratively point it at sales force and xero and have a functioning app without having to tell it about the particulars of each api like sf invoice-number = xero invoice-id?
Yes, pure machine automation would be the ideal scenario, though I doubt we'll get there 100% in the near term. But like calculus, focused on approaching the limit, I'm thinking we can get ever closer.
Imagine we have a collection of standards for web APIs just like we currently have a collection of standards for URLs, HTML, HTTP, media types, et al., and of course web APIs would leverage most of those standards.

Now imagine that we have a canonical open source client implemented in Javascript, and imagine that we pass it a Javascript object, or tell it to load the object from a JSON file, and that object contains the workflow rules in declarative form that tell the API client how to orchestrate a session, where a session is more than a simple transaction. The rules may look something like this (ignoring that it's not JSON syntax):
customer = get-customer using http://api.salesforce.com for id=12345
xero-api = http://api.xero.com
customer = convert customer using xero-api from salesforce
invoice-items = build-items from getMyInvoiceItems(project)
add-tasks with project, getMyProjectTasks()
project = new-project from getMyProject()
halt-if project.cost > customer.credit
invoice = create-invoice using xero-api with customer, invoice-items
formattedInvoice = format-invoice using http://localhost/formatInvoice with invoice
send using http://api.sendgrid.com with formattedInvoice
add-project using http://api.clarizen.com with project

Imagine the client that can orchestrate this with only something like the above as input. Admittedly the above hand-waves over 1000+ details, but we have to start with a vision if we are going to know what we want to achieve.
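A toy sketch of how a client might execute rules like these. Everything here is hypothetical -- the verb names, the step shape, and the stubbed API calls -- since the whole point of the proposal is that the real verbs would be resolved through standardized hypermedia controls rather than hard-coded functions.

```javascript
// Minimal declarative-workflow runner in the spirit of the rules above.
// Each step names a verb; verbs read from and write to a shared
// environment of named results. All names here are illustrative.
function runWorkflow(steps, verbs) {
  const env = {}; // named results shared across steps
  for (const step of steps) {
    const verb = verbs[step.verb];
    if (!verb) throw new Error(`unknown verb: ${step.verb}`);
    const result = verb(env, step); // a real verb would call out to an API
    if (step.assign) env[step.assign] = result;
    if (step.haltIf && step.haltIf(env)) break; // e.g. project.cost > customer.credit
  }
  return env;
}

// Two steps with stub verbs standing in for real API calls.
const env = runWorkflow(
  [
    { verb: "get-customer", assign: "customer", id: 12345 },
    { verb: "create-invoice", assign: "invoice" },
  ],
  {
    "get-customer": (env, step) => ({ id: step.id, credit: 500 }),
    "create-invoice": (env) => ({ for: env.customer.id, total: 120 }),
  }
);
```

The hard part the sketch hand-waves over is exactly the part the proposed collaboration would need to standardize: how a client discovers, at each API, which verbs exist and how to invoke them.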
Now work backward and ask "What technologies do we need to implement a client such that it is capable of the above?"
Let's assume that HAL and Siren and Cj all continue to be in use. But each of those would need to be enhanced for the features they are missing; otherwise they could not be used to orchestrate a session like this.
Our client needs to be able to contact the API and figure out its auth type and what web API "language" it speaks. And then it would proceed to orchestrate the workflow.
It would need to be able to call on conversions, both via web service and from local functions. It would need to be able to understand what actions are available and how to call them, probably in a coupled way (some things *have* to be coupled.)
This client would need to be able to query APIs for named entities, and it would need some method to discover and refer to them, also in a coupled way.
And actions and entities are probably related, i.e. "create-invoice", "add-tasks", etc.
As I write this it seems to me that this is not really that much of a stretch. It just needs guidance and collective support from the community that would all strive to see it become reality, sooner than later.
On Feb 10, 2013, at 1:21 PM, Felipe Sere <felip...@gmail.com> wrote:
If I understand you right, you are not interested in a single, all-mighty application/vnd.this.is.the.type.of.life+json (or xml) kind of standard (though not against it if it comes to it), but rather something like common message blocks which have a common semantic across multiple messages. Think of it like the way you address an envelope: it's common across the globe and works across different shapes and sizes of boxes/letters. If that is what you mean, then I could really get behind that idea.
Yes.
Basically it's all about defining common hypermedia controls and not so much the data in a message. Really. Who cares about the content.

If we had a common way of defining hypermedia controls, and the H-Factors give enough ground for discussions, then you can have a common client to navigate the APIs.
Yes. See above.
Mind you, the thought of having an autonomous Agent crawling across APIs autonomously kind of reminds me of the twins in the Matrix.
heh. ;)
The thing I feel like doing is taking Cj, HAL and Siren and comparing how they implement the different H-Factors.

I honestly don't care about anything else. I believe the only thing that should be standardized is the way to transition state with and between APIs.
If that approach is workable, that would be excellent. I have no idea at this stage whether it is workable or not.
I'll see if I can draw up a comparison of the different types with respect to the H-Factors. Might even be useful for work :)
That would be very cool!
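To illustrate what such a comparison might surface: the same "next page" transition expressed in HAL and in Siren, normalized into one internal control shape. The two input documents follow the respective format drafts; the normalized `{rel, href}` shape is my own assumption, not part of either spec.

```javascript
// The same link control in two formats.
const halDoc = { _links: { next: { href: "/orders?page=2" } } };
const sirenDoc = { links: [{ rel: ["next"], href: "/orders?page=2" }] };

// HAL keys links by relation name under _links.
function normalizeHal(doc) {
  return Object.entries(doc._links || {}).map(([rel, l]) => ({ rel, href: l.href }));
}

// Siren uses a links array where each entry carries an array of rels.
function normalizeSiren(doc) {
  return (doc.links || []).flatMap((l) => l.rel.map((rel) => ({ rel, href: l.href })));
}
```

Once both formats normalize to the same control shape, a single client can follow state transitions without caring which format served them -- which is the H-Factor argument in miniature.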
@mike schinkel: have you ever thought about the complexities of such a canonical client? Even if you introduce some kind of "rule configuration" you are pretty much down in the "planning problem" (an automaton calculating the steps in a procedure to achieve a goal), coupled with a translation problem (an ID is not the same across APIs, e.g.), which means you'd need some semantics behind it, which is a big thing that no one outside the scientific community really uses.
I'm assuming the work we'd do would approach the ideal, but might not reach the absolute ideal for a very long time, if ever. The microcode in each of our computers is incredibly complex, but that doesn't stop non-techies from using their smartphones. The point being, the evolution of technology is a process of layering simple technologies over the top of technologies that came before. Eventually the technologies become complex enough that they are indistinguishable from magic; see rule #3 of [1].

The idea is to start building small layers, and if we do, over time we'll get there, but in the meantime things get incrementally better each step of the way.
For my bachelor's thesis I played around with business process composition based on services, and even disregarding the translation issue, the result is always pretty ugly, if at all computable...
I've been building the type of software I talk about all of my professional development career, albeit on very small scales. But I've had something like this vision for the web since around 1997, and I truly believe we have had enough layers and enough convergence that the idea that 5-10 years from now this can all exist to address the 80th percentile is not unrealistic. Quoting [2] Bill Gates:
“Most people overestimate what they can do in one year and underestimate what they can do in ten.”
I'd much rather empower humans to work easier with power tools rather than trying to accommodate every scenario within a canonical client.
I'll repeat what I said to Jørn:
This is for the 80th percentile use-case. The other 20 percentile will continue with business as usual; no capabilities will be taken away from them.
From my point of view, if you design common hypermedia controls on the server side, all the client has to do is follow the provided controls and populate and transform values.
I'm not actually smart enough to know what the technology to achieve interoperability will ultimately look like, but I do believe I'm smart enough to envision it is possible if the right people come together with a common goal.
--
Been reading / observing this thread.

High-level thoughts: I can see plenty of value in coming up with some agreed-upon practices, idioms, and conventions for hypermedia APIs.
Well said. You were able to capture the gist much more concisely than I was able to.
I am skeptical of the value of "jumping" around from API to API.
-Mike

Probably so. Humans often browse by whim, without a set plan and often without a set goal. Machines need a set of instructions which effectively set up a goal for them. Starting a process to agree on practices, idioms, and conventions can move us in the direction of being able to process a set of instructions with decreasing coding effort and increasingly less coupling.
--
I definitely agree with this statement: "Starting a process to agree on practices, idioms, and conventions can move us in the direction of being able to process a set of instructions with decreasing coding effort and increasingly less coupling."

As long as collaboration doesn't lead to a ton of complexity, unreasonable constraints, and analysis paralysis, I am all for it :-)
Agreed with that!

So it seems like there are at least a few people who like the idea of establishing some agreed-upon practices, idioms, and conventions for web APIs. Not knowing any better, my next step would be to create a GitHub account for the initiative that is not tied to any one company but instead collectively held by a leading group of stakeholders. Then I expect we'd establish a repository/wiki where we'd start by documenting some current formats and current authentication protocols.

I'll assume that this Google group will be appropriate for continued discussion and that there is no need for another group? I'll also assume we'd use the GitHub ticketing system to keep track of specific todos and sub-proposals? Not sure the best way to keep track of a list of people who want to consider being participants; a page on the wiki maybe? If that doesn't make sense then I'm all ears; I've not actually organized such a thing before.

If it does make sense then we probably need a better name than "Interoperability Group for Web APIs"; IGWA just has no ring to it. I started with "Web API Interoperability Group" but WAIG has already been taken [1]. So how about some name proposals; what will this initiative be called? And what's the best online tool to capture votes to determine which name is liked the best?
On 10 Feb 2013 23:13, "Glenn Block" <glenn...@gmail.com> wrote:
>
>
> As long as collaboration doesn't lead to a ton of complexity, unreasonable constraints, and analysis paralysis, I am all for it :-)
>
I'm pretty sure this would also have been the sentiment during the beginnings of <insert over-engineered technology here>.
Cheers,
M
--
Sounds like a problem for a subcommittee..!?!
Ok ok, I'm done now. ;)
In my experience, it's very much an art form to develop and deliver something that is useful, easy, robust, and flexible. All of these attributes are required for the killer API.
That said, I have a few comments based on past experience.
You can implement super-advanced features on the API service side but I have found the developer audience which leverages web APIs appreciates pure simplicity.
With simplicity, the learning curve is greatly reduced---how many developers actually read doc cover to cover if they don't already have a stake in its usage (e.g. client specifies it, they are a business partner with the service, etc.)?
Most developers are interested in finding libraries, services, endpoints, content, etc. that will solve specific problems quickly and easily. Extraneous parameters and options, however useful, may actually distract this audience (who, again, has no previous stake or attachment to the service)---I jokingly call it "developer A.D.D." and I believe it makes or breaks APIs as products. Those extra options, IMHO, can also contribute to code bloat, can reduce performance, and can limit flexibility.
I can go on and on here based on personal experience... I work with veteran programmers who have been accustomed to releasing additional functionality on top of existing SDK libraries year after year...the old COM model (which becomes an unwieldy monster over time) ...enough said.
If you look at how Amazon's Simple Storage Service (S3)---a hyper-robust and scalable NoSQL key-value store---exposes pagination options for listing or enumerating those keys to clients, you will see that they opt for simplicity and flexibility over features, thus empowering the service's clients to wire up their own solutions based on their specific (and, hopefully, market-driven) use cases.
Amazon S3 List Objects pagination:
1. Results (list of keys/unique identifiers) are *always* guaranteed to be returned/enumerated in the same order
2. An optional 'max-keys' parameter can be specified to explicitly limit the number of keys returned in the response.
3. An optional 'marker' parameter can be specified to explicitly indicate where to start the enumeration of the list of keys.
4. In the response, a 'marker' property is returned which identifies the last key that is returned in the response (if there are more keys)...It's where you can start from in your next web request.
Lots of flexibility here IMHO whether you are consuming this service in a cpu/memory/bandwidth-limited mobile app or a multi-threaded high memory ETL workflow on a server app.
Thanks,
Tony
I don't work for Amazon ;)
Ref: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html
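The four S3 guarantees above reduce to a very small client-side loop. Here is a sketch of that marker-based pagination pattern; `listPage` is a stand-in for the real List Objects request (which returns XML with `IsTruncated` etc.), so the field names in this stub are illustrative, not the AWS wire format.

```javascript
// Marker-based pagination in the S3 style: ask for up to maxKeys keys
// starting after `marker`, and resume from the last key returned while
// the listing is truncated.
function listAllKeys(listPage, maxKeys = 2) {
  const keys = [];
  let marker; // undefined => start from the beginning of the key space
  do {
    const page = listPage({ marker, maxKeys });
    keys.push(...page.keys);
    // Guarantee #4: the last key of a truncated page is the next marker.
    marker = page.isTruncated ? page.keys[page.keys.length - 1] : undefined;
  } while (marker);
  return keys;
}

// Stub backend over a fixed, always-ordered key list (guarantee #1).
const all = ["a", "b", "c", "d", "e"];
const stub = ({ marker, maxKeys }) => {
  const start = marker ? all.indexOf(marker) + 1 : 0;
  const keys = all.slice(start, start + maxKeys);
  return { keys, isTruncated: start + maxKeys < all.length };
};
```

Because the server promises only ordering and a resume point, this same loop works unchanged for a memory-limited mobile client (small `maxKeys`) and a server-side ETL job (large `maxKeys`), which is exactly the flexibility-over-features trade-off described above.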
And the obligatory xkcd: http://xkcd.com/927/
Exactly! That was the XKCD that came to mind when I first saw this email.

Just sayin'... :)
--
Would you be willing to start a GitHub repo under your own account until the Sub-Committee on Naming adjourns their quarterly meeting?

If you could take a first pass at jotting down some high-level ideas, that would be great. We can always transfer ownership to an org later.
Thanks for the suggestion.
you could take a first pass at jotting down some high-level ideas, that would be great.
Recognizing that history favors action I created a GitHub account, gave it a name and was writing the wiki main page when you emailed.
We can always transfer ownership to an org later.
That's what I figured, even with the name. We can easily rename the effort if we like. Here's the repo:
I'll post again once I've added the first draft of the wiki main page online.
As promised here is the very rough first draft for the "Web API Interoperability Initiative (WAI2)" (tentatively named):
When you read it, please appreciate it is only a *starting point*; the journey of 1000 miles starts with the first step.

Comments, criticisms and suggestions for improvement are not only welcome but encouraged; here, in other threads on this list, and/or using the GitHub issue tracker:
Thanks in advance.
I wanted to publicly throw out my support for The Web API Interoperability Initiative.
Rather than focusing on a service-oriented architecture, you seem to lean towards a behavior-oriented architecture.
Maybe, but I would not want to put a label on it this early for fear that incorrect labeling could create tunnel vision resulting in unconscious bias, a stall, or an otherwise poor outcome. I'd rather us ask the question "What is the MVP that makes web APIs interoperable and easiest to consume?" and start from there.
User makes a query for a particular goal X with location info and possible additional constraints: I want X [location], …
Each potentially interested Server returns to the Client a behavior to follow to generate goal X, as code on demand.
Behavior is in the form of various sequences of activities to take place to generate X.
Behavior can be encoded in many different manners such as workflows, scripts, code, hierarchical plans, state machines, behavior trees… let's ignore this for the moment.
Client is then free to execute any behavior of interest that is the most likely to meet the user goal. Some decisions may be needed during execution on the client side…
Activities get performed on the Server side by following action links…
As activities are performed, Server generates an activity stream that can be used to propagate the story in the user's social context (story-telling). Friends of friends can repeat that story and activities, making the API stick.
There is some light semantics involved to make sure we all talk about the same things… The activity stream protocol is already well on its way for this…

So this is not a stretch. I am thinking of implementing this for the international disaster community. Since location does matter in that context, this has been coined Open GeoSocial API (to contrast it with Open GeoSpatial API). Are we talking about similar things?
Regardless of what I said above, that does sound a lot like what I'm envisioning, albeit I fear limiting to only that in case the best solutions would include things outside these realms.
The activity stream protocol is already well on its way for this…
I was vaguely familiar with this but hadn't followed it because I decided to let others worry about social networking. How would you envision this being used when activity streams[1] seem to be organized around the activities people take on social networks? (Pseudo-)code examples would help. (Yes, the more we can leverage the work others have previously done the better.)
On Feb 12, 2013, at 8:21 AM, Pat Cappelaere <cappe...@gmail.com> wrote:
*snip*
On Feb 12, 2013, at 2:10 PM, Mike Schinkel <mi...@newclarity.net> wrote:
*snip*
Problem is that everyone claims interoperability of the standard APIs they engineered (see Open GeoSpatial Consortium API standards http://www.opengeospatial.org/)
I definitely understand claims that don't meet reality. My interest here is only in pragmatic solutions that really do meet reality. That means, by definition, we cannot solve 100% of problems. The initial HTTP+URI+HTML didn't solve 100% either, but it did a damn good job of meeting many unmet needs and growing from there.
The better question is what kind of API is necessary to put the user back in control? (IMHO) I hope that this is the intent here.
My interest is in minimizing the time it takes to go from realizing that you need a solution built with web APIs to actually having that solution in place. I don't anticipate that we'd attempt to enable end users but would instead enable developers to build more rapidly and with less testing.

That said, if developers can build more rapidly with less testing then it will be much easier for them to build tools that end users can leverage.
The user sets the goal and could get guided in the proper direction via a proper API. This is a higher-level API that would need to be defined. It certainly needs a good name such as user-centric, goal-oriented or behavior-oriented. What would be the term that sticks the most?
If I'm understanding what you mean I think trying to directly empower end-users is out-of-scope for this effort but again this effort should make it easier for developers to address their needs.
Regardless of what I said above, that does sound a lot like what I'm envisioning, albeit I fear limiting to only that in case the best solutions would include things outside these realms.

I do not mind keeping the scope pretty wide, but a small win would go a long way in that area, I think. You will have to start limiting the scope fairly soon or nothing will get done.
Funny how we misinterpreted each other. :) I very much think we should keep the scope small just not prematurely label the effort to avoid choosing the wrong labels.
Not at all. Activities are organized around what people do. They have a fairly formal and simple syntax:

[user] [verb] [object] [target]

This makes the output very readable and understandable by most humans. This is the core of "user" activities. We may need to extend the verbs but that's ok. This is an output format. To execute an activity, we need action links. If we are thinking of a user-centric API, it has to hinge on user activities that can easily be described and executed.
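For instance, a single activity and its human-readable rendering might look like this (a sketch; the property names follow the Activity Streams idea but are simplified, and the values are invented):

```javascript
// A single activity in [user] [verb] [object] [target] form.
const activity = {
  actor: "alice",
  verb: "posted",
  object: "photo-42",
  target: "album-7"
};

// Rendering it as the human-readable sentence the syntax implies;
// a missing target simply drops out of the sentence.
function toSentence(a) {
  return [a.actor, a.verb, a.object, a.target].filter(Boolean).join(" ");
}

console.log(toSentence(activity)); // "alice posted photo-42 album-7"
```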
I'm not thinking of a user-centric API, but am thinking that a motivated and reasonably intelligent person willing to learn Javascript should be able to learn how to work with the solutions in a self-taught manner, but it still requires programming of some sort. At least in the first phase.

I think if we were to attempt to enable the end-user at this point we'd fall into the trap of trying to define solutions that require too much technology to be built at once. After seeing the approach Microsoft initially took with ASP.NET Web Forms vs. AJAX & JSON, I think that perfecting little layers before moving onto the next layer makes a lot more sense.
It also hinges on a representation to chain activities together so that a user or user-agent can easily follow the template and execute the action (via links). What is on the table at the moment is to use behavior trees (BTs) represented in some JSON format. BTs come from the game AI community (http://aigamedev.com/open/article/behavior-trees-part1/) and are very popular for encoding behaviors. This was the idea of Stuart Charlton at RESTFest 2012; he called it Linked Behavior Trees. There is a video on the site: http://vimeo.com/50215125
I remember watching Stuart's video. His concepts seemed interesting and I was excited to learn more but left wanting, as it was all conceptual; I wanted to see proof-of-concept work. I've recently realized that I am what I now call a "Code-learner" as compared to a Visual-learner, Auditory-learner, etc., so it was hard for me to get much value out of his talk.
So we could have BTs retrieved on demand (after a goal-oriented query of some sort) and executed on the client side. As activities get performed, they are output as activity streams (which contain links back to the original event). The problem is how to publish the goals that could be met by a particular Provider.
That's pretty advanced, and will likely be down the road, but I'm not sure it would be practical to try to bite all of that off up-front. We probably have a lot of other things to formalize and validate in the wild before we can get to this advanced level. At least that's my first reaction; I would love to hear others' take.
Facebook Open Graph does take a stab at this with a very lightweight RDFa approach. https://developers.facebook.com/docs/coreconcepts/
FWIW I'm lukewarm on RDF for the primary reason that it is not easy to grasp except by skilled and/or very motivated people. I don't think it makes a good component for a web-wide general solution (if it did, wouldn't every website be using it for the "semantic web" by now?)
I have a few presentations on the topic on slideshare if you are interested.
-Mike

Please post links. More relevant content helps.
One thought around segmenting (or maybe augmenting) the specification/patterns. Given the recent OAuth2 specification issues on how consumer and enterprise patterns combined to make one set of "weird" patterns -- would it be useful to identify enterprise and consumer based patterns and maybe separate the efforts?
Very much definitely. Except I would ask if we really want to label them as "consumer" or instead something different. I would not define many of my own use-cases for web APIs as "consumer", but they are definitely not "enterprise" either, except in cases when enterprises adopt solutions that were not developed with them in mind.
As an example, one of the challenges with enterprise APIs (or even multi-tenant SaaS) is how user and account context is described. It may be hard to come up with a set of patterns that meets enterprise needs without polluting the general patterns and making them difficult to understand.
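To illustrate the challenge (both conventions here are invented for the example, not recommendations): the same request can carry account context in the URL path or out of band in headers, and patterns written assuming one style break under the other:

```javascript
// Account context in the URL: every resource path embeds the tenant.
const pathScoped = {
  method: "GET",
  uri: "/accounts/acme/users/42/reports"
};

// Account context in headers: clean paths, but the context is invisible
// to anything that only looks at URLs (caches, logs, link relations).
const headerScoped = {
  method: "GET",
  uri: "/reports",
  headers: { "X-Account-Id": "acme", "X-On-Behalf-Of": "42" }
};

// A client (or gateway) that must support both ends up with code like this.
function accountOf(req) {
  const m = req.uri.match(/^\/accounts\/([^/]+)\//);
  return m ? m[1] : (req.headers && req.headers["X-Account-Id"]);
}

console.log(accountOf(pathScoped));   // "acme"
console.log(accountOf(headerScoped)); // "acme"
```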
I don't know if it would be better to have a follow-on spec, but much of the debate around what is a "good" pattern will come down to what type of software the pattern is designed for.
I would love to have you (or someone else) detail those challenges in an issue or for the wiki or even your own blog, if possible.
How can we prevent the slowdown that occurs when the two perspectives collide?
This may be wishful thinking, but... My thought is that we simply document the type of enterprise use-cases that Eran felt derailed OAuth2 and address them by enabling enterprises to code exceptions that are outside the scope of our solutions, and that we stick to only specifying the 80th-percentile solutions. How that turns out in practice remains to be seen.
Who is this API for anyway? :)
What I have been envisioning is anyone willing to learn basic web technologies[1] and Javascript, or another programming language but not necessarily a full-time professional web developer.
"Since your process is what your users want, just give that to them! This is the essence of hypermedia: use the links, forms, and other affordances to guide your users through your business processes and workflows.
This is also why we design state machines as part of the design process: they model these workflows, and that’s what we want to expose.
By exposing your workflow rather than your data model, you’re free to change data models at a later time. Your clients don’t have to duplicate your business logic, they can just follow the hypermedia and let it guide them through. This also means that your clients will be more resilient to change, due to this decreased coupling. If you offer a new workflow, a well-made client will be able to automatically present it to your users, with no code updates."
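As a sketch of what "exposing the workflow" could look like on the wire (the rel names, URIs and shape here are invented for illustration, not taken from the book):

```javascript
// Each state of the workflow is a response whose links say what the
// client may do next; the client follows rels, not hard-coded URLs.
const orderResponse = {
  status: "unpaid",
  links: [
    { rel: "payment", uri: "/orders/12/payment", method: "POST" },
    { rel: "cancel",  uri: "/orders/12",         method: "DELETE" }
  ]
};

// A client that only knows which rel it wants stays decoupled from the
// server's URL layout and data model.
function follow(response, rel) {
  const link = response.links.find(l => l.rel === rel);
  if (!link) throw new Error("workflow does not allow: " + rel);
  return link; // in real life: issue the HTTP request described here
}

console.log(follow(orderResponse, "payment").uri); // "/orders/12/payment"
```

If the server later moves payment to a different URI, or removes the "cancel" link once an order ships, a client written this way keeps working without a code update.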
I've been holding my comments for a few days because I've been trying to process your input (and had some client work to do too. :)
In my field, we have what we call neo-geographers: users who know just enough to mash up some data, put it on the map with a few lines of Javascript and get what they want done. Are they who you are targeting? [It certainly is in my case.]
I may be misunderstanding your question so I'll answer it both ways I think I could have interpreted it.

1.) Is my idea to target neo-geographers? No. That would be far too narrow a use-case, and I have no expertise in that area.

2.) Is my idea to target end-users who would enter in data and a few lines of Javascript? No. End users are too far down the pipe from where I believe we are right now. Consider us like a remote African village that has just gotten clean water. Now we need to be thinking about standardizing plumbing and sewage so we can get most people to have clean water in their homes and maintain sanitation; we're really not ready for a multi-theater entertainment complex yet.

And that's not to say it's not a great goal to empower end-users, nor that it shouldn't be a goal to build that on top of what we do after what we envision reaches maturity, but there's a lot that's got to come between that ideal and where we are now, or we'll end up with the type of problems Microsoft had with ASP.NET WebForms (vs. AJAX and JSON) because they tried to push the technology too far, too fast.

Frankly, I think that today end-users need to be looking at things like Zapier, ItDuzzit, IFTTT, etc.
Facebook RDFa may have to be done by developers but this makes search by clients much more effective and powerful. They simply use Open Graph. I would not discount its use.
Facebook's Open Graph is interesting because many people are leveraging it. RDFa itself hasn't had tremendous adoption except with things like Facebook's Open Graph, where the interest in leveraging Facebook's data overwhelms the natural friction against a technology that otherwise does not see a lot of organic adoption.

That said, RDFa is attribute-level extensions to (X)HTML, and what I've proposed is limited to focusing on JSON. So yes, there is value in Facebook's Open Graph, and I'm sure it will make sense at some point to accommodate representation of the Open Graph data in JSON somehow, but Open Graph is not where I'm headed with this.
Similarly, BTs may be implemented by developers and simply executed by users/user-agents. This makes their lives so much simpler. Just follow links… BTs do not have to be that complex. They can run in a web page: http://machinejs.maryrosecook.com/

So imagine that you fetch one or more BTs, load that library and here you go… you meet your goal.

Several slide decks on my slideshare that you may want to look at: …
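To show how small a BT interpreter can be, here is a sketch (the node types and JSON shape are invented for illustration; a real library such as machine.js does much more):

```javascript
// A behavior tree as plain JSON: a "sequence" succeeds only if every
// child succeeds; a "selector" succeeds on the first child that succeeds.
const tree = {
  type: "selector",
  children: [
    { type: "action", name: "use-cached-map" },
    {
      type: "sequence",
      children: [
        { type: "action", name: "fetch-imagery" },
        { type: "action", name: "render-map" }
      ]
    }
  ]
};

// Minimal interpreter: `actions` maps action names to functions that
// return true (success) or false (failure).
function run(node, actions) {
  switch (node.type) {
    case "action":
      return actions[node.name]();
    case "sequence":
      return node.children.every(child => run(child, actions));
    case "selector":
      return node.children.some(child => run(child, actions));
  }
}

const log = [];
const actions = {
  "use-cached-map": () => false, // cache miss, so fall through...
  "fetch-imagery": () => { log.push("fetch"); return true; },
  "render-map":    () => { log.push("render"); return true; }
};

console.log(run(tree, actions)); // true
console.log(log);                // ["fetch", "render"]
```

The tree itself is just data that could be fetched from a server on demand; only the tiny interpreter ships with the client.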
I spent a good two hours reviewing all this, and what I came away with was I still don't really understand it. I'm sure if I worked at it I could learn it, but to me it is too abstract for what I'm envisioning. If I can't quickly understand it then I highly doubt it would be something that would get the high adoption I'm envisioning for what we work on.

To understand my perspective on "easy to understand" I need to tell a story. I launched a meetup group in 2007 called "Atlanta Web Entrepreneurs" to be able to coordinate and learn from a peer group. It grew to almost 2000 members with monthly meetings of 100+ people before I burned out in 2010 and wound it down. I really wanted to be the dumbest guy in the room and learn from smarter people, but it turned out that it was mostly non-technical entrepreneurs (and many "want"trepreneurs) trying to understand technology, so the group turned into me teaching other people about technology, and I learned that most struggled with things I thought were simple.

I was also a programming instructor from 1987 to 1994 and learned a lot about what people find difficult to understand during that time.

What I envision has got to be brain-dead simple, and got to be built in many small layers, each of which is really easy to understand. Your idea of behavior trees could very well be something that is added on top of the things I'm envisioning, and then your desire to target actual end-users could be layered even on top of that. But I'm working towards solving much simpler problems first.

Now maybe I'm the only one who thinks this way, in which case my attempt to coordinate people will be viewed in hindsight as a lot of hot air. Or maybe not, and it will be successful like I envision. Only time will tell.
To continue the discussion a bit further, I would like to recommend Steve Klabnik's upcoming book "Designing Hypermedia APIs".
In Chap 13, Steve says:
"Since your process is what your users want, just give that to them! This is the essence of hypermedia: use the links, forms, and other affordances to guide your users through your business processes and workflows.
*snip*
Users really want the output of the process (not the process itself, but this is a minor detail). Users want to reach a goal that can be achieved by following what Steve calls a workflow. Workflows are really implementation-specific… so I would rather call these behaviors to stay at the user level. But this is the same concept at play.
I think Steve's use of "workflow" is a lot easier to comprehend for the average person vs. behavior trees; the latter is more abstract and thus harder to grasp. Most business people have some clue what a workflow is, yet everyone who hasn't previously been exposed to the term "behavior trees" will require at least some education to understand what it means in practice. I think BTs are probably a great concept for advanced people who are solving difficult problems, but not for foundational work designed to be as easily understood by as broad a group as reasonably possible.
There will be another one at the New York Strategy Conference… Are you going to be there?
Unfortunately, no. I didn't sign up in time, and also I'm reminded of my travels to NYC in February of last year, when I promised never again to travel to the snow belt at that time of year if I could avoid it (I have a foot problem that makes travel painful, especially in cold weather).
What if you were to show us some code/examples to see what you mean? I really want to grasp what you are talking about.
-Mike

Read my later reply to Daniel Roop; I think he was spot on with his ideas about a User-Agent. I don't have code to show because I'm still conceptualizing all this, but I do think what Daniel mentions creates a great place to start.