Implementation of such an environment moves away from versioning at an overall application level and towards versioning at a domain component level, allowing versions to float between different domain components. These "domain components" are connectors, each paired with the client(s) that speak a given API version; they translate RESTful requests into calls on the existing business logic APIs. They also provide additional services where necessary, such as mediating mismatched transactional requirements against the internal APIs.
At the REST API level, a container path handler first multiplexes on domain scope, then on version, such as "/userdomain/v5/*". This reduces complexity in path resolution and allows for multiple concurrent versions of domain components to be resolved cleanly. So long as a handler for a specific domain and version is available for resolution by the container, a given client can connect (regardless of how many releases an individual connector has participated in).
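To make the two-stage resolution concrete, here is a minimal sketch of such a handler in Java (all class and package names are hypothetical, and real containers would do this with servlet mappings or OSGi HTTP whiteboard registrations rather than a hand-rolled map):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of a container-level handler that multiplexes first on domain
// scope, then on version, before delegating to a versioned connector.
public class DomainVersionDispatcher {
    // Keyed by "domain/version", e.g. "userdomain/v5". Each connector is a
    // function from the remaining path to a response body (stand-in types).
    private final Map<String, Function<String, String>> connectors = new HashMap<>();

    public void register(String domain, String version, Function<String, String> connector) {
        connectors.put(domain + "/" + version, connector);
    }

    // Resolves a request path like "/userdomain/v5/accounts/42".
    public String dispatch(String path) {
        String[] parts = path.split("/", 4); // "", domain, version, rest
        if (parts.length < 3) {
            throw new IllegalArgumentException("no domain/version in path: " + path);
        }
        Function<String, String> connector = connectors.get(parts[1] + "/" + parts[2]);
        if (connector == null) {
            throw new IllegalStateException("no connector for " + parts[1] + "/" + parts[2]);
        }
        return connector.apply(parts.length > 3 ? parts[3] : "");
    }

    public static void main(String[] args) {
        DomainVersionDispatcher d = new DomainVersionDispatcher();
        // Two versions of the same domain resolve concurrently and cleanly.
        d.register("userdomain", "v4", rest -> "v4:" + rest);
        d.register("userdomain", "v5", rest -> "v5:" + rest);
        System.out.println(d.dispatch("/userdomain/v5/accounts/42")); // v5:accounts/42
        System.out.println(d.dispatch("/userdomain/v4/accounts/42")); // v4:accounts/42
    }
}
```

The point of the sketch is only that path resolution stays trivial no matter how many concurrent versions are deployed; each registered connector is otherwise independent.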
At the business logic level, the selected client connector has a selection and/or range of versions of a given API that it can work with. This binding happens as part of the packaging lifecycle as provided by the container, assisted by metadata automatically embedded at build-time ("what version range does this package need for each dependency"). It's a lot easier to think of this as a dependency graph rather than as multiple bags of APIs.
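In my OSGi-based stack, that build-time metadata amounts to the bundle manifest's version ranges; a sketch of what a connector bundle might declare (package names and version numbers are invented for illustration):

```
Bundle-SymbolicName: com.example.userdomain.connector.v5
Bundle-Version: 5.2.0
Export-Package: com.example.userdomain.rest.v5
Import-Package: com.example.userdomain.api;version="[3.0,4.0)",
 com.example.platform.tx;version="[1.1,2.0)"
```

The `Import-Package` ranges are exactly the "what version range does this package need" edges of the dependency graph, and the container refuses to resolve the bundle if no exporter satisfies them.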
Once established, the projects evolve with very little maintenance: the build environment and container automatically resolve dependency paths, flag missing dependency bindings early (at component load), and reveal unused or orphaned dependencies with simple inspections. In degenerate cases, many different versions of the same code may be loaded and used by different clients as they have evolved. But when considered as a component graph, it's not hard to simplify: release and deploy a new version of an older connector that maintains the exposed REST API whilst updating to the newer business APIs that a majority of other components in the deployment already connect to, then remove the now-orphaned components.
Again, this isn't to diminish the value of well-designed REST APIs. When such foresight is available from inception, this kind of deployment flexibility gets far less exercise. But when that's not possible or temporal constraints close in, it's a great tool to have available.
On Sep 30, 2012, at 3:07 PM, mca <m...@amundsen.com> wrote:
> "...it's quite practical in *closed* systems to decouple internally (at a server-side connector level) instead of at the network level by pairing a versioned client connector on the server side with the version of the client itself."
> I'd like to hear more about this POV. I'm not sure i can conjure up a tangible example of this; can you elaborate?
> On Sun, Sep 30, 2012 at 5:17 PM, Brian Topping <topp...@codehaus.org> wrote:
> One architectural analysis that I've found very helpful is to recognize that some coupling is inevitable in complex, evolving systems, and in turn it's quite practical in *closed* systems to decouple internally (at a server-side connector level) instead of at the network level by pairing a versioned client connector on the server side with the version of the client itself. In that case, the canonicalization of the data happens entirely on the server, and the client is free (because of the coupling between the connector and the client) to do what it needs to get the job done.
> This is especially helpful in "under-resourced agile environments" (wink, wink) that don't have the benefit of proper initial requirements gathering, yet still want to benefit from modular environments in the long term. In time, as resource constraints are removed and de facto requirements emerge, a more noble API such as discussed here can be implemented without the front-loaded risk that might be incurred by teams with less aggregate experience (or more burdensome management that radically changes requirements based on the phase of the moon).
> Key here is that the server stack is capable of efficient multiplexing of a multitude of connectors. For instance, if the SDLC that falls out of a particular stack can't also manage the packaging and deployment of these connectors, all bets are off that this will be a reasonable solution. For instance, I use Java, Maven and OSGi, where OSGi provides the runtime version mechanics and Maven provides the packaging and distribution mechanics. I don't have enough current knowledge about non-JVM based solutions to speak of how this might be done elsewhere.
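As one illustration of the Maven side of that pairing, a build can generate the OSGi version metadata with the Apache Felix bundle plugin; this is only a hypothetical fragment, with invented package names:

```xml
<!-- pom.xml fragment: maven-bundle-plugin writes the OSGi manifest
     (including Import-Package version ranges) at package time. -->
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <Export-Package>com.example.userdomain.rest.v5</Export-Package>
      <Import-Package>com.example.userdomain.api;version="[3.0,4.0)",*</Import-Package>
    </instructions>
  </configuration>
</plugin>
```

Maven then handles distribution of the resulting bundle, while OSGi enforces the declared ranges at runtime.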
> Anyway, my point is that organizational sustainability often trumps masterful APIs, but all is not lost for teams that have limitations on getting everything right in the first pass.
> On Sep 30, 2012, at 12:49 PM, mca <m...@amundsen.com> wrote:
>> good observations regarding mobile, etc. here's the basic guidance that I advocate and teach in workshops, etc.:
>> 1) Take advantage of Separation of Concerns when designing (yes, i used that word) your programming interfaces (APIs).
>> 2) build a solid set of private components (storage, class libraries, business layer, etc.) that know nothing about connectors (HTTP, XMPP, WebSockets, etc.)
>> 3) when setting out an API, start from the use cases, not the components. What do ppl (devs, etc.) want to accomplish? What workflow makes sense for these use cases? Keep in mind that platform and/or device usually represent different use cases, even when attempting to complete the same task
>> 4) implement your interface as a thin layer between the private components (DB, etc.) and the public connectors (Web Server, etc.). This is where you "script" your component calls into a useful solution for the targeted use cases.
>> 5) treat representation work (XML, JSON, CSV, HTML, etc.) as a separate layer so that future calls for new formats or media types do not disrupt other parts of the system
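A minimal sketch of steps 2, 4 and 5 above, in Java, with all names hypothetical (this is an illustration of the layering, not a prescribed implementation):

```java
import java.util.Map;

// Step 2: a private component that knows nothing about connectors.
class BugReportStore {
    public Map<String, String> close(String id) {
        // stand-in for real storage / business logic
        return Map.of("id", id, "status", "closed");
    }
}

// Step 5: representation work as its own layer, so adding a new format
// later doesn't disrupt the components or the interface layer.
interface Representer {
    String represent(Map<String, String> resource);
}

class JsonRepresenter implements Representer {
    public String represent(Map<String, String> resource) {
        return "{\"id\":\"" + resource.get("id")
             + "\",\"status\":\"" + resource.get("status") + "\"}";
    }
}

// Step 4: the thin layer that "scripts" component calls for one use case
// ("close a bug report"), sitting between components and connectors.
class CloseBugReportUseCase {
    private final BugReportStore store = new BugReportStore();

    public String execute(String id, Representer representer) {
        return representer.represent(store.close(id));
    }
}

public class ApiLayerSketch {
    public static void main(String[] args) {
        String out = new CloseBugReportUseCase().execute("42", new JsonRepresenter());
        System.out.println(out); // {"id":"42","status":"closed"}
    }
}
```

A web server connector would call `CloseBugReportUseCase` and pick the `Representer` from content negotiation; neither concern leaks into `BugReportStore`.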
>> This can be done very quickly and easily, even on small scales.
>> good news is this pattern works at the small level and still scales well; even in large organizations.
>> bad news is that there is no magic here; no silver bullet. it still involves attention to detail, focus on users, not data, and iterating to create great APIs.
>> On Sun, Sep 30, 2012 at 3:33 PM, Jørn Wildt <j...@fjeldgruppen.dk> wrote:
>> Interesting statement! I like it :-) But you have to start somewhere, right? I am working on one of those "dump the data for others to use" APIs - in parallel with a few guys who are designing an API with a specific intent (a custom mobile/iPad client).
>> What I see from this is that the mobile API is so narrow that nobody else will be able to use it, since it is driven by very specific client needs. It also means the mobile API puts less effort into backwards compatibility, expecting customers to upgrade their iPad apps ASAP when a new version is out (but maybe they will regret this ... time will tell). But you are certainly right about performance - that is one thing which is high priority on that project.
>> Me, on the other hand, I am trying to create an open API for third party clients to work with - and since there is no specific use case here, well, I end up dumping from one end to another. With a twist, though - it's not the internal data structures I dump, but a well designed choice of names and structures that I expect to live through multiple internal versions of our data.
>> I am also adding features for adding, deleting and changing stuff in the system - but these focus on the business operations available (like "add bug report", "attach document", "close a bug report" or "add comment" or "assign responsibility"). So, well, it does have some kind of intent.
>> What am I trying to say? That sometimes all you have is the intent of, well, dumping the data for others to inspect ...
>> Just my two cents :-)