SP background document

Martin Sustrik

Aug 5, 2011, 8:34:07 AM
to sp-discu...@googlegroups.com
Hi all,

As already mentioned, a high-level background document is needed for SP
so that it can be introduced to the IETF community, which, in many
cases, may not be familiar with business messaging.

Here's my attempt at a 30,000-foot overview of the problem we are
solving within today's status quo of the Internet:

https://raw.github.com/sustrik/sp-docs/master/draft-sustrik-spbackground.txt

It's just an introduction. Obviously, more text is needed. We need a
history of business messaging to put the picture into perspective, but
most importantly we need concrete examples of the different areas where
business messaging is used: financial services, HPC, gaming, web
backends, etc.

It would be great if those with experience in these areas could write
a paragraph or two explaining the usage and the use cases.

Thoughts?
Martin

Martin Sustrik

Aug 8, 2011, 4:39:28 AM
to sp-discu...@googlegroups.com
Hi all,

I've added a section about the history of messaging. It's rather
sketchy and incomplete, though. Specifically, I don't know much about
the history of web services, so the relevant section is marked as TODO.
If you have some background in the area, feel free to supply the text.

Additionally, I've added a short section about messaging-vs-network
interactions.

https://raw.github.com/sustrik/sp-docs/master/draft-sustrik-spbackground.txt

Comments are welcome!
Martin

Alexis Richardson

Aug 8, 2011, 9:53:46 AM
to sp-discu...@googlegroups.com
Martin

I think this is a pretty good start BUT...

1. I don't think the view that 'we are moving to thin clients' follows
from better access to the internet. Instead what is happening is that
more 'apps' are being delivered with a mix of online and client side
capability. But I think your main points about services are
reinforced by this - I would just clarify...

2. I don't follow the argument in section 3.

alexis

Gary Berger

Aug 8, 2011, 10:13:32 AM
to sp-discu...@googlegroups.com
My $.02 (although with the US downgrade, maybe it's $.01 :)

On 8/8/11 9:53 AM, "Alexis Richardson" <ale...@rabbitmq.com> wrote:

> Martin
>
> I think this is a pretty good start BUT...
>
> 1. I don't think the view that 'we are moving to thin clients' follows
> from better access to the internet. Instead what is happening is that
> more 'apps' are being delivered with a mix of online and client side
> capability. But I think your main points about services are
> reinforced by this - I would just clarify...


@gaberger: I think this is bifurcated by iOS/HTML5 client-side technologies vs. the push to VDI, which is primarily about decoupling the operating environment from the edge device. If we take energy to be the primary limiting factor of future technology, we may see even simpler client-side implementations and interactive streaming protocols. See Steve Perlman's (Rearden) introduction to OnLive [1] (long, but worth it). The opposite would be if we start to "cluster" the exponentially growing footprint of "smart" devices, making it possible to run services on a grid of smart devices.


> 2. I don't follow the argument in section 3.

@gaberger: There are several interesting dimensions here which we need to find a way to articulate. The first is port overloading, popularized by the OTT community (i.e. running everything over port 80/443). The second is the need to create differentiated services. Given the former, the network cannot prioritize flows effectively without complex DPI, and as network core speeds grow higher, this drives the need to push the work closer to the edge. If it is closer to the edge, we might as well do it in the application stack, again treating the network as a dumb set of pipes.

We shouldn't be surprised by this: distributed systems are incredibly hard, and the network is a distributed system. There are fundamental flaws in almost every layer of the protocol stack which start to impact our applications at scale. In effect you have a set of unpredictable interfaces, which makes the whole thing very fragile. Given our dependency on IP, it's not likely to change anytime soon, but I feel it will have to change in the future for the Internet to survive. In the meantime, it would be great to push the expectations of network services (name resolution, address binding, QoS, isolation, segmentation, etc.) into a layer decoupled from applications. In this way the underlying protocols can shift to better support next-generation application demands (adaptive bandwidth, intelligent caching, latency guarantees, etc.).
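
As a purely hypothetical sketch of what the contract of such a
decoupled layer might look like (every name below is invented for
illustration; this is not an existing library or a concrete proposal),
the application would declare its expectations and leave their
fulfilment to the layer:

    /* Hypothetical interface sketch -- all names invented. */
    #include <stddef.h>

    typedef struct mw_endpoint mw_endpoint_t;

    /* Name resolution and address binding hidden behind a service
       name; the application never sees IPs or ports. */
    mw_endpoint_t *mw_connect (const char *service_name);

    /* Expectations (QoS, isolation) are declared, not implemented
       on top of raw sockets by every application. */
    int mw_require_latency_ms (mw_endpoint_t *ep, int max_latency_ms);
    int mw_require_isolation (mw_endpoint_t *ep, const char *segment);

    /* Plain message passing once the expectations are in place. */
    int mw_send (mw_endpoint_t *ep, const void *buf, size_t len);
    int mw_recv (mw_endpoint_t *ep, void *buf, size_t len);

Once such expectations live below the application, the underlying
protocols are free to evolve against them.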



> alexis


[1] http://engineering.columbia.edu/video-magill-lecture-steve-perlman

Martin Sustrik

Aug 8, 2011, 10:50:17 AM
to sp-discu...@googlegroups.com, Alexis Richardson
Hi Alexis,

> I think this is a pretty good start BUT...
>
> 1. I don't think the view that 'we are moving to thin clients' follows
> from better access to the internet. Instead what is happening is that
> more 'apps' are being delivered with a mix of online and client side
> capability. But I think your main points about services are
> reinforced by this - I would just clarify...

Yes. But again the question is: What's the motivation for developers
to deliver applications as a mix of server- and client-side
functionality? Better access to the Internet is definitely a piece of
the puzzle, e.g. Google Docs wouldn't have made sense 15 or even 10
years ago, but it is definitely not the only piece.

Please, if you feel there's more to say about the topic, propose the
text and I'll place it in the document.

The same applies to the history section: I feel that my view is heavily
skewed by looking at web services from an enterprise-messaging point of
view. The only way to overcome the bias is for others to express their
views.

> 2. I don't follow the argument in section 3.

In short: to the underlying network, web service traffic looks like a
mess of TCP packets, all passed over port 80 and all containing random,
undecipherable XML. The question asked by section 3 is whether further
development of the "service" layer is possible with no help from the
network.

The answer may well be "yes", meaning that the development of the
network as such is basically done and in the future we are going to see
just more of the same (higher wire speeds, etc.).

This option is actually pretty compelling, as it makes the layering very
clean: the network is responsible for passing packets between the
endpoints, while applications are responsible for business logic.

However, the emergence of phenomena such as deep packet inspection
seems to indicate that this solution is not an ideal one.

Martin

Martin Sustrik

Aug 8, 2011, 11:21:26 AM
to sp-discu...@googlegroups.com, Gary Berger
Hi Gary,

> @gaberger: There are several interesting dimensions here which we need
> to find a way to articulate. The first is port overloading,
> popularized by the OTT community (i.e. running everything over port
> 80/443). The second is the need to create differentiated services.
> Given the former, the network cannot prioritize flows effectively
> without complex DPI, and as network core speeds grow higher, this
> drives the need to push the work closer to the edge. If it is closer
> to the edge, we might as well do it in the application stack, again
> treating the network as a dumb set of pipes.
>
> We shouldn't be surprised by this: distributed systems are incredibly
> hard, and the network is a distributed system. There are fundamental
> flaws in almost every layer of the protocol stack which start to
> impact our applications at scale. In effect you have a set of
> unpredictable interfaces, which makes the whole thing very fragile.
> Given our dependency on IP, it's not likely to change anytime soon,
> but I feel it will have to change in the future for the Internet to
> survive. In the meantime, it would be great to push the expectations
> of network services (name resolution, address binding, QoS, isolation,
> segmentation, etc.) into a layer decoupled from applications. In this
> way the underlying protocols can shift to better support
> next-generation application demands (adaptive bandwidth, intelligent
> caching, latency guarantees, etc.).

I have never thought of it that way, but well said! Maybe we can
rewrite section 3 in terms of the need to decouple a well-defined
subset of functionality from the application layer and move it into an
intermediary "middleware" layer.

I feel there are at least two areas that we can address with some level
of confidence at the moment:

1. Clean separation of business flows, i.e. forcing developers to
articulate what the individual flows are and making them visible at the
network layer.

2. Making requirements for smart routing, e.g. load balancing and
subscriptions (as found in pub/sub), visible below the application
layer, thus creating an opportunity for future development of generic
smart routing infrastructure (a small sketch follows below).

Martin

Pieter Hintjens

Aug 10, 2011, 11:00:04 AM
to sp-discu...@googlegroups.com
On Mon, Aug 8, 2011 at 4:50 PM, Martin Sustrik <sus...@250bpm.com> wrote:

> Yes. But again the question is: What's the motivation for developers
> to deliver applications as a mix of server- and client-side functionality?

IMO the motivations are always economic. The reasons for centralization are:

* Lowering the cost of deploying applications to users (the web is
attractive because it brings this cost down to almost zero)
* Lowering the cost of system administration (centralized services are
cheaper than distributed ones; compare Gmail to classic email)
* Control of data (Facebook relies on holding your data)
* Simpler topologies (conceptually, star networks are simpler to
understand, and technically they work better across firewalls and
NAT)
* User interface consistency (less training, less confusion, fewer questions)

And the reasons for decentralization are:

* Increased bandwidth at lower costs (e.g. BitTorrent)
* Privacy (we use private networks all the time for local data transfers)
* User interface freedom (native apps vs. web apps)
* Localized flexibility (e.g. inside a business LAN)

I'm sure I'm missing some reasons in both directions, but you can see
that the weight lies towards centralization. There may be historical
cycles as the cost of centralization starts to outweigh its benefits
(mainframes -> minicomputers -> PCs). Any decentralization has to be
economically worth it.

-Pieter

Martin Sustrik

Aug 15, 2011, 2:24:46 AM
to sp-discu...@googlegroups.com, Pieter Hintjens
On 08/10/2011 05:00 PM, Pieter Hintjens wrote:

> IMO the motivations are always economic. The reasons for centralization are:
>
> * Lowering the cost of deploying applications to users (the web is
> attractive because it brings this cost down to almost zero)
> * Lowering the cost of system administration (centralized services are
> cheaper than distributed ones; compare Gmail to classic email)
> * Control of data (Facebook relies on holding your data)
> * Simpler topologies (conceptually, star networks are simpler to
> understand, and technically they work better across firewalls and
> NAT).
> * User interface consistency (less training, less confusion, fewer questions)

Agreed.

> And the reasons for decentralization are:
>
> * Increased bandwidth at lower costs (e.g. BitTorrent)
> * Privacy (we use private networks all the time for local data transfers)
> * User interface freedom (native apps vs. web apps)
> * Localized flexibility (e.g. inside a business LAN)

I think you've missed the "network effect" argument, which may well be
the most important one. For example, delivering a game via Facebook
allows you to reach a large community. By doing so, though, you create
a decentralised system (user <-> Facebook <-> game provider).

Martin
