One Callee many Callers, scalability


IvanK

Feb 28, 2015, 11:47:48 PM
to autob...@googlegroups.com
Hi all,

I feel like I seriously misunderstand how WAMP works with sockets and hope that somebody can shed some light on it.

Suppose I have one Callee (a Server) serving multiple Callers (Clients), and they are connected via a Router. If my understanding of how WAMP works is correct, each Caller-Router connection and the Callee-Router connection use a socket as the transport layer, whether it is a WebSocket or a native one. As there is one socket connecting the Router with the Callee, all the calls from different Callers become effectively serialized.

Now what if one of the Clients communicates a big chunk of data over a (slow) network with the Server? It seems it would block all other calls until the data transfer is finished. How is this supposed to scale? I hope I got it all wrong.

Thanks,
Ivan.

Tobias Oberstein

Mar 2, 2015, 10:22:50 AM
to autob...@googlegroups.com
> Suppose I have one Callee (a Server) serving multiple Callers (Clients),
> and they are connected via a Router. If my understanding of how WAMP
> works is correct, each Caller-Router connection and the Callee-Router
> connection use a socket as the transport layer, whether it is a
> WebSocket or a native one. As there is one socket connecting the Router
> with the Callee, all the calls from different Callers become
> effectively serialized.

No, because call results/errors are sent asynchronously.

That means call 7 can send its reply while calls 1-6 are still processing.

HTTP is strictly request/response: there can be only one outstanding request at a time (even when the underlying TCP connection is kept open and reused for subsequent HTTP requests).

WAMP is different: multiple calls can be issued over a single connection, and responses can come back asynchronously and out of order.
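
For example, a caller can fire several calls without waiting for each one to return; the results then complete independently and possibly out of order. A minimal sketch using Autobahn|Python (the procedure URI here is made up):

    import asyncio
    from autobahn.asyncio.wamp import ApplicationSession, ApplicationRunner

    class Caller(ApplicationSession):

        async def onJoin(self, details):
            # Issue several calls at once over the single WAMP connection;
            # each result arrives whenever the callee finishes that call.
            calls = [self.call("com.example.slow_proc", i) for i in range(7)]
            results = await asyncio.gather(*calls)
            print("results:", results)
            self.leave()

    if __name__ == "__main__":
        ApplicationRunner("ws://127.0.0.1:8080/ws", "realm1").run(Caller)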

Cheers,
/Tobias

Ivan Komarov

Mar 2, 2015, 10:53:43 AM
to autob...@googlegroups.com

Thanks Tobias,

but while one (big) request is being sent from a Router to a Callee over the transport layer, other requests to the same Callee will be waiting for it to finish sending the data, right?


Tobias Oberstein

Mar 2, 2015, 10:58:10 AM
to autob...@googlegroups.com
On 02.03.2015 at 16:53, Ivan Komarov wrote:
> Thanks Tobias,
>
> but while one (big) request is being sent from a Router to a Callee over
> the transport layer, other requests to the same Callee will be waiting
> for it to finish sending the data, right?

Yes. On a given transport, only one message can be in the process of being sent or received at any time (WAMP messages cannot be fragmented and interleaved).

If your transport is network bound (the link bandwidth is not enough), it does not matter how many TCP connections you have open: they will all stall.

You need to have callees on different machines, over different network
links.

This is possible with Crossbar.io and the new "Shared Registrations"
feature.
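
With a shared registration, several callee processes (possibly on different machines) register the same procedure URI and the router distributes calls among them. A rough sketch in Autobahn|Python (URI made up); run one instance per machine:

    from autobahn.asyncio.wamp import ApplicationSession
    from autobahn.wamp.types import RegisterOptions

    class Worker(ApplicationSession):

        async def onJoin(self, details):

            def accept_array(data):
                return len(data)

            # invoke="roundrobin" allows multiple callees to register the
            # same URI; the router spreads incoming calls across them.
            await self.register(accept_array, "com.example.accept_array",
                                options=RegisterOptions(invoke="roundrobin"))

Other invocation policies (e.g. "random", "first", "last") are also available.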

Cheers,
/Tobias

Ivan Komarov

Mar 2, 2015, 11:09:18 AM
to autob...@googlegroups.com

What I have in mind is a situation where frequent lightweight requests are mixed with rare heavy ones, and I don't want the latter to completely block processing of the former.


Tobias Oberstein

Mar 2, 2015, 11:21:39 AM
to autob...@googlegroups.com
On 02.03.2015 at 17:09, Ivan Komarov wrote:
> What I have in mind is a situation where frequent lightweight requests
> are mixed with rare heavy ones, and I don't want the latter to
> completely block processing of the former.

Ah, so you are concerned about so-called "head-of-line blocking" at the (single) transport level.

A single, standard WebSocket transport can't interleave messages. Once a message has started to be sent or received, everything for that message needs to go over the transport first.

This isn't specific to WAMP; it's standard WebSocket behavior.

Extensions to WebSocket could address it. See my proposal targeting exactly this use case:

https://github.com/oberstet/permessage-priority/blob/master/draft-oberstein-hybi-permessage-priority.txt

Note that WebSocket MUX and HTTP/2 provide multiplexing of multiple _connections_ over a single physical TCP connection, but they don't directly address interleaving _messages_ (which would allow prioritization and avoid head-of-line blocking on a single channel).
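
Until something like that exists, one workaround (not part of WAMP itself, just an application-level pattern) is to split a huge payload into many smaller calls, so that other messages can be interleaved between the chunks. A sketch in Autobahn|Python, with made-up URIs and a hypothetical correlation id:

    # Caller side: send a large array in chunks instead of one giant call,
    # so the transport is never blocked by a single multi-megabyte message.
    CHUNK = 100000

    async def send_big_array(session, data):
        transfer_id = "xfer-1"  # hypothetical correlation id
        for i in range(0, len(data), CHUNK):
            await session.call("com.example.accept_array_chunk",
                               transfer_id, data[i:i + CHUNK])
        await session.call("com.example.accept_array_done", transfer_id)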

/Tobias



IvanK

Mar 4, 2015, 1:49:14 AM
to autob...@googlegroups.com
Hi Tobias,

thanks for the link; that's quite an interesting limitation of WebSocket that I was not aware of, and a nice proposal, BTW!

However, I am pretty sure the problem I am trying to bring to your attention is different. Let me show you an example.

I've created an Autobahn|Cpp server (Callee) based on the "register1" example and added an empty "acceptArray" function to it. I've also created two Autobahn|JS web clients (Callers): one calling the standard "add2" function (CallerAdd), and another (CallerArray) sending an array of 4M elements to the "acceptArray" function on the server. Obviously, I used Crossbar.io as the router.

I got Crossbar.io and the server running and checked that CallerAdd sends its requests correctly. Then I started the CallerArray client, which initiated the transfer of the 4M elements. While that request was running (~25 s), I tried sending requests from CallerAdd again - none of them was processed until Crossbar.io had finished transferring the data to the server. As you can see, a single call from one client can completely block the entire system, so none of the other clients can use the server.
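
In rough terms, the two callers boil down to something like this (sketched here with Autobahn|Python rather than my actual Autobahn|JS clients; the URIs are made up):

    import asyncio
    from autobahn.asyncio.wamp import ApplicationSession

    class CallerAdd(ApplicationSession):

        async def onJoin(self, details):
            # Frequent lightweight calls.
            while True:
                print("add2 ->", await self.call("com.example.add2", 2, 3))
                await asyncio.sleep(1)

    class CallerArray(ApplicationSession):

        async def onJoin(self, details):
            # One heavy call carrying ~4M elements; while the router forwards
            # this single big message to the callee, the add2 calls stall.
            await self.call("com.example.accept_array", list(range(4000000)))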

In contrast, a raw zaphoyd/websocketpp-based WebSocket server behaves quite differently in the same scenario: while the server is receiving a big array from one client, it readily processes smaller requests from other clients.

Based on these experiments, I think the origin of the problem I am describing is not the aforementioned limitation of the WebSocket protocol, but rather the fact that there is only one socket/transport between the Router and the Callee. It seems the situation could be greatly improved if WAMP created one Router-Callee socket per Caller-Callee pair, or at least a pool of sockets. Unless I am missing something important, I can't see how the current WAMP implementation can be used for the purposes I have in mind.

Do you consider addressing this issue in the future?

Thanks,
Ivan.