tcp connection multiplexing

Roberto Zanotto

unread,
Sep 24, 2015, 7:28:32 PM9/24/15
to golang-nuts
Hi, gophers.
I need to implement a system that allows different streams of data ("virtual" connections) to run over a single TCP connection.

The problem I need to solve is the following (also described here):
Suppose we have two unidirectional streams of data, A and B, that flow in the same direction through a single TCP connection. The producer sends data on both streams, but consumer A does not read (fast enough). What happens is that the TCP receive buffer fills up with messages for A. As a consequence, B cannot get messages anymore, even though it is ready to receive them and the producer is sending them. The situation gets worse with more than two streams.
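To make the failure mode concrete, here is a minimal Go sketch of the starvation in a naive demultiplexer (the frame format, buffer sizes, and names are all made up for illustration, not part of any real protocol): B only ever sees the first couple of frames before the stalled A stream blocks the shared reader.

```go
package main

import (
	"fmt"
	"time"
)

// Frame is a hypothetical multiplexed message: a stream tag plus a payload.
type Frame struct {
	Stream  string
	Payload int
}

// runDemo reproduces the starvation: one reader demultiplexes frames from a
// shared "wire" into small per-stream buffers; consumer A never reads, so the
// reader eventually blocks on A's buffer and B stops receiving too.
func runDemo() int {
	wire := make(chan Frame) // stands in for the single TCP connection

	// Producer: interleaves frames for streams A and B on the shared wire.
	go func() {
		for i := 0; i < 10; i++ {
			wire <- Frame{"A", i}
			wire <- Frame{"B", i}
		}
		close(wire)
	}()

	a := make(chan int, 2) // small per-stream buffers
	b := make(chan int, 2)

	// Naive demultiplexer: once a's buffer is full, the send to a blocks,
	// no more frames are read off the wire, and B starves as well.
	go func() {
		for f := range wire {
			switch f.Stream {
			case "A":
				a <- f.Payload
			case "B":
				b <- f.Payload
			}
		}
		close(b)
	}()

	// Consumer A never reads; consumer B reads eagerly until starved.
	received := 0
	timeout := time.After(300 * time.Millisecond)
	for {
		select {
		case _, ok := <-b:
			if !ok {
				return received
			}
			received++
		case <-timeout:
			return received
		}
	}
}

func main() {
	fmt.Println("B received:", runDemo(), "of 10 frames")
}
```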

I should be able to implement a small protocol that manages the connection. In this case, it would send a message back to the producer at some point, saying: "A is not reading its messages; stop sending to it." What I want to avoid is improvising and reinventing the wheel (possibly badly).

Is there some known algorithm for this? I'm having trouble finding information about it on the web.

Cheers.

Caleb Spare

unread,
Sep 24, 2015, 7:33:34 PM9/24/15
to Roberto Zanotto, golang-nuts
I haven't considered the actual problem you brought up before, but off
the top of my head I'm aware of

https://github.com/inconshreveable/muxado

It's apparently based on ideas from HTTP/2 so you might want to look
into that as well.

Hope that helps!

-Caleb

Roberto Zanotto

unread,
Sep 24, 2015, 7:50:51 PM9/24/15
to golang-nuts, roby...@gmail.com
I get more useful search results adding HTTP/2. Thanks :)

Roberto Zanotto

unread,
Sep 24, 2015, 8:34:44 PM9/24/15
to golang-nuts, roby...@gmail.com
After investigating, it looks like HTTP/2 does not have this problem at all. You get multiple files served over multiple streams, but as soon as you get a response you process it, no matter which stream it comes from. The situation where one consumer is busy and leaves a lot of messages in the shared buffer does not occur. I wonder how (if?) this was solved in muxado; I sent a message to the creator of the library.

andrewc...@gmail.com

unread,
Sep 24, 2015, 8:53:24 PM9/24/15
to golang-nuts, roby...@gmail.com
HTTP/2 and muxado both suffer from head-of-line blocking: if packets are dropped on an individual stream, the others are delayed too. But without packet drops, nothing waits. Muxado in fact implements a protocol similar to HTTP/2's, as can be seen in the muxado documentation, which you should probably read before messaging someone.

Roberto Zanotto

unread,
Sep 24, 2015, 9:26:36 PM9/24/15
to golang-nuts, roby...@gmail.com, andrewc...@gmail.com
I did read the docs. I didn't even know what head-of-line blocking meant; since the docs mentioned it only in connection with packet losses, and my problem is not tied to packet losses, I assumed it was not what I was looking for. Where did you find an explanation of how my problem is solved in muxado? I re-read the README on GitHub and the API documentation, and I still can't find how the problem I'm having is addressed. Do you have a link?

andrewc...@gmail.com

unread,
Sep 24, 2015, 9:36:35 PM9/24/15
to golang-nuts, roby...@gmail.com, andrewc...@gmail.com
My mistake. I would advise you to look at the muxado code, or the HTTP/2 spec, which will explain it.

Roberto Zanotto

unread,
Sep 24, 2015, 9:42:27 PM9/24/15
to golang-nuts, roby...@gmail.com, andrewc...@gmail.com

Egon

unread,
Sep 25, 2015, 3:25:54 AM9/25/15
to golang-nuts
I know of "packet juggling" (I'm not sure what it's actually called, or whether there are better ways; it's basically something we use at our company).

Let's say A is connecting through M to X.

A sends 2 (or more, if needed) packets to M. M puts them in X's queue. X reads 1 packet and sends an ACK back through M to A. A then sends the next packet. If A doesn't get the ACK, it won't send any additional packets.

M, as the middleman, must ensure that it only reads 2 packets from A and doesn't read more until an ACK is received.

Juggling multiple packets at once ensures that you don't pay the round-trip latency on every packet; instead, new packets keep flowing in. Once the ACKs stop coming, you stop reading packets from A, which blocks that stream while the others can progress.

+ Egon
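The scheme above can be sketched as a small credit-based loop in Go. The window size, channel shapes, and names here are all illustrative, not taken from any real protocol; M is reduced to a pair of channels, one carrying packets forward and one carrying ACKs back.

```go
package main

import "fmt"

const window = 2 // packets A may have in flight before waiting for an ACK

// run plays A (the producer) against X (the consumer), with the
// middleman M reduced to two bounded channels.
func run() []int {
	wire := make(chan int, window)      // forward path: at most `window` queued packets
	acks := make(chan struct{}, window) // return path for ACKs

	// A: starts with `window` credits and regains one per ACK, so it can
	// never overrun X's queue. When credits hit zero, it blocks: that
	// stream is paused while other streams (not shown) could keep flowing.
	go func() {
		credits := window
		for i := 0; i < 5; i++ {
			if credits == 0 {
				<-acks // wait for X to consume something
				credits++
			}
			wire <- i
			credits--
		}
		close(wire)
	}()

	// X: consumes packets and ACKs each one after processing it.
	var got []int
	for p := range wire {
		got = append(got, p)
		acks <- struct{}{}
	}
	return got
}

func main() {
	fmt.Println("X consumed:", run())
}
```

Note that the ACK channel is buffered to `window` so X's trailing ACKs never block once A has finished sending.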

Roberto Zanotto

unread,
Sep 25, 2015, 9:59:32 AM9/25/15
to golang-nuts
Thanks for the input; this is what I'm going to do in the end.
I have a goroutine (M) that reads from the connection and sends to the appropriate buffered channels (Xs), one per stream. M also counts how many items are consumed from the channels (with cap and len) and sends that information back to A on the other side of the connection. A must make sure never to send more than the buffered channels can hold.
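A sketch of that receive side in Go, assuming a hypothetical Frame type and a made-up WindowUpdate message (in a real system the updates would travel back over the TCP connection rather than a channel):

```go
package main

import "fmt"

// Frame is one multiplexed message off the shared connection.
type Frame struct {
	Stream  int
	Payload []byte
}

// WindowUpdate tells the sender how much buffer space a stream has left.
type WindowUpdate struct {
	Stream int
	Free   int // cap(ch) - len(ch) at the time the frame was queued
}

// demux is the M goroutine: it fans frames out to per-stream buffered
// channels and reports the remaining space back to the sender, which must
// never have more unacknowledged frames in flight than that.
func demux(wire <-chan Frame, streams []chan []byte, updates chan<- WindowUpdate) {
	for f := range wire {
		ch := streams[f.Stream]
		ch <- f.Payload // never blocks if the sender honors its window
		updates <- WindowUpdate{Stream: f.Stream, Free: cap(ch) - len(ch)}
	}
	for _, ch := range streams {
		close(ch)
	}
}

func main() {
	wire := make(chan Frame, 4)
	updates := make(chan WindowUpdate, 16)
	streams := []chan []byte{make(chan []byte, 8), make(chan []byte, 8)}
	go demux(wire, streams, updates)

	wire <- Frame{Stream: 0, Payload: []byte("hello")}
	wire <- Frame{Stream: 1, Payload: []byte("world")}
	close(wire)

	u := <-updates
	fmt.Println("stream", u.Stream, "has", u.Free, "slots free")
	fmt.Println(string(<-streams[0]), string(<-streams[1]))
}
```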