mongoose as a websocket client

adam smith

Sep 22, 2013, 7:27:06 AM
to mongoos...@googlegroups.com
Is it possible / sensible to use mongoose as a websocket client, such that it sits within an application and initiates websocket connections to a remote server (nginx)?

If not, does anyone have suggestions for a lightweight websocket client library that will compile in Visual Studio 2010 - ideally without Boost?

I've been looking at:


Thanks,

Adam

Sergey Lyubka

Sep 22, 2013, 2:57:19 PM
to mongoose-users
Mongoose doesn't provide a sensible websocket client API.
Could you describe your use case, please? What project are you working on?



adam smith

Sep 22, 2013, 4:25:45 PM
to mongoos...@googlegroups.com
Hi Sergey, thanks for getting back to me.

We want to use a persistent websocket connection between our media players and a remote central server.

The media players need to receive remote-control instructions from a tablet / phone interface without the players having to constantly poll the server.

We can't connect the two directly: even though the tablet and the media player may be in the same physical venue, they may not be on the same LAN, or one may be on a public WiFi service, so they can't communicate directly.

Any thoughts gratefully received




Sergey Lyubka

Sep 23, 2013, 8:43:47 AM
to mongoose-users
On Sun, Sep 22, 2013 at 9:25 PM, adam smith <adamv...@googlemail.com> wrote:
Hi Sergey, thanks for getting back to me.

We want to use a persistent websocket connection between our media players and a remote central server.

The media players need to receive remote-control instructions from a tablet / phone interface without the players having to constantly poll the server.

We can't connect the two directly: even though the tablet and the media player may be in the same physical venue, they may not be on the same LAN, or one may be on a public WiFi service, so they can't communicate directly.

Any thoughts gratefully received

As far as I understand, the web server would be just a transparent proxy for player / phone communication.
The player is going to be the server side, quietly sitting and replying to the phone's control requests.

I think this kind of communication could be done using mongoose with little effort.
The player side is the server side, which mongoose already supports.
The client side could be implemented using mg_download() and mg_write() / mg_read().
Let me know if you'd like help with that.
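
For illustration, here's a minimal sketch of what that client side might look like. It assumes the mg_download() / mg_read() / mg_write() signatures from the mongoose of that era, hard-codes the Sec-WebSocket-Key, skips the Sec-WebSocket-Accept check, and handles only small frames - a sketch, not production code:

/* Hedged sketch: a websocket client on top of mongoose's client API.
 * Host, port and path are placeholders. */
#include <stdio.h>
#include <string.h>
#include "mongoose.h"

int main(void) {
  char ebuf[256];
  const char msg[] = "hello";
  unsigned char mask[4] = {0x12, 0x34, 0x56, 0x78};
  unsigned char frame[2 + 4 + sizeof(msg) - 1];
  unsigned char hdr[2], payload[125];
  size_t i;
  int n;
  struct mg_connection *conn;

  /* mg_download() sends the request and returns once the response headers
   * arrive, so the 101 handshake reply is consumed here. A real client
   * must randomize the key and verify the Sec-WebSocket-Accept header. */
  conn = mg_download("server.example.com", 80, 0, ebuf, sizeof(ebuf),
                     "GET /ws HTTP/1.1\r\n"
                     "Host: server.example.com\r\n"
                     "Upgrade: websocket\r\n"
                     "Connection: Upgrade\r\n"
                     "Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==\r\n"
                     "Sec-WebSocket-Version: 13\r\n\r\n");
  if (conn == NULL) {
    fprintf(stderr, "handshake failed: %s\n", ebuf);
    return 1;
  }

  /* One small masked text frame: FIN + opcode, mask bit + length,
   * 4-byte masking key, then the payload XORed with the key. */
  frame[0] = 0x81;                                  /* FIN, text frame */
  frame[1] = 0x80 | (unsigned char) (sizeof(msg) - 1);
  memcpy(frame + 2, mask, 4);
  for (i = 0; i < sizeof(msg) - 1; i++)
    frame[6 + i] = (unsigned char) (msg[i] ^ mask[i % 4]);
  mg_write(conn, frame, sizeof(frame));

  /* Read the server's (unmasked) reply frame; small frames only. */
  if (mg_read(conn, hdr, 2) == 2 && (hdr[1] & 0x7f) <= sizeof(payload)) {
    n = mg_read(conn, payload, hdr[1] & 0x7f);
    printf("received %d payload bytes\n", n);
  }
  mg_close_connection(conn);
  return 0;
}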

adam smith

Sep 23, 2013, 2:26:10 PM
to mongoos...@googlegroups.com
Agreed, and mongoose works brilliantly that way - the thing is that we have hundreds of players in different venues that need to talk to a central server...

Here's a good example of what we're doing: http://www.secretdj.com/map-feed/ - though that is for customers' phones, so latency is OK and the media players can poll a central service once or twice a minute.

The tablet remote control is for staff in the venue, who will expect fast responses, so having the media players poll at an increased frequency becomes too heavyweight.

The architecture we envisage is that the tablets send requests to a central server, which then sends each request to the relevant media player over a websocket, thus getting around the need for the players to poll.

Does that sound sensible?

Sergey Lyubka

Sep 23, 2013, 3:26:32 PM
to mongoose-users
On Mon, Sep 23, 2013 at 7:26 PM, adam smith <adamv...@googlemail.com> wrote:
Agreed, and mongoose works brilliantly that way - the thing is that we have hundreds of players in different venues that need to talk to a central server...

Here's a good example of what we're doing: http://www.secretdj.com/map-feed/ - though that is for customers' phones, so latency is OK and the media players can poll a central service once or twice a minute.

The tablet remote control is for staff in the venue, who will expect fast responses, so having the media players poll at an increased frequency becomes too heavyweight.

The architecture we envisage is that the tablets send requests to a central server, which then sends each request to the relevant media player over a websocket, thus getting around the need for the players to poll.

Does that sound sensible?

Yes, it does sound sensible.

I would also add that in this architecture, the central server could serve an important role in fanning out client requests. To illustrate that point, let's consider different architectures. Let's assume we have 1000 media players and 100000 (100K) phone/tablet clients. Ideally, all clients would evenly load all players, resulting in 100 clients per player. But in reality, some players would be hot spots and serve many more clients than the other players. The solution needs to serve hot spots well. Let's assume a hot-spot player serves 100x more than average, meaning 10K clients. Now, what choices are out there to serve 10K clients?

A. No central server, clients poll

A central server is used only to route the client to the correct player, and then client <-> player communication is done directly. The player would need to have its serving port exposed. Clients poll every minute, which generates 10K / 60 = 166 QPS (queries per second) against a single player. (A sketch of one client's poll loop follows the diagram below.)
Very much doable with quite low resource usage and a small number of serving threads, assuming the reply doesn't eat much CPU and takes a reasonable time, say < 50 ms. The player itself could be both IO and CPU bound decoding the media stream, so it would be a good idea to bump up the serving threads' priority.

The central server takes the routing load of 100K clients, each polling once a minute, generating 100K / 60 = 1666 QPS, which is considerable, but again very much doable without load balancing. Routing for 1K backends is computationally cheap.

Client --(give me address of player X)--> Central server
Client <--(here it is)-- Central server
Client --(command Y)--> Player X
Client <--(reply on Y)-- Player X
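
As an illustration, one client's poll in variant A could be a loop like this - a hedged sketch reusing the assumed mg_download() client API; host, port and path are made up:

/* Hedged sketch: variant A's once-a-minute poll against a player's
 * exposed serving port. */
#include <stdio.h>
#include "mongoose.h"
#ifdef _WIN32
#include <windows.h>
#define sleep_seconds(s) Sleep((s) * 1000)
#else
#include <unistd.h>
#define sleep_seconds(s) sleep(s)
#endif

int main(void) {
  char ebuf[256], body[4096];
  struct mg_connection *conn;
  int n;

  for (;;) {
    /* One poll: a short-lived HTTP GET, a new connection every time. */
    conn = mg_download("player.example.com", 8080, 0, ebuf, sizeof(ebuf),
                       "GET /status HTTP/1.0\r\n\r\n");
    if (conn != NULL) {
      n = mg_read(conn, body, sizeof(body) - 1);
      if (n > 0) { body[n] = '\0'; printf("player replied: %s\n", body); }
      mg_close_connection(conn);
    } else {
      fprintf(stderr, "poll failed: %s\n", ebuf);
    }
    sleep_seconds(60);  /* once a minute, matching the QPS figures above */
  }
}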


B. No central server, clients use persistent websocket connections.
As in the previous example, the central server just does the routing. No change for the central server.
Clients keep persistent connections, meaning 10K clients keep open sockets against a single player. Now, that could be a problem. Many solutions, including mongoose, use a synchronous, connection-per-request architecture. That means 10K threads. Other solutions, say libevent, use async, non-blocking IO, meaning more complex programming (see the sketch after the diagram below). Either way, the environment where the player runs must support either a big number of threads, or multiplexing a big number of sockets. Another way to deal with that is load balancing.

Client --(give me address of player X)--> Central server
Client <--(here it is)-- Central server
Client --(command Y)--> Player X    //  this is wrapped
Client <--(reply on Y)-- Player X   //  into websocket
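
To make the async alternative concrete, a bare-bones single-threaded multiplexing loop over plain POSIX sockets could look like this. It is an illustration only: select() tops out around FD_SETSIZE (usually 1024), so a real 10K-socket server would use epoll / kqueue or a library such as libevent:

/* Hedged sketch: one thread serving many persistent sockets via select().
 * Echoes input back as a stand-in for replying to control requests;
 * error checking is omitted for brevity. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void) {
  struct sockaddr_in sin;
  fd_set all, ready;
  char buf[512];
  int listener, i, maxfd, newfd, n;

  listener = socket(AF_INET, SOCK_STREAM, 0);
  memset(&sin, 0, sizeof(sin));
  sin.sin_family = AF_INET;
  sin.sin_port = htons(8080);
  sin.sin_addr.s_addr = htonl(INADDR_ANY);
  bind(listener, (struct sockaddr *) &sin, sizeof(sin));
  listen(listener, 128);

  FD_ZERO(&all);
  FD_SET(listener, &all);
  maxfd = listener;

  for (;;) {
    ready = all;
    select(maxfd + 1, &ready, NULL, NULL, NULL);  /* one thread, many sockets */
    for (i = 0; i <= maxfd; i++) {
      if (!FD_ISSET(i, &ready)) continue;
      if (i == listener) {               /* a new persistent connection */
        newfd = accept(listener, NULL, NULL);
        FD_SET(newfd, &all);
        if (newfd > maxfd) maxfd = newfd;
      } else {                           /* data (or close) on an open socket */
        n = (int) read(i, buf, sizeof(buf));
        if (n <= 0) { close(i); FD_CLR(i, &all); }
        else write(i, buf, n);           /* echo back in lieu of a real reply */
      }
    }
  }
}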



C. The central server does routing and fan-out; clients poll.
In this example, control traffic goes through the central server.

Client --(command Y to player X)--> Central server
Central Server --(command Y)--> Player X
Central Server <--(reply on Y)-- Player X
Client <--(reply on Y)-- Central server

In this scenario, the central server must poll players to serve a client's request. Assuming that polling takes some time, say 100 ms, and the central server needs to serve 100K clients, this could be a problem. If using synchronous IO, one thread can serve ~10 QPS. The central server needs to serve 100K / 60 = 1.7K QPS, which means it needs to run 170 working threads (see the sketch below). The process is IO bound, so even with a single process it is doable.
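
A toy sketch of that thread arithmetic - the request queue is omitted, and usleep() stands in for the ~100 ms blocking round-trip to a player:

/* Hedged sketch: 170 synchronous fan-out workers, ~10 QPS each,
 * covering roughly 1.7K QPS in total. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_WORKERS 170  /* 1.7K QPS / ~10 QPS per synchronous thread */

static void *worker(void *arg) {
  (void) arg;
  for (;;) {
    /* dequeue a client's request here (queue omitted for brevity) */
    usleep(100 * 1000);  /* blocking poll of the player: ~100 ms */
    /* relay the player's reply back to the client here */
  }
  return NULL;
}

int main(void) {
  pthread_t tid[NUM_WORKERS];
  int i;
  for (i = 0; i < NUM_WORKERS; i++)
    pthread_create(&tid[i], NULL, worker, NULL);
  printf("%d workers -> roughly %d QPS of fan-out capacity\n",
         NUM_WORKERS, NUM_WORKERS * 10);
  pthread_join(tid[0], NULL);  /* park main; workers run forever */
  return 0;
}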

D. The central server does routing and fan-out, over websockets.
As in example C, but the central server keeps persistent websocket connections to the players. 1K threads would be required for synchronous IO (one thread per player).

Now, in all these scenarios I've disregarded the traffic from venue staff, which needs to be faster. Okay, let's say there are 10 staff members per venue, each doing 1 QPS, which is 60x faster than the usual client. That only adds 10 QPS to the figures above, which is not significant and can be ignored.

Having said that, with the assumptions given above, I would not go with websockets; I would resort to polling. Also consider polling with keep-alive (sketched below), which has pretty much the same properties as websockets but is much easier to program. Your case is not duplex, thus websockets are overkill IMO. I would add a simple LB (load balancer) to get redundancy, and keep the solution simple (poll) for a little extra hardware cost.
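
For instance, a hypothetical keep-alive polling loop over a plain socket - one TCP connection, many HTTP requests, no websocket framing. The address is a placeholder and the response parsing is deliberately naive (a single read):

/* Hedged sketch: long-lived keep-alive polling. The TCP (and TLS, if any)
 * handshake happens once, which is what makes this comparable to a
 * persistent websocket. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
  const char req[] =
      "GET /poll HTTP/1.1\r\n"
      "Host: server.example.com\r\n"
      "Connection: keep-alive\r\n\r\n";
  char resp[4096];
  struct sockaddr_in sin;
  int fd = socket(AF_INET, SOCK_STREAM, 0), n;

  memset(&sin, 0, sizeof(sin));
  sin.sin_family = AF_INET;
  sin.sin_port = htons(80);
  sin.sin_addr.s_addr = inet_addr("203.0.113.1");  /* placeholder address */
  if (connect(fd, (struct sockaddr *) &sin, sizeof(sin)) != 0) return 1;

  for (;;) {
    write(fd, req, sizeof(req) - 1);             /* same socket every time */
    n = (int) read(fd, resp, sizeof(resp) - 1);  /* naive: a single read */
    if (n <= 0) break;                           /* server closed; reconnect */
    resp[n] = '\0';
    printf("poll reply:\n%s\n", resp);
    sleep(1);  /* staff-facing cadence from the discussion above */
  }
  close(fd);
  return 0;
}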

Of course, I would set up a quick stress test before committing to the solution. I am pretty sure there are some caveats I don't know about.

adam smith

Sep 24, 2013, 5:51:31 AM
to mongoos...@googlegroups.com
Really interesting - thank you!

I guess one big issue is that tablet clients cannot physically access the media players, because the players are not visible due to firewalls etc. So without some form of reverse tunnelling (which would have to persist somehow anyway), the server has to do the fan-out - essentially ruling out A/B.

In addition, it is (currently) the players that poll the server - again, because the server cannot see (most of) the players.

In this scenario, do you still think it's better that the media players poll?

Websockets seemed like the 'right' way to do it, but the problems you outline re: threading etc. do make them seem a bit theoretical and not quite ready for primetime...

Sergey Lyubka

Sep 24, 2013, 6:44:06 AM
to mongoose-users
On Tue, Sep 24, 2013 at 10:51 AM, adam smith <adamv...@googlemail.com> wrote:
Really interesting - thank you!

I guess one big issue is that tablet clients cannot physically access the media players, because the players are not visible due to firewalls etc. So without some form of reverse tunnelling (which would have to persist somehow anyway), the server has to do the fan-out - essentially ruling out A/B.

A single port-forwarding rule on the firewall would expose a listening port on the player side. The good thing about A/B is that
it reduces the load on the central server, making the whole thing more scalable. The central-server approach, however, is easier
to manage and less of a security risk - but harder to implement.
 

In addition, it is (currently) the players that poll the server - again, because the server cannot see (most of) the players.

In this scenario, do you still think it's better that the media players poll?

Both polling with keep-alive and websockets would be fine. Each player would consume one thread on the central server in
this case. I would make the central server load-balanced, to make sure the whole service stays alive if one server goes
down or gets upgraded. Then three reasonably spec'd boxes would do the job pretty well.
 

Websockets seemed like the 'right' way to do it, but the problems you outline re: threading etc. do make them seem a bit theoretical and not quite ready for primetime...

Why not? Do a simple test: take the mongoose websocket example, modify it to keep the connection alive, get a command-line websocket client (for example https://blogs.oracle.com/PavelBucek/entry/websocket_command_line_client), run the websocket example with 1000 threads,
and slam it with 1000 clients (a crude client sketch follows). I bet even one process would do the job fine, but for production I'd do load balancing.
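
A hedged sketch of such a stress client, in case it helps - plain POSIX sockets and pthreads, merely holding N connections open rather than speaking full websocket; address and port are placeholders:

/* Hedged sketch: open 1000 concurrent connections against a server
 * to see how it copes with that many simultaneous sockets/threads. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define NUM_CLIENTS 1000

static void *client(void *arg) {
  struct sockaddr_in sin;
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  (void) arg;
  memset(&sin, 0, sizeof(sin));
  sin.sin_family = AF_INET;
  sin.sin_port = htons(8080);
  sin.sin_addr.s_addr = inet_addr("127.0.0.1");
  if (connect(fd, (struct sockaddr *) &sin, sizeof(sin)) == 0) {
    /* A real test would perform the websocket handshake and exchange
     * frames; here we only hold the socket open for a while. */
    sleep(60);
  }
  close(fd);
  return NULL;
}

int main(void) {
  pthread_t tid[NUM_CLIENTS];
  int i;
  for (i = 0; i < NUM_CLIENTS; i++)
    pthread_create(&tid[i], NULL, client, NULL);
  for (i = 0; i < NUM_CLIENTS; i++)
    pthread_join(tid[i], NULL);
  printf("all %d clients disconnected\n", NUM_CLIENTS);
  return 0;
}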

adam smith

Sep 24, 2013, 8:36:30 AM
to mongoos...@googlegroups.com
Agreed - I would much rather have the server instigate via polling - the problem is that in practice it's extremely difficult to get parent companies / the big IT providers to open up a port: the default answer is always 'no', and though some can be convinced, others just regard it as not worth the theoretical risk, regardless.

I'll check out Pavel's blog - if it's sturdy enough then brilliant!

Many thanks


Sergey Lyubka

Sep 24, 2013, 8:40:43 AM
to mongoose-users
On Tue, Sep 24, 2013 at 1:36 PM, adam smith <adamv...@googlemail.com> wrote:
Agreed - I would much rather have the server instigate via polling - the problem is that in practice it's extremely difficult to get parent companies / the big IT providers to open up a port: the default answer is always 'no', and though some can be convinced, others just regard it as not worth the theoretical risk, regardless.

I'll check out Pavel's blog - if it's sturdy enough then brilliant!

Many thanks

You're welcome, Adam!
It would be interesting to hear about your future findings. Secret DJ is a great service, btw.

adam smith

Sep 24, 2013, 8:42:29 AM
to mongoos...@googlegroups.com
Thank you! Will keep you informed.