Channels - slow with limited number of connections


Lada B

Mar 5, 2017, 4:59:05 AM
to Django users
Hi, I'm using Django 1.10 + Channels + asgi_redis + Daphne + uWSGI + nginx, but I have a problem with the workers and Daphne being slow.

I'm developing a chat app and I tested how fast it is by holding the ENTER key (spamming messages one after another), and it doesn't even respond until I stop sending messages.

When I try to test performance (sending a message to the websocket every 10 ms) I get disconnected after 5 seconds. When I try every 100 ms, nobody else can connect. I checked CPU usage, and when I set runworker --threads 500 there are about 250 worker threads running... when I set 10 threads, it's basically unusable.

Am I doing something wrong? Why are the workers so slow at processing a single websocket message that is just an empty string?

Lada B

Mar 5, 2017, 6:28:52 AM
to Django users

Hardware load with one user sending a message every 100 ms: https://i.imgur.com/5F4mKAI.png


Yesterday I tried to use it in production, and when there were 15 clients connected (each one pinging the server every 2 seconds), nobody else could connect.

I have the default settings and I followed the Getting Started tutorial.

Avraham Serour

Mar 5, 2017, 8:27:07 AM
to django-users
I don't think you need 500 workers; the number of workers depends on how long a message takes to be processed, not on the number of concurrent connections.

The worker's job is to get a message and send it on to a bunch of clients; the ASGI server, meaning the process actually holding the websocket (in your case Daphne), is what actually uploads the data.

Are the client and the server running on the same computer?

Don't stress test your server manually; you should write a script to do it.
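Such a script can stay very small. A minimal sketch using the third-party "websockets" package (the URI, message count, and 10 ms interval below are illustrative, not values confirmed in this thread):

```python
# Minimal websocket flood script (a sketch). Assumes the third-party
# "websockets" package is installed (pip install websockets); guarded
# import so the pure helpers work even without it.
import asyncio

try:
    import websockets  # third-party, not part of the stdlib
except ImportError:
    websockets = None


def make_payload(i: int) -> str:
    # Tiny text frame, comparable to the near-empty messages in the thread.
    return f"msg {i}"


async def flood(uri: str, count: int, interval_s: float) -> int:
    """Send `count` small messages, one every `interval_s` seconds."""
    sent = 0
    async with websockets.connect(uri) as ws:
        for i in range(count):
            await ws.send(make_payload(i))
            sent += 1
            await asyncio.sleep(interval_s)
    return sent


if __name__ == "__main__":
    # e.g. 100 messages, 10 ms apart, against a local dev server
    asyncio.run(flood("ws://localhost:8000/chat/", count=100, interval_s=0.01))
```

Running several copies of this in parallel gives a repeatable stand-in for many concurrent users.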

--
You received this message because you are subscribed to the Google Groups "Django users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to django-users+unsubscribe@googlegroups.com.
To post to this group, send email to django...@googlegroups.com.
Visit this group at https://groups.google.com/group/django-users.
To view this discussion on the web visit https://groups.google.com/d/msgid/django-users/d86b1eb9-3492-48bc-9460-365571585db5%40googlegroups.com.

For more options, visit https://groups.google.com/d/optout.

Andrew Godwin

Mar 5, 2017, 2:14:55 PM
to django...@googlegroups.com
Python threads are not that great at sharing I/O - I'd recommend you run at most one or two per physical CPU core in your machine. 500 is just going to suffocate Python in context switching.

We have loadtests that definitely got above 10 messages a second - what ASGI backend are you using? What size messages are you sending? Have you tried running workers in separate processes?

Andrew

Lada B

Mar 5, 2017, 5:00:45 PM
to Django users
I'm using asgi_redis, and it's somehow better since I changed the settings to:

channel_layer = RedisChannelLayer(
    capacity=1000,
    expiry=5,
)

but I still have no idea what's correct. Let me explain what I'm trying to achieve. I have a website for matchmaking Dota 2 lobby games. When 10 players find a match, a ready check is sent, and when everyone is ready a Steam bot starts and hosts the game. The bot must run the entire time the players are playing, so it needs one worker per game being played. That's why I can't have only two workers or so, I guess; when more than one game is running, the workers are busy.
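(For reference: in Channels 1.x the layer is normally configured through the CHANNEL_LAYERS setting in settings.py rather than instantiated directly. A sketch; the host, capacity, expiry, and ROUTING path below are illustrative placeholders, not values from this thread:)

```python
# settings.py sketch for Channels 1.x + asgi_redis.
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("localhost", 6379)],
            "capacity": 1000,  # per-channel message buffer
            "expiry": 60,      # seconds before an undelivered message is dropped
        },
        # placeholder module path to this project's channel routing
        "ROUTING": "myproject.routing.channel_routing",
    },
}
```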


Lada B

Mar 5, 2017, 5:05:19 PM
to Django users
Also, I haven't tried running workers in separate processes yet.

Is the client and server running on the same computer?
No

And I just realized that expiry=5 is wrong because it disconnects users. That's why it feels "better" now :D

Lada B

Mar 5, 2017, 8:17:50 PM
to Django users
I just tried running the workers in separate processes, and it works great! It's awesome, and now it fails only when the Redis channel memory fills up. I don't know how to increase that, though.
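(For anyone reading later: "separate processes" here just means launching several runworker commands side by side instead of one process with many threads. A sketch; the port, worker count, and backgrounding with & are illustrative, and a process manager such as supervisord or systemd would normally do this in production:)

```shell
# One interface server plus a few single-threaded worker processes.
daphne myproject.asgi:channel_layer --port 8000 &
python manage.py runworker &
python manage.py runworker &
python manage.py runworker &
wait
```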

Andrew Godwin

Mar 5, 2017, 9:44:38 PM
to django...@googlegroups.com
If your channels get full, it means your workers aren't draining them fast enough; you should spin up more workers. At some point you also need to increase the channel capacity to smooth out bumps, but it only needs to be about the same as the number of requests per second; it's 100 by default.
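That capacity can be raised globally, and Channels 1.x also accepts per-pattern overrides through the channel_capacity option. A sketch of the asgi_redis CONFIG block; all numbers here are illustrative, not recommendations from this thread:

```python
# CONFIG fragment for asgi_redis in Channels 1.x (illustrative values).
CONFIG = {
    "hosts": [("localhost", 6379)],
    "capacity": 200,  # default per-channel capacity (100 if omitted)
    "channel_capacity": {
        "http.request": 500,    # absorb bursts of incoming requests
        "websocket.send*": 50,  # per-socket send buffers can stay small
    },
}
```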

Andrew
