CPU usage by workers


qnub

May 8, 2017, 8:57:53 AM5/8/17
to Django users
We run a cluster of 3 Docker containers (on separate machines), each with daphne and workers started with --threads=4. Each container consumes about 15% of the machine's CPU right after start. It then consumes an additional 4-5% of CPU per new connection and doesn't seem to free those resources when the user disconnects (we do remove channels from groups on WS disconnect)…

I've decreased the number of worker threads to 1, and now a container uses 3-5% CPU after start and 1-2% per new user. It also seems to start freeing CPU after a user disconnects once it reaches about 10% CPU.

We also use asgi_redis.RedisLocalChannelLayer with 3 Redis instances (one per machine).

I'm not sure whether this is correct behaviour or my fault somewhere?
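For clarity, the group cleanup on disconnect amounts to the following. This is a pure-Python sketch of the semantics only; the real project uses Channels 1.x `Group(name).add(...)` and `.discard(message.reply_channel)`, and the group name below is made up:

```python
# Minimal in-memory model of the group bookkeeping our consumers do.
# A plain dict stands in for the channel layer's group storage.
groups = {}

def ws_connect(group, reply_channel):
    # On connect: add this client's reply channel to the group.
    groups.setdefault(group, set()).add(reply_channel)

def ws_disconnect(group, reply_channel):
    # On disconnect: remove it again, so group sends stop targeting
    # a dead channel (this is the cleanup we do on WS disconnect).
    groups.get(group, set()).discard(reply_channel)

ws_connect("notifications", "websocket.send!abc")
ws_disconnect("notifications", "websocket.send!abc")
print(groups["notifications"])  # set() - empty again after disconnect
```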

Andrew Godwin

May 8, 2017, 1:10:28 PM5/8/17
to django...@googlegroups.com
Daphne does tend to idle hot, but this is so it performs better under high load. It's not clear from your description which of the processes is using more CPU as connections come through and then disconnect - is it Daphne or is it runworker?

Andrew

--
You received this message because you are subscribed to the Google Groups "Django users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to django-users+unsubscribe@googlegroups.com.
To post to this group, send email to django...@googlegroups.com.
Visit this group at https://groups.google.com/group/django-users.
To view this discussion on the web visit https://groups.google.com/d/msgid/django-users/964cb488-a5d9-4f6c-890f-3c4dec4d2ec4%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

qnub

May 10, 2017, 7:17:05 AM5/10/17
to Django users
Thanks for the answer!

python manage.py runworker --threads=4

This is the process that consumes the CPU (daphne and the delay process don't seem significant here).

I'll try to describe it step by step:

1. daphne, the workers, and the delay server are started; the workers consume about 10% CPU
2. I open a new tab in Chrome; the worker process's CPU usage rises by about 3-4%, to roughly 13-14%
3. I keep the tab open and do nothing; CPU usage stays the same even with no activity (only the WS connection kept alive)
4. I close the tab; the WS disconnects, but the workers keep using the same 13-14% CPU

When I repeatedly open and close tabs, CPU usage keeps rising and doesn't drop when I close a connection. So after enough iterations CPU usage reaches 96% and the system becomes unresponsive.
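For reference, a plain `ps` one-liner is enough to see which process grows as tabs open and close (nothing here is project-specific; run it before and after each step):

```shell
# List the busiest processes with their CPU share, highest first.
# daphne, runworker and the delay server each show up as their own row.
ps -eo pid,pcpu,comm --sort=-pcpu | head -n 10
```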

I thought the workers should free the resources they used once a channel is closed.

It may also be my fault somewhere (and I'm pretty sure it is), but I'm not sure where to start investigating.



Melvyn Sopacua

May 10, 2017, 7:58:38 AM5/10/17
to django...@googlegroups.com

On Tuesday 09 May 2017 23:21:23 qnub wrote:

> Also it's may be my fault somewhere (and i pretty sure it is). But i
> not sure where to start my investigation.
I would start with strace - a common cause for this is expecting a resource that does not exist and retrying it forever. A filename or network address / port number can be that candle in the dark.

--

Melvyn Sopacua
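A minimal sketch of the strace approach suggested above, assuming strace is installed. Against the real worker you would attach to its PID with `strace -f -p <PID>` instead of launching a command, then look for the same failing call repeating in a tight loop:

```shell
# Trace network syscalls of a short Python command that fails to
# connect, then count the connect() calls in the resulting log.
strace -f -e trace=network -o worker.trace python3 -c "
import socket
s = socket.socket()
try:
    s.connect(('127.0.0.1', 65533))  # assume nothing is listening here
except OSError:
    pass
"
grep -c 'connect(' worker.trace
```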

qnub

May 15, 2017, 10:37:59 AM5/15/17
to Django users
Thanks for the suggestions, but I found nothing suspicious.

But I installed pyinotify and switched to running the workers as separate processes manually instead of using the --threads option.
I'm not sure which of these changes helped more, but CPU usage now floats around 15-25%.
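For reference, "separate processes" means roughly this kind of process-manager fragment (supervisord syntax; the program name is made up, and any process manager would do):

```ini
[program:channels-worker]
command=python manage.py runworker
numprocs=4
process_name=%(program_name)s-%(process_num)d
autorestart=true
```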

qnub

May 25, 2017, 3:10:14 AM5/25/17
to Django users
For those who have the same CPU usage issue: my mistake was using RedisLocalChannelLayer in combination with the delay server.

It looks like many delayed messages just hang on the nodes. I thought they would be executed on the same node, but it seems some were consumed while others hung.
Then I tried running the delay worker on a single node, hoping it would consume the delayed messages from all the other nodes, but that didn't work either.
Then I switched that node from RedisLocalChannelLayer to RedisChannelLayer, and it seems all nodes stopped honouring the locality setting and consumed all the pending messages. CPU usage also dropped to 0-1%.

So I just switched all nodes to RedisChannelLayer.
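Concretely, the fix is the backend name in `CHANNEL_LAYERS` in settings.py; the host addresses and routing path below are placeholders, not our real values:

```python
# settings.py - switch from asgi_redis.RedisLocalChannelLayer to
# asgi_redis.RedisChannelLayer so every node consumes from the shared
# layer instead of preferring its local Redis instance.
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [
                "redis://10.0.0.1:6379",  # placeholder addresses,
                "redis://10.0.0.2:6379",  # one Redis per machine
                "redis://10.0.0.3:6379",
            ],
        },
        # Hypothetical routing module path:
        "ROUTING": "myproject.routing.channel_routing",
    },
}
```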