How to limit the number of requests Apache forwards to mod_wsgi?

Phivos Stylianides

Nov 18, 2019, 8:58:39 PM
to modwsgi
I have a backend service which exposes a single endpoint that performs some database updates. Each request is expected to take 5-25 seconds on average. Apache/mod_wsgi and the Flask app sit on 2 Elastic Beanstalk instances with 2 CPUs and 8 GB of RAM each. I want to limit the traffic sent to the app to 10-20 requests at any given time per instance. Any additional requests should queue in the backlog, and if they are not picked up within the configured timeout they should be dropped and retried later.

For some reason the number of requests per process goes really high, so the average latency starts to go up and failure rates increase too. I set up recording of `mod_wsgi.process_metrics()["request_count"]` stats and verified it goes from 0 to >180, with `mod_wsgi.process_metrics()["threads"]["request_count"]` ranging from 24 to 65, within less than an hour. I tried reducing Apache's `MaxRequestWorkers` and `ListenBackLog`, as well as WSGIDaemonProcess's `queue-timeout` and `listen-backlog`, but no luck. It's important not to accept a request when maximum capacity is reached and then time out halfway through processing because all threads were too busy. This is a worker service, so it's fine to drop the request as it will be retried later.
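
The recording is essentially a background thread in the app that samples these metrics, roughly like the sketch below (the interval and logging setup are simplified placeholders):

import logging
import os
import threading
import time

import mod_wsgi

logging.basicConfig(level=logging.INFO)  # simplified; real logging goes elsewhere

def _sample_metrics(interval=30):
    # Periodically log this process's mod_wsgi counters.
    while True:
        metrics = mod_wsgi.process_metrics()
        logging.info("pid=%s request_count=%s threads=%s",
                     os.getpid(), metrics["request_count"], metrics["threads"])
        time.sleep(interval)

threading.Thread(target=_sample_metrics, daemon=True).start()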

I'm using Apache 2.4.39 and mod_wsgi 4.6.5. Apache is configured as follows:

<IfModule reqtimeout_module>
    RequestReadTimeout header=15,MinRate=500 body=150,MinRate=500
</IfModule>


TimeOut 150


KeepAlive Off
KeepAliveTimeOut 0


<IfModule mpm_worker_module>
  StartServers           1
  ServerLimit            2
  MinSpareThreads        5
  MaxSpareThreads        10
  ThreadLimit            5
  ThreadsPerChild        5
  ListenBackLog          10
  MaxRequestWorkers      10
  MaxConnectionsPerChild 5000
</IfModule>

mod_wsgi config:

WSGIDaemonProcess wsgi \
  processes=10 \
  threads=5 \
  display-name=%{GROUP} \
  python-home=/opt/python/run/venv/ \
  python-path=/opt/python/current/app user=wsgi group=wsgi \
  home=/opt/python/current/app \
  lang='en_US.UTF-8' \
  locale='en_US.UTF-8' \
  connect-timeout=15 \
  socket-timeout=25 \
  request-timeout=25 \
  deadlock-timeout=120 \
  graceful-timeout=120 \
  restart-interval=0 \
  maximum-requests=200 \
  queue-timeout=15 \
  listen-backlog=10


WSGIProcessGroup wsgi
WSGIApplicationGroup %{GLOBAL}

Any ideas what might do the trick?

Graham Dumpleton

Nov 19, 2019, 2:24:42 AM
to mod...@googlegroups.com

On 19 Nov 2019, at 2:53 am, Phivos Stylianides <stph...@gmail.com> wrote:

I have a backend service which exposes a single endpoint that performs some database updates. Each request is expected to take 5-25 seconds on average. Apache/mod_wsgi and the Flask app sit on 2 Elastic Beanstalk instances with 2 CPUs and 8 GB of RAM each. I want to limit the traffic sent to the app to 10-20 requests at any given time per instance.

Currently you are using:

  processes=10
  threads=5

which means the application can handle up to 50 requests at any one time: up to 5 requests per process, across 10 processes. If you want to cap it at a lower maximum, you need to change those values.
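
For example (numbers purely illustrative), to cap each instance at 10 concurrent requests you could use:

  processes=2
  threads=5

which gives 2 x 5 = 10 request threads per instance; further requests wait in the daemon's listener backlog until a thread frees up or, with queue-timeout set, are rejected once they have waited too long.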

Any additional requests should queue in the backlog, and if they are not picked up within the configured timeout they should be dropped and retried later.

For some reason the number of requests per process goes really high, so the average latency starts to go up and failure rates increase too. I set up recording of `mod_wsgi.process_metrics()["request_count"]` stats and verified it goes from 0 to >180, with `mod_wsgi.process_metrics()["threads"]["request_count"]` ranging from 24 to 65, within less than an hour.

The request_count values, from memory, are an incrementing counter of the number of requests handled over time, so they will keep increasing. They are not a count of the number of currently active requests.

You would need to track len(mod_wsgi.active_requests) on a per-process basis to know how many requests are active at the time of the call.
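
That is, something along these lines, sampled periodically in the same way you are already recording process_metrics() (a minimal sketch, independent of any particular logging setup):

import mod_wsgi

# Number of requests this daemon process is handling right now,
# as opposed to the cumulative request_count counter.
currently_active = len(mod_wsgi.active_requests)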

I tried reducing Apache's `MaxRequestWorkers` and `ListenBackLog`, as well as WSGIDaemonProcess's `queue-timeout` and `listen-backlog`, but no luck. It's important not to accept a request when maximum capacity is reached and then time out halfway through processing because all threads were too busy. This is a worker service, so it's fine to drop the request as it will be retried later.

It is unclear what you want to happen. FWIW, you may want to read:


If you want really fine-grained control over what happens when backlogging occurs, you probably can't use a WSGI server, and should instead write an async HTTP application to handle requests. That way you can hold requests in your own queue if you want to try to limit the number of concurrent database updates.
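
As a very rough sketch of that idea (using aiohttp and an asyncio semaphore purely for illustration; the framework, limit and timeout values are arbitrary choices, not a recommendation):

import asyncio

from aiohttp import web

MAX_CONCURRENT_UPDATES = 10   # illustrative per-instance cap
QUEUE_TIMEOUT = 15            # seconds a request may wait for a slot

semaphore = asyncio.Semaphore(MAX_CONCURRENT_UPDATES)

async def do_database_update(request):
    # Placeholder for the real 5-25 second database update.
    await asyncio.sleep(0)

async def update(request):
    try:
        # Wait up to QUEUE_TIMEOUT seconds for a free slot.
        await asyncio.wait_for(semaphore.acquire(), timeout=QUEUE_TIMEOUT)
    except asyncio.TimeoutError:
        # No slot in time: reject so the caller retries the job later.
        return web.Response(status=503, text="busy, retry later")
    try:
        await do_database_update(request)
        return web.Response(text="ok")
    finally:
        semaphore.release()

app = web.Application()
app.add_routes([web.post("/update", update)])

if __name__ == "__main__":
    web.run_app(app)

With something like this a request either gets a slot within the timeout or is rejected straight away with a 503 so the caller can retry later, which matches the drop-and-retry behaviour you describe.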
