Sorry for the delay in responding, was travelling and still catching up.
You are going to run up against two possible timeouts, both of which exist to ensure that the web application is still alive and hasn't become blocked.
optparse.make_option('--socket-timeout', type='int', default=60,
metavar='SECONDS', help='Maximum number of seconds allowed '
'to pass before timing out on a read or write operation on '
'a socket and aborting the request. Defaults to 60 seconds.'),
That is, when using mod_wsgi-express, an Apache child worker process will by default wait at most 60 seconds for any data related to a response to be received from the mod_wsgi daemon process. The same timeout also applies where an Apache child worker process is reading or writing data with an HTTP client.
If you have very long running requests you would need to set this higher, but increasing it carries its own dangers, and needing to set it higher generally indicates that what you are trying to do is probably a bad way of doing it.
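For what it is worth, bumping the value is just a matter of passing the option when starting up mod_wsgi-express. Something like the following, where the 300 second value is purely illustrative:

mod_wsgi-express start-server wsgi.py --socket-timeout 300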
In general, you are better off not doing such long running work inside the actual web request. You should offload the processing to a back-end task queue system. The web request then returns immediately once the job has been queued, and the task system picks up the job. The web UI can then poll, or use some other mechanism, to determine when the task is complete and the data is available.
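As a very rough sketch of that shape, using Celery as the task queue and Flask for the web layer (neither is required, they are just stand-ins for whatever you actually use, and the broker URL, routes and job body are all made up for the example):

from celery import Celery
from flask import Flask, jsonify, request

celery_app = Celery('tasks', broker='redis://localhost:6379/0',
                    backend='redis://localhost:6379/0')

flask_app = Flask(__name__)

@celery_app.task
def long_running_job(payload):
    # Stand-in for the slow processing that used to run inside
    # the web request itself.
    return {'items': len(payload or [])}

@flask_app.route('/jobs', methods=['POST'])
def submit_job():
    # Queue the work and return straight away; 202 signals the
    # job was accepted, not completed.
    result = long_running_job.delay(request.get_json())
    return jsonify(task_id=result.id), 202

@flask_app.route('/jobs/<task_id>')
def job_status(task_id):
    # The browser polls this until the task reports as done.
    result = long_running_job.AsyncResult(task_id)
    if result.ready():
        return jsonify(status='done', result=result.get())
    return jsonify(status='pending'), 202

The web request itself then completes almost immediately, so neither of the timeouts being discussed here ever comes into play for it.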
Now although you can set this higher, the next timeout that will cause a problem is:
optparse.make_option('--request-timeout', type='int', default=60,
metavar='SECONDS', help='Maximum number of seconds allowed '
'to pass before the worker process is forcibly shutdown and '
'restarted when a request does not complete in the expected '
'time. In a multi threaded worker, the request time is '
'calculated as an average across all request threads. Defaults '
'to 60 seconds.'),
This timeout is designed to detect blocked requests, or requests which are taking too long, in the actual mod_wsgi daemon process. When this timeout is triggered the mod_wsgi daemon process itself will be restarted to try and recover it. For a single threaded daemon process this will kick in at 60 seconds by default. For a multi threaded daemon process the timeout is actually more dynamic and depends on what other concurrent requests are being handled by the process, allowing more time where there might be parallel requests that would be interrupted if the process was restarted at strictly 60 seconds.
So if increasing --socket-timeout you may also have to increase --request-timeout as well. You definitely will for a single threaded daemon process; it may still be okay for a multi threaded process.
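In other words, if you genuinely do need to allow longer requests, you would likely end up raising both together, again with purely illustrative values:

mod_wsgi-express start-server wsgi.py --socket-timeout 300 --request-timeout 300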
Thinking about it, for a multi threaded daemon process the interaction between this timeout and the socket timeout isn't ideal. Although the request timeout will allow the request to keep running longer in a multi threaded process, the socket timeout on the connection back to the client will already have expired, so the client will already have received a gateway timeout error. I will need to think about that one, but there isn't necessarily a simple answer as to what to do. Simply increasing the default for the socket timeout from 60 to 90 only delays the inevitable.
Graham