There are two plausible scenarios I can think of that would explain what you are seeing.
The first is that the request thread in the mod_wsgi daemon process which is handling the request got stuck for some reason and stopped reading the request input before all of it had been read.
In that case the Apache child worker process will still be trying to proxy the request data across, but eventually it will fill up the available network socket buffer. On Linux systems this buffer is generally in the MBs, but the exact size is dynamic and depends on overall system buffer usage.
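If you want a feel for the default socket buffer sizes on your system, you can query them from Python. This is only an illustration: it checks a TCP socket, whereas mod_wsgi actually uses a UNIX socket between the child process and the daemon, whose defaults may differ, and on Linux the kernel typically reports double the configured size to allow for bookkeeping overhead.

```python
import socket

# Create a TCP socket and query the kernel's default buffer sizes.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.close()

print("default send buffer:", sndbuf, "bytes")
print("default receive buffer:", rcvbuf, "bytes")
```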
Either way, the very large request data you have would still fill up the buffer, and the Apache child process proxying the request would block, unable to write any more data.
When it blocks in this way, it will eventually time out based on the value set by the Timeout directive, which defaults to 5 minutes. In your case that means 5 minutes after it blocks, at which point you would see the error you do.
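For reference, this is the standard Apache timeout, not anything specific to mod_wsgi:

```apache
# httpd.conf: Timeout governs how long Apache will wait on a blocked
# network read/write before giving up. The default of 300 seconds
# (5 minutes) matches the delay being seen.
Timeout 300
```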
If the request thread in the daemon process is blocked and can't recover, it will stay busy, and if this keeps happening you will start to run out of request threads.
If this is what is happening, there are special timeout options for daemon process mode that can be set to recover a process automatically, but I haven't seen anything to suggest your server as a whole is hanging due to request thread exhaustion.
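As a sketch only, these are the sorts of options I mean. The process group name and the values here are made up; check the mod_wsgi documentation for your version before relying on them:

```apache
# request-timeout forcibly restarts the daemon process when requests
# run longer than the limit; inactivity-timeout restarts it when all
# request activity, including reading of request content, stalls.
WSGIDaemonProcess myapp processes=2 threads=15 \
    request-timeout=60 inactivity-timeout=120
```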
The second scenario would, as far as I know, be hard to hit on Linux because of the larger socket buffer sizes, but the fact that you have large request content is one half of what can trigger it.
In this case, if the amount of request content is larger than what can fit in the socket buffer, and the WSGI application generates a response without having read the complete request content, and the response itself is also larger than the socket buffer size, then you can get a situation where the Apache child process proxying the request is blocked because it cannot write any more request content, while the daemon process is also blocked because it cannot write the response. The daemon process cannot write the response because the child process proxying the request content will only start reading the response once it has written all the request content. So both sides block on each other. As before, the child process will time out after 5 minutes and things will then recover.
So is it possible that you have a situation where you are generating a response without having read all request content and the response content size itself is very large?
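If that is what is happening, the usual defensive measure is to make sure the application consumes the request content before generating the response. A minimal sketch follows; the application itself is hypothetical, with only the `wsgi.input`/`CONTENT_LENGTH` handling reflecting the WSGI specification:

```python
import io

def application(environ, start_response):
    # Drain the request body completely before producing the response,
    # so the Apache child process is never left blocked writing request
    # content while also waiting on our response.
    remaining = int(environ.get('CONTENT_LENGTH') or 0)
    stream = environ['wsgi.input']
    while remaining > 0:
        chunk = stream.read(min(65536, remaining))
        if not chunk:
            break
        remaining -= len(chunk)

    body = b'OK'
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]

# Exercise the application with a fake WSGI environ.
payload = b'x' * 200000
environ = {'CONTENT_LENGTH': str(len(payload)),
           'wsgi.input': io.BytesIO(payload)}
status = []
result = application(environ, lambda s, h: status.append(s))
print(status[0], b''.join(result))
```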
I have never seen anyone hit this latter problem on Linux before, but I know on Mac OS X it isn't too hard to achieve, as the default socket buffer size on Mac OS X is only 8KB and not in the MB range as is the case with Linux. The WSGIDaemonProcess directive has options to change the socket buffer size specifically because of the small socket buffer size on Mac OS X.
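For completeness, those options look something like this; the group name and values are illustrative only:

```apache
# Bump up the buffer sizes for the UNIX socket between the Apache
# child processes and the mod_wsgi daemon process, to compensate for
# the small 8KB default on Mac OS X.
WSGIDaemonProcess myapp threads=15 \
    send-buffer-size=524288 receive-buffer-size=524288
```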
Graham