My guess at what's happening in the first case: whenever you fork(), the child gets a copy of the parent's state. This includes any listening sockets, so when you start the WSGI server and then fork, there is a WSGI server listening on a duplicate of the socket in the child process as well as in the parent, along with a copy of the parent's memory space - including a copy of result_queue...
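To see that the child really does inherit a working duplicate of the listening socket, here is a minimal stdlib-only sketch (no gevent, POSIX fork assumed; the port and message are made up for illustration). The parent listens, forks, and deliberately never calls accept() - yet the connection is still served, by the child:

```python
import os
import socket

# Parent creates a listening socket, then forks. The child inherits a
# duplicate of that socket and can accept connections on the same port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))          # 0 = let the OS pick a free port
port = srv.getsockname()[1]
srv.listen(5)

pid = os.fork()
if pid == 0:
    # Child: accept one connection on the inherited socket and tag it.
    conn, _ = srv.accept()
    conn.sendall(b"handled-by-child")
    conn.close()
    os._exit(0)

# Parent: never accepts; instead it connects as a client.
cli = socket.create_connection(("127.0.0.1", port))
data = cli.recv(1024)
cli.close()
os.waitpid(pid, 0)
print(data.decode())   # the child, not the parent, served the connection
```

This is exactly the situation in your first case, except there the parent *would* accept connections if its greenlets were allowed to run.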
Normally, when you open a socket and then fork, both processes share the listening socket and incoming connections are divided between them. However, in the parent you call os.waitpid() after forking. That call blocks the main thread of the parent process, and so blocks every greenlet in the parent from running. The WSGI server in the parent needs to spawn and run greenlets to accept connections, but it can't, so no connections are accepted in the parent - and all the connections from the child instead go to the duplicate listening socket / server in the child process. Presumably the copy of result_queue in the child process is filling up. Then, when the child process exits, os.waitpid() returns and all the greenlets in the parent are unblocked - but no connections were ever accepted by the parent, so the queue in the parent is empty.
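The key point is that os.waitpid() suspends the whole OS thread, not just the calling greenlet - and under gevent all greenlets run on that one thread. A stdlib-only sketch of the blocking behaviour (timings are illustrative):

```python
import os
import time

# os.waitpid() blocks the calling OS thread until the child exits.
# Under gevent, every greenlet runs on that one thread, so while the
# main greenlet sits in waitpid(), no other greenlet - including the
# WSGI server's accept loop - can run at all.
pid = os.fork()
if pid == 0:
    time.sleep(0.5)     # child lives for half a second
    os._exit(0)

start = time.monotonic()
os.waitpid(pid, 0)      # parent's thread is stuck here the whole time
elapsed = time.monotonic() - start
print(round(elapsed, 1))  # roughly the child's lifetime
```

In plain threaded code this blocking is harmless; with gevent's single-threaded event loop it freezes the entire process's cooperative scheduling.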
In the second case, when you start the server in the parent after the fork(), there is no server running in the child, so only the parent can accept connections on the socket. But when the parent calls os.waitpid(), it blocks the parent's main thread, so no greenlets can run and no connections can be accepted in the parent - and so all the connections from the child time out.
Dealing with fork(), subprocesses and signals in gevent is tricky. I think the proper non-blocking substitute for os.waitpid() would be to use libev child watchers. I suggest looking at gipc -
http://gehrcke.de/gipc/ - which should simplify all this for you.
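For a rough idea of what a child watcher buys you: libev reacts to SIGCHLD and reaps the child without ever blocking the loop, so the waiting greenlet yields instead of stalling everything. A stdlib-only approximation of that cooperative wait, polling with os.WNOHANG (in real gevent code you would use gevent.os.waitpid() or gipc rather than this hand-rolled loop, and the sleep would be gevent.sleep()):

```python
import os
import time

# Cooperative-style wait: poll with WNOHANG instead of blocking, and
# yield between polls. This is a plain-stdlib approximation of what a
# libev child watcher achieves (libev gets notified via SIGCHLD, so it
# doesn't even need to poll).
pid = os.fork()
if pid == 0:
    time.sleep(0.2)
    os._exit(7)          # arbitrary exit code for the demo

while True:
    done, status = os.waitpid(pid, os.WNOHANG)  # returns (0, 0) if still running
    if done == pid:
        break
    time.sleep(0.05)     # under gevent: gevent.sleep(), letting other
                         # greenlets (e.g. the WSGI server) run meanwhile

print(os.WEXITSTATUS(status))
```

With gevent.os.waitpid() or gipc you get the same non-blocking behaviour without writing the loop yourself.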