Hi:
I think it is safe to perform connection acceptance and epoll()-oriented TCP socket
transactions in separate threads/tasks if you adhere to the following rules.
My server follows these rules and works seamlessly, with one exception. If it calls the
Linux close() function on a TCP socket descriptor after uv_poll_stop() and uv_close() have
succeeded for the corresponding uv_poll_t connection handle, internal Libuv structures are
intermittently corrupted after a long period of successfully making and releasing many TCP
connections. This implies that Libuv closes the TCP socket descriptors itself, because my
server never runs out of them no matter how long it runs.
RULES
-----
* There is a separate uv_loop_t loop structure for accepting incoming connections and one for epoll() I/O events.
In my case they are declared as follows (a fuller skeleton of this arrangement appears after this list):
uv_loop_t Connect_Loop; // Libuv loop for incoming connections.
static uv_loop_t Poll_Loop; // Libuv loop for reading incoming network data.
* Each loop structure is owned by a separate task and is accessed only by the task that owns it.
In my case the main() task/thread owns Connect_Loop and the IO_Task() thread/task owns Poll_Loop.
* Only the task/thread which owns the incoming-connection-acceptance loop calls connection-management
Libuv API routines, such as uv_accept() and uv_close(), and accesses the uv_tcp_t connection
handles allocated during connection acceptance.
In my case the main() task/thread follows this rule and owns Connect_Loop.
* Only the task/thread which owns the epoll() I/O event loop calls polling-related Libuv API routines,
such as uv_poll_start() and uv_poll_stop(), and accesses the uv_poll_t connection handles allocated
during polling initiation.
In my case the IO_Task() thread/task follows this rule and owns Poll_Loop.
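To make the ownership rules above concrete, here is a stripped-down skeleton of the
two-loop arrangement. The queue that hands accepted descriptors from main() to IO_Task()
is only indicated in comments, and the names on_connection(), on_wakeup(), Poll_Wakeup
and the port number are illustrative, not taken from my actual code:

    #include <stdlib.h>
    #include <uv.h>

    uv_loop_t         Connect_Loop;  /* Libuv loop for incoming connections, owned by main()  */
    static uv_loop_t  Poll_Loop;     /* Libuv loop for network I/O events, owned by IO_Task() */
    static uv_async_t Poll_Wakeup;   /* keeps Poll_Loop alive and lets main() wake IO_Task()  */

    static void on_wakeup(uv_async_t *handle) {
        /* Runs on the IO_Task() thread. Drain the hand-off queue here and call
         * uv_poll_init_socket()/uv_poll_start() for each new descriptor; only
         * this thread ever touches uv_poll_t handles or Poll_Loop.            */
    }

    static void IO_Task(void *arg) {
        uv_run(&Poll_Loop, UV_RUN_DEFAULT);      /* only IO_Task() runs Poll_Loop */
    }

    static void on_connection(uv_stream_t *server, int status) {
        /* Runs on the main() thread, which owns Connect_Loop. */
        if (status < 0)
            return;
        uv_tcp_t *client = malloc(sizeof *client);
        uv_tcp_init(&Connect_Loop, client);
        if (uv_accept(server, (uv_stream_t *) client) == 0) {
            /* Enqueue the accepted descriptor for IO_Task(), then wake it up.
             * uv_async_send() is the one Libuv call that is documented as safe
             * to make from a thread that does not own the target loop.        */
            uv_async_send(&Poll_Wakeup);
        }
    }

    int main(void) {
        uv_loop_init(&Connect_Loop);
        uv_loop_init(&Poll_Loop);
        uv_async_init(&Poll_Loop, &Poll_Wakeup, on_wakeup);

        uv_thread_t io_thread;
        uv_thread_create(&io_thread, IO_Task, NULL);

        uv_tcp_t server;
        struct sockaddr_in addr;
        uv_tcp_init(&Connect_Loop, &server);
        uv_ip4_addr("0.0.0.0", 7000, &addr);
        uv_tcp_bind(&server, (const struct sockaddr *) &addr, 0);
        uv_listen((uv_stream_t *) &server, 128, on_connection);

        uv_run(&Connect_Loop, UV_RUN_DEFAULT);   /* only main() runs Connect_Loop */
        return 0;
    }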
Best Regards,
Paul R.