2/ At least on Linux, asyncio uses epoll, which is a structure owned by the kernel and identified by an fd. When forking, the child inherits this fd. This means that the set of events watched by the loop (for instance "a read is ready on socket X") is registered in the kernel and shared by both processes.
If one of the processes opens a socket and watches an event on it, the other process will receive the notification... but it knows nothing about this new file, and may try to read from the wrong one (or from a non-existent fd).
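You can see this sharing with plain epoll, no asyncio involved. A minimal sketch (Linux only, since it uses `os.fork` and `select.epoll`): the child registers an fd in the epoll it inherited, and the parent then receives an event for an fd it never registered itself.

```python
import os
import select

r, w = os.pipe()        # pipe fds exist in both processes after fork
ep = select.epoll()     # the epoll fd is inherited by the child too

pid = os.fork()
if pid == 0:
    # Child: register the pipe's read end in the *shared* epoll,
    # then make it readable.
    ep.register(r, select.EPOLLIN)
    os.write(w, b"x")
    os._exit(0)

os.waitpid(pid, 0)
# Parent: its epoll now reports an event it never asked for,
# because the interest list lives in the kernel, not the process.
events = ep.poll(timeout=1)
print(events)  # e.g. [(r, select.EPOLLIN)]
```

This is exactly what happens to the loop's selector after a fork: one process mutates the watch list, both processes get the wakeups.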
Anyway, even if the loop is not running when the fork happens, there is still a problem that requires monkey-patching.
If you choose to close the parent's loop in the child right after the fork, you will only avoid the 1st problem: when the loop is closed and disposed, the cleanup will unregister all watched events, and since the epoll is shared, this affects the parent's loop as well (the 2nd problem). You *must* monkey-patch the loop's selector.
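The patch can be as small as swapping in a fresh selector before closing. A hypothetical helper for the child, relying on the private `_selector` attribute of CPython's selector event loop (an implementation detail, so brittle by nature):

```python
import asyncio
import selectors

def close_inherited_loop(loop):
    """Hypothetical post-fork cleanup for the child process.

    Replacing the selector before close() means the cleanup's
    unregister calls hit a fresh, empty epoll instead of the one
    shared with the parent.
    """
    # Abandon the inherited selector *without* unregistering anything:
    # closing the child's copy of the epoll fd is harmless to the
    # parent, but epoll_ctl(DEL) calls would not be.
    loop._selector = selectors.DefaultSelector()
    loop.close()
    # Give the child its own brand-new loop.
    return asyncio.new_event_loop()
```

`loop.close()` still tears down the loop's internal self-pipe, but those unregister calls now land on the empty replacement selector, so the parent's watch list is untouched.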
I've got a project that does exactly that, and it's quite brittle, as I have to take care of the loop's global state when forking. I am considering replacing this fragile implementation with one that starts a fresh Python process. The downside of that strategy is that spawning a process takes more time (interpreter initialization is quite slow in Python) and I will need an RPC mechanism to send data from the parent.
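For reference, the "fresh process" strategy can be sketched with the stdlib alone: `multiprocessing` with the "spawn" start method starts a clean interpreter (no inherited loop or epoll state), and a `Pipe` plays the role of the RPC. The `handle` coroutine here is a stand-in for the real async work:

```python
import asyncio
import multiprocessing as mp

async def handle(data):
    # Stand-in for the real async work done in the child.
    await asyncio.sleep(0)
    return data.upper()

def worker(conn):
    # The spawned child has no inherited loop state: asyncio.run()
    # builds a brand-new event loop in this fresh interpreter.
    result = asyncio.run(handle(conn.recv()))
    conn.send(result)
    conn.close()

def run_in_fresh_process(data):
    # "spawn" starts a clean interpreter instead of forking, so no
    # epoll fd or selector state is shared with the parent.
    ctx = mp.get_context("spawn")
    parent_conn, child_conn = ctx.Pipe()
    p = ctx.Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send(data)        # the "RPC": ship the input over a pipe
    result = parent_conn.recv()   # ...and read the answer back
    p.join()
    return result

if __name__ == "__main__":
    print(run_in_fresh_process("hello"))  # prints HELLO
```

The price is exactly the one mentioned above: a full interpreter start per process, plus serialization of everything crossing the pipe.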
Maybe there are other problems I'm not aware of, but as I said, I fork a process with a running loop in something used in prod, and it works fine, so in practice it's hard but doable.