Since an asyncio TCP listener always processes data from all of the connections it spawns in the same event loop, this would be difficult. You could accept the socket yourself and then fork(), create a fresh asyncio event loop, and call loop.connect_accepted_socket() to convert the accepted socket into an asyncio transport for an SSHServerConnection, but it’s not really clear what benefit AsyncSSH is providing to you at that point. You’d probably be better off just using a standard SFTP server like the one in OpenSSH.
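To make the connect_accepted_socket() part concrete, here’s a minimal stand-alone sketch. It uses a socketpair() in place of a real listener’s accept() and a toy EchoProtocol in place of AsyncSSH’s connection object (and it skips the fork() step entirely), so it only illustrates the “wrap an already-accepted socket in a transport on a fresh loop” mechanic, not a working SSH server:

```python
import asyncio
import socket

class EchoProtocol(asyncio.Protocol):
    """Toy stand-in for the AsyncSSH connection the child process would run."""

    def connection_made(self, transport):
        self.transport = transport

    def data_received(self, data):
        # Echo the bytes back so the round trip is observable.
        self.transport.write(data)

async def handle_accepted(sock):
    # Convert a socket that was accepted elsewhere (e.g. before a fork())
    # into an asyncio transport on the current event loop.
    loop = asyncio.get_running_loop()
    transport, _protocol = await loop.connect_accepted_socket(EchoProtocol, sock)
    return transport

async def main():
    # socketpair() stands in for listener.accept(); no real network needed.
    server_side, client_side = socket.socketpair()
    transport = await handle_accepted(server_side)
    client_side.sendall(b'ping')
    await asyncio.sleep(0.1)          # let the event loop run the echo
    data = client_side.recv(4)
    transport.close()
    client_side.close()
    return data

print(asyncio.run(main()))            # b'ping'
```

In the fork-per-connection scheme described above, the child process would pass AsyncSSH’s connection factory rather than EchoProtocol here.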
The alternative would be to let the files all be owned by the UID/GID running the Python asyncio event loop, but provide your own implementation of open() that enforces permissions. If you really don’t want the users to be system users, you might need to do something like that, as you can’t call setuid() or setgid() unless the IDs you pass to them are real system users/groups. Even if you are mounting something off of a remote system with NFS, the UIDs and GIDs on that NFS server need to exist on the machine mounting the volume if you want proper permission enforcement based on UID/GID.
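A rough sketch of what such application-level enforcement could look like, kept as plain functions so it stands alone (in AsyncSSH you would put a check like this into an SFTPServer subclass instead). The ACL table, user names, and paths here are all made up for illustration:

```python
import os

# Hypothetical table: virtual user -> (subtree they may touch, writes allowed?)
ACLS = {
    'alice': ('/srv/data/alice', True),
    'bob':   ('/srv/data/shared', False),
}

def check_access(user, path, writing):
    """Return True if `user` may open `path`, optionally for writing."""
    try:
        subtree, may_write = ACLS[user]
    except KeyError:
        return False
    # Normalize so '..' components can't escape the allowed subtree.
    real = os.path.normpath(path)
    inside = real == subtree or real.startswith(subtree + os.sep)
    return inside and (may_write or not writing)

def guarded_open(user, path, mode='r'):
    """open() wrapper enforcing per-virtual-user permissions in Python."""
    writing = any(c in mode for c in 'wa+x')
    if not check_access(user, path, writing):
        raise PermissionError(f'{user} may not open {path} with mode {mode!r}')
    return open(path, mode)
```

Since every file on disk is owned by the one UID running the event loop, checks like these are the only thing standing between users, so the path normalization step matters: a request for '/srv/data/shared/../alice/secret' must not pass bob’s check.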
One other thing that AsyncSSH supports is the ability to do a chroot() based on the user, so they can only see a specific subtree of files. That way, even if the UID running the Python process is the same for everyone, each user connecting in via SFTP would only be able to see “their own” files. Would that work for you? If so, there’s an example of this at
https://asyncssh.readthedocs.io/en/latest/#sftp-server. See specifically the second example there, which sets the “chroot” argument in MySFTPServer.
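For intuition, what the chroot mapping in that example amounts to is roughly the following: every client-visible path gets joined onto a per-user root, so each user sees only their own subtree even though one UID owns all the files. This is a simplified stand-alone sketch, not AsyncSSH’s actual implementation, and the /srv/sftp/<username> layout is an assumption:

```python
import posixpath

def map_path(username, client_path):
    """Map an SFTP client path into that user's private root directory."""
    root = '/srv/sftp/' + username
    # Anchor and normalize first so '..' components can't climb out of root.
    normalized = posixpath.normpath('/' + client_path.lstrip('/'))
    return root + normalized
```

So a request for 'docs/a.txt' from alice resolves to '/srv/sftp/alice/docs/a.txt', and even '../../etc/passwd' stays trapped inside her root.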