The instance.list command is the closest equivalent to docker ps; the general mapping is "show me the running container instances." But I think what you are asking about is more of a general list of container processes, and we don't have a good command for that. The key difference is that a user running a docker container is still running via docker (and the pid is kept track of), but for singularity the final execv is to the run/shell command, so if you looked for a singularity process, you wouldn't find it. If you tried something like:
pgrep -fa singularity
you would generally get processes with singularity in the name, but that won't work for shell / run / anything else, for the reason I just described. For example, I just tried this, and my shell inside the container just shows up on my system as running bash. For the same reason, you can't peek into /usr/local/var/singularity/mnt.
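To see the effect concretely, here is a minimal Python sketch (Linux-only, since it reads /proc) of the same situation: a shell that exec's into another command no longer shows up under its launcher's name, just as a singularity shell shows up as bash. The `sh -c 'exec sleep 10'` child here is a stand-in for the singularity launcher, not anything singularity-specific.

```python
import subprocess
import time

# Launch a shell that immediately exec's into `sleep`, replacing its own
# process image -- analogous to singularity exec'ing into the container's
# run/shell process.
p = subprocess.Popen(["sh", "-c", "exec sleep 10"])
time.sleep(0.5)  # give the kernel a moment to perform the exec

# /proc/<pid>/cmdline is NUL-delimited; join the args with spaces.
with open(f"/proc/{p.pid}/cmdline", "rb") as f:
    cmdline = f.read().replace(b"\0", b" ").decode().strip()

print(cmdline)  # "sleep 10" -- pgrep -fa sh would not find this process

p.terminate()
p.wait()
```

The same thing happens with singularity: by the time you look, the process is whatever the container is running, not singularity itself.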
If you look in the session directory (usually tmp) you can get a hint of current (and sometimes non cleaned up) sessions:
ls -d /tmp/.* | grep "singularity-runtime"
But that's not so useful.
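If you did want to script that check anyway, a small sketch like the following would do it. The `/tmp/.*singularity-runtime*` pattern is taken from the command above; if your site sets a different TMPDIR or session location, the path is an assumption you would need to adjust.

```python
import glob

# Look for singularity runtime session directories (current or not
# cleaned up) under /tmp. Hidden entries only match because the glob
# pattern itself starts the component with a dot.
sessions = glob.glob("/tmp/.*singularity-runtime*")

for path in sessions:
    print(path)
```

On a machine with nothing running, this simply prints nothing, which is part of why it's not so useful as a ps replacement.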
So I'm not sure I can offer a good solution; maybe others can comment. We don't have a central orchestrator keeping track of all container processes, beyond the ones that are, by definition, running as instances. When you have a bunch of container instances running, each has an associated namespace / pid, and when you run as sudo you can do things like list them, stop all of them, etc. If you are interested in singularity being run on a cluster resource, you could use Lmod to keep track of module loads, and then look for singularity commands used in batch scripts. That would at least give you a way to identify running jobs (with singularity) on your resource, to stop them if needed.
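The batch-script idea could be sketched roughly like this. Everything here is hypothetical for illustration: the directory of scripts, the `find_singularity_jobs` helper, and the assumption that a plain `singularity ...` (or `srun singularity ...`) line at the start of a line is what you're after.

```python
import os
import re
import tempfile

def find_singularity_jobs(script_dir):
    """Scan a directory of batch scripts for lines that invoke singularity."""
    hits = []
    pattern = re.compile(r"^\s*(singularity|srun\s+singularity)\b")
    for name in sorted(os.listdir(script_dir)):
        path = os.path.join(script_dir, name)
        if not os.path.isfile(path):
            continue
        with open(path, errors="ignore") as f:
            for lineno, line in enumerate(f, 1):
                if pattern.match(line):
                    hits.append((name, lineno, line.strip()))
    return hits

# Tiny demo with a throwaway batch script
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "job.sh"), "w") as f:
    f.write("#!/bin/bash\nsingularity exec img.sif ./analyze\n")

print(find_singularity_jobs(demo))
```

This only tells you which submitted jobs mention singularity, of course, not which container processes are actually alive, so it's a coarse heuristic rather than a real ps.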
Anyone have ideas for how/if we could implement an equivalent ps for a superuser?