Hi,
Unfortunately, there is no way to do this (at least not in a useful fashion). You can technically run multiple simulations at the same time on one GPU if you simply open two different command lines/terminals and launch the scripts simultaneously. But the time per simulation increases in proportion, so it ends up being a wash. If one simulation takes 5 minutes to run, running 2 simulations back to back takes 5 minutes each, or 10 minutes total. Run the 2 at the same time, and each will take about 10 minutes (probably slightly longer, since the GPU has to do some extra coordination of resources). This is true even for very small systems: while you can double up on the memory, the GPU time-slices the computation between the two processes, trading back and forth, so there is no time savings. You're better off just queueing simulations as normal.
You might be able to get some savings by reusing things like the demag kernel (mumax3 tries to cache this automatically, so it is probably already being done), or by loading an initial configuration from an OVF file instead of relaxing manually every run, if that is how you're obtaining particular initial configurations. But that's about it.
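For the OVF approach, a rough sketch of what I mean (the grid, material parameters, and filenames here are just placeholders, adapt them to your own setup):

```
// relax.mx3 -- run this once to produce the relaxed state
setgridsize(128, 128, 1)
setcellsize(4e-9, 4e-9, 4e-9)
Msat = 800e3
Aex  = 13e-12
m = randomMag()
relax()
saveas(m, "relaxed")   // writes relaxed.ovf to the output directory

// production.mx3 -- every subsequent run skips the relax step
// (same grid, cell size, and material definitions as relax.mx3)
m.LoadFile("relaxed.ovf")
// ... continue with the actual simulation from here
```

The one-time relax cost is paid once instead of once per run, which is usually the only real saving available here.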
Best,
Josh L.