Yes, absolutely: you can and should use multiple cores if you are running this on a cluster. There is documentation on how to do this.
It looks like your scripts (julia script and slurm submission script) are already well set up to use multiple cores.
In short: slurm should request 10 CPUs per task if we want SNaQ to do 10 independent runs each time. Otherwise, if we ask Julia for 10 "workers" but slurm allocated fewer CPUs to that julia session, the workers will have to share those CPUs. For 3 runs, we only need to request 3 CPUs. For 10 runs, we need 10 CPUs for the total computation time to stay close to the time of a single run.
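To illustrate, here is a minimal sketch of what the slurm header could look like for 10 runs; the script name `runsnaq.jl` and the exact `snaq!` call are placeholders for your own setup:

```shell
#!/bin/bash
# Sketch only: request one CPU per independent SNaQ run.
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=10   # match the number of SNaQ runs

# Inside the julia script, start one worker per requested CPU, e.g.:
#   using Distributed; addprocs(10)
#   snaq!(startnet, quartetCF, runs=10, ...)
julia runsnaq.jl
```

With this matching (10 CPUs, 10 workers, 10 runs), the runs proceed in parallel instead of queueing on a shared CPU.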
It looks like your slurm submit script requests 30 CPUs, which is quite a bit more than what you need. Perhaps that pushes your job down the priority list?
In your slurm submit file, I'm worried about the variable "$SLURM_ARRAY_TASK_ID_5runs". I wonder if this will cause an error at the end of snaq, when it redirects its output. Should it be ${SLURM_ARRAY_TASK_ID}_5runs perhaps? I see that this error in
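To show why the braces matter: without them, bash reads the whole string as one variable name. A small sketch (using 3 as an example array index):

```shell
# bash parses $SLURM_ARRAY_TASK_ID_5runs as one variable named
# "SLURM_ARRAY_TASK_ID_5runs", which is unset, so it expands to empty.
SLURM_ARRAY_TASK_ID=3   # what slurm would set for array task 3

echo "$SLURM_ARRAY_TASK_ID_5runs"    # unset variable: prints an empty line
echo "${SLURM_ARRAY_TASK_ID}_5runs"  # braces delimit the name: prints 3_5runs
```

So the unbraced form would silently produce an empty string in the output filename, rather than "3_5runs".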