Hi again,
I can run DIRAC perfectly on a VM created on our data server (Scalable II / 2x16 cores / 50 GB RAM) and on my home computer (2x2686v4 / 128 GB RAM), but we get a segmentation fault on our cluster (4x2x40 cores / 1 TB RAM), which is driving me nuts. It's 4 nodes in a Slurm cluster. Whether I run the calculation directly on an individual node or submit it through Slurm from the master node, I get the same fault.
I have found that it may be because of the 32-bit installation of DIRAC.
According to the manual, the bottleneck could be 16 GB/core. I don't know exactly how this is counted, but if I run a job on only 1 node (40 cores), then we are talking about ~25 GB/core.
I just realized while writing this that maybe I should run the jobs on at least 80 cores :)
That would be the aim anyway once I move on to real complexes after the evaluation.
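To sanity-check that reasoning, here is a minimal sketch of the per-core arithmetic. The numbers are assumptions on my part: it takes the job's total memory demand to be the full 1 TB (1024 GB) of one node and assumes it is split evenly among the ranks, which is of course a simplification.

```python
# Hypothetical memory-per-core check. Assumes the job needs the whole
# 1024 GB of one node and that memory is shared evenly across ranks.
LIMIT_GB = 16        # the 16 GB/core bottleneck mentioned in the manual
job_mem_gb = 1024    # assumed total memory demand of the job

for cores in (40, 80):
    per_core = job_mem_gb / cores
    status = "ok" if per_core <= LIMIT_GB else "over the limit"
    print(f"{cores} cores: {per_core:.1f} GB/core -> {status}")
# 40 cores: 25.6 GB/core -> over the limit
# 80 cores: 12.8 GB/core -> ok
```

So under these assumptions, spreading the same job over 80 cores would drop the per-rank footprint below the 16 GB mark.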
The other, related question is whether Intel has fixed the "64-bit integer Intel MPI mpi_reduce" problem. It would be nice to know before putting much time into reconfiguring the cluster.
Thank you again,
Best regards,
Szabolcs