Hi all,
I'm having trouble figuring out how many hours we should budget for nested sampling runs. In my first trial, covering four combinations of speciation and clock models, I ran 1 million generations with sub-sampling set to 10,000. On average it took about 30 hours to complete a single particle. However, the calculation showed that I need 450 particles to achieve a standard deviation of less than 2.
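For reference, here is roughly how the 450-particle figure falls out, assuming the usual nested-sampling relation SD ≈ sqrt(H/N) (where H is the information estimate the run reports; the H value below is just back-solved from my own numbers, so treat it as illustrative):

```python
import math

def particles_needed(H, target_sd):
    """Particles N required so that sqrt(H / N) < target_sd."""
    return math.ceil(H / target_sd**2)

# Back-solving from my numbers: 450 particles for SD < 2
# implies an information estimate H of roughly 450 * 2**2 = 1800 nats.
H_estimate = 1800  # illustrative; use the H reported by your own run
print(particles_needed(H_estimate, target_sd=2.0))  # -> 450
```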
I understand that we could reduce the runtime by using multiple threads. My question is: would 4 GPUs and 4 CPUs be enough for 7 particle runs (400 GB memory)? Is there any way to reduce the memory usage, as this keeps my submitted job pending in a long queue.
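To show the scale of the problem, here is my back-of-envelope wall-clock estimate under parallel particle runs, assuming runtime scales linearly with particle count (as my 30-hours-per-particle timing suggests) and that particles can be run concurrently and combined afterwards; that last part is my understanding of how nested sampling parallelises, so please correct me if it's wrong. The `workers` and `hours_per_particle` values are just my numbers, not package parameters:

```python
import math

def wall_clock_hours(particles, workers, hours_per_particle=30):
    """Total wall-clock time when `workers` particles run concurrently."""
    waves = math.ceil(particles / workers)  # sequential batches of particles
    return waves * hours_per_particle

# 450 particles on 4 concurrent workers: 3390 hours (~141 days),
# so far more parallelism (or fewer particles) seems necessary.
print(wall_clock_hours(450, workers=4))
# 450 particles on 64 concurrent workers: 240 hours (~10 days).
print(wall_clock_hours(450, workers=64))
```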
Any suggestions/experiences are welcome. Many thanks.