Hi Manuel,
Many thanks for your answer.
I tried to find the cause of the error and attempted several different solutions, but none of them worked. I also tried cleaning up the cfiles folder before every irace run, but that did not help either.
It is strange to me, because the corresponding "cfiles\c71-33-259406559.stdout" file exists in the folder, contains the correct output, and was never deleted in any way. Moreover, the corresponding stderr file does not report any error.
I tried running the same target-runner on a cluster (single node, multiple cores), and it works perfectly fine. So I suspect there is some other problem, which I cannot pin down, when I run it on my personal computer.
Thank you also for your answer regarding the logout file; I managed to sort it out.
Since I am now working on a computer cluster (SLURM), I have a question regarding using MPI to run irace. Due to the limited time allowed per user on the cluster, I need a high degree of parallelization (i.e., more than the maximum number of CPUs on a single node), so I need to use several compute nodes. The cluster administrator therefore suggested that I use MPI. I managed to follow and run the example written here:
https://docs.alliancecan.ca/wiki/R#Rmpi to test how to work with MPI and Rscript on the cluster, but I do not know how to extend this to running irace.
What I do now to run irace on the cluster with a single node and multiple cores is to create a bash file that calls my R script, in which I run irace from the scenario file, as in the SLURM example in the irace documentation.
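For reference, the submission script I currently use looks roughly like the sketch below (the file name run_irace.R, the module name, and the resource values are just placeholders for my actual setup):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=32        # cores that irace uses in parallel
#SBATCH --time=12:00:00

module load r                     # module name is specific to my cluster

# run_irace.R reads the scenario file and calls irace with
# parallel = 32, matching --cpus-per-task above.
Rscript run_irace.R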
However, I have no idea how to extend this when I want to use MPI across several compute nodes in the cluster. To be honest, I am having trouble understanding the command in "parallel-irace-mpi". Should this command go in the bash submission file or in the specified target runner? Moreover, is "parallel-irace-mpi" the right example for me to follow, or should I use other examples in this case?
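To make the question more concrete, my current guess at the multi-node launch, combining the Rmpi example above with irace's mpi and parallel scenario options, is something like the following; the module names, file names, and resource values are only illustrative, and I am not sure whether this is the right approach at all:

#!/bin/bash
#SBATCH --nodes=4                 # illustrative values only
#SBATCH --ntasks=128              # total cores requested across the nodes
#SBATCH --time=12:00:00

module load openmpi r             # module names are specific to my cluster

# run_irace.R is the same script as before, but with mpi = 1 and
# parallel = 128 set in the scenario file.
# Is launching a single master process like this correct, so that
# irace/Rmpi spawns the workers across the nodes?
mpirun -np 1 Rscript run_irace.R

Please correct me if this is completely off.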
I hope my questions are clear enough. Any feedback from you would be very much appreciated.
Thank you again, and I hope you have a nice holiday season!
Best regards,
Anisha