Different execution times in different nodes


José Manuel Abuín Mosquera

unread,
Jan 20, 2015, 7:20:02 AM1/20/15
to mp...@googlegroups.com
Hello,

I am a PhD student at the University of Santiago de Compostela, in Spain. I am trying out DataMPI and I have a question.

I am launching a Java program on a cluster with 4 computing nodes, each with 8 cores. The command is:

mpidrun -mode COMMON -O 31 -A 1 -jar DataMPIProlnat.jar DataMPIProlnat /Datos/Wikipedias/WikipediaTest.txt /Datos/Wikipedias/saidaDataMPI

Actually, I don't need an aggregator task, but if I set it to 0, the program doesn't work. According to the output produced by the program, the process distribution is something like this:


node1: Rank 0 to 7
node2: Rank 8 to 15
node3: Rank 16 to 23
node4: Rank 24 to 31

This distribution can change between runs, but processes 0 to 7 always run together on the same node, as do 8 to 15, and so on.

The thing is, I've noticed that the node running processes 0 to 7 is slower than the others. I have run several executions to check whether it is a problem with one particular node, but it is not: the slowdown always follows whichever node processes 0 to 7 are running on. If I run top on that node, I can see that process 0 is using about 300% of the CPU.
As a consequence, the other processes on that node run more slowly than the processes on the remaining nodes, and the execution time of the whole program is determined by the processes running on that node.

So, my question is: is this because the process with rank 0 coordinates the rest of the processes and spends CPU time doing so?
Or is it perhaps because I am not using the aggregator, even though I request one in the mpidrun command?

As an example, some execution times:
Rank: 26. Time: 291.087 seconds
Rank: 28. Time: 346.387 seconds
Rank: 10. Time: 374.219 seconds
Rank: 27. Time: 373.876 seconds
Rank: 14. Time: 384.833 seconds
Rank: 8. Time: 388.848 seconds
Rank: 11. Time: 389.071 seconds
Rank: 20. Time: 407.365 seconds
Rank: 12. Time: 411.784 seconds
Rank: 23. Time: 409.859 seconds
Rank: 25. Time: 409.053 seconds
Rank: 17. Time: 419.079 seconds
Rank: 13. Time: 423.223 seconds
Rank: 30. Time: 422.54 seconds
Rank: 24. Time: 425.876 seconds
Rank: 9. Time: 443.026 seconds
Rank: 15. Time: 440.794 seconds
Rank: 29. Time: 442.323 seconds
Rank: 22. Time: 442.808 seconds
Rank: 16. Time: 453.815 seconds
Rank: 19. Time: 463.651 seconds
Rank: 21. Time: 463.868 seconds
Rank: 18. Time: 470.588 seconds
Rank: 4. Time: 496.915 seconds
Rank: 5. Time: 508.04 seconds
Rank: 6. Time: 515.287 seconds
Rank: 1. Time: 518.023 seconds
Rank: 2. Time: 533.121 seconds
Rank: 3. Time: 542.59 seconds
Rank: 7. Time: 583.004 seconds
Rank: 0. Time: 815.559 seconds
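As a quick sanity check (a minimal standalone sketch, not part of the original program), the reported per-rank times can be averaged per node, grouping ranks 0-7, 8-15, 16-23 and 24-31 as in the distribution above; rank 31 did not report a time and is skipped:

```java
import java.util.Arrays;

public class NodeAverages {
    // Per-rank times (seconds) as reported above, indexed by rank 0..31.
    // Rank 31 printed no time in the listing, so its slot stays NaN.
    static final double[] TIMES = new double[32];
    static {
        Arrays.fill(TIMES, Double.NaN);
        double[][] reported = {
            {26, 291.087}, {28, 346.387}, {10, 374.219}, {27, 373.876},
            {14, 384.833}, { 8, 388.848}, {11, 389.071}, {20, 407.365},
            {12, 411.784}, {23, 409.859}, {25, 409.053}, {17, 419.079},
            {13, 423.223}, {30, 422.54},  {24, 425.876}, { 9, 443.026},
            {15, 440.794}, {29, 442.323}, {22, 442.808}, {16, 453.815},
            {19, 463.651}, {21, 463.868}, {18, 470.588}, { 4, 496.915},
            { 5, 508.04},  { 6, 515.287}, { 1, 518.023}, { 2, 533.121},
            { 3, 542.59},  { 7, 583.004}, { 0, 815.559}
        };
        for (double[] r : reported) TIMES[(int) r[0]] = r[1];
    }

    // Average time per node: node i hosts ranks 8*i .. 8*i+7.
    public static double[] nodeAverages() {
        double[] avg = new double[4];
        for (int node = 0; node < 4; node++) {
            double sum = 0;
            int n = 0;
            for (int rank = node * 8; rank < node * 8 + 8; rank++) {
                if (!Double.isNaN(TIMES[rank])) {
                    sum += TIMES[rank];
                    n++;
                }
            }
            avg[node] = sum / n;
        }
        return avg;
    }

    public static void main(String[] args) {
        double[] avg = nodeAverages();
        for (int node = 0; node < 4; node++) {
            System.out.printf("node%d (ranks %d-%d): avg %.1f s%n",
                    node + 1, node * 8, node * 8 + 7, avg[node]);
        }
    }
}
```

This makes the imbalance explicit: node1 (ranks 0-7) averages roughly 564 s while the other three nodes sit in the 390-440 s range.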

As you can see, the execution times of ranks 0 to 7 are considerably higher than the others, and rank 0's is higher still. For a program that takes a few hours to run, this becomes a real problem.

Thank you very much!!

MPI-D

unread,
Jan 23, 2015, 1:15:59 PM1/23/15
to mp...@googlegroups.com
Thanks a lot for your feedback. Could you please share a reproducer (application + sample data set) with us? We will try it and get back to you.

Thanks,
DataMPI Team

José Manuel Abuín Mosquera

unread,
Jan 27, 2015, 4:18:34 AM1/27/15
to mp...@googlegroups.com
Hello,

Finally, I think I have figured out the problem (please correct me if I am wrong in any of the following).

In our program we don't need to do anything with Communicator A, but if we launch the program with the parameter -A 0, it never finishes. So we set the parameter to -A 1 and leave the Communicator A branch of the code empty:

else if (MPI_D.COMM_BIPARTITE_A != null) {
    // A task: intentionally left empty, we don't consume anything here
}


Even with the branch empty, the threads used for communication between the O and A communicators consume computing time, despite the fact that we don't need them.

So the final conclusion is: communication between O and A processes is improved by using MPI, but I have no way to launch a program without A tasks. (I know this makes little sense when using the library, since this communication is precisely one of DataMPI's improvements.) So the question is: is it possible to launch a program without A tasks?

Thank you very much for your time, and please correct me if I am wrong on any of these points.

MPI-D

unread,
Jan 27, 2015, 11:43:18 PM1/27/15
to José Manuel Abuín Mosquera, mp...@googlegroups.com
Thanks for the exploration here. It seems you have answered your own questions well, even though we don't have your reproducer and application.

Regarding no-A jobs, I think you just need to launch a traditional parallel job, right? You have explained the reasons as well. :-)

But we will think more about your suggestion, and we may add support for this in our next release.

Thanks
DataMPI Team