Dear ADDA Developers,
Greetings from China! I am a graduate student currently using ADDA to simulate the scattering matrix of randomly oriented superellipsoidal particles. I greatly appreciate your team's development of such a powerful and flexible tool.
Recently, however, I encountered some difficulties. When simulating particles with a radius of approximately 3 µm at a wavelength of 355 nm, the computation has been running for more than 5 days on a server with 76 CPU cores, yet it has not completed. I am unsure whether this is due to limitations in ADDA's applicability to large particles of this shape, or if my simulation parameters may be suboptimal.
To help clarify the issue, I have attached the following files:
- avg_params.dat: my orientation-averaging configuration
- run_adda_test.sh: the shell script used to launch the simulation
- nohup.out: partial output from the running process
I would be truly grateful if you could kindly offer any insights or suggestions regarding the cause of this slowdown or potential improvements to my settings.
Thank you very much for your time and for making ADDA freely available to the scientific community.
Wishing you all the best in your work and life!
Wang Laibin,
University of Science and Technology of China.
Dear ADDA Developers,
Thank you very much for your previous reply.
I have continued running ADDA on our system as per your suggestions. I would like to ask a few further questions regarding performance optimization:
Since ADDA is written mainly in C but also includes some routines in Fortran and C++, I am curious whether there is any noticeable difference in execution speed depending on whether it is compiled primarily as a C, Fortran, or C++ project.
I plan to run ADDA on another supercomputing platform based on Intel processors. In that case, would compiling ADDA with ICC and/or using Intel MPI provide a significant performance advantage compared to GCC and OpenMPI?
Thank you for your time and for developing such a powerful simulation tool.
Best regards,
Laibin,
University of Science and Technology of China.
Since ADDA is written mainly in C but also includes some routines in Fortran and C++, I am curious whether there is any noticeable difference in execution speed depending on whether it is compiled primarily as a C, Fortran, or C++ project.
I do not think that you can actually control that. The corresponding parts are compiled by the corresponding compilers (gcc, gfortran, and g++), and the only choice you have is which compiler is used for linking at the end. But the latter should not make any difference at all, since it internally invokes a dedicated linker (such as ld) anyway; the only difference is in the supplied libraries: whether they are added automatically (e.g., the C standard library when gcc drives the linking) or have to be added manually, as is currently done for the Fortran libraries in the ADDA Makefile.
I plan to run ADDA on another supercomputing platform based on Intel processors. In that case, would compiling ADDA with ICC and/or using Intel MPI provide a significant performance advantage compared to GCC and OpenMPI?
I experimented with that a lot about 10-15 years ago. At first there was indeed some speedup (up to 20%) from using the Intel compilers, but later (at least 10 years ago) gcc evolved significantly (and some optimizations were implemented in ADDA), so in the end I saw no significant difference; sometimes gcc was even marginally faster. Still, the Makefiles include more or less up-to-date optimization flags for both gcc and the Intel compilers, so you can easily try both and benchmark the resulting code. If you do, please share the results with the group.
With regard to Intel MPI: on the one hand, I have never tried it explicitly (only MPICH/MPICH2, OpenMPI, and MS-MPI). On the other hand, switching the MPI implementation is generally easier than switching compilers, so you may try it as well.