Parallelism in Cantera 3.0

yiru wang

Jul 19, 2024, 3:55:32 PM
to Cantera Users' Group
In Cantera 2.5 and earlier versions, I parallelized my computations with Python's multiprocessing library; each parallel task occupied one logical core, which is how I got the speedup. After recently upgrading to 3.0, I noticed that even a serial program pushes total CPU usage above 70%, with every logical core busy. The computation is correspondingly faster, approaching the efficiency of my parallel runs (after all, the CPU is working at full capacity).
Given this, I am curious: does the solver in 3.0 now perform this parallel computation internally, in the C++ code?
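The multiprocessing pattern described above can be sketched as follows. The worker below is a hypothetical placeholder (the function names `run_case` and `run_all` are not from the original post); in a real run the worker would build a Cantera `Solution`, a reactor, and a `ReactorNet` and advance it:

```python
import multiprocessing as mp

def run_case(T0):
    """Placeholder worker: a real version would construct a ct.Solution,
    an IdealGasReactor at initial temperature T0, and a ReactorNet here,
    then advance it and return the quantity of interest."""
    return T0 + 1.0  # stand-in for a simulation result

def run_all(temps):
    # One worker process per case, capped at the number of logical cores;
    # each process occupies roughly one core, as described above.
    with mp.Pool(processes=min(len(temps), mp.cpu_count())) as pool:
        return pool.map(run_case, temps)
```

With this layout, each initial condition in `temps` is solved independently in its own process, which is why pre-3.0 the per-process CPU usage stayed near a single core.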

Bryan Weber

Jul 21, 2024, 8:08:28 PM
to Cantera Users' Group
Hi,

Whether you will see Cantera use multiple cores for a single solve depends on a few things. First, as far as I know, only the ReactorNet-compatible classes are able to use multiple cores, because they are linked to SUNDIALS (CVODES) which can be linked to parallel BLAS/LAPACK implementations that use multiple threads. The solvers for 1-D flames are not linked to libraries that use multiple threads, if I recall correctly.

Second, it depends on how you installed Cantera; I don't recall exactly which platforms support which libraries. I feel fairly certain that with Anaconda, SUNDIALS is linked to MKL or OpenBLAS on Linux and to Accelerate on macOS, all of which I believe use multiple threads to solve the linear algebra problems. If you install Cantera on Windows or from PyPI, I'm not sure whether those capabilities are enabled.

Lastly, if you're building from source on macOS, we do link to the Accelerate library automatically, as I mentioned. On other platforms, you usually need to specify the correct BLAS/LAPACK yourself via configuration options.

Hope that helps!
Best,
Bryan