Xyce uses more than 100% of CPU


Mehmet Cirit (Ceridli)

Sep 27, 2022, 5:30:23 PM
to xyce-users
I have a bunch of Xyce jobs, and some of them start using more than 100% of CPU. There are 32 threads on this machine, so they are obviously using more than one CPU. This is Xyce 7.5. I don't see a similar situation on a 16-thread machine.

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND    
13635 mac       20   0 1302436 712148  38236 R 377.2  1.1  54:20.99 Xyce        
13638 mac       20   0 1288536 698268  38236 R 367.9  1.1  54:28.31 Xyce        
13980 mac       20   0 1147936 627160  38236 R 299.7  1.0  35:41.36 Xyce        
14880 mac       20   0  927016 476428  38236 R 255.0  0.7   7:34.34 Xyce        
13929 mac       20   0  892916 442168  38236 R 245.4  0.7  31:55.01 Xyce        
15442 mac       20   0  266148  63060  33168 R  70.9  0.1   0:28.32 Xyce        
15175 mac       20   0  296116  93208  33360 R  67.9  0.1   1:17.69 Xyce        
15152 mac       20   0  310232 109676  35772 R  67.2  0.2   1:25.87 Xyce  

When this happens, some other jobs are starved of CPU time. Is it possible that multithreading is turned on dynamically? Is it possible to turn it off? This machine is running Fedora 28. The other two machines, which do not show this behavior, are running CentOS 7 and Ubuntu 22.


xyce-users

Sep 27, 2022, 5:36:36 PM
to xyce-users
If you are using our precompiled binaries, they have been built against the multi-threaded Intel MKL (Math Kernel Library) libraries, which can indeed use more than one thread whenever the library decides that threading will help.

The MKL libraries use OpenMP for threading, so you can control the maximum number of threads they use by setting the environment variable OMP_NUM_THREADS. Set it to 1 to disable multithreading.
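
For example, in a Bourne-style shell (the netlist name here is purely illustrative):

  # Limit MKL/OpenMP to one thread for all Xyce runs in this shell
  export OMP_NUM_THREADS=1
  Xyce mycircuit.cir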

If you are not using one of our precompiled binaries, then I can't account for where this would be coming from unless you are linking against another threaded BLAS or LAPACK library (such as ATLAS, which has no such mechanism for restricting the number of threads).

Mehmet Cirit

Sep 30, 2022, 6:14:57 PM
to xyce-users
Thanks for the explanation. Reading the manuals, it looks like multithreading is used when iterative methods are used to solve the linear equations. I used the -linsolv KLU option to force the use of a direct method. That seems to have fixed the problem to a large extent, and setting the OMP_NUM_THREADS environment variable fixed the remainder.
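
A run using that option looks roughly like this (the netlist name is illustrative):

  # Force the KLU direct solver instead of an iterative solver
  Xyce -linsolv KLU mycircuit.cir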



--

Dr. Mehmet A. Cirit                    Phone:  (408) 647-6025
Library Technologies, Inc.        Cell:       (408) 647-6025
19959 Lanark Lane                   http://www.libtech.com
Saratoga, CA 95070                 Email: m...@libtech.com

Kevin Cameron

Oct 1, 2022, 7:39:44 PM
to Mehmet Cirit, xyce-users
You can also try running in Docker or a VM to limit the available resources. You can turn (virtual) cores off and on in the VMs.
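
For instance, Docker can cap CPU usage directly; a minimal sketch, where the image name and netlist are hypothetical:

  # Cap the container at 2 CPUs no matter how many threads Xyce spawns
  docker run --cpus=2 -v "$PWD":/work -w /work my-xyce-image Xyce mycircuit.cir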

Multithreading and parallel processing are not quite the same thing; I presume you just want to limit the cores used rather than the threads, but I don't know what the relationship is for Xyce.

Kev.

xyce-users

Oct 1, 2022, 8:35:18 PM
to xyce-users
Multithreading and parallel processing are indeed not quite the same thing, and Mehmet's issue is a multithreading one: the MKL libraries linked into the precompiled Xyce binaries can use multiple threads during their BLAS and LAPACK operations. Sometimes this usage can run away and try to use as many threads as there are cores on the machine, and if more than one copy of Xyce is running at that time it can cause serious oversubscription.

The manuals make no mention of this because it is unique to the MKL, and most users don't build with that. The issue of iterative versus direct solvers is very different: if you are using a parallel build of Xyce with OpenMPI and are running a problem that is large enough, Xyce defaults to iterative solvers. That has nothing directly to do with multithreading, although the same runaway threading can happen in a parallel build just as it does in a serial build.

Using OMP_NUM_THREADS to limit the number of threads any run of Xyce can use (whether it is a serial or a parallel build) is the definitive solution to this oversubscription problem.
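
For a parallel (OpenMPI) build, the thread limit has to reach every MPI rank. A minimal sketch, with the rank count and netlist purely illustrative:

  # OpenMPI's -x flag exports the variable to all ranks
  OMP_NUM_THREADS=1 mpirun -x OMP_NUM_THREADS -np 4 Xyce mycircuit.cir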