Error trying to pass the option '-gpu 0' to gprMax


Anadyr

Sep 25, 2020, 9:54:17 AM9/25/20
to gprMax-users
Dear all,

I got the following error trying to run gprMax using CUDA:

Traceback (most recent call last):
  File "/opt/anaconda3/envs/gprMax/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/anaconda3/envs/gprMax/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/opt/anaconda3/envs/gprMax/lib/python3.8/site-packages/gprMax-3.1.5-py3.8-linux-x86_64.egg/gprMax/__main__.py", line 6, in <module>
    gprMax.gprMax.main()
  File "/opt/anaconda3/envs/gprMax/lib/python3.8/site-packages/gprMax-3.1.5-py3.8-linux-x86_64.egg/gprMax/gprMax.py", line 66, in main
    run_main(args)
  File "/opt/anaconda3/envs/gprMax/lib/python3.8/site-packages/gprMax-3.1.5-py3.8-linux-x86_64.egg/gprMax/gprMax.py", line 188, in run_main
    run_std_sim(args, inputfile, usernamespace)
  File "/opt/anaconda3/envs/gprMax/lib/python3.8/site-packages/gprMax-3.1.5-py3.8-linux-x86_64.egg/gprMax/gprMax.py", line 229, in run_std_sim
    run_model(args, currentmodelrun, modelend - 1, numbermodelruns, inputfile, modelusernamespace)
  File "/opt/anaconda3/envs/gprMax/lib/python3.8/site-packages/gprMax-3.1.5-py3.8-linux-x86_64.egg/gprMax/model_build_run.py", line 170, in run_model
    G.memory_check()
  File "/opt/anaconda3/envs/gprMax/lib/python3.8/site-packages/gprMax-3.1.5-py3.8-linux-x86_64.egg/gprMax/grid.py", line 241, in memory_check
    raise GeneralError('Memory (RAM) required ~{} exceeds {} detected on specified {} - {} GPU!\n'.format(human_size(self.memoryusage), human_size(self.gpu.totalmem, a_kilobyte_is_1024_bytes=True), self.gpu.deviceID, self.gpu.name))
gprMax.exceptions.GeneralError: Memory (RAM) required ~2.58GB exceeds 1.95GiB detected on specified 0 - GeForce GT 1030 GPU!


pyCUDA is installed, and other programs that use CUDA work perfectly on the same PC.
Here are my NVIDIA settings:

Fri Sep 25 15:50:24 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.23.05    Driver Version: 455.23.05    CUDA Version: 11.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name  Persistence-M      | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce GT 1030     On   | 00000000:08:00.0 Off |                  N/A |
|  0%   25C    P8    N/A /  30W |     38MiB /  1999MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1076      G   /usr/lib/xorg/Xorg                 12MiB |
|    0   N/A  N/A      1122      G   /usr/bin/sddm-greeter              23MiB |
+-----------------------------------------------------------------------------+


Any help?

Antonis Giannopoulos

Sep 25, 2020, 10:05:21 AM9/25/20
to gprMax-users
Hi Anadyr,

I don't understand what kind of help one can provide with this. 

raise GeneralError('Memory (RAM) required ~{} exceeds {} detected on specified {} - {} GPU!\n'.format(human_size(self.memoryusage), human_size(self.gpu.totalmem,a_kilobyte_is_1024_bytes=True), self.gpu.deviceID, self.gpu.name))

gprMax.exceptions.GeneralError: Memory (RAM) required ~2.58GB exceeds 1.95GiB detected on specified 0 - GeForce GT 1030 GPU!

If you simply look at the error reported, it seems that you do not have enough RAM on your GPU to fit the model you are building. I suppose you need to make the model smaller or get a GPU with more memory.
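For a rough sense of why the model overflows a 2 GB card, here is a back-of-envelope estimate of FDTD solver memory. This is only an illustration, not gprMax's actual `memory_check()` logic: it assumes six single-precision field arrays (Ex, Ey, Ez, Hx, Hy, Hz) plus one 32-bit material-ID array per cell, and ignores PML and geometry overheads.

```python
def estimate_fdtd_memory_gb(nx, ny, nz, bytes_per_float=4, n_field_arrays=6):
    """Crude FDTD memory estimate in GB (illustrative assumptions only)."""
    cells = nx * ny * nz
    field_bytes = cells * n_field_arrays * bytes_per_float  # E and H components
    id_bytes = cells * 4                                    # assumed uint32 material ID per cell
    return (field_bytes + id_bytes) / 1e9

# Example: a 500 x 500 x 300 cell model already needs ~2.1 GB,
# close to the 1.95 GiB available on a GT 1030.
print(round(estimate_fdtd_memory_gb(500, 500, 300), 2))  # → 2.1
```

Shrinking the domain or coarsening the spatial discretisation reduces the cell count, and therefore the memory footprint, cubically.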

Hope that helps

Antonis

Anadyr

Sep 25, 2020, 4:07:08 PM9/25/20
to gprMax-users
After searching for hours I found that a:

sudo apt install nvidia-cuda-toolkit

was necessary, despite the output of nvidia-smi telling me that CUDA v11 was installed.

The installation of the CUDA toolkit fixed the problem.
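This is a common point of confusion: the "CUDA Version" shown by nvidia-smi is only the newest CUDA version the installed *driver* supports; it does not mean the CUDA toolkit (nvcc compiler, headers) is present. A quick way to check for the toolkit from Python (a sketch, not part of gprMax) is to look for nvcc on the PATH:

```python
import shutil

def cuda_toolkit_on_path():
    """Return True if the CUDA toolkit's nvcc compiler is findable on PATH."""
    return shutil.which("nvcc") is not None

print(cuda_toolkit_on_path())
```

If this prints False even though nvidia-smi works, the toolkit is likely missing, which matches the fix above.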