Hi all,
What is the best way to free the GPU memory using numba CUDA?
Background:
1. I have a pair of GTX 970s.
2. I access these GPUs using Python threading.
3. My problem, while massively parallel, is very memory intensive, so I break up the work between the GPUs based on their free memory; usually this means that each GPU will get used a bunch of times.
4. However, when I access them and print out their memory using cuda.current_context().get_memory_info(), I find that after the kernels have completed the memory is not free.
5. Using cuda.current_context().reset() seems to free up the memory, but it also gives me a 139 error when I try to run the next thread on the GPU.
For an example see the printout below. Note 'i' is the pre-wrapper memory and 'f' is post-wrapper; the first number is the free memory, the second the total. This is using a single GPU, for ease of reading.
('i', 4138168320L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1194045440L, 4294770688L)
('i', 1194045440L, 4294770688L)
('f', 1420013568L, 4294770688L)
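For context, the measurement above can be reproduced with a wrapper along these lines. This is a minimal sketch, not the original poster's code: it assumes a trivial kernel and a single GPU, and only uses cuda.current_context().get_memory_info() to report free/total bytes before ('i') and after ('f') the work.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def scale(a):
        # trivial kernel: double each element in place
        i = cuda.grid(1)
        if i < a.size:
            a[i] *= 2.0

    def run_once(host_array):
        ctx = cuda.current_context()
        free, total = ctx.get_memory_info()
        print('i', free, total)              # pre-wrapper memory
        d_a = cuda.to_device(host_array)     # device allocation
        blocks = (host_array.size + 255) // 256
        scale[blocks, 256](d_a)
        d_a.copy_to_host(host_array)
        del d_a                              # drop the Python reference
        free, total = ctx.get_memory_info()
        print('f', free, total)              # post-wrapper memory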
There is a numba way to force a GPU garbage collection.
See the TrashService
https://github.com/numba/numba/blob/master/numba/cuda/cudadrv/driver.py#L286
From: numba...@continuum.io [mailto:numba...@continuum.io] On Behalf Of Andrew Kenny
Sent: Thursday, October 8, 2015 9:55 AM
To: numba...@continuum.io
Subject: Re: [Numba] Best way to clean up GPU memory
The CUDA equivalent as far as I can tell is described on this page:
http://developer.download.nvidia.com/compute/cuda/4_1/rel/toolkit/docs/online/group__CUDART__MEMORY_gb17fef862d4d1fefb9dba35bd62a187e.html
Is there a Numba equivalent that can be explicitly called?
On 08/10/2015 09:13, Diogo Silva wrote:
Have you found any solution to this? I'm just having this problem myself and am in dire need of a good solution.
On 16 September 2015 at 22:34, Christopher Wright <cjwrig...@gmail.com> wrote:
Hi all,
What is the best way to free the GPU memory using numba CUDA?
Background:
1.    I have a pair of GTX 970s
2.    I access these GPUs using python threading
3.    My problem, while massively parallel, is very memory intensive. So I break up the work between the GPUs based on their free memory, usually this means that each gpu will get used a bunch of times.
4.    However, when I access them and print out their memory using
cuda.current_context().get_memory_info()
I find that after the kernels have completed the memory is not free.
5.    Using
cuda.current_context().reset()
seems to free up the memory, but it also gives me a 139 error when I try to run the next thread on the GPU.
For an example see the printout below. Note ‘i’ is the pre-wrapper memory and ‘f’ is post wrapper, the first number is the free memory, the second the total. This is using a single GPU, for ease of reading.
[memory printout identical to the one at the top of the thread]
Yes, that is exactly what I did: remove the data from the allocations and then use the process method or the clear method of the TrashService to finally clear the memory. I haven't used this in a while, since ending the context was able to get rid of all the memory allocations, even if the get_memory_info function did not show it. And yes, there is a high likelihood of breaking stuff; I would be careful with explicit removal of memory. Make double certain that you remove the data only once, otherwise odd stuff happens.
Just in case anyone else has this question: it seems that the magic command is
cuda.current_context().trashing.clear()
If you have deleted arrays and need them to not show up when you call get_memory_info(), it seems that this truly removes the arrays from memory.
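For reference, a minimal sketch of that cleanup pattern, assuming the 2015-era API named in this thread (the context's trashing attribute with its clear() and process() methods); later Numba releases manage deferred deallocations differently, so treat the attribute name as an assumption rather than a stable API:

    import numpy as np
    from numba import cuda

    ctx = cuda.current_context()
    d_a = cuda.to_device(np.zeros(10 * 1024 * 1024, dtype=np.float64))

    # ... launch kernels against d_a here ...

    del d_a                        # drop the reference exactly once, as cautioned above
    ctx.trashing.clear()           # force pending deallocations to be released
    print(ctx.get_memory_info())   # the freed bytes should now show up as free memory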
...