Hello.
I would like to know whether there is an explicit way to "free" the memory of a cupy.ndarray allocated on the GPU.
I tried the following code:
import numpy as np
import cupy
x = np.random.rand(1000,1000)
cupy.array(x)
cupy.array(x)
cupy.array(x)
cupy.array(x)
cupy.array(x)
...
Every time I execute `cupy.array(x)`, GPU memory usage increases by about 8 MB.
(I checked this with the `nvidia-smi` command.)
But if I execute `y = cupy.array(x)` many times, GPU memory usage does not increase.
(Maybe the memory is freed by some reference-counting mechanism.)
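For reference, here is a minimal sketch of the same check done from inside Python instead of `nvidia-smi`. It assumes the installed CuPy exposes the default memory pool API (`cupy.get_default_memory_pool()` and its `used_bytes()` method); I am not sure which versions provide it:

import numpy as np
import cupy

# Assumption: this CuPy version exposes the default memory pool API.
pool = cupy.get_default_memory_pool()

x = np.random.rand(1000, 1000)

cupy.array(x)                # result is not bound to any name
print(pool.used_bytes())     # bytes currently held by CuPy arrays

y = cupy.array(x)
y = cupy.array(x)            # rebinding y drops the previous array
print(pool.used_bytes())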
My problem is that if I allocate a CUDA array inside a generator, like:
def get_iterator():
    z = cupy.array(np.random.rand(1000, 1000))
    for i in range(10):
        yield z[100*i:100*(i+1)]
    del z  # this del does not seem to free the memory
iterator = get_iterator()
for d in iterator:
    print(d[0])
# GPU memory usage has increased after this loop...
the memory is not freed.
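For completeness, this is roughly what I tried in order to release the memory, again assuming the default memory pool API (`cupy.get_default_memory_pool()` with `free_all_blocks()` and `used_bytes()`) applies here; it may well not be the intended approach:

import numpy as np
import cupy

def get_iterator():
    z = cupy.array(np.random.rand(1000, 1000))
    for i in range(10):
        yield z[100*i:100*(i+1)]   # yielded slices are views into z
    del z                          # drop my reference once iteration is done

for d in get_iterator():
    print(d[0])
del d                              # drop the last yielded slice as well

# Assumption: free_all_blocks() exists in this CuPy version and asks the pool
# to return its cached, unused blocks to the device.
pool = cupy.get_default_memory_pool()
pool.free_all_blocks()
print(pool.used_bytes())           # check how much is still held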
Is there any good way to free `z` explicitly?
Any suggestions would be helpful.
Thanks.