Hi all, I tried training
fcn (voc-fcn8s) on data from
SBDD, and I got confused by the GPU memory usage.
If Caffe is compiled with cuDNN v4, GPU memory usage is about 3000–4000 MB; it fluctuates as the input image size changes.
>| 0 23150 C voc-fcn8s 3298MiB |
SBDD images are smaller than 500×500, and as the Caffe log shows, the memory required for data is no more than 1 GB:
>I0701 10:19:28.020790 1855 net.cpp:148] Top shape: (1)
>I0701 10:19:28.020797 1855 net.cpp:151] with loss weight 1
>I0701 10:19:28.020807 1855 net.cpp:156] Memory required for data: 978989668
and the model itself is 513 MB, so I guess a total usage of 1 GB + 1 GB + 2 × 500 MB ≈ 3 GB is normal.
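To sanity-check that estimate, here is a quick back-of-envelope calculation. It assumes (my guess, not confirmed) that the "Memory required for data" figure from the log is duplicated once for the backward-pass diffs, and that the 513 MB of weights is duplicated once for the weight gradients:

```python
# Rough estimate of training memory, under the assumptions stated above.
data_bytes = 978989668      # "Memory required for data" from the Caffe log
diff_bytes = data_bytes     # assumed: one diff blob per data blob for backward
weights_mb = 513            # size of the .caffemodel (parameters on the GPU)
weight_grad_mb = 513        # assumed: gradients take the same space as weights

total_mb = (data_bytes + diff_bytes) / 1024**2 + weights_mb + weight_grad_mb
print(f"estimated usage: {total_mb:.0f} MiB")  # a bit under 3000 MiB
```

That lands close to the ~3298 MiB that nvidia-smi reports for the cuDNN build, with the remainder presumably being cuDNN workspace and allocator overhead.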
But if Caffe is compiled without cuDNN, memory usage can exceed 5000 MB, and sometimes my 980 Ti runs out of memory:
>| 0 1855 C voc-fcn8s 5392MiB |
So I don't know why the non-cuDNN build uses about 2 GB more.
I also tried
PR #2016 with cuDNN; however, memory usage actually rose a little (to about 4000 MB):
>| 0 13196 C voc-fcn8s 4266MiB |
Is this normal, or is there another way to reduce memory usage?