Hello!
When loading and testing images with my model, the largest image size requires something like 900 MB of RAM.
When loading the model, Caffe tells me:
"Memory required for data: 895452040"
I've noticed my GPU has about 2 GB of RAM, and the Nvidia tools report that only about 100-200 MB of it is currently in use.
When I call net.forward() in Python, it fails with "Check failed: error == cudaSuccess (2 vs. 0) out of memory", even though it seems like I have about 1.8 GB of free GPU RAM.
If I drop the image size to something that generates an estimate of 700-800 MB, then net.forward() produces its output with no errors. It seems almost as if the Python Caffe interface requires twice as much GPU RAM as estimated.
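For reference, here's a rough sketch of the arithmetic behind my "twice as much" guess. I'm assuming (not certain) that the doubling comes from each Caffe blob holding both a data buffer and a same-sized diff buffer, while the "Memory required for data" log line counts only the data side. The blob shapes below are made up purely for illustration:

```python
import numpy as np

def estimate_gpu_bytes(blob_shapes, dtype_bytes=4, count_diff=True):
    """Back-of-the-envelope GPU memory estimate for a list of blob shapes.

    Assumption (unverified): Caffe's "Memory required for data" counts
    only the forward (data) buffers, and each blob may also allocate a
    same-sized diff buffer, roughly doubling the real footprint.
    """
    data_bytes = sum(int(np.prod(shape)) * dtype_bytes for shape in blob_shapes)
    return data_bytes * 2 if count_diff else data_bytes

# Hypothetical blob shapes (batch, channels, height, width) for illustration:
shapes = [(1, 3, 2000, 3000), (1, 64, 1000, 1500), (1, 64, 500, 750)]

print(estimate_gpu_bytes(shapes, count_diff=False))  # data buffers only
print(estimate_gpu_bytes(shapes))                    # data + diff buffers
```

If something like this is right, a logged estimate of ~900 MB would correspond to ~1.8 GB actually allocated, which would exactly exhaust my free GPU RAM.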
Has anyone else experienced an issue like this?