Network size on GPU takes double what is reported in the paper


Mohamed Ezz

Apr 8, 2016, 12:44:15 PM
to Caffe Users
I'm using the U-Net network architecture. The paper mentions that a 6GB Titan X was used to train it, but on my GPU it takes around 12GB and barely fits. The moment I start tweaking the network (adding a few layers), there's not enough memory. A Theano implementation of the same network takes less than 6GB of GPU memory.

I'm wondering what could have gone wrong in my setup. Could it be float precision?
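For reference, two things could each account for a factor of two: double precision (float64 instead of float32), or the framework keeping a diff (gradient) buffer next to every data blob, as Caffe does. A rough back-of-envelope sketch (the blob shape is an illustrative assumption, roughly the first 64-channel feature map of U-Net at 568x568, not taken from my prototxt):

```python
# Back-of-envelope GPU memory estimate for a single blob.
# Shape below is an illustrative assumption, not read from an actual prototxt.

def blob_bytes(n, c, h, w, bytes_per_value=4, store_diff=True):
    """Bytes for one blob: the data buffer plus, optionally, a diff
    (gradient) buffer of the same size, which Caffe allocates alongside it."""
    values = n * c * h * w
    copies = 2 if store_diff else 1  # data + diff doubles the footprint
    return values * bytes_per_value * copies

shape = (1, 64, 568, 568)
f32 = blob_bytes(*shape, bytes_per_value=4)  # float32, data + diff
f64 = blob_bytes(*shape, bytes_per_value=8)  # float64 doubles it again
print(f"float32: {f32 / 2**20:.1f} MiB, float64: {f64 / 2**20:.1f} MiB")
```

Summing an estimate like this over every layer's output (and the weight blobs) should show quickly whether a precision or data+diff doubling explains the gap.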

Mohamed Ezz

May 7, 2016, 4:58:26 AM
to Caffe Users
Any ideas?