Caffe model sizes


Aggisen

Apr 18, 2018, 3:17:03 AM
to Caffe Users
Hi!

I wonder how I can see the size of a Caffe model. For example, the authors of SqueezeNet claim their model has a size of <0.5 MB, but the .caffemodel file from GitHub is 5 MB. I want to be able to compare different model sizes with each other, and I wonder which size is the interesting one to look at?

Thank you in advance!


Xun Victor

Apr 19, 2018, 7:32:33 AM
to Caffe Users
Hi,
I haven't re-read the SqueezeNet article in a long time, but I remember they stated that the model can reach <0.5 MB only with specific compression techniques. In that compressed form the model can be transferred, but it is not directly usable.

Anyway, I believe that at these size ranges (a few MB), what matters most is not the size of the model itself, but rather the amount of (GPU) memory it needs during inference. And that value can change a lot depending on which layers you use. For example, a model with a lot of fully connected layers uses a lot of disk space, but will not necessarily use much more memory. Conversely, batch norm layer parameters are lightweight to store in the model, but once applied to a specific image, their outputs take up space in memory.
I can't really tell you which architecture uses the least memory; you should try it yourself, as it may depend on the input image size.
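To make the contrast concrete, here is a minimal back-of-the-envelope sketch (not Caffe-specific) of parameter storage vs. output-activation memory for a conv layer and a fully connected layer. The layer shapes below are made-up illustrative values, and it assumes 32-bit floats (4 bytes per value):

```python
# Rough comparison of parameter storage vs. activation memory for two layer
# types. Assumption: 32-bit floats, i.e. 4 bytes per stored value.

BYTES_PER_FLOAT = 4

def conv_layer(in_ch, out_ch, k, out_h, out_w):
    """Return (parameter bytes, output-activation bytes) for one kxk conv layer."""
    params = out_ch * in_ch * k * k + out_ch   # weights + biases
    activations = out_ch * out_h * out_w       # one output feature map
    return params * BYTES_PER_FLOAT, activations * BYTES_PER_FLOAT

def fc_layer(in_features, out_features):
    """Return (parameter bytes, output-activation bytes) for one fully connected layer."""
    params = in_features * out_features + out_features
    activations = out_features
    return params * BYTES_PER_FLOAT, activations * BYTES_PER_FLOAT

# A 3x3 conv, 64 -> 128 channels, on a 300x300 feature map:
p, a = conv_layer(64, 128, 3, 300, 300)
print(f"conv: {p / 1e6:.2f} MB of weights, {a / 1e6:.2f} MB of activations")

# A 4096 -> 4096 fully connected layer (VGG-style):
p, a = fc_layer(4096, 4096)
print(f"fc:   {p / 1e6:.2f} MB of weights, {a / 1e6:.2f} MB of activations")
```

The conv layer stores well under 1 MB of weights but produces tens of MB of activations, while the fully connected layer is the opposite, which is why on-disk model size and inference memory can diverge so much.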

For example, I have a 3.6 MB SqueezeNet v2 model (modified a bit for my specific needs), which results in 3 GB of GPU memory usage during inference on 600x600 images.

Hope it helps.

Victor

Aggisen

Apr 19, 2018, 8:00:21 AM
to Caffe Users
Hi,

Thank you for your answer, Victor! It is true that the SqueezeNet model can reach <0.5 MB only with specific compression techniques. I missed that the first time I read the article.

Interesting that it is the amount of memory needed during inference that matters most; that makes sense. How do you check how much GPU memory you are using?

Aggisen

Xun Victor

Apr 19, 2018, 10:16:23 AM
to Caffe Users
Hi Aggisen,

Do you use a GPU during testing? If yes, then you can simply run the nvidia-smi command in a shell and see how much memory you are using.
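If you want the number programmatically rather than reading the nvidia-smi table by eye, a small sketch like this parses its CSV query output (the sample string below is an assumed/typical output, not captured from a real machine):

```python
# Sketch: parse the CSV output of
#   nvidia-smi --query-gpu=memory.used --format=csv
# to get per-GPU memory usage as integers (MiB).
import csv
import io

def used_memory_mib(smi_csv):
    """Parse 'memory.used [MiB]' values from nvidia-smi CSV output."""
    reader = csv.reader(io.StringIO(smi_csv))
    next(reader)  # skip the header row ("memory.used [MiB]")
    # Each data row looks like "3021 MiB"; keep only the number.
    return [int(row[0].strip().split()[0]) for row in reader]

# On a real machine you would capture the output with subprocess, e.g.:
#   out = subprocess.check_output(
#       ["nvidia-smi", "--query-gpu=memory.used", "--format=csv"], text=True)
sample = "memory.used [MiB]\n3021 MiB\n512 MiB\n"
print(used_memory_mib(sample))  # one entry per GPU
```

Note that nvidia-smi reports memory reserved by the process, and Caffe (via cuDNN) may reserve more than it strictly needs, so treat the number as an upper bound.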

Aggisen

Apr 20, 2018, 8:41:48 AM
to Caffe Users
Thank you for your answer!