Use of float vs double precision data type

Edward Connell

Mar 17, 2015, 3:47:29 PM
to caffe...@googlegroups.com
Hi, I'm new to using Caffe and am trying to choose the appropriate GPU. Past code I've used required double precision. I've found a few Caffe C++ examples where the data type is "float".

I am planning on analyzing audio and image data. Float is fine for the input data samples; however, it is unclear to me whether Caffe uses double precision for intermediate results on the GPU during training.
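From skimming the headers, it looks like the "float" in those examples is just a template argument, and the weights and intermediate blobs take whatever element type the net is instantiated with. A rough sketch of what I mean (my own, not from the Caffe docs; the shape here is arbitrary):

// Sketch: Caffe's core classes (Blob, Net, Solver) are templated on the
// element type, so the same structure can hold float or double data.
#include <caffe/caffe.hpp>

int main() {
  caffe::Blob<float>  fp32_blob(1, 3, 224, 224);   // single-precision storage
  caffe::Blob<double> fp64_blob(1, 3, 224, 224);   // double-precision storage
  return 0;
}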

Questions:
1) Are the Caffe algorithms optimized for single precision on the GPU, or would training times be reduced by a card with stronger double-precision performance?
2) For example: Titan Z vs. Titan X?
3) How does float vs double affect model accuracy?
4) I realize this is opinion, but what should I buy? Cost isn't a big factor.


Thanks for your help, Ed

Carlos González Gutiérrez

May 22, 2015, 8:09:29 AM
to caffe...@googlegroups.com
I'm interested in this topic too.

Is there any simple way to "activate" double precision in Caffe? I'm working with a linear regressor and I would like to take a look at the results with double precision.
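For concreteness, this is roughly the kind of driver I have in mind (an untested sketch; the stock caffe command-line tool seems to be built for float only, so I'd compile a small program against libcaffe instead; "solver.prototxt" is just a placeholder and the header layout may differ between Caffe versions):

// Minimal custom training driver that requests double precision by
// instantiating the templated solver with <double> instead of <float>.
#include <caffe/caffe.hpp>
#include <caffe/sgd_solvers.hpp>  // SGDSolver's location in newer checkouts

int main() {
  caffe::Caffe::set_mode(caffe::Caffe::GPU);

  // Load the solver settings (net prototxt, learning rate, etc.).
  caffe::SolverParameter solver_param;
  caffe::ReadProtoFromTextFileOrDie("solver.prototxt", &solver_param);

  // Every blob, weight, and intermediate result now uses double.
  caffe::SGDSolver<double> solver(solver_param);
  solver.Solve();
  return 0;
}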

Thanks for your help!