Hi, I'm new to using Caffe and am trying to choose the appropriate GPU. Past code I've used required double precision. I've found a few Caffe C++ examples where the data type is "float".
I'm planning to analyze audio and image data. Float is fine for the input samples, but it isn't clear to me whether Caffe uses double precision for intermediate results on the GPU during training.
Questions:
1) Are Caffe's algorithms optimized for single precision on the GPU, or would a card with better double-precision performance reduce training times?
2) Concretely: Titan Z vs. Titan X?
3) How does float vs double affect model accuracy?
4) I realize this is a matter of opinion, but which card should I buy? Cost isn't a big factor.
Thanks for your help, Ed