On Wed, Oct 14, 2015 at 12:59 PM, <
goo...@jan-schlueter.de> wrote:
> The problem with float64 is that Theano doesn't support double precision on
> GPU (I guess because the very first GPUs (compute capability < 1.3) only
> supported single precision, and because double precision is not really
> useful for neural networks, the main application area of Theano). So it's
> important to keep precision in the computation graph down to float32 to
> allow Theano to move everything to the GPU when compiling it into a
> function.
> If floatX is set to "float32", then the default type for T.matrix(),
> T.tensor4(), T.vector() etc. is float32, and all the shared variables
> created by Lasagne are float32 (because we take care of the
> theano.config.floatX setting). However, it may happen that for some
> operation, only one of the operands is float32, and the result is float64 --
> this is what the warning mode will catch. Usually, the second operand in
> such a case is an int64 or a float64 symbolic variable or numpy array.
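To make the upcast concrete, here is a small sketch using plain NumPy, whose type-promotion rules Theano largely follows (Theano's exact casting behavior can differ in corner cases, so treat this as an illustration, not a statement about Theano itself). Combining a float32 array with an int64 operand silently produces float64; explicitly casting the second operand keeps everything in single precision:

```python
import numpy as np

# Stand-in for a float32 shared variable or T.matrix() input:
a = np.ones((2, 3), dtype=np.float32)

# A typical culprit: an integer array, which defaults to int64 on most platforms.
b = np.arange(3, dtype=np.int64)

# int64 combined with float32 is promoted to float64 -- the kind of
# accidental upcast the warning mode is meant to catch.
upcast = a + b
print(upcast.dtype)  # float64

# Casting the offending operand to float32 keeps the result in single precision.
fixed = a + b.astype(np.float32)
print(fixed.dtype)   # float32
```

The same idea applies on the Theano side: wrapping constants and NumPy arrays with an explicit `astype(theano.config.floatX)` (or `np.float32(...)` for scalars) before they enter the graph avoids the float64 intermediate.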