Thank you for the reply.
Here is a description of the two computers I use:
CPUs: "Intel® Core™ i7-7700" vs. "Intel® Xeon® CPU E5-2620 v4 @ 2.10GHz"
GPUs: both GeForce GTX 1080
OSes: both Ubuntu 14.04
Frameworks: both Caffe 1.0.0-rc5 (GPU mode with CUDA 8.0, cuDNN v5.05) + Python 2 inference code
Model:
I use a model of my own design: (conv-prelu-BatchNorm-Scale) x 14, then conv and pooling, finally producing a single output value. The training labels are 0, 1, 2, 3, 4, so the model's output ranges roughly from -0.x to 5.x.
How different are the results:
The model's input is an image of about 500 x 500.
I ran the same set of images through the model on both computers and got different output values.
Across 50 images, the differences range from -0.21 to +0.24.
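To make the comparison concrete, here is a minimal sketch (with placeholder values, not my real measurements) of how I compute the per-image differences between the two machines, assuming each machine's inference script saved its 50 outputs to an array:

```python
import numpy as np

# Placeholder outputs for the same images on each machine; in practice these
# would be loaded from files written by the inference script on each computer.
out_i7 = np.array([1.02, 2.95, 0.11])
out_xeon = np.array([1.10, 2.80, 0.30])

# Per-image signed difference, and its range across the image set.
diff = out_xeon - out_i7
print("min diff: %.2f, max diff: %.2f" % (diff.min(), diff.max()))
```

With the real 50-image arrays this prints the -0.21 to +0.24 range mentioned above.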
What could cause this, and are there any methods to resolve it?
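One way to narrow down the cause is to check whether the divergence starts before the network ever runs, since the two machines have different CPUs and CPU-side preprocessing (image decode, resize, mean subtraction) may not be bit-identical. A hedged sketch: hash the preprocessed input blob on each machine and compare the digests (the `blob_digest` helper and the zero-image stand-in below are my own illustration, not part of any Caffe API):

```python
import hashlib
import numpy as np

def blob_digest(arr):
    """Hash a float array bit-exactly so blobs can be compared across machines."""
    a = np.ascontiguousarray(arr, dtype=np.float32)
    return hashlib.sha256(a.tobytes()).hexdigest()

# Stand-in for a preprocessed ~500 x 500 input image; on each real machine this
# would be the actual blob fed to net.forward().
img = np.zeros((500, 500, 3), dtype=np.float32)
print(blob_digest(img))
```

If the digests match on both machines, the preprocessing is bit-identical and the divergence happens inside the network (e.g. different cuDNN convolution algorithm choices); if they differ, the CPU-side pipeline already produces different inputs.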
On Wednesday, April 18, 2018 at 6:43:47 PM UTC+8, Przemek D wrote: