Apple's Accelerate implementation of BLAS has a bug in cblas_sgemm


Jack Quinn

Oct 16, 2016, 1:37:37 PM10/16/16
to Caffe Users

The cblas_sgemm (single-precision general matrix-matrix multiplication) function in the Accelerate framework has a bug: for certain matrix sizes, certain elements of the output matrix are wrong. I noticed the problem in the call to cblas_sgemm (from caffe_cpu_gemm in math_functions.cpp) that adds the biases onto the output of the convolution layers. In some cases the output of cblas_sgemm is incorrect, and the error shows up as a strange pattern in the output of the convolutional layer, even when the input is all zeros.

The attached C code demonstrates the problem in isolation.  

Using OpenBLAS instead of Accelerate eliminates the problem. I suppose using MKL would also work, but I haven't tried it.
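For anyone wanting to make the same switch: in a standard Makefile-based Caffe build this is a one-line change in Makefile.config (a sketch; the Homebrew paths below are an assumption and should be adjusted to your install, and CMake builds use -DBLAS=open instead):

```make
# Makefile.config: select OpenBLAS instead of the default
# (on OS X, Caffe otherwise uses Accelerate/vecLib).
BLAS := open
# If OpenBLAS came from Homebrew, the paths may need to be set
# explicitly (example paths, adjust to your setup):
BLAS_INCLUDE := /usr/local/opt/openblas/include
BLAS_LIB := /usr/local/opt/openblas/lib
```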


Attachment: sgemm_test.c