The cblas_sgemm (single-precision general matrix-matrix multiplication) function in the Accelerate framework has a bug: for matrices of certain sizes, some elements of the output matrix are wrong. I noticed this in the call to cblas_sgemm (from caffe_cpu_gemm in math_functions.cpp) that adds the biases onto the output of the convolution layers. In some cases the output of cblas_sgemm is incorrect, and the error shows up as a strange pattern in the output of the convolutional layer, even when the input is all zeros.
The attached C code demonstrates the problem in isolation.
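For reference, here is a minimal sketch of a reproducer in the same spirit (this is not the original attachment): it runs the K = 1, beta = 1 multiply that Caffe uses for the bias addition and compares the result against a naive triple-loop GEMM. The matrix sizes are illustrative assumptions, not necessarily ones that trigger the bug.

/* Hypothetical reproducer sketch: compare cblas_sgemm against a naive
 * reference GEMM for the bias-addition case (K = 1, beta = 1).
 * Sizes below are illustrative only. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#ifdef __APPLE__
#include <Accelerate/Accelerate.h>   /* Accelerate's CBLAS */
#else
#include <cblas.h>                   /* OpenBLAS / reference CBLAS */
#endif

int main(void) {
    const int M = 64, N = 3025, K = 1;              /* assumed sizes */
    float *A = malloc((size_t)M * K * sizeof *A);   /* bias vector      */
    float *B = malloc((size_t)K * N * sizeof *B);   /* bias multiplier  */
    float *C = malloc((size_t)M * N * sizeof *C);   /* BLAS result      */
    float *R = malloc((size_t)M * N * sizeof *R);   /* naive reference  */

    for (int i = 0; i < M * K; ++i) A[i] = (float)(i % 7) - 3.0f;
    for (int i = 0; i < K * N; ++i) B[i] = 1.0f;    /* all ones, as in Caffe */
    for (int i = 0; i < M * N; ++i) C[i] = R[i] = 0.0f;  /* zero "conv output" */

    /* C = 1.0 * A * B + 1.0 * C, row-major, no transposes */
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                M, N, K, 1.0f, A, K, B, N, 1.0f, C, N);

    /* Naive reference GEMM */
    for (int m = 0; m < M; ++m)
        for (int n = 0; n < N; ++n) {
            float acc = 0.0f;
            for (int k = 0; k < K; ++k)
                acc += A[m * K + k] * B[k * N + n];
            R[m * N + n] += acc;
        }

    /* Report element-wise mismatches (print the first 10) */
    int bad = 0;
    for (int i = 0; i < M * N; ++i)
        if (fabsf(C[i] - R[i]) > 1e-5f && ++bad <= 10)
            printf("mismatch at %d: got %g, expected %g\n", i, C[i], R[i]);
    printf("%d mismatching elements\n", bad);

    free(A); free(B); free(C); free(R);
    return bad != 0;
}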
Using OpenBLAS instead of Accelerate eliminates the problem. I suppose MKL would also work, but I haven't tried it.