Understanding transpose in CBLAS GEMM (called from innerProductLayer)


Karthik Ganesan

Sep 13, 2016, 2:41:23 PM
to Caffe Users
Hi, I am trying to modify the inner product layer for an experiment. For this purpose I am using the MNIST dataset as input, with a single fully connected layer of size 128 feeding a softmax layer of size 10. Looking at the inner product layer, the matrices bottom_data, weight, and top_data are passed to cblas_gemm. Based on the arguments passed to the caffe_cpu_gemm function (and comparing them to the cblas_gemm function), the sizes appear to be:
bottom_data[M][K], weight[K][N], and top_data[M][N]. By the rules of matrix multiplication this makes sense to me. I am reading these from the M_, N_, and K_ parameters that are passed.

However, some of the other parameters passed to caffe_cpu_gemm seem strange:

1. A transpose seems to be applied to the weight matrix when it is passed to caffe_cpu_gemm (transpose_ ? CblasNoTrans : CblasTrans). Why is this needed if the matrix dimensions are already correct based on the M_, N_, and K_ parameters?

2. I'm not sure what the lda and ldb parameters convey. In caffe_cpu_gemm, these seem to be passed to cblas_gemm as K_ for both bottom_data and weight. However, this doesn't make sense to me for bottom_data: the documentation says ld* should be the size of the first dimension of the matrix, so shouldn't it be M_ for bottom_data?

Thank you!