@Evan S,
Thanks to you and your team for adding batch normalization to Caffe. What was the rationale for breaking the Batch Normalization implementation up into a "BatchNorm" layer followed by a "Scale" layer with bias set to true?
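For reference, here is the pattern I mean, as I understand it from the layer definitions (a sketch; the layer names are my own, and I'm assuming the usual placement after a convolution):

```protobuf
# "BatchNorm" handles only the normalization (running mean/variance);
# the learned affine transform (gamma, beta) lives in a separate "Scale" layer.
layer {
  name: "conv1/bn"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "conv1/scale"
  type: "Scale"
  bottom: "conv1"
  top: "conv1"
  scale_param {
    bias_term: true  # beta term of batch normalization
  }
}
```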
By the way, I have successfully translated Inception-v3 into Caffe and obtained a top-1 accuracy of 0.7844 on the ILSVRC12 validation set, which is a significant jump from BVLC_GoogLeNet, the Inception (v1) model that Yangqing released last September.
Do you have any thoughts on what needs to be done with the BN layer as far as fine-tuning is concerned?
Thanks,
-Vimal