Recently, I have been trying to implement the Fully Convolutional Network from the original paper by Jonathan Long, Evan Shelhamer, and Trevor Darrell.
In their code, they subtract the mean BGR values from every image.
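For reference, here is a minimal sketch of what I understand that preprocessing step to look like in NumPy (the mean values are the ones I found in the repo; the function name and the RGB-input/CHW-output assumptions are mine, not theirs):

```python
import numpy as np

# Per-channel BGR mean as given in the FCN source code
MEAN_BGR = np.array([104.00699, 116.66877, 122.67892])

def preprocess(img_rgb):
    """Turn an H x W x 3 RGB uint8 image into a network input:
    BGR channel order, float, per-channel mean subtracted, C x H x W layout."""
    img = img_rgb[:, :, ::-1].astype(np.float64)  # RGB -> BGR
    img -= MEAN_BGR                               # subtract per-channel mean
    return img.transpose(2, 0, 1)                 # HWC -> CHW (Caffe layout)

# Example on a dummy all-zero image: each channel becomes minus its mean
dummy = np.zeros((4, 4, 3), dtype=np.uint8)
out = preprocess(dummy)
```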
My questions are:
1. Why subtract the mean BGR values?
- Is it to normalize the images for training and validation? But why? The distribution of pixel values is the same after subtracting a constant, isn't it?
2. In the test phase, should I also subtract the mean BGR values from each test image?
3. The values in the source code (FCN GitHub) are mean = (104.00699, 116.66877, 122.67892). Where do they come from? Are they the mean values of the images listed in SBDD train.txt + seg11val.txt?
- I actually computed the mean of the images from SBDD train.txt + seg11val.txt,
and it does not match (104.00699, 116.66877, 122.67892). So where do these values come from?
Thank you.