I'm building a DNN detector that looks for certain cell patterns. The detector responds positively when black patterns appear in the center of the input image, as shown below.
As a result, I found that subtracting the mean image (a full BGR image) during preprocessing, rather than subtracting the mean pixel (a single BGR triplet derived from that mean image), trains a better model. However, from the code posted here:
https://github.com/NVIDIA/DIGITS/issues/59, I found that the transformer does not actually subtract the whole mean image. If that is the case, is there any way I could perform this preprocessing step using DIGITS without modifying the DIGITS code?
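For clarity, here is a minimal NumPy sketch of the two preprocessing options I mean (this is just an illustration of the math, not DIGITS' or Caffe's actual transformer code; function names are my own):

```python
import numpy as np

def subtract_mean_image(img, mean_image):
    # Per-pixel subtraction: every spatial location gets its own BGR mean.
    # img and mean_image must have the same HxWx3 shape.
    return img.astype(np.float32) - mean_image.astype(np.float32)

def subtract_mean_pixel(img, mean_image):
    # Collapse the mean image to a single BGR triplet (average over
    # height and width), then broadcast-subtract it from every pixel.
    mean_pixel = mean_image.astype(np.float32).mean(axis=(0, 1))
    return img.astype(np.float32) - mean_pixel
```

The first version preserves spatial structure in the mean (e.g. a consistently darker image center), which is what seems to help my detector; the second throws that spatial information away.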