I modified the ImageNet reference net to discriminate between two classes of images. With color images, test accuracy is 94%. But when I try to convert the images and network to grayscale, training doesn't converge (accuracy stays around 30%).
Here's how I tried to convert the net to grayscale; maybe I'm missing a step:
1) Convert all images to grayscale using PIL.
2) Re-run create_imagenet.sh, passing --gray into convert_imageset
3) Re-run make_image_mean.sh
4) Attempt to train. The log shows "Top shape: 50 1 227 227" as expected (vs. "Top shape: 50 3 227 227" with RGB), but accuracy never improves.
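For reference, step 1 is roughly the following (the directory layout and glob pattern here are illustrative, not my exact script):

```python
from pathlib import Path
from PIL import Image

def convert_to_grayscale(src_dir, dst_dir):
    """Convert every JPEG under src_dir to single-channel grayscale."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.jpg"):
        # "L" mode = 8-bit single-channel luminance
        img = Image.open(path).convert("L")
        img.save(dst / path.name)
```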
What am I forgetting? Do any changes need to be made to train_val.prototxt?