I trained LeNet on the MNIST dataset with num_output: 10. This gives me 98% accuracy.
Then I change num_output to num_output: 5. This gives me 11% accuracy. The low score makes sense (since there are in fact 10 classes), but it's a bit surprising that I don't get an error instead, given the mismatch between the number of outputs and the number of labels.
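As a back-of-the-envelope check (not Caffe code, just my own reasoning): a net with num_output: 5 can only ever predict labels 0..4, and if those five outputs end up effectively random, accuracy on a balanced 10-class test set should sit near 10%, which is close to the 11% I observed:

```python
import numpy as np

# A 5-output net can only predict labels 0..4; simulate near-random
# predictions against a balanced 10-class test set.
preds = np.random.default_rng(0).integers(0, 5, size=10000)
labels = np.tile(np.arange(10), 1000)

# P(match) = 5 classes * P(label) * P(pred = label) = 5 * (1/10) * (1/5) = 0.1
acc = (preds == labels).mean()
print(f"simulated accuracy: {acc:.3f}")  # roughly 0.1
```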
Anyway, I then split the MNIST dataset in half, so I now have a subset with classes 5 through 9, plus the corresponding validation set. I run the same experiment on this subset:
LeNet on the MNIST subset with num_output: 5. This gives me 99% test accuracy after about 2000 iterations. (5, because there are only 5 classes now.)
This is where it gets weird....
Now I run LeNet again on the subset, but with num_output: 10. I again get 99% test accuracy after 2000 iterations.
This doesn't make any sense to me. Could it be a bug?
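For what it's worth, I can reproduce the same behavior outside of Caffe, which makes me suspect it isn't a bug. My hunch is that the five extra outputs simply never learn to win the argmax. Here is a minimal sketch, assuming a plain softmax classifier on synthetic Gaussian blobs as a stand-in for the MNIST subset (labels 5..9, but 10 outputs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the MNIST subset: 5 classes, labeled 5..9,
# each class a well-separated Gaussian blob in 20-D feature space.
n_classes, dim, n_per = 5, 20, 200
centers = rng.normal(size=(n_classes, dim)) * 3.0
X = np.vstack([centers[c] + rng.normal(size=(n_per, dim)) for c in range(n_classes)])
y = np.repeat(np.arange(5, 10), n_per)  # labels 5..9 only

# Softmax classifier with 10 outputs, even though only 5 labels occur.
W = np.zeros((dim, 10))
b = np.zeros(10)
for _ in range(300):  # plain batch gradient descent on cross-entropy
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0                    # dL/dlogits
    W -= 0.1 * X.T @ p / len(y)
    b -= 0.1 * p.mean(axis=0)

# Outputs 0..4 only ever receive downward pressure, so argmax
# lands in 5..9 and accuracy stays high despite the extra outputs.
acc = (np.argmax(X @ W + b, axis=1) == y).mean()
print(f"train accuracy with 10 outputs on 5 classes: {acc:.2f}")
```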