However, when I manually put val.txt into "Test a list of images", the result looks like this.
As there are 281 classes, the top-1 accuracy is barely better than random guessing (1/281 ≈ 0.36%).
This makes no sense given the graph above: the test uses the SAME dataset (training and validation) created by DIGITS, and none of the image files were modified at all, so the validation accuracy should be well above 50%.
I suspect it is related to the last layers of the network, where the classification happens, but I'm not sure of the exact reason. It might also be related to the transform_param scale parameter; I'm not sure of either.
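For reference, here is a minimal sketch of what a training data layer with a scale factor can look like in Caffe prototxt; the names, paths, and values are placeholders, not the actual network. If the deploy network used by "Test a list of images" does not apply the same scale as the training data layer, the classifier sees inputs off by a factor of roughly 255 at test time, which alone can push accuracy down to near chance.

layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  transform_param {
    scale: 0.00390625              # 1/255 -- placeholder value
    mean_file: "mean.binaryproto"  # placeholder path
  }
  data_param {
    source: "train_db"             # placeholder path
    batch_size: 64
    backend: LMDB
  }
}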
The network's original prototxt can be read here. The first two conv layers are transferred from an autoencoder; their zero learning-rate coefficients are intentional.
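In Caffe prototxt, freezing a transferred layer is usually expressed with zero lr_mult values on its param entries. A minimal sketch of such a frozen conv layer follows; the layer name and convolution parameters are placeholders rather than the actual network.

layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param { lr_mult: 0 decay_mult: 0 }  # weights frozen (transferred from the autoencoder)
  param { lr_mult: 0 decay_mult: 0 }  # biases frozen
  convolution_param {
    num_output: 96                    # placeholder
    kernel_size: 11                   # placeholder
    stride: 4                         # placeholder
  }
}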
Any help or suggestions are appreciated. Thanks.
ERROR: Layer 'flatdata' references bottom 'scaled' at the TRAIN stage however this blob is not included at that stage. Please consider using an include directive to limit the scope of this layer.
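A hedged sketch of the kind of fix the error message suggests: give the layer that produces the 'scaled' blob and the 'flatdata' layer that consumes it the same include rule, so the blob exists in every stage where it is referenced. Only the blob names come from the error message; the layer types and parameters below are assumptions.

layer {
  name: "scale"
  type: "Power"                       # assumed layer type
  bottom: "data"
  top: "scaled"
  power_param { scale: 0.00390625 }   # assumed value
  include { phase: TRAIN }
}
layer {
  name: "flatdata"
  type: "Flatten"                     # assumed layer type
  bottom: "scaled"
  top: "flatdata"
  include { phase: TRAIN }            # same rule as the layer producing 'scaled'
}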