I am going through the notebook for handwritten digit recognition.
from numpy import argmax  # assuming argmax comes from NumPy, as elsewhere in the notebook

def test_on(start, stop, dontprint=False):
    global test_inputs, test_results
    global net, predictions_probs, predictions, true_labels
    predictions_probs = net.predict_on_batch(test_inputs[start:stop, :])
    predictions = argmax(predictions_probs, axis=1)
    if not dontprint:
        print("Predictions: ", predictions)
    true_labels = argmax(test_results[start:stop, :], axis=1)
    if not dontprint:
        print("True labels: ", true_labels)
This is clearly useful for running the network on the test inputs after training is done. But why is the part that computes the accuracy missing? Of course, the expected accuracy will be higher than in the validation case.
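For context, here is a minimal sketch of the accuracy computation I would have expected after `test_on`, assuming `predictions` and `true_labels` are NumPy arrays of class indices (the array values below are just made-up examples):

```python
import numpy as np

# hypothetical outputs of test_on for five test digits
predictions = np.array([7, 2, 1, 0, 4])
true_labels = np.array([7, 2, 1, 0, 9])

# fraction of test examples where the predicted class matches the label
accuracy = np.mean(predictions == true_labels)
print("Accuracy:", accuracy)  # 0.8 (4 of 5 correct)
```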
Many thanks