Two, probably related, questions:
Take the following setup in the CONTROL Panel:
STEPS 1000
CYCLES 1000
PATTERN 1
USE data_xyz
VALID 1 USE data_xyz
etc.
xgui produces output that looks like this:
epoch: SSE MSE SSE/o-units
Train 1000: 2598.84058 2.59884 1299.42029
Test 1000: 1880.39893 1.88040 940.19946
Train 900: 55.61801 0.05562 27.80900
Test 900: 56.50090 0.05650 28.25045
...
Train 1: 19.17362 0.01917 9.58681
Test 1: 24.10503 0.02411 12.05252
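(For reference, the columns appear to be related by simple arithmetic: MSE looks
like SSE divided by the 1000 patterns, and SSE/o-units looks like SSE divided by
the number of output units, here apparently 2; e.g. 2598.84058 / 1000 = 2.59884
and 2598.84058 / 2 = 1299.42029.)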
Question 1:
Since the training and validation data are the same, i.e. data_xyz,
why are the displayed Train and Test errors different?
Also, CONTROL/ERROR produces this:
Number of Patterns : 1000
Number of parameters (Links+bias) : 19
sse : 24.1050
tss : 1581.3745
rsq : 0.9848
mse : 0.0246
etc.
Question 2:
Why are the sse and mse values different from those shown above?
In short, there is only one data set (i.e., data_xyz), but three
different values for mse: 0.01917, 0.02411, and 0.0246. Why?
Is all this due to round-off error?
Thanks,
Caren
--
http://www.nhn.ou.edu/~marzban
With regard to question 1, I believe that vanilla backprop adjusts the
network weights immediately after seeing each data point. Thus, as SNNS
cycles through a training set of 1000 points, it effectively generates 1000
slightly different networks and evaluates each point's error on a slightly
different network.
All of the points in the test set, however, are evaluated on a single,
static network.
Thus, the training and test set errors were computed using different
networks.
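To make that concrete, here is a small Python sketch (not SNNS code; the toy
linear model, data, and learning rate are all made up) of one epoch of
per-pattern updates, followed by an evaluation pass over the same data with
the weights frozen:

    import numpy as np

    rng = np.random.default_rng(0)

    # toy data: 1000 patterns, 3 inputs, 1 output (sizes are arbitrary)
    X = rng.normal(size=(1000, 3))
    y = X @ np.array([0.5, -1.2, 2.0]) + 0.1 * rng.normal(size=1000)

    w = np.zeros(3)   # "network" weights: a single linear unit
    lr = 0.01         # learning rate

    # one epoch of online training: each pattern's error is measured
    # with the weights as they are at that moment, and the weights are
    # updated immediately afterwards
    train_sse = 0.0
    for x, t in zip(X, y):
        err = t - x @ w
        train_sse += err ** 2
        w += lr * err * x

    # "test" pass over the very same data, but with the final, frozen weights
    test_sse = float(np.sum((y - X @ w) ** 2))

    print("accumulated training SSE:", round(train_sse, 3))
    print("SSE on same data, fixed weights:", round(test_sse, 3))

The accumulated training SSE mixes errors from 1000 slightly different weight
vectors, while the second number is computed with one fixed weight vector, so
the two generally differ even though the data are identical.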
-Jordan
js...@cornell.edu