--------
epoch 1 ______________________________________________________________________
training on 3150 samples and recomputing 2nd order derivatives on 100 samples after every 4000 trained samples...
computing 2nd order derivatives on 100 samples...
diaghessian inf: 0.992948 sup: 49.9442 diaghessian_minutes=0
training: 2000 / 4500, elapsed: 0s, ETA: 0s, remaining: 0: -517 1: 833 2: 834
training: 4000 / 4500, elapsed: 0s, ETA: 0s, remaining: 0: -1184 1: 166 2: 168
computing 2nd order derivatives on 100 samples...
diaghessian inf: 1.55012 sup: 49.9997 diaghessian_minutes=0
epoch_count=4501
training_time=0s
saving net to _net00001.mat
saved=_net00001.mat
Testing on 750 samples...
i=1 name=train [4501] sz=3150 energy=0.96561 (class-normalized) errors=33.1111% uerrors=47.0159% rejects=0% (class-normalized) correct=66.8889% ucorrect=52.9841%
errors per class: 0_samples=150 0_errors=0.666667% 1_samples=1500 1_errors=98.4667% 2_samples=1500 2_errors=0.2%
i=1 name=val [4501] sz=750 test_energy=0.980928 (class-normalized) test_errors=42.5926% test_uerrors=48.1333% test_rejects=0% (class-normalized) test_correct=57.4074% test_ucorrect=51.8667%
errors per class: test_0_samples=30 test_0_errors=30% test_1_samples=360 test_1_errors=97.7778% test_2_samples=360 test_2_errors=0%
--------
But in the second epoch (and all subsequent ones) the overall correct and error rates do not match the per-class errors:
--------
__ epoch 2 ______________________________________________________________________
training on 4500 samples and recomputing 2nd order derivatives on 100 samples after every 4000 trained samples...
computing 2nd order derivatives on 100 samples...
diaghessian inf: 1.18058 sup: 49.9998 diaghessian_minutes=0
training: 2000 / 4500, elapsed: 0s, ETA: 0s, remaining: 0: -517 1: 833 2: 834
training: 4000 / 4500, elapsed: 0s, ETA: 0s, remaining: 0: -1184 1: 166 2: 168
computing 2nd order derivatives on 100 samples...
diaghessian inf: 2.59465 sup: 49.9999 diaghessian_minutes=0
epoch_count=4500
training_time=0s
saving net to _net00002.mat
saved=_net00002.mat
Testing on 750 samples...
i=2 name=train [9001] sz=3150 energy=0.283002 (class-normalized) errors=14.3273% uerrors=10.0952% rejects=0% (class-normalized) correct=85.6727% ucorrect=89.9048%
errors per class: 0_samples=150 0_errors=0% 1_samples=1500 1_errors=12.6% 2_samples=1500 2_errors=5.33333%
--------
How can the overall errors be 14.3273% when the per-class errors are lower for every class?
--------
i=2 name=val [9001] sz=750 test_energy=0.179952 (class-normalized) test_errors=24.4345% test_uerrors=11.2% test_rejects=0% (class-normalized) test_correct=75.5655% test_ucorrect=88.8%
errors per class: test_0_samples=30 test_0_errors=0% test_1_samples=360 test_1_errors=2.22222% test_2_samples=360 test_2_errors=5.55556%
--------
The same happens for the validation set. The per-class errors are 0%, 2.22222% and 5.55556%, but test_errors = 24.4345% and test_uerrors = 11.2%.
Am I interpreting it wrong? Thank you for any clarification.
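To make the arithmetic concrete, here is a small sketch recomputing both rates from the epoch-2 training line above. This assumes "class-normalized" means the unweighted mean of the per-class error rates and "uerrors" means the sample-weighted overall rate; I don't know the library's exact definitions, so these are just the obvious candidates:

```python
# Per-class sample counts and error rates from the epoch-2 "train" line.
samples = [150, 1500, 1500]
err_pct = [0.0, 12.6, 5.33333]

# Candidate class-normalized rate: unweighted mean over the classes.
class_normalized = sum(err_pct) / len(err_pct)

# Candidate un-normalized rate: total errors over total samples.
total_errors = sum(n * e / 100 for n, e in zip(samples, err_pct))
unnormalized = 100 * total_errors / sum(samples)

print(f"class-normalized: {class_normalized:.4f}%")  # ~5.9778%, printed value is 14.3273%
print(f"un-normalized:    {unnormalized:.4f}%")      # ~8.5397%, printed value is 10.0952%
```

Neither candidate matches the printed overall rates. One hedged guess: the [9001] counter in epoch 2 (4501 + 4500) suggests some statistics may be accumulated across epochs rather than reset per epoch, which could explain why the overall rates disagree with the per-class numbers.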
--
Thank you for the quick reply. I am using Windows. When do you plan to release the new version?
--
As soon as I get home and fire up my Windows VM.
Is there any document describing the bug fixes and new features in version 1.2?
--