I've noticed that the way data is normalized, and subsequently denormalized, is flawed in neurolab (though it might just be a NumPy problem?).
For example, I have the following as my target column:
```
-7.933
-9.633
-7.853
-9.113
-8.263
-9.023
-10.333
-7.853
-9.343
-9.153
```
After the data is normalized (using nl.tool.Norm()), the output is:
```
0.97899649941656963
0.89148191365227547
0.87281213535589275
1
0.51108518086347732
0.33255542590431747
0
0.93932230102442871
1
0
```
As should be immediately apparent, this normalization isn't correct: both -9.343 and -9.113 normalize to 1, when neither is the maximum value (and likewise both -10.333 and -9.153 map to 0).
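For reference, here is roughly how I'm producing the output above (a minimal sketch; the variable names are mine, but the calls follow nl.tool.Norm's documented usage):

```python
import numpy as np
import neurolab as nl

# Target column as a 2-D (n_samples, 1) array, which Norm expects
target = np.array([[-7.933], [-9.633], [-7.853], [-9.113], [-8.263],
                   [-9.023], [-10.333], [-7.853], [-9.343], [-9.153]])

norm = nl.tool.Norm(target)  # records the per-column min and range
normalized = norm(target)    # min-max scales the column into [0, 1]
print(normalized)
```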
If you then renormalize the same column, you get the following output:
```
-7.890386231038506
-8.0461621936989491
-8.0793943990665102
-7.8529999999999998
-8.7232683780630094
-9.0410513418903147
-9.6329999999999991
-7.9610063041765162
-7.8529999999999998
-9.6329999999999991
```
While close, these values *do not* match the original values; a normalize/denormalize round trip should reproduce them exactly (up to floating-point representation). These small errors are causing me a major headache, because they are on the same order as the accuracy I need.
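To make the expected behavior concrete, here is the rest of the round trip (continuing from the sketch above). Given the output I'm seeing, the first check prints False, while a plain-NumPy reference implementation of the same min-max transform round-trips correctly:

```python
# Round trip through neurolab's Norm (continues from the snippet above)
restored = norm.renorm(normalized)
print(np.allclose(restored, target))   # False here, per the output above

# Plain-NumPy reference: the same min-max transform and its inverse
lo = target.min(axis=0)
span = target.max(axis=0) - lo
roundtrip = ((target - lo) / span) * span + lo
print(np.allclose(roundtrip, target))  # True: the transform is invertible
```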