"Greg Heath" wrote in message <olnf51$j7b$1...@newscl01ah.mathworks.com>...
Thank you for Greg's great advice; I fully agree with his suggestion.
A comprehensive illustration can be found at this link:
https://www.mathworks.com/help/nnet/ug/choose-neural-network-input-output-processing-functions.html
The availability of these prescaling functions may vary with the MATLAB version.
Anyone who is interested can try these:
1. 'mapminmax' (default)
2. 'mapstd'
3. 'processpca' (mentioned in Stanford CS231n)
------------------------------------------------------------------
4. 'fixunknowns'
5. 'removeconstantrows'
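For reference, here is a minimal sketch of how these processing functions can be assigned to a net (assuming the fitnet/processFcns interface described on the linked documentation page; the hidden-layer size 10 is just the default):
%Code sketch:
% Assign input/output processing functions explicitly
% (processFcns is a per-input / per-output cell array of function names)
net = fitnet( 10 );
net.inputs{1}.processFcns  = { 'removeconstantrows', 'mapstd' };
net.outputs{2}.processFcns = { 'removeconstantrows', 'mapminmax' };
%Code end
Note that for a single-hidden-layer fitnet the output processing functions live on outputs{2}, since layer 2 is the output layer.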
I also have a question regarding Greg's suggestion 4.b, 'SAME scale':
does that mean the same 'magnitude' for the RAW inputs and outputs?
My example is based on Greg's earlier TUTORIAL
https://www.mathworks.com/matlabcentral/newsreader/view_thread/341631#936181
where I revised the input and output data to have a relatively 'bad magnitude' issue.
%Code is as follows:
clear all, close all, clc, tic
[ x1 , t1 ] = simplefit_dataset;
[ I1 N ] = size( x1 ) %[ 1 94 ]
[ O1 N ] = size( t1 ) %[ 1 94 ]
% GENERALIZE TO MIMO
x2 = fliplr(x1); t2 = fliplr(t1);
% CHANGE THE MAGNITUDE ON INPUTS AND OUTPUTS
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
x = [ x1.*1000; x2]; t = [ t1; t2.*10000 ];
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[ I N ] = size( x ) %[ 2 94 ]
[ O N ] = size( t ) %[ 2 94 ]
MSE00 = mean(var(t',1)) % 8.3378
figure(1)
subplot(2,2,1)
plot( x(1,:), 'k', 'LineWidth', 2 )
title( ' INPUT1 ' )
subplot(2,2,2)
plot( x(2,:), 'k', 'LineWidth', 2 )
title( ' INPUT2 ' )
subplot(2,2,3)
plot( t(1,:), 'b', 'LineWidth', 2 )
title( ' TARGET1 ' )
subplot(2,2,4)
plot( t(2,:), 'b', 'LineWidth', 2 )
title( ' TARGET2 ' )
net = fitnet; % H =10 default
rng( 4151941 )
[ net tr y e ] = train( net, x, t );
% y = net( x ); e = t - y;
figure(2)
subplot(2,1,1)
hold on
plot( t(1,:), 'LineWidth', 2 )
plot( y(1,:), 'r.')
legend('TARGET1', 'OUTPUT1')
title('FITTING OF SIMPLEFIT DATA ')
subplot(2,1,2)
hold on
plot( t(2,:), 'LineWidth', 2 )
plot( y(2,:), 'r.' )
legend('TARGET2', 'OUTPUT2')
NMSE = mse(e)/MSE00 % 1.1721e-05
Rsquare = 1 - NMSE % 0.99999
%Code end
I tried different numbers of hidden neurons (I guess 12-14 are the best for this case, but the fit still cannot compare with the original fit obtained with same-magnitude data).
I also increased the epoch limit and the validation checks:
net.trainParam.epochs = 5000;
net.trainParam.max_fail = 100;
But I guess those two lines are beyond this topic.
My question is: is rescaling the inputs and outputs to the SAME magnitude a good idea (e.g., taking 'log10' or 'log', provided there are no zeros or negatives in the inputs and outputs)?
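For what it's worth, the log rescaling I have in mind would look roughly like this (a sketch only; it assumes all values of x and t are strictly positive, as in the modified data above):
%Code sketch:
% Train on log10-transformed data, then undo the transform on the outputs
% (assumes x > 0 and t > 0 elementwise)
xlog = log10( x );   tlog = log10( t );
net  = fitnet;
[ net tr ylog e ] = train( net, xlog, tlog );
y = 10.^ylog;        % back-transform predictions to the original scale
%Code end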
And another potential issue, regarding Greg's suggestion 4.b:
if my understanding is correct (though I may well be wrong), the SAME scale means that
if, during training, the inputs are in [0 10] and the outputs are in [0 10],
then when we do prediction we should transform (i.e., rescale) our new inputs into the same [0 10] range?
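As far as I know, train stores the fitted processing settings (e.g., the mapminmax ranges) inside the net object, and calling the trained net reapplies them automatically, so my remaining worry is only whether NEW inputs fall inside the per-variable training ranges. A sketch of that check (xnew is a hypothetical matrix of new inputs, here just taken from x for illustration):
%Code sketch:
% The trained net reapplies its stored input/output processing settings,
% so no manual rescaling should be needed at prediction time.
xnew = x( :, 1:10 );                  % stand-in for genuinely new inputs
ynew = net( xnew );
% Check that each input variable stays within its training range
lo = min( x, [], 2 );  hi = max( x, [], 2 );
inTrainRange = all(all( bsxfun(@ge, xnew, lo) & bsxfun(@le, xnew, hi) ))
%Code end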
Thanks in advance for Greg's dedication, and for everyone's patience in reading my lengthy question.
Thank you so much,
Regards,
Pan
TAMU-student