On Wednesday, September 23, 2015 at 5:20:14 AM UTC-4, Greg Heath (alumni.brown.edu) wrote:
> On Tuesday, September 22, 2015 at 1:25:03 PM UTC-4, TomH488 wrote:
> > Both Mean and Standard Deviation are .40 of the training values.
> >
> > The Output is binary (-1 or +1).
> >
> > Symmetric Logistic used on the Hidden layer.
> > Tanh used on the Output layer.
>
> What is the difference between a symmetric logistic and tanh?
Symmetric Logistic is like tanh but not as sharp (it's a logistic rescaled to [-1, 1]).
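For concreteness: assuming "symmetric logistic" means the standard logistic rescaled from [0,1] to [-1,1], it turns out to be exactly tanh at half the input slope, which is what "not as sharp" amounts to. A minimal MATLAB-style sketch (MATLAB is just for illustration; my actual runs use NeuroShell 2):

    x = -5:0.1:5;
    symlog = 2 ./ (1 + exp(-x)) - 1;   % logistic rescaled to [-1,1]
    max(abs(symlog - tanh(x/2)))       % ~0: symlog(x) == tanh(x/2)
    plot(x, symlog, x, tanh(x))        % symlog rises half as steeply at 0

So the two differ only by a rescaling of the net input.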
>
> > Early stopping with 20% Test Set.
>
> The validation subset is used for Early Stopping.
>
> The test subset is used to get an unbiased estimate of performance on
> nontraining (validation, test, and unseen) data.
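For concreteness, a minimal sketch of that split using MATLAB's fitnet (again only an illustration under that assumption; my actual tool is NeuroShell 2 driven from Delphi):

    net = fitnet(40);                   % H = 40 hidden nodes
    net.divideFcn = 'dividerand';       % random trn/val/tst split
    net.divideParam.trainRatio = 0.80;  % trn: used to fit the weights
    net.divideParam.valRatio   = 0.20;  % val: drives early stopping
    net.divideParam.testRatio  = 0.00;  % tst: raise this to hold out an unbiased test set
    [net, tr] = train(net, input, target);
    % tr.best_epoch = epoch where the validation error stopped improving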
>
> > Just thinking now perhaps I should try Linear on the Output Layer?
>
> I don't think that is your problem.
>
> > I have no other ideas.
>
> [ I N ] = size(input)   % I = 70
> [ O N ] = size(target)  % O = 1
> H = number of hidden nodes = 40
> (Ntrn/Nval/Ntst)/N = (800/200/8)/1000
> Ntrneq = Ntrn*O = number of training equations = 800*1 = 800
> Nw = (I+1)*H + (H+1)*O = number of unknown weights = 71*40 + 41*1 = 2881
> computer language = Delphi (code that drives the NeuroShell 2 GUI via keystrokes)
> How many random initial weight designs? Six trainings were averaged, each with a different randomly chosen validation set and randomly chosen initial weights (IWs).
>
> Greg
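Making explicit what those numbers imply: Nw = 2881 unknown weights against only Ntrneq = 800 training equations, so the net is heavily over-parameterized, which is why early stopping (or fewer hidden nodes) matters here. A quick check, plus the upper bound on H that would keep Ntrneq >= Nw (that bound is my addition, not from Greg's post):

    I = 70; O = 1; H = 40; Ntrn = 800;
    Ntrneq = Ntrn*O                          % 800 training equations
    Nw = (I+1)*H + (H+1)*O                   % 2881 unknown weights
    Ntrneq/Nw                                % ~0.28 < 1: under-determined
    Hub = floor((Ntrneq - O)/(I + O + 1))    % = 11: largest H with Ntrneq >= Nw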
NOTE: I have a run with a LINEAR output layer going tonight. Hopefully it will complete without issue.