TomH488
Jun 26, 2015, 4:13:55 PM
This is a stock market forecasting MLP:
Predict the 5-day % change of a 10-day smoothed closing-price average, 5 days into the future.
One of the problems is trying to determine the confidence of a prediction.
_______________________
3 layer MLP w/8 Outputs
One approach I've just started using is to take the training data, sort the % gains, and split the sorted list into 8 equally sized membership intervals.
The outputs are now these 8 "classes": membership in an interval is a 1, 0 otherwise.
The desire is to see whether the 8 predictions inspire confidence. A bell-shaped curve centered on the correct interval would be splendid; so would a single spike. But other signatures would imply poor generalization, and those predictions should be ignored.
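A minimal sketch of the binning step (function names are mine): sort the training % gains, cut the sorted list into 8 equal-membership intervals, and build a one-hot target vector for the 8 outputs.

```python
# Sketch: split sorted % gains into 8 equally sized membership
# intervals (octiles) and build one-hot targets for the 8 MLP outputs.

def octile_edges(gains, n_bins=8):
    """Interior cut points that split the sorted gains into
    n_bins equally sized membership intervals."""
    s = sorted(gains)
    n = len(s)
    return [s[(i * n) // n_bins] for i in range(1, n_bins)]

def one_hot_target(gain, edges):
    """1 for the interval the gain falls in, 0 elsewhere."""
    k = sum(1 for e in edges if gain >= e)  # interval index 0..len(edges)
    return [1 if i == k else 0 for i in range(len(edges) + 1)]

gains = [-3.0, -1.5, -0.8, -0.2, 0.1, 0.4, 1.1, 2.5]  # toy % gains
edges = octile_edges(gains)
target = one_hot_target(0.3, edges)  # membership vector for a 0.3% gain
```

A gain falling on a cut point is assigned to the upper interval; with real training data the edges would come from thousands of sorted gains rather than this toy list.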
After looking at these results, while it is possible to devise many methods for deciding what to do (Long, Short, or Abstain), it is clear that all of these have strengths and serious weaknesses.
So why not let a net learn how to do it?
I don't know what kind of structure that would be: train the 8 outputs, then add new layers with a few hiddens and a final output, and train that output. An MLP with internal constraints or targets? I imagine it could be done with code, or possibly with MemBrain, which has completely arbitrary architecture capabilities.
Another method might be to use the 8 outputs as inputs into an "interpretation" 3-layer MLP.
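A rough sketch of that second idea, with made-up sizes and untrained random weights: the 8 class outputs of the first net become the inputs of a small "interpretation" MLP whose single output is thresholded into Long / Short / Abstain.

```python
import math
import random

random.seed(0)

def mlp_forward(x, w1, w2):
    """8 inputs -> tanh hidden layer -> single tanh output."""
    h = [math.tanh(sum(wij * xi for wij, xi in zip(row, x))) for row in w1]
    return math.tanh(sum(wj * hj for wj, hj in zip(w2, h)))

n_in, n_hid = 8, 4  # assumed sizes; the 8 inputs are the first net's outputs
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
w2 = [random.uniform(-1, 1) for _ in range(n_hid)]

# A bell-ish signature from the first net, peaked on one interval
class_outputs = [0.05, 0.1, 0.2, 0.6, 0.8, 0.3, 0.1, 0.05]
score = mlp_forward(class_outputs, w1, w2)
# score > threshold: Long; score < -threshold: Short; otherwise Abstain
```

The weights here would of course be trained on (8-output signature, realized trade outcome) pairs; this only shows the wiring.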
Or....
___________
4 layer MLP:
Take the original 3 layer MLP and insert a new 3rd hidden layer w/8 nodes.
Let the MLP "decide" what it wants for the 8 "outputs."
I realize that is a sophomoric vision of what a 4 layer MLP is, but I would think that the 2 hidden layers perform 2 different functions.
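A forward-pass sketch of that 4-layer idea (layer sizes other than the 8-node hidden layer are assumptions): the net is free to learn whatever internal representation it wants in the inserted 8-node layer, rather than having those 8 values pinned to class targets.

```python
import math
import random

random.seed(1)

# inputs, hidden-1, new 8-node hidden-2 (free to self-organize), output
sizes = [20, 12, 8, 1]

weights = [[[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)]
           for m, n in zip(sizes, sizes[1:])]

def forward(x):
    """Propagate through all layers with tanh units."""
    for layer in weights:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in layer]
    return x[0]

y = forward([0.1] * 20)  # prediction from untrained random weights
```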
_______________________
NOTE: Regarding 3 layer MLP w/multiple outputs:
What is the effect on the Input to Hidden Weight Matrix?
I suspect multiple outputs make that matrix "better," since it is being corrected by a "consensus" of outputs. It's like getting advice not just from your sole brother but from 5 other siblings as well.
Granted, each additional output adds a dimension to the Hid/Out Weight Matrix, which has its own independent corrections, and one might argue that no additional constraints are placed on the In/Hid WM, but I don't think that is the case. The I/H WM will be more constrained and better.
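The consensus claim can be seen directly in standard backprop (sizes and data below are made up): each hidden delta, and hence the input-to-hidden gradient, is a sum over all output error terms, so every output "votes" on the shared first-layer weights.

```python
import math
import random

random.seed(2)

n_in, n_hid, n_out = 3, 4, 8  # toy sizes
W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
W2 = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_out)]

x = [0.2, -0.5, 0.9]
t = [1.0] + [0.0] * (n_out - 1)  # one-hot class target

# forward pass, tanh units
h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
y = [math.tanh(sum(w * hj for w, hj in zip(row, h))) for row in W2]

# output deltas (squared-error loss)
d_out = [(yk - tk) * (1 - yk * yk) for yk, tk in zip(y, t)]

# hidden deltas: each is a SUM over the 8 output deltas -- the "consensus"
d_hid = [(1 - h[j] * h[j]) * sum(d_out[k] * W2[k][j] for k in range(n_out))
         for j in range(n_hid)]

# gradient for the input-to-hidden weight matrix, shaped like W1
grad_W1 = [[d_hid[j] * xi for xi in x] for j in range(n_hid)]
```

With a single output the inner sum has one term; with 8 outputs the first-layer correction blends all 8 error signals.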