On Wednesday, February 17, 2016 at 4:56:22 PM UTC-8,
sean....@gmail.com wrote:
> I see there is some literature about combining neural nets with decision trees
Here is my way:
A decision tree NN, but first a non-decision-tree NN.
You have a down-counting register 1,000 bits long, with all bits set high.
With all these bits as input, you run an autoencoder that recreates the first image of a video. That video has been trained into the first of two NNs working
in unison.
Counting the sequence register down, in a special way, will cause it to generate the next image from the densely trained NN.
At the same time you have a second NN, a classification NN. If an image is stored in the autoencoder NN, the classification NN will activate and generate a
specific hash code for that image. These two NNs can be trained at the same time.
So if this NN conglomeration wants to check the authenticity of a different
video copy,
it just plays the copy and compares the two videos frame by frame.
The first image of the copy video is put through the classification NN. If it generates a detection and a hash code,
then the autoencoder's down-counter is set to its starting value. The autoencoder generates its stored image, which
is redirected into the classification NN. If the hash codes are the same, then
that is good, and that is what we want. If for some reason the frame
rates get out of line, it is easy to speed up or slow down the large
down-counter.
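A rough sketch of that frame-by-frame check, to make the loop concrete. The "autoencoder" and "classifier" here are stand-in lookup tables, not real neural nets, and the counter start value, frame labels, and hash codes are all invented for illustration:

```python
# Toy sketch of the verification loop: regenerate each stored frame from
# the down-counter, hash it, and compare against the hash of the copy's frame.

COUNTER_START = 7  # stands in for the 1000-bit down-counter's start value

# Hypothetical trained autoencoder: counter value -> reconstructed frame.
autoencoder = {7: "frame_A", 6: "frame_B", 5: "frame_C"}

# Hypothetical trained classifier: frame -> hash code (None = no detection).
def classify(frame):
    hashes = {"frame_A": 0x1A, "frame_B": 0x2B, "frame_C": 0x3C}
    return hashes.get(frame)

def verify_copy(copy_frames):
    """Compare a candidate video copy against the stored one, frame by frame."""
    counter = COUNTER_START
    for frame in copy_frames:
        copy_hash = classify(frame)
        if copy_hash is None:               # frame not known to the classifier
            return False
        stored_frame = autoencoder[counter]  # regenerate the stored frame
        if classify(stored_frame) != copy_hash:
            return False
        counter -= 1                         # count down to the next stored frame
    return True
```

For example, `verify_copy(["frame_A", "frame_B", "frame_C"])` returns `True`, while a copy with a frame out of order, such as `["frame_A", "frame_C"]`, returns `False`.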
Decision Tree RNN
Along with the large down-counter register, you have 10 large hash-value
registers that can select up to 10 different directions.
These branch directions are generated by the autoencoder NN along with the image, as well as a path number or path numbers.
To select the path you want to take, you erase all the directions you do not want to take.
With the down-counter's binary value, the hash direction code, and the path numbers, your trained autoencoder generates your next direction of trained reality.
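The branch-selection step above can be sketched like this. The `generate_step` function is a stand-in for the autoencoder, and every register value, frame label, and path number is invented for illustration:

```python
# Toy sketch of the decision-tree step: 10 hash "direction" registers are
# emitted alongside each frame; the caller erases (zeroes) every direction
# it does not want, and the surviving one drives the next generated frame.

def choose_branch(direction_registers, keep_index):
    """Erase every direction code except the chosen one."""
    return [d if i == keep_index else 0
            for i, d in enumerate(direction_registers)]

def generate_step(counter, direction_registers, path):
    """Stand-in for the trained autoencoder: from the counter value, the
    one surviving direction code, and the path number, produce the next
    frame label and the direction taken."""
    survivors = [d for d in direction_registers if d != 0]
    if len(survivors) != 1:
        raise ValueError("erase all but one direction before stepping")
    direction = survivors[0]
    next_frame = f"frame_{counter}_{direction}_{path}"  # fabricated label
    return next_frame, direction
```

For example, with ten distinct direction codes in the registers, erasing all but index 3 leaves exactly one live direction, and `generate_step` uses it together with the counter and path number to pick the next frame.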
This would be like a very complex movie with many different endings. Kind of like
the way the mind works :)
This is the way my AGI brain will work at a pure ANN level. But mine
uses unsupervised methods.
Is this like your BASIC program code? Sorry, I cannot read BASIC
code.