
Re: Combining neural nets with decision trees


keghn feem
Feb 17, 2016, 8:48:36 PM

Segmentation, like in computer vision?
I am doing a lot of that right now. Have no code yet.
But it will be kinda like this:

https://www.youtube.com/watch?v=juDvLrFQF0U

http://www.cc.gatech.edu/cpl/projects/videosegmentation/

Backpropagation is supervised learning. I am only interested in
unsupervised learning at the moment, sorry.
But there are a few very good backprop algorithms
in this old video; there are probably better ones around today:

https://www.youtube.com/watch?v=CVJOseIJnww


Manuel Rodriguez
Feb 18, 2016, 6:33:47 PM

On Thursday, February 18, 2016 at 1:56:22 AM UTC+1, sean....@gmail.com wrote:
> In particular using a decision tree to select a set of weight vectors for a simple type of neural net I have seems appealing.

Sorry, but this makes no sense. To determine the weights of a neural network there are algorithms like Backpropagation or RPropMinus. The only way I know to use a decision tree with success is KBANN (Knowledge-Based Artificial Neural Networks). Step 1: convert the decision tree into a neural network topology. Step 2: evolve this topology to increase the reward. This kind of technology goes in the direction of the Neural Turing Machine. It's not clear if this was your intention.
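
For what it's worth, here is a minimal sketch of step 1 in the KBANN spirit: each internal node of the tree becomes a near-boolean hidden unit, and each leaf becomes a soft AND over the tests on its path. The toy tree, the unit encoding, and the weight value W are my own illustrative assumptions, not the exact KBANN construction.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy decision tree, hand-coded for illustration:
#   node 0: x[0] > 0.5 ?  no -> leaf A,  yes -> node 1
#   node 1: x[1] > 0.3 ?  no -> leaf B,  yes -> leaf C
tests = [(0, 0.5), (1, 0.3)]           # (feature index, threshold) per internal node
# Each leaf = the sign pattern of the tests on its path (+1 true, -1 false, 0 unused).
leaves = np.array([[-1,  0],           # leaf A: test 0 false
                   [+1, -1],           # leaf B: test 0 true, test 1 false
                   [+1, +1]])          # leaf C: test 0 true, test 1 true

W = 10.0                               # large weights make the units near-boolean

def tree_as_network(x):
    # Hidden layer: one sigmoid unit per internal node, approximating "x[f] > t".
    h = np.array([sigmoid(W * (x[f] - t)) for f, t in tests])
    # Output layer: one unit per leaf, a soft AND over the tests on its path.
    n_pos = (leaves == 1).sum(axis=1)
    z = leaves @ (W * h) - W * (n_pos - 0.5)
    return sigmoid(z)                  # roughly one-hot over the leaves

print(tree_as_network(np.array([0.9, 0.8])))   # strongest activation on leaf C

The resulting weights are then a starting point that step 2 (evolving the topology) would refine.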

keghn feem
Feb 18, 2016, 6:40:39 PM

On Wednesday, February 17, 2016 at 4:56:22 PM UTC-8, sean....@gmail.com wrote:
> I see there is some literature about combining neural nets with decision trees


Here is my way:

A decision tree NN. But first, a non-decision-tree NN:

You have a down-counting register 1000 bits long, all bits set high.
You run all these bits into an autoencoder that recreates the first image of a video; that has been trained into the first of two NNs working in unison.
Counting the sequence register down, in a special way, will cause it to generate the next image from the densely trained NN.

At the same time you have a second NN, a classification NN. If an image is in the autoencoder NN, the classification NN will activate and generate a specific hash code for that image. These two NNs can be trained at the same time.

So if this NN conglomeration wants to check the authenticity of a different video copy, it just plays the video copy and compares them frame by frame.
The first image of the copy video is put through the classification NN. If it generates a detection, and a hash code,
then the autoencoder down counter is set to its starting value. The autoencoder generates its stored image, which
is redirected into the classification NN. If the hash code is the same, then
that is good, and that is what we want. If for some reason the frame
rates get out of line, it is easy to speed up or slow down the large
down counter.
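
A minimal sketch of that verification loop, with both networks stubbed out (the hash function, the frame store, and the toy 10-bit counter standing in for the 1000-bit register are all my assumptions):

import numpy as np
import hashlib

rng = np.random.default_rng(0)
video = [rng.integers(0, 256, (8, 8), dtype=np.uint8) for _ in range(5)]

def classify_hash(frame):
    # Stand-in for the classification NN: a specific hash code per frame.
    return hashlib.md5(frame.tobytes()).hexdigest()[:8]

# Stand-in for the trained autoencoder: maps down-counter values to stored frames.
START = (1 << 10) - 1                       # toy 10-bit counter, all bits set high
stored = {START - i: f for i, f in enumerate(video)}

def verify_copy(copy_frames):
    counter = START                         # reset once the first frame is detected
    for frame in copy_frames:
        regenerated = stored[counter]       # autoencoder output for this counter value
        if classify_hash(frame) != classify_hash(regenerated):
            return False                    # hash codes disagree: not the same video
        counter -= 1                        # step the register to the next stored frame
    return True

print(verify_copy(video))                   # True: an identical copy matches frame by frame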


Decision Tree RNN

Along with the large down-counter register, you have 10 large hash-value
registers that can select up to 10 different directions.
These branch directions are generated by the autoencoder NN along with the image, plus a path number or path numbers.
To select the path you want to take, you erase all the directions you do not want to take.
With the down-counter binary value, the hash direction code, and the path numbers, your trained autoencoder generates your next direction of trained reality.
This would be like a very complex movie with many different endings. Kinda like
the way the mind works :)
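
A sketch of that branch-selection step, again with the autoencoder stubbed out (the register count, the masking scheme, and decode_step are assumed for illustration):

N_DIRECTIONS = 10

def decode_step(counter, direction_hashes, path_numbers):
    # Stand-in for the trained autoencoder: from the down-counter value, the
    # surviving direction hash, and the path numbers, produce the next image
    # plus the candidate direction hashes branching out of it.
    next_image = None                                  # placeholder frame
    next_dirs = [hash((counter, d)) & 0xFFFFFFFF for d in range(N_DIRECTIONS)]
    return next_image, next_dirs

def choose_branch(direction_hashes, keep):
    # Erase every direction you do not want to take; one register survives.
    return [h if i == keep else 0 for i, h in enumerate(direction_hashes)]

counter = (1 << 10) - 1
dirs = [hash(("start", d)) & 0xFFFFFFFF for d in range(N_DIRECTIONS)]
dirs = choose_branch(dirs, keep=3)                     # pick branch 3, zero the rest
image, dirs = decode_step(counter, dirs, path_numbers=[0])
counter -= 1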

This is the way my AGI brain will work on a pure ANN level. But mine
uses unsupervised methods.

Is this like your basic program code? Sorry, I cannot read basic programming
code.


keghn feem
Feb 18, 2016, 8:11:18 PM

There are the identifier hash numbers generated by the classification NN, which
can be 32 bits or more. And then I have 10 path hash values that can be generated
by the autoencoder, which can be 32 bits long or longer. This video-scroll NN
just plays as a monolithic background player; it does not break sub-objects
out of the image, which would need their own NNs.

With the 32-bit or longer hash path integers I can have a lot of big hash values,
and with the sequence down-counter binary values and a really big NN autoencoder
I can store and retrieve all the information on the planet and more.

Hash path values are not the numbers 0 through 9 but locations for hash values
that are 32 bits or longer.

NNs work in more than one dimension. The conscious mind has a problem with
more than 3 dimensions, but the subconscious mind does not.


keghn feem
Feb 18, 2016, 8:41:15 PM


I train the classification NN on one single image, the "first set image".
Then I select my large hash value and whatever fluff data fodder values
for the autoencoder, which is then trained so that it recreates the "first set image".
If this recreated "first set image" is redirected from the autoencoder into the classification NN,
it will activate the classification NN. The only thing it does not see is the large path hash values, which are skimmed off to make the next step.
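
A sketch of that co-training arrangement, with tiny placeholder models (the sizes, the hash/fodder split, and the training loop are all my assumptions):

import torch
import torch.nn as nn

IMG, CODE = 64, 48                        # flattened image size; hash + fodder input size
image = torch.rand(1, IMG)                # the "first set image"
code = torch.cat([torch.randint(0, 2, (1, 16)).float(),   # the chosen hash value bits
                  torch.rand(1, CODE - 16)], dim=1)       # fluff data fodder values

decoder = nn.Sequential(nn.Linear(CODE, 128), nn.ReLU(), nn.Linear(128, IMG), nn.Sigmoid())
classifier = nn.Sequential(nn.Linear(IMG, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt = torch.optim.Adam(list(decoder.parameters()) + list(classifier.parameters()), lr=1e-2)

for _ in range(500):
    opt.zero_grad()
    recon = decoder(code)                 # the autoencoder recreates the image from the code
    noise = torch.rand(1, IMG)            # a negative example for the classifier
    loss = (nn.functional.mse_loss(recon, image)
            + nn.functional.binary_cross_entropy(classifier(image), torch.ones(1, 1))
            + nn.functional.binary_cross_entropy(classifier(noise), torch.zeros(1, 1)))
    loss.backward()
    opt.step()

# The recreated image, redirected into the classifier, should also activate it.
print(classifier(decoder(code)).item())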

keghn feem
Feb 19, 2016, 10:33:27 AM

An autoencoder takes an image in and creates a match on the output by adjusting
the weights.
But it can be hacked: I could input an image of a cup and then train it to generate
an output image of a crow.
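
That "hack" is just training the same encoder/decoder with a target different from its input; a toy sketch, with random tensors standing in for the cup and crow images:

import torch
import torch.nn as nn

cup = torch.rand(1, 64)                   # stand-in for the input image of a cup
crow = torch.rand(1, 64)                  # stand-in for the target image of a crow

net = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 64), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(cup), crow)   # the target is the crow, not the input
    loss.backward()
    opt.step()
# The "autoencoder" now produces a crow whenever it is shown the cup.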

keghn feem
Feb 19, 2016, 2:32:19 PM

I could train an autoencoder to generate an image of a crow from one
binary "on" bit, a one. But it would be inflexible: it could only
recreate two images (data), one from the one and one from a zero input, and
maybe a third from a negative one.

One input, a 1000 x 1000 dark layer, and an output matrix.


keghn feem
Feb 21, 2016, 10:22:36 AM


Cool that you're coding.
Tiny, blurry pictures find the limits of computer image recognition:

http://arstechnica.com/science/2016/02/tiny-blurry-pictures-find-the-limits-of-computer-image-recognition/

keghn feem
Feb 23, 2016, 8:43:29 PM




I was thinking that there are NNs that look at long sequential data.
You learn from patterns in the past.
In the middle of this list is your temporal pointer.
Sequential data behind this pointer is input data for a NN,
and the data in front is what you train the NN to.
It will do branches. Kinda like a time-sliding NN.

Also, when an image is changed into a picture of just its outline by an
edge detector, or a special NN,
you could do the same thing with this slider NN.

The sliding NN would follow the lines like a pinpoint scanner, like
the eye does:
a very focused middle and lower resolution farther out from the center.
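
A sketch of how such training pairs could be cut from a sequence as the temporal pointer slides along (the window sizes are my assumptions):

import numpy as np

def sliding_pairs(seq, past=4, future=2):
    # For each position of the temporal pointer, the data behind it is the
    # NN's input and the data in front of it is the training target.
    pairs = []
    for t in range(past, len(seq) - future + 1):
        pairs.append((seq[t - past:t], seq[t:t + future]))
    return pairs

seq = np.arange(12)
for x, y in sliding_pairs(seq)[:3]:
    print(x, "->", y)                  # e.g. [0 1 2 3] -> [4 5]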





keghn feem
Feb 25, 2016, 3:41:36 PM

From my unsupervised point of view, which I have not tested in a NN format:
I generate the smallest NN scanning classification kernel, in an evolving-algorithm
way, from the smallest kernel and then on to bigger ones later.

This type at first auto-trains itself to activate as little as possible,
and as sparsely as possible, but it must activate more
than once, on whatever data it comes across.
Like if I have a 5 x 5 input NN, it should have only one of its 5 x 5
outputs activate, or the least amount possible, but not nothing.
Then a 5 x 5 autoencoder will be shoved onto the front of it. The autoencoder
will be auto-trained,
and when the autoencoder makes a match with what the classification NN has,
a hash code and/or a time stamp is also encoded into the autoencoder.
The hash code is also recorded into an engram SDR tray, in sequential memory.
Then a slider classification NN is used to find temporal
clustering. Then the slider classification NN detections are moved
into an autoencoder slider NN, generating new hash codes to go back into
the engram pathways, or sequential SDR memory. This will give hierarchy.

Note: the brain has many millions of 5 x 5 scanning NNs, and
bigger, of various sizes, working at the same time.
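
One way to read the "activate as little as possible, but not nothing" objective is as a sparsity penalty plus a floor on total activity; a toy sketch (the kernel, the loss form, and all constants are my assumptions):

import torch
import torch.nn as nn

kernel = nn.Conv2d(1, 1, kernel_size=5, padding=2)     # a 5 x 5 scanning kernel
opt = torch.optim.Adam(kernel.parameters(), lr=1e-2)

patches = torch.rand(16, 1, 5, 5)                      # whatever data it comes across
for _ in range(200):
    opt.zero_grad()
    act = torch.sigmoid(kernel(patches))               # the 5 x 5 output activations
    sparsity = act.mean()                              # activate as little as possible
    floor = torch.relu(1.0 - act.sum(dim=(1, 2, 3))).mean()   # but not nothing
    (sparsity + floor).backward()
    opt.step()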

roun...@hotmail.com
Feb 26, 2016, 1:46:10 PM

On Friday, February 26, 2016 at 10:50:03 AM UTC+8, sean....@gmail.com wrote:
> There are essentially an infinite number of options to explore. Simpler is better I suppose but whatever works works. Google deepmind is an example of whatever works works. Very likely human level AI will be reached in a few years with hardly any understanding of what is actually going on within the systems. That's not a big deal, it just means the systems will be more (power/size) inefficient than required.

Backprop is pretty easy: you can correlate any two things together, including time series or any number of dimensions on either side. It's just a matter of how much you can fit in it before it can't pack any more.

The only problem with backprop is that it takes so fricken long to train it naively. If you want to put a pattern in so it knows it straight away, it's hard to fit it through the synapses without wiping what's in it already.
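
A toy sketch of that wiping effect: naively backprop a first pattern in, then a second, and the first one tends to get disturbed (the shapes and model are placeholders):

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 8))
opt = torch.optim.SGD(net.parameters(), lr=0.1)

def fit(x, y, steps=500):
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(net(x), y).backward()
        opt.step()

xa, ya = torch.rand(1, 8), torch.rand(1, 8)
xb, yb = torch.rand(1, 8), torch.rand(1, 8)
fit(xa, ya)
err_before = nn.functional.mse_loss(net(xa), ya).item()
fit(xb, yb)                    # naively pushing a second pattern through the synapses...
err_after = nn.functional.mse_loss(net(xa), ya).item()
print(err_before, err_after)   # ...tends to wipe some of what was in there already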

roun...@hotmail.com
Feb 26, 2016, 1:47:01 PM

On Friday, February 26, 2016 at 10:50:03 AM UTC+8, sean....@gmail.com wrote:
> There are essentially an infinite number of options to explore. Simpler is better I suppose but whatever works works. Google deepmind is an example of whatever works works. Very likely human level AI will be reached in a few years with hardly any understanding of what is actually going on within the systems. That's not a big deal, it just means the systems will be more (power/size) inefficient than required.

That's not true, man. The designer will know EXACTLY what the robot is doing. So remember that: if you don't know how your robot is thinking, it probably won't work. JMO.

keghn feem
Feb 27, 2016, 6:31:12 PM

hi pine cghoon.

http://j.ee.washington.edu/~bilmes/classes/ee512a_fall_2014/

pineapple head
Feb 27, 2016, 11:07:42 PM

On Sunday, February 28, 2016 at 7:31:12 AM UTC+8, keghn feem wrote:
> hi pine cghoon.
>
> http://j.ee.washington.edu/~bilmes/classes/ee512a_fall_2014/

hehe.

I just added that extra piece to my planar chaining thing, and it didn't quite work yet. I'm going to back up a bit and get the tracker working faster and more accurately, but that's not going to solve my problem yet.