
ANN Brain 2016


keghn feem

Jun 15, 2016, 3:07:53 PM

The center of it all is the consciousness part, which is artificial neurons
daisy chained together.
A spark jumps from one artificial neuron to the next. This spark moves
through the daisy chain at the same rate as life. Like watching a movie
video of life that is on play: not fast forward, not slow, not on
reversed speed.
When the spark enters the next neuron it sends a pulse to the Main
Auto Encoder Artificial Neural Network, which is the main sequential
memory storage.
A pulse coming off the daisy chain moves the video to the next
frame of life.
All of these video frames of life are stored in a weight space matrix.

The daisy chain neurons take in a second input: pulses from
a Perceptron Artificial Neural Network that detects objects, backgrounds,
and sub features. If these things are being detected, then a minor
pulse will enter the daisy chain neuron. The spark
traveling in the daisy chain of neurons will be reinforced.
If the spark is not reinforced it will fade out.
Detection by the perceptron neural network can also cause a new spark
to start up somewhere else on this extremely long daisy chain of neurons.

The daisy chain neuron has two inputs and two outputs.
The inputs are the spark from the daisy chain neuron before it and the
pulses from the perceptron neural networks.
One output goes to the next daisy chain neuron and the
other output goes to the Main Auto Encoder Neural Network.
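
Here is a minimal sketch of such a daisy chain neuron in Python, assuming
the spark is a scalar energy that decays each step and is boosted by
perceptron pulses. All names and constants (ChainNeuron, DECAY, BOOST,
FIRE_THRESHOLD) are illustrative assumptions, not part of the design above.

DECAY = 0.8           # the spark fades each step unless reinforced
BOOST = 0.5           # reinforcement from a perceptron detection pulse
FIRE_THRESHOLD = 0.1  # below this the spark has faded out

class ChainNeuron:
    def __init__(self):
        self.spark = 0.0

    def step(self, spark_in, perceptron_pulse):
        # Two inputs: the spark from the previous neuron in the chain,
        # and a pulse from the perceptron detectors.
        self.spark = spark_in * DECAY + (BOOST if perceptron_pulse else 0.0)
        fired = self.spark > FIRE_THRESHOLD
        # Two outputs: the spark passed to the next neuron, and a pulse
        # to the main autoencoder that advances memory one frame of life.
        return (self.spark if fired else 0.0), fired

# Propagate one spark down a short chain, one neuron per time step.
chain = [ChainNeuron() for _ in range(10)]
spark = 1.0
for t, neuron in enumerate(chain):
    detected = (t == 4)                   # pretend a perceptron fires at t = 4
    spark, pulse_to_autoencoder = neuron.step(spark, detected)
    print(t, round(spark, 3), pulse_to_autoencoder)

Handing the spark along at one neuron per step is what keeps playback at
the "rate of life" described above.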


http://www.eurekalert.org/pub_releases/2016-06/cp-hi060716.php

keghn feem

Jul 5, 2016, 11:29:34 AM

How insights into human learning can foster smarter artificial intelligence:

https://www.sciencedaily.com/releases/2016/06/160614133609.htm

keghn feem

Jul 7, 2016, 7:53:41 PM

The extremes of neural networks are perceptrons on one end, and on the other
end autoencoders.

Perceptrons detect; when they detect data they are trained for, a flag
or bit goes high. A very simple output, for indication.

An autoencoder's output is a recreation of the data on its input. The recreation can be a
dream, very exact, a complete fantasy, or extracted features.
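
A toy contrast of the two extremes, assuming a single-layer linear version
of each: the perceptron emits one bit, while the autoencoder squeezes the
input through a bottleneck and recreates it. The sizes and weights here are
illustrative assumptions.

import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=8)                   # some input data

w = rng.normal(size=8)                   # stand-in for trained perceptron weights
flag = int(w @ x > 0)                    # perceptron output: one bit goes high
print("perceptron flag:", flag)

enc = rng.normal(size=(4, 8))            # encoder: squeeze 8 values into 4
dec = np.linalg.pinv(enc)                # best linear decoder back to 8
recreation = dec @ (enc @ x)             # autoencoder output: recreated input
print("reconstruction error:", round(float(np.abs(recreation - x).max()), 3))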


What I am working on right now is having a perceptron detect something.
When it does, I have that same perceptron look at a doodle screen.
This screen is rebuilt by an evolution algorithm with feedback.

When the perceptron activates, the doodle image or data is then
trained into an autoencoder.
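
A minimal sketch of that loop, under loose assumptions: the doodle screen
is a flat pixel vector, the perceptron is a fixed linear detector, and the
evolution algorithm is plain mutate-and-keep-the-best with the perceptron's
score as the feedback. The names and constants are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
PIXELS = 64                              # the doodle screen, flattened

w = rng.normal(size=PIXELS)              # stand-in for the trained perceptron

def detect(img):
    # Detection score; past a threshold the perceptron "activates".
    return float(w @ img)

doodle = rng.normal(size=PIXELS)         # start from a random doodle
for generation in range(200):
    candidate = doodle + rng.normal(scale=0.1, size=PIXELS)   # mutate
    if detect(candidate) > detect(doodle):                    # feedback
        doodle = candidate
    if detect(doodle) > 5.0:             # perceptron activates
        break

# Here the evolved doodle would be trained into the big autoencoder
# that acts as main memory (training loop not shown).
print("activated after", generation, "generations, score",
      round(detect(doodle), 2))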

The autoencoder is acting as main memory. This is a large autoencoder that
can hold lots of images or data, and has one set of weight matrices.

The perceptrons, the detectors, are each made with their own
weight matrix.
These NN detector swarms are combined together with particle swarm optimization:

https://en.wikipedia.org/wiki/Particle_swarm_optimization
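
A minimal particle swarm optimization sketch, treating each particle as one
detector's weight vector and scoring it by how closely it matches a target
pattern. The fitness function and constants are illustrative assumptions,
not a full detector swarm.

import numpy as np

rng = np.random.default_rng(1)
DIM, SWARM, STEPS = 16, 20, 100
target = rng.normal(size=DIM)            # pattern the detectors should find

def fitness(p):
    # Higher is better: negative squared distance to the target pattern.
    return -float(np.sum((p - target) ** 2))

pos = rng.normal(size=(SWARM, DIM))      # each particle = one detector's weights
vel = np.zeros((SWARM, DIM))
best_pos = pos.copy()                    # each particle's personal best
best_fit = np.array([fitness(p) for p in pos])

for _ in range(STEPS):
    g = best_pos[best_fit.argmax()]      # swarm-wide best position
    r1, r2 = rng.random((2, SWARM, DIM))
    vel = 0.7 * vel + 1.5 * r1 * (best_pos - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    fit = np.array([fitness(p) for p in pos])
    improved = fit > best_fit
    best_pos[improved] = pos[improved]
    best_fit[improved] = fit[improved]

print("best fitness found:", round(best_fit.max(), 3))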

keghn feem

Aug 29, 2016, 3:55:02 PM

I have been studying something like stock patterns for years: the input into
the human body. One pixel generates a wave profile over time, just
like one company trading in the stock market.

Pulse-code modulation:
https://en.wikipedia.org/wiki/Pulse-code_modulation

Talking about big data!
Right off the bat I create more data by recording
the change from one moment to the next. Then I look for repeating
patterns in the data and repeating patterns in the change.
The next thing I do is look at the data again at many different
levels of resolution, or magnification, and build a lot of prediction graph
theory maps.
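
A sketch of those first two steps, assuming a 1-D signal (one pixel, or one
stock) sampled over time: record the moment-to-moment change, then look for
near-repeating windows in the raw data, in the changes, and at a coarser
magnification. Window size, tolerance, and data are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
signal = np.sin(np.arange(200) * 0.3) + rng.normal(scale=0.05, size=200)

deltas = np.diff(signal)                 # change from one moment to the next

def repeating_windows(x, width=8, tol=0.2):
    # Index pairs whose length-`width` windows nearly repeat each other.
    wins = np.lib.stride_tricks.sliding_window_view(x, width)
    pairs = []
    for i in range(len(wins)):
        for j in range(i + width, len(wins)):    # skip overlapping windows
            if np.abs(wins[i] - wins[j]).mean() < tol:
                pairs.append((i, j))
    return pairs

print(len(repeating_windows(signal)), "repeats in the data")
print(len(repeating_windows(deltas)), "repeats in the change")
print(len(repeating_windows(signal[::2])), "repeats at half magnification")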

But this is for one pixel or stock over time. In my system I have many pixels
recording data in parallel, on parallel tracks. So groupings of
parallel pixels can rise and fall together, or the pattern can be a rise
on one track and, a few moments later, a rise on a different track.
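
A sketch of spotting that "rise on one track, then a rise on another track
a few moments later" with a lagged correlation between two pixel tracks.
The lag and the synthetic data are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(3)
track_a = rng.normal(size=300)                     # one pixel's track
track_b = np.roll(track_a, 5) + rng.normal(scale=0.3, size=300)  # echoes a, 5 steps later

def lagged_corr(a, b, lag):
    # Correlation of a[t] with b[t + lag].
    return float(np.corrcoef(a[:-lag], b[lag:])[0, 1])

best = max(range(1, 20), key=lambda k: lagged_corr(track_a, track_b, k))
print("track b follows track a after", best, "steps")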



The human brain does it differently: it assumes that a certain bit of data exists
and then goes out and looks for it, by creating a very small
detector NN. If one of these very small NNs finds something, it is kept and
is not deleted or re-randomized.
Then the next layer up in the deep unsupervised network assumes a
position of one small NN relative to another small module, and then goes out and
looks for that in the data. If it exists, it is kept. At the same time,
a parallel layer of the deep unsupervised net assumes the positions
of two or more NN detectors temporally.
Layers deeper up into the unsupervised net are more complicated organizations
of the successful detector configurations on the lower layers.
But in doing it this way no big data is recorded. The net is trained in real
time as the data comes in. The net must then figure out later what it
captured, from the successfully working tiny NN detectors.
In working memory it builds up randomly, from small to complex,
until one of these tiny NNs activates. Then it does this for all
the tiny NNs that fired in parallel, and ANDs or ORs their rebuilt data,
or GANs, onto a frame to build up a complete image.
This way it will scale like the current deep networks.
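
A sketch of the keep-or-re-randomize loop for the tiny detectors, assuming
each detector is a small random linear filter over a patch of incoming data:
detectors that find something are kept, the rest are re-randomized and try
again. Sizes and thresholds are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(4)
PATCH, N_DETECTORS, THRESHOLD = 9, 50, 9.0

detectors = rng.normal(size=(N_DETECTORS, PATCH))    # tiny random linear detectors
kept = np.zeros(N_DETECTORS, dtype=bool)

for _ in range(200):                      # data arriving in real time
    patch = rng.normal(size=PATCH)        # stand-in for one patch of input
    hits = np.abs(detectors @ patch) > THRESHOLD
    kept |= hits                          # a detector that found something is kept
    losers = ~kept                        # the rest are re-randomized to try again
    detectors[losers] = rng.normal(size=(int(losers.sum()), PATCH))

print(int(kept.sum()), "of", N_DETECTORS, "tiny detectors were kept")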

This is not an RNN; it is 10,000s of NNs stacked in a row. But an RNN/LSTM could
do it.