I have been studying something like stock patterns for years, applied to
input into the human body. One pixel generates a wave profile over time,
just like one company trading in the stock market.
Pulse-code modulation:
https://en.wikipedia.org/wiki/Pulse-code_modulation
Talk about big data!
Right off the bat I create more data by recording
the change from one moment to the next, then look for repeating
patterns in the data and repeating patterns of change.
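A minimal sketch of that first step (the signal and window width here are my own invented illustration, not from any real dataset): take the first difference of one track, then scan for windows of change that occur more than once.

```python
import numpy as np

# Hypothetical example: one "pixel" (or stock) sampled over time.
signal = np.array([3, 5, 5, 6, 4, 3, 5, 5, 6, 4], dtype=float)

# Record the change from one moment to the next (first difference).
deltas = np.diff(signal)  # one fewer sample than the original

def repeating_windows(x, width):
    # Collect every window of `width` consecutive changes and report
    # the ones that show up at more than one position.
    windows = [tuple(x[i:i + width]) for i in range(len(x) - width + 1)]
    seen = {}
    for i, w in enumerate(windows):
        seen.setdefault(w, []).append(i)
    return {w: idx for w, idx in seen.items() if len(idx) > 1}

repeats = repeating_windows(deltas, width=4)
```

Because the toy signal repeats with period 5, the same 4-step pattern of change shows up twice in `repeats`.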
The next thing I do is look at the data again at different
levels of resolution, or magnification, and build a lot of predictive
graph-theory maps.
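The re-magnification idea can be sketched like this, assuming simple block-averaging as the way to step down in resolution (a toy choice on my part, not a committed design):

```python
import numpy as np

signal = np.arange(16, dtype=float)  # hypothetical pixel trace

def downsample(x, factor):
    # View the trace at a coarser "magnification" by averaging
    # consecutive blocks of `factor` samples.
    trimmed = x[: len(x) // factor * factor]
    return trimmed.reshape(-1, factor).mean(axis=1)

# Build the same data at every level of magnification, halving each time.
levels = [signal]
while len(levels[-1]) >= 2:
    levels.append(downsample(levels[-1], 2))
```

Pattern-finding can then be run on every entry of `levels`, from the raw trace down to a single coarse average.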
But this is for one pixel or stock over time. In my system I have many pixels
recording data in parallel on parallel tracks. So groupings of
parallel pixels can rise and fall together, or a pattern can show one track
rising and, a few moments later, a rise happening on a different track.
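The cross-track case can be sketched with a plain lagged correlation as the way to spot "a rise on one track, then a rise on another a few moments later" (the two tracks and the delay of 3 are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
lag = 3
track_a = rng.normal(size=100)        # one "pixel" track
track_b = np.roll(track_a, lag)       # a second track echoing it later
track_b[:lag] = rng.normal(size=lag)  # noise before the echo starts

def best_lag(a, b, max_lag=10):
    # Correlate track a against track b shifted by each candidate lag
    # and return the shift that matches best.
    scores = {}
    for k in range(max_lag + 1):
        if k == 0:
            scores[k] = np.corrcoef(a, b)[0, 1]
        else:
            scores[k] = np.corrcoef(a[:-k], b[k:])[0, 1]
    return max(scores, key=scores.get)
```

Here `best_lag(track_a, track_b)` recovers the built-in delay of 3, because from that point on `track_b` is just `track_a` shifted.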
The human brain does it differently: it assumes that a certain bit of data
exists and then goes out and looks for it, by creating a very small
detector NN. If one of these very small NNs finds something, it is kept
and is not deleted or re-randomized.
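Here is how I picture one of those tiny detectors, with made-up numbers (the motif, window size, and threshold are all illustrative assumptions): a detector is just a random unit weight vector; if it ever fires on the incoming stream it is kept, while its slot in the pool gets a fresh random guess.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stream: windows of 8 samples, about half containing a motif.
motif = np.array([1., -1., 1., -1., 1., -1., 1., -1.])

def next_window():
    w = rng.normal(scale=0.1, size=8)  # background noise
    if rng.random() < 0.5:
        w = w + motif                  # the "bit of data" assumed to exist
    return w

def random_detector():
    # A tiny detector "NN": one random unit weight vector plus a threshold.
    v = rng.normal(size=8)
    return v / np.linalg.norm(v)

THRESHOLD = 2.0
detectors = [random_detector() for _ in range(50)]
kept = []

for _ in range(200):                      # train in real time, no data stored
    w = next_window()
    for i, d in enumerate(detectors):
        if d @ w > THRESHOLD:             # this detector found something
            kept.append(d)                # keep it, never re-randomize it
            detectors[i] = random_detector()  # its slot gets a fresh guess
```

Every detector that lands in `kept` ends up roughly aligned with the motif, even though nothing ever told the system what the motif was.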
Then the next layer up in the deep unsupervised network assumes a
relationship between the position of one small NN and another small module,
and goes out and looks for it in the data. If it exists, it is kept. At the
same time, on a parallel layer of the deep unsupervised net, it assumes a
temporal relationship between the positions of two or more NN detectors.
Layers deeper into the unsupervised net are more complicated
organizations of the smaller successful detector configurations
on the lower layers.
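The higher-layer step can be sketched as propose-then-verify over the firing times of the tiny detectors (the firing times and the candidate pairs below are invented for illustration):

```python
# Hypothetical firing record: time steps at which each tiny NN fired.
firings = {
    "A": {2, 10, 25},
    "B": {5, 13, 40},
}

def keep_temporal_pair(firings, a, b, dt):
    # A second-layer "detector" assumes that `a` firing is followed,
    # `dt` steps later, by `b` firing. Keep it only if that relationship
    # is actually observed in the data.
    return any(t + dt in firings[b] for t in firings[a])

proposals = [("A", "B", 3), ("A", "B", 7), ("B", "A", 1)]
kept_pairs = [p for p in proposals if keep_temporal_pair(firings, *p)]
```

Only the assumed relationship that really occurs (A followed by B three steps later) survives; the other random proposals are discarded.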
But in doing it this way, no big data is recorded. The net is trained in
real time as the data comes in. The net must then figure out later what it
captured, from the successful working tiny NN detectors.
In a working memory, it automatically builds up randomly, from small to
complex, until one of these tiny NNs activates. It does this for all
the tiny NNs that fired in parallel, then adds the rebuilt data (or
GANs it, or ORs it) into a frame to build up a complete image.
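The frame-building step might look like this, assuming each fired tiny NN can rebuild one small patch and the patches are simply OR-ed into place (the positions and patch shapes are made up):

```python
import numpy as np

# Start from an empty frame in working memory.
frame = np.zeros((4, 4), dtype=bool)

# Hypothetical rebuilt data from tiny NNs that fired in parallel:
# each detector contributes the patch it knows how to reconstruct,
# placed at its own position.
patches = [
    ((0, 0), np.ones((2, 2), dtype=bool)),  # detector 1's patch at (0, 0)
    ((2, 1), np.ones((2, 2), dtype=bool)),  # detector 2's patch at (2, 1)
]

for (r, c), patch in patches:
    h, w = patch.shape
    frame[r:r + h, c:c + w] |= patch  # OR the rebuilt data into the frame
```

Each detector only knows its own small piece; the complete image exists only in the OR-ed frame.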
This way it will scale like the current deep networks.
This is not an RNN; it is 10,000s of NNs stacked in a row. But an RNN/LSTM
could do it.