
The SP theory of intelligence: two articles


Gerry Wolff

Jun 9, 2014, 8:40:36 AM
Two articles may be of interest:

* "The SP theory of intelligence: an overview" (PDF (bit.ly/1puspu4), J G Wolff, Information, 4 (3), 283-341, 2013 (bit.ly/1hz0lFE)). See also "The SP theory of intelligence (slides)" (PDF, bit.ly/1rZVrpY). Below this message, there are some suggestions for viewing the slides.

* "The SP theory of intelligence: benefits and applications" (PDF (bit.ly/1pbXwgq), J G Wolff, Information, 5 (1), 1-27, 2014 (bit.ly/1lcquWF)).

The overall aim in developing the SP theory has been to simplify and integrate ideas across artificial intelligence, mainstream computing, and human perception and cognition.

A key concept in the theory is that of multiple alignment, borrowed from bioinformatics but with important differences.
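For readers unfamiliar with the bioinformatics idea being borrowed, here is a minimal sketch of plain pairwise sequence alignment by dynamic programming. This is not the SP model's multiple alignment, which differs in important ways; it only illustrates the underlying notion of lining up matching symbols across sequences.

```python
# Minimal pairwise alignment via longest common subsequence (LCS).
# An illustration of the bioinformatics notion that SP's "multiple
# alignment" generalises, NOT the SP model itself.

def align(a, b):
    """Return the longest common subsequence of token lists a and b."""
    n, m = len(a), len(b)
    # dp[i][j] = length of LCS of a[:i] and b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    # Trace back to recover the aligned symbols.
    out, i, j = [], n, m
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

print(align("t h i s b o y r u n s".split(),
            "t h a t b o y r a n".split()))
```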

The theory is realised in the form of a computer model. It is envisaged that this will provide the basis for the development of a high-parallel, open-source "SP machine", hosted on an existing high-performance computer. This would provide a means for researchers everywhere to see what can be done with the system and to create new versions of it.

The SP theory has things to say about several aspects of computing and cognition, including unsupervised learning, concepts of computing, aspects of mathematics and logic, the representation of knowledge, natural language processing, pattern recognition, several kinds of reasoning, information storage and retrieval, planning and problem solving, and aspects of neuroscience and of human perception and cognition.

There is further information on www.cognitionresearch.org/sp.htm.

I will be happy to try to answer questions and to hear comments.

Gerry Wolff

--

Dr Gerry Wolff PhD CEng

CognitionResearch.org, jgw AT cognitionresearch DOT org, +44 (0) 1248 712962, +44 (0) 7746 290775, Skype: gerry.wolff, Web: www.cognitionresearch.org.

Viewing the slides

Unless it is self-explanatory, each slide has one or more notes, each one shown with a 'speech bubble' icon, normally in the top left-hand corner. To see a note, position the cursor over the icon. If the whole note is not visible, right-click on the icon to see it all.

To view a set of slides, it is probably best to download the file and open it in Adobe Reader (adobe.ly/1ae8KZ). Other systems may not show the notes properly.

The following controls may be useful: Full screen: CTRL-L; Escape from full screen: ESC; Zoom in: CTRL-plus; Zoom out: CTRL-minus; Next slide: left-click; Previous slide: right-click; Scrolling left or right, up or down: use the 'hand'.

keghn feem

Jul 25, 2015, 9:10:16 AM
On Friday, July 24, 2015 at 10:40:08 PM UTC-7, sean....@gmail.com wrote:
> Maybe this relates to some of what you are doing?:
I am chasing patterns too. I deal with the rawest of data and then apply:


Kolmogorov Complexity pattern:
https://en.wikipedia.org/wiki/Kolmogorov_complexity

Solomonoff's theory of inductive inference:
https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference


SP theory is the same thing.
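Kolmogorov complexity itself is uncomputable, but compressed size is a standard practical proxy for it. A minimal sketch, using zlib purely as an illustrative compressor (the choice of compressor and the test strings are my own assumptions):

```python
import zlib

def complexity(data: bytes) -> int:
    """Rough Kolmogorov-complexity proxy: length of the zlib-compressed data."""
    return len(zlib.compress(data, 9))

simple = b"ab" * 500                                         # highly repetitive
mixed  = bytes((i * 37 + i * i) % 251 for i in range(1000))  # much less regular

# The repetitive string compresses far better, i.e. has lower "complexity".
print(complexity(simple), complexity(mixed))
```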





keghn feem

Jul 28, 2015, 7:24:15 PM
Nothing wrong with SP theory; I think I could make it work with my AGI theories. But for me, my AGI theory is about chasing pattern loops: a sequence of pictures that forms a temporal loop. The lower rows of pixels in each image, left to right, would be the audio recording track; the next row would be the touch track; the one below that would be the arm-position track; and so on...
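The row-packing idea above can be sketched as follows; plain Python lists stand in for real pixel rows, purely as an assumed illustration:

```python
# Sketch of the frame layout described above: a camera image on top,
# then one extra row per sensor track (audio, touch, arm position)
# appended underneath, so one moment of experience is a single 2-D array.
# Plain lists stand in for real pixel rows.

def pack_frame(image_rows, audio, touch, arm):
    """Append sensor tracks as extra pixel rows under the image."""
    return image_rows + [audio, touch, arm]

def unpack_frame(frame):
    """Split a packed frame back into the image and its three tracks."""
    return frame[:-3], frame[-3], frame[-2], frame[-1]

image = [[10, 20, 30], [40, 50, 60]]        # tiny 2x3 "camera image"
frame = pack_frame(image, audio=[1, 2, 3], touch=[0, 0, 1], arm=[90, 45, 0])
img, audio, touch, arm = unpack_frame(frame)
print(arm)  # [90, 45, 0]
```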

Kolmogorov classifiers would then be used to find repeating features, and then an edit-distance algorithm would compare Kolmogorov classifications against one another for unsupervised learning. Then a transformation algorithm and a Kalman filter would track how classified features change from one image to the next.
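The edit-distance step can be sketched as plain Levenshtein distance over classification codes; the letter codes here are hypothetical stand-ins for real classifier outputs:

```python
def edit_distance(a, b):
    """Levenshtein distance between two classification sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (x != y)))   # substitution
        prev = cur
    return prev[-1]

# Two frames whose classified-feature codes mostly agree are "near",
# so an unsupervised learner could cluster them together.
print(edit_distance("ABCD", "ABED"))  # one substitution apart
```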

From these classifications, a number can be generated for up-close objects such as cups, fingers, arms, and people, and mid-range and background classification numbers for trees, mountains, hills, terrain, skies, and clouds.
These background classification values will be used like a GPS address on the internal map of the AGI.
The close-up classification numbers carry the higher detail, for quick prediction and manipulation of smaller objects.

For reading, writing, speaking, and signing, there is the
grounding of symbols.

Classified objects that do nothing, stand out very clearly from the background, and are simple, quick, and easy to make and recreate with the least energy will be picked up by the AGI brain very quickly. These cursed dead-weight ghosts and echoes must be utilized in some way to make them more efficient, so AIXI theory is applied.
For objects in an image, the AGI logic tries to convert them to a more compressed, well-defined object with transformation algorithms in the next or following images. It selects a symbol but never finds a path to it, and so the object binds to the symbol by association and constant obsessing on it.

When an AGI becomes more mature, and with the use of motors, it will develop a very complex map of pattern loops, with parallel loops and sub-loops. Once this happens it can start swapping sections of pattern loops with one another, like DNA, to form new pattern loops. This will advance into a 3D simulator for modelling the world around the AGI. The 3D simulator will give it the ability to imitate others, learn from its parents, and dream.

A cascading RNN brain can do all of this.

The emotional system is figured out. I will write about that later on.





a more complex pattern loop, engrams



keghn feem

Jul 28, 2015, 10:03:19 PM
Very interesting. Even standard memory technologies are advancing very well at the moment.

keghn feem

Jul 29, 2015, 1:28:00 PM

keghn feem

Aug 4, 2015, 4:58:14 PM
Thanks for the share.

I like to work with pattern loops, so our trees combine at some point.
The AGI model that I am working with keeps branches that lead to and pass through rewards. It keeps branches to anti-rewards short. And yes, it sends the metadata back in time, so a course correction can be made by way of motors.
Pavlov's dogs showed immediate reward symptoms when food was near. A few months later, when the dogs heard the feeders enter the building, a few hundred feet away and a few minutes before feeding, the dogs showed the symptoms of reward: salivating.

Also, for an insect brain, my branching model would treat complicated pattern loops like anti-reward branches, for lack of memory and processing power.
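One simple way to "send the metadata back in time" is a discounted-return calculation, in which cues earlier in a branch receive a smaller share of the terminal reward. This is my own illustration of the Pavlov effect, not necessarily the mechanism intended above; the event names and the discount factor are assumptions.

```python
# A cue that reliably precedes food (the feeders' footsteps) ends up
# with a positive value of its own: each earlier event in the branch
# receives a discounted share of the terminal reward.

def credit(events, reward, gamma=0.9):
    """Assign each event in a branch a discounted share of the reward."""
    return {e: reward * gamma ** (len(events) - 1 - i)
            for i, e in enumerate(events)}

branch = ["footsteps", "door opens", "bowl appears", "food"]
values = credit(branch, reward=1.0)
print(values)  # earlier cues get a smaller, but still positive, value
```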

P and NP patterns for insects or humans?



keghn feem

Aug 4, 2015, 4:59:39 PM
AGI Brain. A cascading RNN.


Take a look at this video of a car racing around a race track:

https://www.youtube.com/watch?v=j_maFaAjcPY

It could be a pattern loop of life in an AGI brain. It has a beginning and an end, and then comes back to the start.

The car completes the loop around the track within 10,000 image frames.
Each of these images is recorded, or trained, into a Neural Network Chip (NNC).
These chips are lined up like dominoes in a loop, just like the race track, and tied together by wires, which are addressing and data buses.
It records by having a program pointer point to the first chip and clock in data; then the program pointer is clocked to the next NNC, and then the next, and so on.

At a later time this can be replayed, forward or backward, at any speed.
NNCs can be hardware or software chips, and of any size.
The NNCs are trained as autoencoders and classifiers. Later on, they can merge into more densely trained NNCs.

This race track, or pattern loop, sits in the middle of the AGI brain, surrounded by millions and millions of other NNCs. All of these free NNCs are waiting to get into the loop: to replace one, or to be added into the race pattern loop.
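The ring of chips and the program pointer can be sketched as follows; the "chips" here just store their frames rather than being trained autoencoders, an obvious simplification of mine:

```python
# Sketch of the recording loop described above: chips arranged in a
# ring, a program pointer that clocks one frame into one chip at a
# time, and replay forward or backward.

class Ring:
    def __init__(self, n_chips):
        self.chips = [None] * n_chips
        self.ptr = 0

    def record(self, frame):
        self.chips[self.ptr] = frame                  # clock data into current chip
        self.ptr = (self.ptr + 1) % len(self.chips)   # advance the program pointer

    def replay(self, start=0, step=1):
        """Yield frames forward (step=1) or backward (step=-1)."""
        i = start
        for _ in range(len(self.chips)):
            yield self.chips[i % len(self.chips)]
            i += step

ring = Ring(4)
for f in ["frame0", "frame1", "frame2", "frame3"]:
    ring.record(f)
print(list(ring.replay()))                   # forward
print(list(ring.replay(start=3, step=-1)))   # backward
```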


The NNC can output onto an output bus.

The way it learns is by letting the weight states in all of the unused NN chips jump around randomly, by the action of a program, by outside electromagnetic noise, by a little ionizing radiation, or by outside electrostatic discharge:


http://www.eurekalert.org/pub_releases/2015-07/ru-ndb071615.php

When an image shows up on the bus from a video camera, the NN chip that is in the best state at that moment is selected, and its weight matrix is locked into place. If this capture is better than the one already in the loop, it is swapped in. Also, copies of learned NN chips are copied into unused NN chips, and their weight matrices are vibrated very slightly, randomly.
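Read as a random-search scheme, the jitter-select-lock-copy cycle might look like the sketch below. The reconstruction-error measure, the target vector, and all parameters are assumptions of mine, not a faithful hardware model.

```python
import random

# Many spare "chips" hold random weight vectors; when an input arrives,
# the chip whose weights best match it is selected and locked, then
# copies of it are vibrated slightly and the best copy swapped in if it
# improves. A simple random-search hill-climb.

random.seed(0)

def error(weights, target):
    return sum((w - t) ** 2 for w, t in zip(weights, target))

def learn(target, n_chips=200, rounds=20, jitter=0.1):
    chips = [[random.uniform(0, 1) for _ in target] for _ in range(n_chips)]
    best = min(chips, key=lambda w: error(w, target))   # select and lock the best chip
    for _ in range(rounds):
        # copy the locked chip into spares and vibrate the weights slightly
        spares = [[w + random.gauss(0, jitter) for w in best]
                  for _ in range(n_chips)]
        candidate = min(spares, key=lambda w: error(w, target))
        if error(candidate, target) < error(best, target):
            best = candidate                            # swap in the better capture
    return best

target = [0.2, 0.8, 0.5]
w = learn(target)
print(error(w, target))  # should end up close to zero
```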

NN logic will form between NNCs to predict where sub-classified features, objects, and other things will show up.

Pattern loops, or engrams, can get very complex, with parallel loops, sub-loops, and so on.

keghn feem

Aug 4, 2015, 10:46:41 PM
On Tuesday, August 4, 2015 at 4:46:02 PM UTC-7, sean....@gmail.com wrote:
> Thanks for the information, especially about learning anti-rewards.
> The key idea about prefix/context trees for reinforcement learning is that early on it would only have learned very short simple rules that would only boost its ability to get another reward a little bit.

I agree.



> Over time it would acquire more specific rules with higher probabilities.

I agree.


> I presume there would be a snowball effect over time.

I very much believe in this.

> I'm not sure if reinforcement learning has been done in that exact way before. The machine learning literature is extensive. Anyway I'll try.

Good luck. I find those papers difficult to read. I would be very pleased to know what you find.



For building a tree, I need a source of data, like white noise, music, or silence coming from a radio.

To make a perceptron, let's say I have a thousand hand-crafted SVM algorithms to choose from.

I randomly generate a list of SVM algorithms that are going to sample the stream of sound. It will be a perceptron detecting some specific thing at an unknown fidelity.

The list could be one line, or a few thousand lines.

There is data going in, but what is coming out? Is the perceptron a one-shot, or is it activating on anything?

I make pattern loops out of which perceptrons activated.

If a randomly generated perceptron never fires, it will be deleted some time later; there is no hurry.

If there is one that fires all the time, a different one that fires periodically will replace it.

What could these perceptrons be detecting? Each could be an autoencoder, an edge detector, a feature detector, an object detector, dithered reality, or fantasy.

All life forms must comply with an energy-management scheme, or some other survival scheme, so that is what I use to get the best perceptrons: the reward of energy.

I do it this way because I can make a lot of them, from simple to complex, through an automated process.

I can make neural-network perceptrons too, which I call NN chips.
These perceptrons can be strung together to make a cascading RNN.
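The pool of randomly generated detectors, pruned by firing rate, can be sketched as below; simple threshold tests stand in for the hand-crafted SVM algorithms, an assumption made only so the example can run:

```python
import random

# Randomly generated threshold detectors sample a signal. Ones that
# never fire get deleted; ones that fire all the time are dropped in
# favour of ones that fire only periodically.

random.seed(1)
signal = [random.uniform(-1.0, 1.0) for _ in range(1000)]  # stand-in for radio audio

def make_perceptron():
    th = random.uniform(-2.0, 2.0)       # some thresholds are unreachable on purpose
    return lambda x, th=th: x > th

pool = [make_perceptron() for _ in range(100)]

def firing_rate(p):
    return sum(p(x) for x in signal) / len(signal)

# Keep only detectors that fire sometimes, but not constantly.
kept = [p for p in pool if 0.0 < firing_rate(p) < 1.0]
print(len(pool), len(kept))
```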



keghn feem

Aug 5, 2015, 5:55:16 PM
I would be very interested to hear more when you get results, or when new ideas come to you.

Do you know anybody who can write science papers for a reasonable rate, buddy?


I am storing my sequential data in JPEG images and then encoding them into an MPEG video, mostly for compression, not really for viewing.

I have sequential images, with extra data in the lower part of each image: which image to branch to next, an audio track, an arm-position indicator, a motor trigger, and so on.
Then I compress each image to JPEG format and thread them into an MPEG video :)

https://www.youtube.com/watch?v=TWEXCYQKyDc

I am taking a bit of a chance with the metadata, because the compression is lossy.
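Since lossy compression will perturb metadata pixels slightly, one hedged way to protect them (my suggestion, not something described above) is to store each value several times and take the median on read:

```python
from statistics import median

# Write each metadata value into several pixels; take the median when
# reading, so small lossy-compression noise gets voted away.

def write_value(value, copies=5):
    return [value] * copies          # replicate the value across pixels

def read_value(pixels):
    return median(pixels)            # noise-tolerant read

stored = write_value(200)
# simulate lossy-compression noise corrupting some of the copies
noisy = [198, 201, 200, 207, 199]
print(read_value(noisy))  # 200
```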


I am using OpenCV to prove my pattern theories, in C/C++ code, all on a Linux Mint 17 64-bit OS.

For sound I am using PortAudio, SoX, and ALSA.
For a breakout board to feel the world and work the motors, I plan on using a Raspberry Pi 2 and/or a Teensy.
And for vision, a USB 3.0 video camera or a built-in webcam.

keghn feem

Aug 8, 2015, 7:54:46 PM
Thanks. Good luck with your work too.
Thanks for letting me talk about my work on super-unsupervised learning.
You did not say whether your work is unsupervised, supervised, or somewhere in between.

keghn feem

Aug 25, 2015, 4:17:15 PM