
AI Now


Oli Mpele

Jun 6, 2014, 8:51:20 AM
New project: AI Now [http://ainow.weebly.com]

Click on the "Explore the Concept" link to see the work-in-progress documents.

Intro here: https://docs.google.com/document/d/12hFQqXplWk-YeLOIRv48n-CGQkxfeOK2ILCHm7QRh7A/edit?usp=sharing

Sorry there isn't any working code to try. (It's a long story.)

Curt Welch

Jun 7, 2014, 10:59:48 AM
From your google doc:

"The bootstrap process is based on the assumption that the AI will first
acquire the ability to understand a language, and then read information
encoded in that language from various sources."


"When developing the design for the language sub-system, the guiding
principle was that the AI must be capable of increasing its understanding
of any given sentence towards (essentially) 100% accuracy, by continuing to
think about it."

How does it measure its accuracy? How does it know if what it's doing is
increasing, or decreasing its current accuracy?

--
Curt Welch http://CurtWelch.Com/
cu...@kcwc.com http://NewsReader.Com/

Oli Mpele

Jun 8, 2014, 7:36:47 AM
On Saturday, June 7, 2014 4:59:48 PM UTC+2, Curt Welch wrote:

> "When developing the design for the language sub-system, the guiding
> principle was that the AI must be capable of increasing its understanding
> of any given sentence towards (essentially) 100% accuracy, by continuing to
> think about it."
>
> How does it measure its accuracy? How does it know if what it's doing is
> increasing, or decreasing its current accuracy?

That is the interesting bit, which I think is an advance over current paradigms. It knows because it thinks complex thoughts about the structures conveyed (in-formation), and tries to "make sense of" the interpretations.

The more sense they make, the better. There is no shortcut.

You need the integrated higher thought process, to tell you if the information makes sense. If you have that, you can then compute feedback into the (lingual) interpretation process.

Also, what is really cool is that the lingual interpretation is computed using an algorithm that can also (theoretically) interpret any other sensory domain. It only needs to be trained appropriately.

Though I predict that using a custom bootstrap procedure (for language) makes sense, and makes it much easier to train -- it's essentially parameterizing a much more generic method that can also work for vision, or sound.

Think of it this way: sensory-domain -[data]-> interpretative-process -[information]-> thought -[feedback]-> interpretative-process --> ...

I could go into more detail on this process, but simply think of it as a sequence of layers: in the case of language for example, encoding -> character -> word -> phrase / sentence / paragraph -> situation -> motion -> etc.

where "situation" is an example of an internal model (I sometimes call it a "partial world-model") of what the sentence expresses, and "motion" is a model that expresses transitions between situations; etc.

Curt Welch

Jun 8, 2014, 9:06:21 PM
Ok, but you are throwing around a lot of folk psychology terms without
giving any hard facts to sink our teeth into.

"It knows because it thinks complex thoughts"

So how's the complexity of thought measured in your system? What's the
difference between a simple thought and a complex thought?

" and tries to "make sense of" the
interpretations."

So what is the actual algorithm for "making sense" and how does it know
when something is "making sense"?

So, you have data. That's 1's and 0's. How does a computer program "make
sense" or "have a complex thought" as it is manipulating 1's and 0's?

You seem to be throwing around a lot of words that humans use to talk about
what they think is happening in their heads, but you are not closing the
gap between this highly informal way we humans talk and the very formal way
that computers manipulate 1's and 0's.

Anyone can say shit like, "I'm going to make a computer think and feel and
that will be the first time anyone has solved AI!" But just echoing the
words everyone is taught to label our brain behavior with is not solving
AI.

To solve AI, you have to show how you have LINKED the 1 and 0 language of
computers, to the folk psychology language of everyday human babble
about our mental abilities.

What is thinking? What does the computer have to do before it is thinking?
How is running any normal computer program to process sensory data not an
example of thinking? What are feelings? What are our emotions? How are these
coded in a computer language? Is A = B + 1 an example of an emotion? An
example of thinking? What sort of processing of sensory data distinguishes
what you are suggesting from the millions of algorithms already developed
in AI for tasks like image processing, or voice recognition, or language
processing?

I'm just saying, that so far, what you have written, doesn't seem to
explain anything real. I have no clue at all, what I would write in
computer code, to try and implement your ideas. I have no clue at all, if
I were to look at some code, if it was an example of what it is you are
suggesting. You are being way too vague.

Oli Mpele

Jun 9, 2014, 4:30:32 AM
I talk at that level to reach the widest possible audience, and because I have developed my own custom technical jargon for most things.

The model is that of a symbolic graph, where each node is (implicitly) typed, and each connection is both typed and directed. There can be any number of connections between any two nodes (though usually there is only one, iirc).

Each node can be thought of like a symbol, representing anything, whether abstract or real.

One way to think of it is much like a relational database, with the tables being the types of nodes, and the connections being the relations between the rows of different tables.

In this analogy it becomes clear that any node can *index* a certain amount of information pertaining to it. This information can be retrieved by locating the node, and traversing its neighborhood.

Specifically, each node can have a (non-cyclical) tree underneath it, which contains information directly known to apply to it.

In this way any kind of information can be represented in the graph.

When "thought" happens in this representation, existing sub-trees are read, they are modified, and new nodes and connections are created.

At the same time, the representation is weighted, so every link has a trust value applied to it, that measures roughly on a scale from 0 to 100 how much trust we have in that link "holding true".
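If it helps to see the representation as data, here is a minimal Python sketch of such a graph. The class and field names are shorthand I am making up for this post, not the project's code:

    # Minimal illustrative sketch of a typed, directed, trust-weighted graph.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        node_type: str        # implicit type, e.g. "word", "situation"
        label: str            # the symbol this node stands for

    @dataclass
    class Link:
        source: Node
        target: Node
        link_type: str        # typed, directed connection, e.g. "part-of"
        trust: float = 50.0   # 0..100: how much we trust this link "holding true"

    @dataclass
    class Graph:
        nodes: list = field(default_factory=list)
        links: list = field(default_factory=list)

        def connect(self, src, dst, link_type, trust=50.0):
            link = Link(src, dst, link_type, trust)
            self.links.append(link)
            return link

        def neighborhood(self, node):
            # the information a node "indexes" is read by traversing its links
            return [l for l in self.links if l.source is node or l.target is node]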

> "It knows because it thinks complex thoughts"
>
> So how's the complexity of thought measured in your system? What's the
> difference between a simple thought and a complex thought?
>
> " and tries to "make sense of" the interpretations."
>
> So what is the actual algorithm for "making sense" and how does it know
> when something is "making sense"?

What "makes sense" is simply anything that is "internally consistent and externally coherent", which is a common heuristic for theory choice.

There is no shortcut to making sense of something. There is a plurality of techniques involved, and they differ depending on the structure in question.

Specifically, think of a metaphor. You need to understand the metaphor to understand the intended meaning, i.e. to "make sense of" the sentence.

That means the algorithm needs to be able to understand metaphors.

And then you have reference, and various hypothetical temporal structures, and possibly spatial and numerical relations, and you see that the techniques involved in figuring out whether anything is actually sensible, or non-sense, are very varied.

During all that, you also need to look for coherence with your own background knowledge, to see if the interpretation is "likely", or "very unlikely".

And so on.

Now, if something is "non-sense" in the estimation of the algorithm, the basic assumption is that the *interpretation* is wrong. Not that there is not something which makes sense that can be interpreted into (or out of) the sentence, but that we didn't find it yet.

So the algorithm keeps looking until it has a reasonable interpretation, or it runs out of resources (mostly time) to try to read this particular sentence.

We humans proceed exactly the same way.

The guiding approach to figuring this out is a constraint solver, which slowly tries to optimize the consistency of the interpretation, until an aggregated trust value hits a certain (dynamic) threshold.

At that point, we move on with "our understanding" of the sentence, which may include any number of higher structures, such as models of possible situations and motions.
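Schematically (and only schematically; propose, aggregate_trust and threshold_for are placeholders I am inventing to show the control flow) the loop looks something like this:

    # Sketch of "keep interpreting until aggregated trust passes a dynamic
    # threshold, or resources run out". Placeholder functions throughout.
    import time

    def interpret_sentence(sentence, propose, aggregate_trust, threshold_for,
                           time_limit=1.0):
        deadline = time.time() + time_limit            # resource limit (mostly time)
        best, best_trust = None, 0.0
        while time.time() < deadline:
            candidate = propose(sentence, best)        # one constraint-solving step
            trust = aggregate_trust(candidate)         # combine the link trust values
            if trust > best_trust:
                best, best_trust = candidate, trust
            if best_trust >= threshold_for(sentence):  # dynamic threshold
                break                                  # good enough: move on
        return best                                    # None if nothing made sense in time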

> So, you have data. That's 1's and 0's. How does a computer program "make
> sense" or "have a complex thought" as it is manipulating 1's and 0's?
>
> You seem to be throwing around a lot of words that humans use to talk about
> what they think is happening in their heads, but you are not closing the
> gap between this highly informal way we humans talk and the very formal way
> that computers manipulate 1's and 0's.
>

encoding -> character -> word -> phrase / sentence / paragraph -> situation -> motion

Take another look at this, but now with your question in mind. You suggest [later on] that anything can count as a thought, as long as it processes information. I agree. And what is a "complex" thought? It is simply a thought at the higher layers of this hierarchy.

So "complex" here does not imply complicated and many-parted, but embedded in a specific framework of meta-methods that situates it at a specific level of abstractedness. Motions are processed as motions, and this counts as a very complex thought.

> Anyone can say shit like, "I'm going to make a computer think and feel and
> that will be the first time anyone has solved AI!" But just echoing the
> words everyone is taught to label our brain behavior with is not solving
> AI.
>

I understand your skepticism. But I'm only using those words to help explain the workings of the algorithm, not because the technicalities are not already clear to me.

I have written enough exploratory code in this paradigm to know exactly (down to most of the optimisations) how it will look in the end. It is only very difficult to assemble *all* of the pieces, and make them work together in harmony. That is why I am currently looking for collaborators and investors for this project.

One point though: the architecture I am working on *does not feel*. It only thinks; abstract thoughts. (I believe feeling can be simulated, but that requires a somewhat different architecture.)

> To solve AI, you have to show how you have LINKED the 1 and 0 language of
> computers, to the folk psychology language of everyday human babble
> about our mental abilities.

The link between the folk psychology terms I am using, and the model, is that each node can represent, for example, a sentence, a word, a character, or even a single number (from an encoding which maps numbers to characters).

We build up the layers, slowly, using different algorithms and collected rules each time, until we reach the interesting parts.

> What is thinking? What does the computer have to do before it is thinking?
> How is running any normal computer program to process sensory data not an
> example of thinking? What are feelings? What are our emotions? How are these
> coded in a computer language? Is A = B + 1 an example of an emotion? An
> example of thinking? What sort of processing of sensory data distinguishes
> what you are suggesting from the millions of algorithms already developed
> in AI for tasks like image processing, or voice recognition, or language
> processing?
>
>
> I'm just saying, that so far, what you have written, doesn't seem to
> explain anything real. I have no clue at all, what I would write in
> computer code, to try and implement your ideas. I have no clue at all, if
> I were to look at some code, if it was an example of what it is you are
> suggesting. You are being way too vague.

I think the vagueness is necessary to a degree, so we do not get lost in technicalities. But I understand your point. I hope I can clear up any questions that people might have, so we can get the implementation going quickly.

Curt Welch

Jun 9, 2014, 1:14:50 PM
Oli Mpele <oli....@gmail.com> wrote:
> On Monday, June 9, 2014 3:06:21 AM UTC+2, Curt Welch wrote:
> > Oli Mpele <oli....@gmail.com> wrote:
> > > ...
> >
> > Ok, but you are throwing around a lot of folk psychology terms without
> > giving any hard facts to sink our teeth into.
>
> I talk at that level to reach the widest possible audience, and because I
> have developed my own custom technical jargon for most things.

Yes, I do that as well. All creative people do.

> The model is that of a symbolic graph, where each node is (implicitly)
> typed, and each connection is both typed and directed. There can be any
> number of connections between any two nodes (though usually there is
> only one, iirc).
>
> Each node can be thought of like a symbol, representing anything, whether
> abstract or real.
>
> One way to think of it is much like a relational database, with the
> tables being the types of nodes, and the connections being the relations
> between the rows of different tables.
>
> In this analogy it becomes clear that any node can *index* a certain
> amount of information pertaining to it. This information can be retrieved
> by locating the node, and traversing its neighborhood.
>
> Specifically, each node can have a (non-cyclical) tree underneath it,
> which contains information directly known to apply to it.
>
> In this way any kind of information can be represented in the graph.

Sounds like a classic GOFAI sort of approach. Like Cyc. Knowledge graphs.

> When "thought" happens in this representation, existing sub-trees are
> read, they are modified, and new nodes and connections are created.
>
> At the same time, the representation is weighted, so every link has a
> trust value applied to it, that measures roughly on a scale from 0 to
> 100 how much trust we have in that link "holding true".

Ok. Probabilistic language based knowledge graphs.

> > "It knows because it thinks complex thoughts"
> >
> > So how's the complexity of thought measured in your system? What's the
> > difference between a simple thought and a complex thought?
> >
> > " and tries to "make sense of" the interpretations."
> >
> > So what is the actual algorithm for "making sense" and how does it know
> > when something is "making sense"?
>
> What "makes sense" is simply anything that is "internally consistent and
> externally coherent", which is a common heuristic for theory choice.
>
> There is no shortcut to making sense of something. There is a plurality
> of techniques involved, and they differ depending on the structure in
> question.
>
> Specifically, think of a metaphor. You need to understand the metaphor to
> understand the intended meaning, i.e. to "make sense of" the sentence.
>
> That means the algorithm needs to be able to understand metaphors.
>
> And then you have reference, and various hypothetical temporal
> structures, and possibly spatial and numerical relations, and you see
> that the techniques involved in figuring out whether anything is
> actually sensible, or non-sense, are very varied.
>
> During all that, you also need to look for coherence with your own
> background knowledge, to see if the interpretation is "likely", or
> "very unlikely".
>
> And so on.
>
> Now, if something is "non-sense" in the estimation of the algorithm, the
> basic assumption is that the *interpretation* is wrong. Not that there
> is not something which makes sense that can be interpreted into (or
> out of) the sentence, but that we didn't find it yet.
>
> So the algorithm keeps looking until it has a reasonable interpretation,
> or it runs out of resources (mostly time) to try to read this particular
> sentence.
>
> We humans proceed exactly the same way.
>
> The guiding approach to figuring this out is a constraint solver, which
> slowly tries to optimize the consistency of the interpretation, until an
> aggregated trust value hits a certain (dynamic) threshold.
>
> At that point, we move on with "our understanding" of the sentence, which
> may include any number of higher structures, such as models of possible
> situations and motions.
>
> > So, you have data. That's 1's and 0's. How does a computer program
> > "make sense" or "have a complex thought" as it is manipulating 1's
> > and 0's?
> >
> > You seem to be throwing around a lot of words that humans use to talk
> > about what they think is happening in their heads, but you are not
> > closing the gap between this highly informal way we humans talk and the
> > very formal way that computers manipulate 1's and 0's.
>
> encoding -> character -> word -> phrase / sentence / paragraph ->
> situation -> motion
>
> Take another look at this, but now with your question in mind. You
> suggest [later on] that anything can count as a thought, as long as it
> processes information. I agree. And what is a "complex" thought? It is
> simply a thought at the higher layers of this hierarchy.
>
> So "complex" here does not imply complicated and many-parted, but
> embedded in a specific framework of meta-methods that situates it at a
> specific level of abstractedness. Motions are processed as motions, and
> this counts as a very complex thought.
>
> > Anyone can say shit like, "I'm going to make a computer think and feel
> > and that will be the first time anyone has solved AI!" But just echoing
> > the words everyone is taught to label our brain behavior with is not
> > solving AI.
>
> I understand your skepticism. But I'm only using those words to help
> explain the workings of the algorithm, not because the technicalities
> are not already clear to me.

Sure. You got technical enough above for me to grasp the direction you are
heading in. A language-like probabilistic knowledge graph.

> I have written enough exploratory code in this paradigm to know exactly
> (down to most of the optimisations) how it will look in the end. It is
> only very difficult to assemble *all* of the pieces, and make them work
> together in harmony. That is why I am currently looking for collaborators
> and investors for this project.
>
> One point though: the architecture I am working on *does not feel*. It
> only thinks; abstract thoughts. (I believe feeling can be simulated, but
> that requires a somewhat different architecture.)

Yes, your approach is highly similar to projects like Cyc that have been
struggling to produce something useful for a very long time.

> > To solve AI, you have to show how you have LINKED the 1 and 0 language
> > of computers, to the folk psychology language of everyday human babble
> > about our mental abilities.
>
> The link between the folk psychology terms I am using, and the model, is
> that each node can represent, for example, a sentence, a word, a
> character, or even a single number (from an encoding which maps numbers
> to characters).
>
> We build up the layers, slowly, using different algorithms and collected
> rules each time, until we reach the interesting parts.
>
> > What is thinking? What does the computer have to do before it is
> > thinking? How is running any normal computer program to process sensory
> > data not an example of thinking? What are feelings? What are our
> > emotions? How are these coded in a computer language? Is A = B + 1 an
> > example of an emotion? An example of thinking? What sort of processing
> > of sensory data distinguishes what you are suggesting from the millions
> > of algorithms already developed in AI for tasks like image processing,
> > or voice recognition, or language processing?
> >
> > I'm just saying, that so far, what you have written, doesn't seem to
> > explain anything real. I have no clue at all, what I would write in
> > computer code, to try and implement your ideas. I have no clue at all,
> > if I were to look at some code, if it was an example of what it is you
> > are suggesting. You are being way too vague.
>
> I think the vagueness is necessary to a degree, so we do not get lost in
> technicalities. But I understand your point. I hope I can clear up any
> questions that people might have, so we can get the implementation going
> quickly.

I don't think such an approach will do much other than build clever
question answering machines.

Your approach is all centered around language-based thinking. Though we
humans of course do a lot of that, and it represents much of our most
"intelligent" thinking, such as most of what takes place at our higher
centers of learning, it strikes me that such a focus will always miss the
bigger picture.

That is, language-based thinking is only one highly specialised type of
behavior we produce. It does not, for example, explain how a human learns
to invent, design, build, and ride a bike. All of that can be done without
any language concepts flying around in our heads. It can be done with only
concepts of physical objects and how they interact with each other and move
in time.

Though we can express the physics of how a baseball is thrown, and caught,
in the language of math and physics, we don't learn to throw, or catch a
baseball, by thinking about the math.

Intelligence isn't about language. It isn't about thought. It's about
physical behavior. That's why animals have brains. If your approach, and
system architecture isn't structured for solving the problems of physical
behavior, it will never be intelligent in my view.

Cyc, and other language based knowledge graphs aren't structured in a way
that works for solving the problem of intelligent behavior. As such, they
never become intelligent. They are just large knowledge storage systems
that can be queried -- like Watson.

If you want to create true intelligence, what the database must store, is
information about HOW to ACT.

Storing the fact that DOG is a type of Animal, gives the system no
information about when it's correct to speak the word DOG or when it's
correct to speak the word ANIMAL, for example.

If I see a dog, how should I act? If I see a lion charging me with its
mouth open, how should I act? If I see a baseball flying towards my head,
how should I act? If I see my friend Bob, how should I act? These are the
questions that an intelligent agent must be able to answer instantly in
response to these and any other situation it might be faced with.

Does your database allow a computer to know how to act in these, and a
million other possible situations it might be faced with in the future?

If not, it's not an intelligent computer. It's just another type of
information storage system.

Oli Mpele

Jun 10, 2014, 1:33:31 PM
> Sounds like a classic GOFAI sort of approach. Like Cyc. Knowledge graphs.

It's like Cyc alright :)

> > ...
>
> Ok. Probabilistic language based knowledge graphs.
>
> > ...
>
> Sure. You got technical enough above for me to grasp the direction you are
> heading in. A language-like probabilistic knowledge graph.
>
> Yes, your approach is highly similar to projects like Cyc that have been
> struggling to produce something useful for a very long time.

I am fully aware of that, and I think I know *why* they have been struggling: they are using standard components in standard ways; whereas the real trick is in how the functionality of these components is used and interwoven, and you need to work up from the requirements, instead of down from the canonical solutions. Roughly.

> > ...
>
> I don't think such an approach will do much other than build clever
> question answering machines.
>
> Your approach is all centered around language-based thinking. Though we
> humans of course do a lot of that, and it represents much of our most
> "intelligent" thinking, such as most of what takes place at our higher
> centers of learning, it strikes me that such a focus will always miss the
> bigger picture.
>
> That is, language-based thinking is only one highly specialised type of
> behavior we produce. It does not, for example, explain how a human learns
> to invent, design, build, and ride a bike. All of that can be done without
> any language concepts flying around in our heads. It can be done with only
> concepts of physical objects and how they interact with each other and move
> in time.

Completely agree with you!

> Though we can express the physics of how a baseball is thrown, and caught,
> in the language of math and physics, we don't learn to throw, or catch a
> baseball, by thinking about the math.

Again, this is completely true. But the conceptual switch is: it is exactly the other way around.

We use the same patterns to speak language, that we use when thinking about physical problems. The capabilities to speak language did not magically jump into existence - they are simply re-using the capabilities that had already evolved, but applying them to a new *domain* - structured sounds that can be traded between individuals in a group.

The tool-using animal is already essentially capable of speech. I do not believe, as Chomsky for instance does, that grammar is a particular capability that evolved separately from our regular capabilities. (Though I do not want to misrepresent his views, as I am not too familiar with them - that seems to be the gist of it.)

One of the most fascinating results of my research has been how the same algorithms pop up again and again all over the entire range of human thought, so that all abstract thought can essentially be reduced to a small set of algorithms.

This means that language is simply the application of these *much more highly generic algorithms* to a specific domain.

Consider for example the sentence: "let me know when you get there". This sentence implies a future situation, with certain properties, and the capability of the addressee to recognize that situation when it occurs in the future, and to *then* remember this conversation, and then execute the action of ... etc.

All of what I just described are actions that are really "physical" thought, in the way you described. Picturing, perceiving, recognizing, remembering, ~contacting someone~, etc.

The fact that they are "expressed" or rather, "referred to" by the lingual expression, does nothing to change that.

That is the key point.

The underlying capabilities need to be there. The lingual expression, and mapping (i.e. interpreting) the sentence to map to those capabilities, are another and separate (i.e. additional) issue.

--

I do very much suppose [this] includes the "capability" to learn to ride a bike, because this is simply the capability to use the muscles associated with that action.

Well. At least, talking about robots.

Of course the human implementation may use more lower-level, distributed and decentralized learning mechanisms, with reflexes being encoded in the peripheral nervous system. But that is merely a detail of the implementation! It is perfectly possible to learn to control a body, and execute certain actions, using only abstract thought - the way we might learn to remote-control a puppet using such a strange thing as a keyboard or mouse interface.

The capabilities of abstract thought supersede those of the reflexive, embodied "thought".

Though, I admit, I haven't given this aspect much attention at all: the really interesting part *for me* is in the designing of such things.

Specifically, I have been trying to create a strong AI design that is capable of *engineering*.

> Intelligence isn't about language. It isn't about thought. It's about
> physical behavior. That's why animals have brains. If your approach, and
> system architecture isn't structured for solving the problems of physical
> behavior, it will never be intelligent in my view.
>
> Cyc, and other language based knowledge graphs aren't structured in a way
> that works for solving the problem of intelligent behavior. As such, they
> never become intelligent. They are just large knowledge storage systems
> that can be queried -- like Watson.
>
> If you want to create true intelligence, what the database must store, is
> information about HOW to ACT.
>
> Storing the fact that DOG is a type of Animal, gives the system no
> information about when it's correct to speak the word DOG or when it's
> correct to speak the word ANIMAL, for example.
>
> If I see a dog, how should I act? If I see a lion charging me with its
> mouth open, how should I act? If I see a baseball flying towards my head,
> how should I act? If I see my friend Bob, how should I act? These are the
> questions that an intelligent agent must be able to answer instantly in
> response to these and any other situation it might be faced with.
>
> Does your database allow a computer to know how to act in these, and a
> million other possible situations it might be faced with in the future?

Yes.. Or, at least it can figure it out. It might die first, of course. But, if it has enough time to study ahead of time, it will be OK.

> If not, it's not an intelligent computer. It's just another type of
> information storage system.

I am aware of the requirements of strong AI, and I am definitely claiming the entire range. Since this is the first time (this month) that I discuss my research with others, I am excited to see the kind of questions that pop up.

I hope to take all your concerns into account and write my answers to these questions up into a paper, which will hopefully present a more insightful introduction to the paradigm.

In the end though, the problem space is vast, and I do not expect to cover the solutions, except in actual code, when the time comes.

Curt Welch

Jun 10, 2014, 7:33:38 PM
Oli Mpele <oli....@gmail.com> wrote:

> This means that language is simply the application of these *much more
> highly generic algorithms* to a specific domain.

Yes, you are preaching to the choir on this issue with me. I totally
believe that there are fundamental principles in the brain that are equally
at work in allowing us to learn to walk as in allowing us to learn to talk.


> Consider for example the sentence: "let me know when you get there". This
> sentence implies a future situation, with certain properties, and the
> capability of the addressee to recognize that situation when it occurs in
> the future, and to *then* remember this conversation, and then execute the
> action of ... etc.
>
> All of what I just described are actions that are really "physical"
> thought, in the way you described. Picturing, perceiving, recognizing,
> remembering, ~contacting someone~, etc.
>
> The fact that they are "expressed" or rather, "referred to" by the
> lingual expression, does nothing to change that.
>
> That is the key point.
>
> The underlying capabilities need to be there. The lingual expression, and
> mapping (i.e. interpreting) the sentence to map to those capabilities,
> are another and separate (i.e. additional) issue.


Or, as I might describe it: if we learn that cookie jars have cookies in
them, it means we are triggered to look inside the jar to see if we can
find and eat a cookie.

The vision of the cookie jar is language to us. It means "cookies likely
to be found here". If a robot can learn to recognize a cookie jar and
use that recognition to find cookies, then it's demonstrated its ability to
learn a very simple language. Our more complex language is only an
extension of these basic abilities we need to do even the most simple
interactions with the environment.

> I do very much suppose [this] includes the "capability" to learn to ride
> a bike, because this is simply the capability to use the muscles
> associated with that action.
>
> Well. At least, talking about robots.
>
> Of course the human implementation may use more lower-level, distributed
> and decentralized learning mechanisms, with reflexes being encoded in the
> peripheral nervous system. But that is merely a detail of the
> implementation! It is perfectly possible to learn to control a body, and
> execute certain actions, using only abstract thought - the way we might
> learn to remote-control a puppet using such a strange thing as a keyboard
> or mouse interface.

The basic distributed hardware of the brain translates sensory data into
actions. To do this, the brain must classify the sensory data into
categories like "cat", or "cookie jar", or "bike" or "falling down". The
same principle carries all the way through to action. Each action, is
nothing more than a classification of sensory data.

If a rock is flying towards us through the air, the brain must classify
that as "rock about to hit us", which then gets classified as "run you
fool!".

The entire process of turning sensory data into the appropriate action, is
a classification problem. It's the same classification problem all the way
through the system. If we look at it from the input side, we tend to call it
something like image recognition. If we look at it from the output side, we
like to call it things like action selection. It is highly likely that the
brain is using the same fundamental types of circuits to do all of it,
making the entire process one of mapping sensory data flows, into output
action flows.
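As a toy illustration only (the classifier functions and the action table here are made up), the whole thing can be sketched as one chain of classifiers feeding an action:

    # Toy illustration, not a brain model: sensory data flows through a chain
    # of classifiers and comes out the other end as an action.
    def perceive_and_act(sensory_frame, classifiers, action_table):
        category = sensory_frame
        for classify in classifiers:    # e.g. edges -> "rock" -> "rock about to hit us"
            category = classify(category)
        return action_table.get(category, "do nothing")  # e.g. "run you fool!"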

Whether that output action flow might be called riding a bike or talking,
it's still just complex behavior triggered by our sensory data.

The word "thought", as we use it in informal English however, is reserved
for a very specific type of brain mapping. It's when our brain, is
representing a state of the environment, that is false -- that doesn't
actually exist as part of the environment.

When I talk to myself in my head, the brain has partially coded a state of
the environment as "me saying some words". It's what the brain should do,
in response to me actually talking -- moving my lips and making sounds.
But in this case of "talking to myself in my head", I'm not actually making
sounds. My ears are not picking up those sounds. My lips are not
producing those sounds. But yet, my brain has created an internal state
that signals these things are happening.

My brain in this case, is creating a FALSE and UNTRUE state representation
of the external environment. It's an action, (talking), and sensory
perception (hearing) that the brain has half attempted to create, but
failed.

It's best understood in my view, as a delusion, or illusion the brain is
creating.

When our brain does this, we are mostly able to detect that the brain is
doing it. We know the voice is not in the environment, even though it is
in our head. We know our lips are not moving, and our ears are not really
hearing us speak, so we know the brain is just "hearing something that
doesn't exist" in the environment.

When the brain gets into this delusional/illusional mode, we call it
"thinking", and "thought".

It's highly useful because it allows us to experiment with actions without
having to actually perform them and use the innate low level power of the
brain to predict how the environment will respond, before we do some action
for real.

But our thoughts (and memories), are just that. They are illusions the
brain is creating that have proven to be useful to us.

Of course, at times, the illusions become so powerful, that we can't tell
whether they are real or not, and then that's where they cross over from
being useful, to being harmful to us. That's where we stop calling them
thoughts and memories, and start to call them delusions and hallucinations,
or just dreams. But it's the same brain mechanism for all of this.

It's a low level, very simple, but also parallel and distributed, real
time, continuous, translation of sensory data, into effector actions.

> The capabilities of abstract thought supersede those of the reflexive,
> embodied "thought".

Abstract thought is just pattern recognition, or sensory data
classification. It's all the same thing.

The concept of "cat" is just an abstraction created from all the sensory
data about cats we have experienced in our lives.

> Though, I admit, I haven't given this aspect much attention at all: the
> really interesting part *for me* is in the designing of such things.
>
> Specifically, I have been trying to create a strong AI design that is
> capable of *engineering*.

I've been trying to create a design that is capable of doing all the things
humans do for 30+ years now.

If in fact there are fundamental low level mechanisms that make it all
happen, as I believe, then it's wrong to focus on only one ability. When
we focus only on playing chess, we produce really good chess playing
algorithms, which are of no use for general AI. That is, of no use for all
the other things humans do.

If the part of the brain we use for playing chess is a very different type
of system than the part we use for driving a car, or for doing
engineering, then we can look at, and solve, each problem class separately
and make good progress.

But if they are all solved by one common set of features, then we must
find those common features, and not overly focus on only one task.

And I certainly believe that the brain does solve them all with only a
very small set of fundamental types of information processing, and that
until we figure out what that is, we won't have created strong AI, or what
is commonly called AGI these days.

> > Intelligence isn't about language. It isn't about thought. It's about
> > physical behavior. That's why animals have brains. If your approach,
> > and system architecture isn't structured for solving the problems of
> > physical behavior, it will never be intelligent in my view.
> >
> > Cyc, and other language based knowledge graphs aren't structured in a
> > way that works for solving the problem of intelligent behavior. As
> > such, they never become intelligent. They are just large knowledge
> > storage systems that can be queried -- like Watson.
> >
> > If you want to create true intelligence, what the database must store,
> > is information about HOW to ACT.
> >
> > Storing the fact that DOG is a type of Animal, gives the system no
> > information about when it's correct to speak the word DOG or when it's
> > correct to speak the word ANIMAL, for example.
> >
> > If I see a dog, how should I act? If I see a lion charging me with its
> > mouth open, how should I act? If I see a baseball flying towards my
> > head, how should I act? If I see my friend Bob, how should I act? These
> > are the questions that an intelligent agent must be able to answer
> > instantly in response to these and any other situation it might be
> > faced with.
> >
> > Does your database allow a computer to know how to act in these, and a
> > million other possible situations it might be faced with in the future?
>
> Yes.. Or, at least it can figure it out. It might die first, of course.
> But, if it has enough time to study ahead of time, it will be OK.
>
> > If not, it's not an intelligent computer. It's just another type of
> > information storage system.
>
> I am aware of the requirements of strong AI, and I am definitely claiming
> the entire range. Since this is the first time (this month) that I
> discuss my research with others, I am excited to see the kind of
> questions that pop up.

I worked and thought on my own for many decades. It was mostly just a fun
puzzle to work on in my spare time. Then I started to discuss with others,
oh, about 10 years ago. I learned a lot. But mostly, what I found was
that the world was full of idiots. :)

> I hope to take all your concerns into account and write my answers to
> these questions up into a paper, which will hopefully present a more
> insightful introduction to the paradigm.

Whatever works for you. What you will find in AI, is that if you take 100
people that think they know the solution, or the approach to finding the
solution, you will have 100 different opinions.

It's fascinating how totally in disagreement the entire field is.

> In the end though, the problem space is vast, and I do not expect to
> cover the solutions, except in actual code, when the time comes.

I think the solution is simple, and when found, will represent very little
code. I think it's a reinforcement based learning algorithm. I think it's
the type of algorithm that can be expressed and explained in one page, as
is typical of all the machine learning algorithms.
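Tabular Q-learning (the standard textbook algorithm, not mine, and not a claim about what the brain does) is a fair example of the scale I mean; the little env interface with reset(), actions(state) and step(state, action) is just assumed for the sketch:

    # Tabular Q-learning: a standard one-page reinforcement learning algorithm,
    # shown only as an example of how small such algorithms are.
    import random
    from collections import defaultdict

    def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
        Q = defaultdict(float)                          # (state, action) -> value
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                if random.random() < epsilon:           # explore
                    action = random.choice(env.actions(state))
                else:                                   # exploit
                    action = max(env.actions(state), key=lambda a: Q[(state, a)])
                next_state, reward, done = env.step(state, action)
                best_next = max(Q[(next_state, a)] for a in env.actions(next_state))
                Q[(state, action)] += alpha * (reward + gamma * best_next
                                               - Q[(state, action)])
                state = next_state
        return Q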

Though our brains are capable of learning great complexity of behavior, I
believe the underlying mechanism that allows that complexity, is trivially
simple. I think one of the greatest mistakes of AI research is to fail to
recognize this, and for people to make the assumption that if our behavior
is complex, the underlying mechanism of our brain, must also be massively
complex.

When an AI project fails to reach its goals, my argument is always that
the project is too complex, and too specialized. I think we will solve the
problem of Strong AI not when we figure out how much more to add to our
systems, but when we figure out how to leave most of it out.

I think the solution is very graph like as well. But I think the graph
needs to be one that produces quick answers to mapping sensory input data
streams, into effector output streams.

I think it's best understood as a signal processing problem of data flowing
from the inputs, to the outputs, and being transformed as it flows. This
takes the form of a very different implementation from a language-like
knowledge graph, but yet, still has a good bit in common with it if you can
just squint your eyes and tilt your head as you look at it. :)

TruthSlave

Jun 11, 2014, 1:20:25 PM
From your google doc:

" I hasten to add that this excludes all true sense-perception, in
the way of feelings or emotions.
The design does not have emotions and it does not feel, nor
experience pleasure or pain. At least in the current version.

I conjecture that it is possible to design and build feelings,
but that would require a change in architecture which it does
not make sense to make when simply trying to reach the first
working self-recursive AI. Especially, because the latter can
self-evolve the former, if it is given express direction to
do so.
"


What do you mean by 'feelings'? Yes we have labels for particular
emotional states which we are taught to recognize, but after that
what exactly would you say 'feelings' were?

To answer my own question, I would say our feelings or our emotions
are an awareness of the 'effect' which experience, information, or
nutrition has on our cognition. These feelings are then attached to
the experiences to aid the recall of similar experiences. Feelings
at their most basic signal a 'like' or 'dislike' of some experiences.

With this insight I believe it is possible to build emotions into
a.i. You could liken emotions to a record of the a.i.'s state, or to
the effect which the data it processes or creates has on its
computation.

You could liken emotions to any number of background parameters
which affect the a.i.'s performance: the resources expended in a
calculation, or the memory consumed, or what I call its neural
resolution, e.g. when the a.i. achieves a balanced network after training.

All of these states might change over time, according to the
information being processed as the a.i. sought a state of neural
soundness.

If the a.i. was aware of expending energy with no quantifiable gain,
it might liken that state to a machine frustration. If the a.i.
solved a problem, arrived at a match or a solution, it might
liken that state to pleasure. These states would drive the a.i., as
a kind of core program underpinning its pursuit of knowledge,
or neural completion. Where it had nothing new to do, it might
register that unchanging state as a kind of machine boredom.
Of course these states are relative. The difference between
activity and inactivity would be based on some kind of on-going
comparison, maybe hard coded into your a.i.
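To put that in rough pseudo-code (everything here is invented purely to illustrate the idea, not a real design):

    # Invented illustration: "machine emotions" as background registers updated
    # from what the current work is actually doing for the system.
    def update_mood(mood, energy_spent, progress_made, novelty):
        if energy_spent > 0 and progress_made == 0:
            mood["frustration"] += energy_spent    # effort with no quantifiable gain
        if progress_made > 0:
            mood["pleasure"] += progress_made      # solved something, found a match
        mood["boredom"] = 0 if novelty > 0 else mood["boredom"] + 1  # nothing new to do
        return mood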

As an aside I happen to think much of our cognition derives
from the way we process the image. The first brain would have
been that which processed light, and made the connection to
its environment, its goals, its experience. The way we process
the image, as an exploration of contrast, textures, relationships,
values, could all be said to be fundamental to the cognitive
processes which followed.

As an added thought you could liken emotions to a pre-rational
target / goal program, or a kind of instinct. These instincts
would have evolved to form hard rules of survival. Emotions:
good/bad, safe/unsafe.





--- news://freenews.netfront.net/ - complaints: ne...@netfront.net ---

keghn feem

Jun 11, 2014, 4:48:39 PM
On Wednesday, June 11, 2014 10:20:25 AM UTC-7, TruthSlave wrote:
> On 06/06/2014 13:51, Oli Mpele wrote:
>

> the way of feelings or emotions.

Broken link:
http://freenews.netfront.net


The AI mind is a bunch of subroutines trying to move as one, like
a big school of fish, or a large flock of starlings.

In an organic body, different parts put out hormone signals to vote.
In the AI mind there will be many emotional registers to tally the vote
of all the selfish, "own interest" subroutines. The collective sum of all
of these subroutines, the conscious mind, must keep these registers
properly maintained.

An emotional register will be incremented by the subs. Time will
decrement these registers.

The AI may have to do self help, because over time the subs can take on
a life of their own. Like that of the aging human mind.


Oli Mpele

Jun 11, 2014, 5:02:11 PM
On Wednesday, June 11, 2014 10:48:39 PM UTC+2, keghn feem wrote:
> On Wednesday, June 11, 2014 10:20:25 AM UTC-7, TruthSlave wrote:
> > On 06/06/2014 13:51, Oli Mpele wrote:
> > the way of feelings or emotions.
>
> Broken link:
> http://freenews.netfront.net
>
> The AI mind is a bunch of subroutines trying to move as one, like
> a big school of fish, or a large flock of starlings.
>
> In an organic body, different parts put out hormone signals to vote.
> In the AI mind there will be many emotional registers to tally the vote
> of all the selfish, "own interest" subroutines. The collective sum of all
> of these subroutines, the conscious mind, must keep these registers
> properly maintained.
>
> An emotional register will be incremented by the subs. Time will
> decrement these registers.

That's an interesting way of putting it, but the kind of emotional registers you describe do not have much to do with "feelings" in the philosophical sense.

They are more a technical analogue of emotions, in their capacity of causing actions, and changes in the mind-state.

I mostly only use one such register: trust. There are no other "emotions" involved. Trust is put into the belief in various structures "holding true", in some sense. Trust can be aggregated from different sub-routines, as you say, until it accumulates and passes a dynamic threshold, leading the AI to trust the statement "enough" to proceed with some action.
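A crude sketch of what I mean, with placeholder names and numbers rather than the real design:

    # Crude sketch: trust contributions from sub-routines accumulate until a
    # dynamic threshold is passed. Placeholder names and numbers only.
    def trusted_enough(contributions, base_threshold=70.0, urgency=0.0):
        threshold = base_threshold - urgency   # the threshold is dynamic
        total = 0.0
        for source, trust in contributions:    # e.g. [("parser", 30.0), ("world-model", 45.0)]
            total += trust
            if total >= threshold:
                return True                    # trust it enough to act on it
        return False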


> The AI may have to do self help, because over time the subs can take on
> a life of their own. Like that of the aging human mind.

Yes. One could say that. Much like sleep purges the mind, and much the way that short-term memory "degrades" into long-term memory - i.e. not all parts of it are assimilated - so the AI also does some housekeeping on its representations.

Oli Mpele

Jun 11, 2014, 5:14:39 PM
Following my reply to keghn feem, below, I agree with this partially. On the interpretation where "trust" is your like/dislike.

> With this insight I believe it is possible to build emotions into
> a.i. You could liken emotions to a record of the a.i.'s state, or to
> the effect which the data it processes or creates has on its
> computation.

It is possible, on the limited interpretation of emotions as aspects of the mental state which tend to cause certain kinds of changes in the behavior or thought of the agent.

But I was talking about feelings in the sense of qualia. Not that I like that word, as I am a naturalist, but *that* is what I meant.

> You could liken emotions to any number of background parameters
> which affect the a.i.'s performance: the resources expended in a
> calculation, or the memory consumed, or what I call its neural
> resolution, e.g. when the a.i. achieves a balanced network after training.
>
> All of these states might change over time, according to the
> information being processed as the a.i. sought a state of neural
> soundness.
>
> If the a.i. was aware of expending energy with no quantifiable gain,
> it might liken that state to a machine frustration. If the a.i.
> solved a problem, arrived at a match or a solution, it might
> liken that state to pleasure. These states would drive the a.i., as
> a kind of core program underpinning its pursuit of knowledge,
> or neural completion. Where it had nothing new to do, it might
> register that unchanging state as a kind of machine boredom.
> Of course these states are relative. The difference between
> activity and inactivity would be based on some kind of on-going
> comparison, maybe hard coded into your a.i.

I have analogues to these, but they are sub-conscious from the AI's perspective. They do not occur as introspectable to the AI.

In other words, there are "background parameters" as you say, which guide the behavior of the AI over time, and there are analogues to "boredom" and "excitement" found among these.

>
> As an aside i happen to think much of our cognition derives
>
> from the way we process the image. The first brain would have
>
> been that which processed light, and made the connection to
>
> its environment, its goals, its experience. the way we process
>
> the image, as an exploration of contrast, textures, relationships,
>
> values, could all be said to be fundamental to the cognitive
>
> processes which followed.
>

My understanding of the evolution of brains is unfortunately limited. I do believe (in analogy to my own design) that all the various capabilities, such as visual, auditory, and higher cognitive processing, share a lot of common "sub-routines", that make them behave rather similarly under the hood. But this is currently only a conjecture.

> As an added thought you could liken emotions to a pre-rational
> target / goal program, or a kind of instinct. These instincts
> would have evolved to form hard rules of survival. Emotions:
> good/bad, safe/unsafe.

Certainly, in the biological implementation. I agree with you completely.

TruthSlave

Jun 11, 2014, 5:55:57 PM
On 11/06/2014 22:14, Oli Mpele wrote:
> I have analogues to these, but they are sub-conscious from the AI's perspective.
> They do not occur as introspectable to the AI.


On this point, I would say this distinction between the sub-conscious
and the conscious, in terms of a.i. or neural networks, could be likened
to the distinction between the hidden layers of a network and its output
layer. The unconscious would be the hidden substrate which occupies
the greater mass of nodes beneath that final conscious output layer.

You could make a distinction between useful determinations, and not
so sound conclusions which still contain a high enough probability
to be useful. In that regard the subconscious plays a part in what
we learn, with language, to recognize and respond to.

As a digression, perhaps there's a way to train our conscious to
recognize the unconscious, e.g. what some call our sixth or pre-sense.
That sense we have for recognition, like deja vu, even before we are
fully conscious of what we have recognized. [I can already see ways
one might do this.]

I would say your machine would need a threshold between its conscious
states, which you might then adjust for useful information, the way
adrenaline with its 'heightened emotional state' affects the mind to
broaden the range of data it draws on as useful information. This
threshold could form yet another parameter which you could liken to
a machine emotion.

Oli Mpele

Jun 11, 2014, 7:12:35 PM
I can't quite agree with this. Though I think I get what you mean, I think your model does not quite match what goes on.

The thought involved in seeing the cookie jar, and thinking of or "reflexively", i.e. sub-consciously, reaching out to "try" to get a cookie, involves - or *can involve* - a good number of sub-processes, including recall, prediction, and planning.

From the perspective where we are conducting this action reflexively, it might seem *as if* we had abstracted, and thereby collapsed the entire process into a single unit.

This is however only seemingly the case. While it is possible to abbreviate processes by turning them into *heuristic* reflexes, "complex thought", is not simply an extension of these heuristic reflexes.

These reflexive ways of perceiving and acting (immediately) are not primary, even though they are *faster* and more *immediate*, and therefore can appear as primary.

OK. Let me back up a bit.

It's true that you can have organisms that react reflexively, and learn reflexes via basic reinforcement learning from the environment. This is true both at the level of individual organisms, as it is at the level of entire species (think evolved genetic behaviorisms).

But what does this have to do with higher intelligence?

It is only related as follows: higher intelligence hooks into this sub-system, and controls the reinforcement learning *from above*.

And I think that, to a certain extent, that much is certainly true. But it does not necessarily relate to general intelligence, and therefore to human conscious thought.

Truly general, conscious intelligence is capable of abstracting over these types of heuristic reflexes, and involves exactly those kinds of complex processes that you talk of as being mere illusions.

The internal mental states that occur during higher thought are not illusions of reflexive (perception --> action) processes. They are the combined states of a variety of other low-level (sub-conscious) processes, which facilitate planning, prediction, the perception of similarity, and other things.

I think you are leaving a lot of the complexity on the table if you try to evolve it out of (simple?) black-box [perception --> action] pairings. ((I mean black-box in the sense of behaviorism.))


When you are talking to yourself in your head, you are in fact innervating the muscles of the larynx(??) as usual, only not to the point where any actual effect would be achieved. You can notice this sometimes when you are sick - talking to yourself in your head will make your throat ache :)


The [AI Now] design does, from time to time, collapse processes into heuristics, where this makes sense, essentially creating such reflexive behaviors.

But while useful, these reflexes would never suffice on their own to mimic human-level intelligence. You always need those other complex processes, like planning. (Otherwise you do not get cake.)
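A minimal sketch of what "collapsing a process into a heuristic" might look like, under my own assumptions (the planner and the situations are invented placeholders): the first time a situation is met, the slow planning path is run; the resulting action sequence is then cached, so later encounters fire it as a reflex.

    # Minimal sketch: planned behaviour collapsed into a cached reflex.
    reflexes = {}   # situation -> cached action sequence

    def slow_planner(situation):
        # Placeholder for the complex path: recall + prediction + planning.
        if situation == "see_cookie_jar":
            return ["walk_to_jar", "open_lid", "take_cookie"]
        return ["do_nothing"]

    def act(situation):
        if situation in reflexes:          # reflex: immediate, no planning
            return reflexes[situation]
        plan = slow_planner(situation)     # complex thought: the slow path
        reflexes[situation] = plan         # collapse it into a heuristic
        return plan

    print(act("see_cookie_jar"))   # planned the first time
    print(act("see_cookie_jar"))   # now purely reflexive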

..

Whether it is possible to design an AI that achieves higher complex thought, but works in the way you describe, is a mystery to me. All I can say is that my architecture does not work that way - that the brain most probably does not work that way - and that I cannot picture (right now) how that would work.

Still, I think your thoughts are/were on the right track with this approach - only you needed to look in between the perception and the action, to locate the additional complexity.

> > The capabilities of the abstract thought supersede that of the reflexive
> > embodied "thought".
>
> abstract thought is just pattern recognition or, sensory data
> classification. It's all the same thing.
>
> The concept of "cat" is just an abstraction created from all the sensory
> data about cats we have experienced in our lives.

I sort of agree with the second part.
What you describe is an abstract problem to me. The implementation is proceeding, and will be finished within 2 years, after which the AI will be running, and the Singularity will be achieved.

It does not matter to me what the field thinks about it.

I am looking only for a handful of individuals, some of them monied, who can be made to understand the solution I have worked out - or made to hope that it is workable.

Those alone will suffice to speed up the work beyond just me debugging on my own; so we don't waste too much time waiting around while everyone is dying!

> > In the end though, the problem space is vast, and I do not expect to
> > cover the solutions, except in actual code, when the time comes.

> I think the solution is simple, and when found, will represent very little
> code. I think it's a reinforcement based learning algorithm. I think it's
> the type of algorithm that can be expressed and explained in one page, as
> is typical of all the machine learning algorithms.
>
> Though our brains are capable of learning great complexity of behavior, I
> believe the underlying mechanism that allows that complexity, is trivially
> simple. I think one of the greatest mistakes of AI research is to fail to
> recognize this, and for people to make the assumption that if our behavior
> is complex, the underlying mechanism of our brain, must also be massively
> complex.

I completely agree. I predict the code will come in below 100k LOC. Possibly far less.

> When an AI project fails to reach its goals, my argument is always that
> the project is too complex, and too specialized. I think we will solve the
> problem of Strong AI not when we figure out how much more to add to our
> systems, but when we figure out how to leave most of it out.

Yes. And I did. I reduced it to the minimum necessary set of components - or as close to it as I am likely to get.

> I think the solution is very graph like as well. But I think the graph
> needs to be one that produces quick answers to mapping sensory input data
> streams, into effector output streams.

[It] does that; though not necessarily quickly. (The speed can be improved. It was not a design goal. Actually, it was initially a goal, and then I removed it from the set of goals, because it seemed unimportant - the AI only needs to re-design itself once, after which it will run much faster anyway. Why not wait just once, rather than waste months on optimisations? I guess it's a gamble, but I predict it will save time.)

> I think it's best understood as a signal processing problem of data flowing
> from the inputs, to the outputs, and being transformed as it flows. This
> takes the form of a very different implementation from a language-like
> knowledge graph, but yet, still has a good bit in common with it if you can
> just squint your eyes and tilt your head as you look at it. :)

No, it does not. The key point here is that [the graph] encodes not only statics, but also dynamics. Specifically, it encodes rules, which combine into *algorithms*.

The micro-language in which the basic (hard-coded) *algorithms* are written reduces to a graph representation, and is a fully introspectable part of the graph itself. The graph processes itself.

Information enters the graph - or should I say network (to make it clearer, think of a neural network) - at certain input points, and then processing moves the information through the graph along various connections. Which connections depends entirely on the information, on what the graph looks like while it is processing that information, and on how the graph processes it (which it does using algorithms that are evolved over time).

The graph is decidedly not a static knowledge graph. I am far beyond that point.

The fact that the graph can encode the dynamics was the very starting point of my research, two years ago.
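To illustrate the general flavour (this is only my own toy reading of the idea, not the actual AI Now graph; the rule format and names are invented): rule nodes live in the same store as fact nodes, and processing simply walks the store applying whatever rules it finds there.

    # Toy sketch: a graph-like store where the "dynamics" (rules) are data
    # held alongside the "statics" (facts), and are applied to those facts.
    facts = {("socrates", "is_a", "human")}
    rules = [
        # Each rule is itself part of the store: (pattern, conclusion).
        (("?x", "is_a", "human"), ("?x", "is_a", "mortal")),
    ]

    def apply_rules(facts, rules):
        derived = set(facts)
        for (p_s, p_r, p_o), (c_s, c_r, c_o) in rules:
            for (s, r, o) in facts:
                if (p_s == "?x" or p_s == s) and p_r == r and p_o == o:
                    derived.add((s if c_s == "?x" else c_s, c_r, c_o))
        return derived

    print(apply_rules(facts, rules))
    # {('socrates', 'is_a', 'human'), ('socrates', 'is_a', 'mortal')}

A real version would of course have to make the rule-application machinery itself part of the graph, which is the point being made above.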

Oli Mpele
Jun 11, 2014, 7:21:25 PM
On Wednesday, June 11, 2014 11:55:57 PM UTC+2, TruthSlave wrote:
> On 11/06/2014 22:14, Oli Mpele wrote:
>
> > I have analogues to these, but they are sub-conscious from the AI's perspective.
> > They do not occur as introspectable to the AI.
>
> On this point, I would say this distinction between the sub-conscious
> and the conscious, in terms of AI or neural networks, could be likened
> to the distinction between a network's hidden layers and its output
> layer. The unconscious would be the hidden substrate which occupies
> the greater mass of nodes beneath that final conscious output layer.

Not familiar enough with this to comment. Sorry.

> You could make a distinction between useful determinations and less
> sound conclusions which still carry a high enough probability to be
> useful; in that regard the subconscious plays a part in what we learn,
> through language, to recognize and respond to.

This is exactly what the trust parameter in the existing design does.

> As a digression, perhaps there's a way to train our conscious mind to
> recognize the unconscious, e.g. what some call our sixth or pre-sense:
> that sense we have for recognition, like déjà vu, even before we are
> fully conscious of what we have recognized. [I can already see ways
> one might do this.]

Yes. I believe it is possible to recognize the pre- and post- (or even intermediate) states of many sub-conscious algorithms, if you train yourself to be perceptive enough during introspection.

I find it takes about a week of discipline, before my own introspection becomes suddenly much more perceptive. I used to do that during the research phase.

It can be fun!

> I would say your machine would need a threshold between its conscious
> states, which you might then adjust for useful information, the way
> adrenaline with its 'heightened emotional state' affects the mind,
> broadening the range of data it draws on as useful information. This
> threshold could form yet another parameter which you could liken to
> a machine emotion.

This is achieved by having the threshold of the trust value be dynamic.

So, yes. This is already a part of the design.

But it does not involve any real "emotions", in the folk-psychology sense of the term - the AI (in my current architecture) does not have *feelings*. It only involves emotions in the functional sense - it has the same effects, produces the same behavior, etc.
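For concreteness, a hedged sketch of what a dynamic trust threshold might look like (the class, names and numbers are my own invention for this post, not the actual implementation): conclusions carry a trust score, only those above a moving threshold are admitted to further processing, and the threshold drops under pressure - the functional analogue of an adrenaline-style broadening.

    # Sketch of a dynamic trust threshold acting as a "functional emotion".
    class TrustGate:
        def __init__(self, base_threshold=0.7):
            self.threshold = base_threshold

        def set_pressure(self, pressure):
            # pressure: 0.0 = calm, 1.0 = urgent. Under pressure, accept
            # less certain material (broaden the range of usable data).
            self.threshold = 0.7 - 0.4 * pressure

        def admit(self, conclusions):
            # conclusions: list of (item, trust) pairs.
            return [item for item, trust in conclusions if trust >= self.threshold]

    gate = TrustGate()
    data = [("door is open", 0.9), ("shadow moved", 0.4)]
    print(gate.admit(data))    # calm: only the well-supported conclusion
    gate.set_pressure(1.0)
    print(gate.admit(data))    # urgent: the weak hunch gets through as well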

keghn feem
Jun 11, 2014, 8:28:00 PM
Comparing one pattern to another from a list of many patterns is consciousness.
If it is done quickly, it is a reflex. If there are enough patterns, the brain
can run 3D simulations - deep thought.

We may have the same repeating goal of hitting up the cookie jar when entering
the kitchen. But the list of micro-managed movements that keep us on the path
to that jar could be different every time. When we run a simulation that
we know will work, we are saying to ourselves that we have all the
micro-movements, the reflexes, to make it work.

keghn feem
Jun 12, 2014, 8:47:38 PM
There will be many ways to make an AI once the way is found.

I cannot do a monolithic AI. I have a theory of one that acts like a school of
fish.


My AI will plot out a path with its 3D simulator to make sure it avoids
incrementing the bad registers, and follows paths that will hit the good registers.

Example:
A hungry bot has its energy/hunger register active. The reward of energy
has been activated. The safety-to-the-food reward registers are off.
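As a purely illustrative sketch of that register idea, under my own assumptions (the paths, registers and weights below are made up): candidate paths are scored by how much they would increment the good versus the bad registers, and the simulator picks the best one.

    # Illustrative sketch: score candidate paths by their register effects.
    paths = {
        "across_hot_stove": {"energy": +5, "damage": +8},
        "around_the_table": {"energy": +5, "damage": 0},
    }
    weights = {"energy": +1.0, "damage": -2.0}   # good vs. bad registers

    def score(effects):
        return sum(weights[reg] * delta for reg, delta in effects.items())

    best = max(paths, key=lambda p: score(paths[p]))
    print(best)   # "around_the_table": same energy gain, no bad-register hit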




3D simulators:

A bot can sit back in the kitchen and watch a new bot enter the room.
The sitting AI can run a 3D simulation of the room, and run a second
sim right on top of it. The two sims are logically ORed together. The first sim
runs in real time, tracking the new bot and the kitchen. The second sim is the
ghost of the sitting bot, running the sim faster or slower, backward or forward,
and so on.
The sitting bot can move so fast inside this sim that it can put its
ghost in the new bot and see what it sees: the cookie jar. This will activate
feeling - a snapshot picture from its own memory, along with whatever
feeling registers were active at the time.
It will also allow the sitting bot to predict the other.

This is being self-aware of other(s). The difference from the other is measured
by how long or how fancy the 3D simulator algorithm has to be to classify the
other as the same.

Here other feeling registers come online: prejudice, because of differences.
Humor: a prediction of direction that then takes a very unexpected direction,
from the list of possible futures - the predictor register possibly failing and
the learning-something-new register activating at the same time, which equals
humor. A bad register + a good register = a roughly neutral feeling.
Finding similarities in the opposite sex that match similarities in the parent
that raised you can lead to other feeling registers activating.


keghn feem
Jun 15, 2014, 8:49:35 PM
On Wednesday, June 11, 2014 2:14:39 PM UTC-7, Oli Mpele wrote:

> I have analogues to these, but they are sub-conscious from the AI's
> perspective.


In my AI model there is a swarm of sub-routines working, pretty much,
as one collective being.

But there is another part, the subconscious mind, which uses most
of the consciousness subroutines.

One of the subroutines is a judge over the two. It gives control to either the
conscious part or the subconscious logic. When things are not going
right with the incoming energy, or other logic, and there is a lot of food
around to be had, then the subconscious mind will kick in and there will be
a quick grab for a cookie - like when one has been on a diet for a while.
A willpower killer. A subconscious decision in action.

Initially, the conscious mind and the subconscious mind take turns being in
control. The one that does the best job becomes more in charge.
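A small sketch of how that judge might be written, under my own assumptions (the names, initial scores and update rate are invented): two controllers keep a running success estimate, the judge hands control to whichever is currently ahead, and outcomes update the record.

    # Sketch: a judge arbitrating between conscious and subconscious control.
    scores = {"conscious": 1.0, "subconscious": 1.0}   # running success estimates

    def judge_pick():
        return max(scores, key=scores.get)

    def report_outcome(controller, success, rate=0.1):
        scores[controller] += rate * ((1.0 if success else 0.0) - scores[controller])

    controller = judge_pick()
    print("in control:", controller)
    report_outcome(controller, success=False)   # it failed this time...
    print("in control now:", judge_pick())      # ...so control can shift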

The subconscious mind is, for me, seen as the primitive animal part, and
the home of all instincts.
It is used as a backup for where the conscious mind fails.

The subconscious part is more dominant in a low-intelligence animal model.
If an AI bot were given text logic from the start, it would be considered
instinct and put in a place where the conscious mind could not see it,
deep in the dark subconscious.

me15...@gmail.com
Jun 16, 2014, 7:20:43 PM
And is this some kind of JOB APPLICANT thing you did? (names only)