Molecules and Neurons


Alpha

Dec 11, 2006, 2:34:45 PM
Now this is sensitivity! And now we have yet another means of information
representation and transfer. Processes that are molecular vibrations are
processes the brain manages to represent as patterns of action potentials
(APs). Nice transform!

http://www.nature.com/news/2006/061204/full/061204-10.html

Published online: 7 December 2006 | doi:10.1038/news061204-10
Rogue theory of smell gets a boost
Physicists check out a bold hypothesis for how the nose works.


--
Posted via a free Usenet account from http://www.teranews.com

N

Dec 11, 2006, 3:32:49 PM

:) that brings to mind a quick survey one girly night out on the
best smells ever, I mean ever... Germolene antiseptic ointment,
creosote, Juicy Fruit gum and a lavender air freshener came
absolute top of our list! - I wondered why they were so different;
QM dimensions will explain it all now, I suppose,
N.

John H.

Dec 13, 2006, 6:51:54 AM
Is this such a big deal? If vibration is the key to odour sensing, is it
conceivable that heat will change our sense of smell? Heat will at the
very least alter the rate of vibration.


John.

forbi...@msn.com

Dec 13, 2006, 9:05:44 AM

On Dec 13, 3:51 am, "John H." <j_hasen...@yahoo.com.au> wrote:
> Is this such a big deal? If vibration is the key to odour sensing is it
> conceivable that heat will change our sense of smell? Heat will in the
> very least alter the rate of vibration.

Doesn't it depend upon what the theory is talking about?
For instance, radio signals aren't much affected by cold or
hot days themselves. They can be affected by other things
that are somewhat correlated with cold and hot days.

Since the assertion was related to the quantum level, it seems
like it's talking about electrons jumping from one shell to another,
absorbing and releasing photons. These would be pretty finely
tuned oscillators. Since it's smells rather than colors, the effects
would involve pretty low-energy photons, putting them below the
infrared.

John H.

Dec 13, 2006, 9:24:29 AM
From the article:

"... odour molecules by sensing their molecular vibrations makes sense in
terms of the physics involved."


Molecular vibrations!!!

Perhaps they just threw in the quantum mechanics because it sounds
groovy.

John.

feedbackdroid

Dec 13, 2006, 11:36:06 AM

N wrote:

>
> :) that brings to mind a quick survey one girly night out on the
> best smells ever, I mean ever....germoline anseptic ointment,
> creosote, juicyfruit gum and a lavender air freshener came
> absolute tops of our list ! - I wondered why they were so different,
> qm dimensions will explain it all now I suppose,
> N.

Or not.

http://www.hhmi.org/research/investigators/buck.html
http://www.google.com/custom?&q=linda.buck

PeskyBee

Dec 13, 2006, 1:21:51 PM
"feedbackdroid" <feedba...@yahoo.com> escreveu na mensagem
news:1166027766....@f1g2000cwa.googlegroups.com...

Also,

http://biology.caltech.edu/Members/Laurent

zzbu...@netscape.net

Dec 13, 2006, 10:23:41 PM

Alpha wrote:
> Now this is sensitivity! And now we have yet another means of information
> representation and transfer. Processes that are molecules vibrating are
> processes the brain can manage to represent as patterns of APs. Nice
> transform!
>
> http://www.nature.com/news/2006/061204/full/061204-10.html
>
> Published online: 7 December 2006; | doi:10.1038/news061204-10
> Rogue theory of smell gets a boost
> Physicists check out a bold hypothesis for how the nose works.

That's quite impossible,
since the reason we invented radon detectors
is that physicists have no clue how
noses work, and the reason we invented
robots is that psychologists have
no clue how ears work, and the reason
we invented TV is that the only
thing philosophers know about vision,
dinosaurs knew 100,000,000 years ago.

forbi...@msn.com

Dec 14, 2006, 6:35:29 AM
PeskyBee wrote:
> "feedbackdroid" <feedba...@yahoo.com> escreveu na mensagem
> news:1166027766....@f1g2000cwa.googlegroups.com...

> Also,
>
> http://biology.caltech.edu/Members/Laurent

It's so hard to keep up with the language, let alone the concepts.

From the site:

We use several techniques to monitor and manipulate activity in
these brain circuits: intracellular, whole-cell patch-clamp,
extracellular tetrode recordings in vivo; two-photon imaging.
Having recently developed in vivo Drosophila brain electrophysiology
(Wilson, Turner and Laurent, 2004), we can also now combine
physiology with the tools of fly genetics.

What in the heck is "whole-cell patch-clamp", let alone
"Drosophila brain electrophysiology", and how is it related to genetics?
And how does one use "two-photon imaging"?

J.A. Legris

Dec 14, 2006, 8:54:08 AM

feedbackdroid

Dec 14, 2006, 10:04:37 AM


At least half these techniques have been around for 20+ years. Also,
nice to see they're continually developing new ones, and attacking the
problems from several fronts simultaneously. Serious lab work. Takes
lots of elbow grease.

Looking at the actual experimental work being done in modern-day
science seriously undermines many arguments that were carried over from
earlier times, repeated endlessly, and based upon outdated, pre-modern
ideas of science. Perspective.

Sean Carroll's ["The Making of the Fittest", 2006] favorite colloquial
phrase regarding natural selection --> Use it or lose it.

feedbackdroid

Dec 14, 2006, 10:14:30 AM

zzbu...@netscape.net wrote:

>
> the only
> thing philosophers know about vision,
> dinosaurs knew 100,000,000 years aqo.
>
>


Whatdayathink, ZZ? My contention is that dogs, for instance, "see" the
outside world in very much the same way that humans see it. They open
their eyes, and wham!!!

They see something such that the best philosophical "metaphor" [notice
I said metaphor here] is of the little doggy-homunculus viewing the
internal Cartesian Theater screen. Which is obviously impossible, but
happens to be the best metaphor the philosophers have yet provided us
for how vision works.

And likely dinosaur vision was not very different from this either.
Hard to imagine any other possibility.

Alpha

Dec 14, 2006, 10:47:16 AM

"feedbackdroid" <feedba...@yahoo.com> wrote in message
news:1166109270.5...@73g2000cwn.googlegroups.com...

>
> zzbu...@netscape.net wrote:
>
>>
>> the only
>> thing philosophers know about vision,
>> dinosaurs knew 100,000,000 years aqo.
>>
>>
>
>
> Whatdayathink, ZZ? My contention is that dogs, for instance, "see" the
> outside world in very much the same way that humans see it. They open
> their eyes, and wham!!!
>
> They see something such that the best philosophical "metaphor" [notice
> I said metaphor here] is of the little doggy-homunculus viewing the
> internal Cartesian Theater screen. Which is obviously impossible, but
> happens to be the best metaphor the philosophers have provided us, yet,
> for how vision works.

Jeff Hawkins, in On Intelligence, claiming that the cortex is a
memory-prediction machine (a very interesting hypothesis backed by a lot of
info Hawkins provides in the book), also posits that when we sense
something, the brain/cortex starts its filling-in process immediately,
completing tunes, visual scenes, etc., on the basis of incomplete data or no
data at all (when predicting what will come next - the next note in a tune,
etc.). The memory is a store of patterns of prior action potentials as they
coursed through the cortex. All parts of the cortex do essentially the same
thing (the cortical algorithm) when representing (hierarchically) prior
experience, which is then recalled in an auto-associative manner *as* new
stimuli are encountered. When a difference is noticed (between the
recalled/already-present patterns of APs and the new pattern of APs
presented as a recept/percept), attention is brought to bear on the
difference and we notice it immediately (like a melody with a note
different from the way we heard it before). He has lots of examples of
this way of functioning in the book. A clear and engaging read.
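
As a toy illustration of that mismatch-triggers-attention idea (my own
sketch in Python, not anything from the book - the melody and the
one-step prediction rule are made up for illustration):

    # A stored sequence is recalled auto-associatively as input arrives;
    # a mismatch between the predicted next element and the actual one
    # is what gets flagged for attention.

    class SequenceMemory:
        def __init__(self):
            self.transitions = {}            # element -> element that followed it

        def learn(self, sequence):
            for prev, nxt in zip(sequence, sequence[1:]):
                self.transitions[prev] = nxt

        def listen(self, sequence):
            surprises = []
            for prev, actual in zip(sequence, sequence[1:]):
                predicted = self.transitions.get(prev)
                if predicted is not None and predicted != actual:
                    surprises.append((prev, predicted, actual))
            return surprises

    memory = SequenceMemory()
    memory.learn(["C", "D", "E", "G"])            # the melody as first heard
    print(memory.listen(["C", "D", "F", "G"]))    # the odd note: [('D', 'E', 'F')]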

To the point made by feedbackdroid vis-a-vis theater-homunculi: it seems that
the theater can be thought of as the recall of prior experience patterns, and
the homunculus as the attentional function. After all, we can and do recall
past experiences and seem to visualize those scenarios, etc., in our
"mind's eye".


>
> And likely dinosaur vision was not very different from this either.
> Hard to imagine any other possibility.
>

--

feedbackdroid

Dec 14, 2006, 11:42:51 AM


In short, "mental imagery" is a fact. The problem regards how it
actually works.

Alpha

Dec 14, 2006, 1:36:10 PM

"feedbackdroid" <feedba...@yahoo.com> wrote in message
news:1166114571.7...@16g2000cwy.googlegroups.com...

A couple more observations from the book: memory of patterns is stored as
invariant representations, in hierarchies (using LTM-like processes).
Feedback down the various hierarchies is the predicted pattern, which is
compared with the ascending/forward-feeding stimulus signal. I am about
halfway through the book and the remainder is more about the hows, so I
will update when done.


>
>
>
>> >
>> > And likely dinosaur vision was not very different from this either.
>> > Hard to imagine any other possibility.
>> >
>

--

zzbu...@netscape.net

Dec 14, 2006, 2:50:35 PM

feedbackdroid wrote:
> zzbu...@netscape.net wrote:
>
> >
> > the only
> > thing philosophers know about vision,
> > dinosaurs knew 100,000,000 years aqo.
> >
> >
>
>
> Whatdayathink, ZZ? My contention is that dogs, for instance, "see" the
> outside world in very much the same way that humans see it. They open
> their eyes, and wham!!!

No, dogs see the world very differently,
since they tend to see trees quite clearly
and filter out humans as if they were trees.


>
> They see something such that the best philosophical "metaphor" [notice
> I said metaphor here] is of the little doggy-homunculus viewing the
> internal Cartesian Theater screen. Which is obviously impossible, but
> happens to be the best metaphor the philosophers have provided us, yet,
> for how vision works.

But philosophers always choose that metaphor,
given that Descartes was, very conveniently,
a good lawyer and a horrible doctor.

>
> And likely dinosaur vision was not very different from this either.
> Hard to imagine any other possibility.

Dinosaur vision was always different,
since it was from dinosaurs that the
whole idea of species came.

feedbackdroid

Dec 14, 2006, 2:56:45 PM


I've read Hawkin's' book a couple of times. Everything is fine ....
however ... this still doesn't quite explain the illusion [whatever]
analogous to the homunculus-cartesiantheater problem. Why do we
actually "see" the way we do. Sticking Hawkins model into a computer -
which is eminently doable - doesn't seem to solve the problem. there is
something missing.

feedbackdroid

Dec 14, 2006, 3:02:04 PM

zzbu...@netscape.net wrote:
> feedbackdroid wrote:
> > zzbu...@netscape.net wrote:
> >
> > >
> > > the only
> > > thing philosophers know about vision,
> > > dinosaurs knew 100,000,000 years aqo.
> > >
> > >
> >
> >
> > Whatdayathink, ZZ? My contention is that dogs, for instance, "see" the
> > outside world in very much the same way that humans see it. They open
> > their eyes, and wham!!!
>
> No, dogs see the world very differently,
> Since they tend to see the trees quite clearly,
> and filter out humans like they were trees,
>
>

What? I've rarely seen a dog lick a tree the way it licks a human face.


> >
> > They see something such that the best philosophical "metaphor" [notice
> > I said metaphor here] is of the little doggy-homunculus viewing the
> > internal Cartesian Theater screen. Which is obviously impossible, but
> > happens to be the best metaphor the philosophers have provided us, yet,
> > for how vision works.
>
> But, phlosophers always choose that metaphor,
> given that DeCarte was very conveniently
> a good Lawyer and a horrible doctor,
>
>
>
> >
> > And likely dinosaur vision was not very different from this either.
> > Hard to imagine any other possibility.
>
> Dinosaur vision was always different,
> Since it was from dinosaurs that the
> whole idea of species came from.
>
>

What has this to do with vision??

There will be differences of quality as regards vision in humans, dogs,
or dinos - for instance, dogs don't have as many different functional
areas in their visual cortex, are color-blind, etc. - but I'm sure all
three groups would have something akin to the homunculus/Cartesian-theater
problem.

Alpha

Dec 14, 2006, 3:56:50 PM

"feedbackdroid" <feedba...@yahoo.com> wrote in message
news:1166126205.3...@16g2000cwy.googlegroups.com...

Are you referring to the experience of seeing - the quale/qualia issue?

Alpha

Dec 14, 2006, 4:14:11 PM

"feedbackdroid" <feedba...@yahoo.com> wrote in message
news:1166126205.3...@16g2000cwy.googlegroups.com...


Hmmm, seems you may have been referring to the issues surrounding the
homunculus/CT metaphor. I never saw much philosophical harm in the metaphor.
But you also ask why we see that way - that may be a question we cannot get
an answer for, other than something like: it evolved that way. It could be a
natural consequence of having a self; the self is an attentional entity
within an architecture that includes feedback signals.

The whole percept-representation-recall scenario plays out as you get the
first pass through neural paths (as patterns of APs), and then a second pass
due to feedback (if that feedback signal is veridical). It is the second pass
(the theater) that is being activated at the same time the attentional
pattern is activated, giving one a sense of oneness with that which is being
experienced. For what is more natural for a surviving entity with a sense of
self to think than that one has some identity relationship with one's own
thoughts/visualizations/projections (onto that theater)?

There is apparently some survival advantage conferred by the existence of a
self (a homunculus) situated in an environment of which there is a model
within the brain. It helps those predictions and gives a sense of what the
prediction will affect (me!), so I can be ready for some behavior if needed
(like running away from a lion).

BTW, Hawkins had an interesting take on behavior vs. intelligence, arguing
that one can be intelligent without exhibiting any behavior. He says to
reflect on that, and it is apparent that it is true. One can be thinking of
writing another book and what concepts one wants to include, and other than
that, the person is doing nothing else besides maintaining (autonomically) a
physiological milieu. Yet that person is clearly intelligent -
intelligencing - without behavior. Lots of examples of that.

feedbackdroid

Dec 14, 2006, 4:53:18 PM

Alpha wrote:

>
> >
> > I've read Hawkin's' book a couple of times. Everything is fine ....
> > however ... this still doesn't quite explain the illusion [whatever]
> > analogous to the homunculus-cartesiantheater problem. Why do we
> > actually "see" the way we do. Sticking Hawkins model into a computer -
> > which is eminently doable - doesn't seem to solve the problem. there is
> > something missing.
>
>
> Hmmm seems you may have been referring to the issues surrounding an
> homunculus/CT metaphor. I never saw much philosophical harm in the metaphor.
> But you also ask why we see that way - that may be a question we may not be
> able to get an answer for other than something like - it evolved that way.
> It could be a natural consequent of having the existence of a self; it is an
> attentional entity within an architecture that includes feedback signals.
>
> The whole percept-representation-recall scenario as you get the first pass
> through neural paths (as patterns of APs), thence a second pass due to
> feedback (if that feedback signal is veridical). It is the second pass (the
> theater) that is being activated at the same time the attentional pattern is
> activated, giving one a sense of oneness with theat being experienced. For
> what is more natural to think, for a surviving entity with a sense of self,
> than that one has some identity relationship with one's own
> thoughts/visualizations/projections (onto that theater).
>
>


Yes, I also believe this must be the basic mechanism, but still it
doesn't quite explain it.

As regards the qualia issue, which ties *non-biologist* philosophers up in
interminable Gordian knots: to me, that's a simple non-starter. Humans
can manipulate symbols and language, and so they assign words to
perceptions. Not much mystery there. OTOH, it's almost certain that
other animals, which don't use symbols, also "see" the way we see and
experience their own analog of the H/CT phenomenon. How this part of
the problem works is the real mystery.

No disagreement with your following comments.

Curt Welch

Dec 14, 2006, 7:01:39 PM
"Alpha" <OmegaZ...@yahoo.com> wrote:

> BTW, Hawkins had an interesting take on behavior vs intellignece, arguing
> that one can be intelligent without exhibiting any behavior. He says to
> reflect on that and it is apparent that it is true. One can be thinking
> of writing another book and what concepts one wants to include. And other
> than that, the person is doing nothing else besides maintaining
> (autonomically) a physiological milieu. Yet that person is clearly
> intelligent - intelligencing - without behavior. Lots of examples of
> that.

That's just a stupid use of the word "behavior". It's so stupid I had to
do a double take on that section of the book to understand what his point
was. It's as if brain behavior isn't behavior! How can that be? Why is
"lip behavior" behavior, but "brain behavior" is not part of our behavior?
It's as if the brain weren't part of our body, so anything it does can't be
counted as part of our human behavior. How stupid.

It's like trying to pretend a computer isn't behaving if you turn off the
monitor.

No one with any intelligence limits their use of the word "behavior" to only
the behavior we can externally observe. All the behavior of a body is part
of the behavior of a body.

Would you call "wheel spinning" a car behavior but call "engine running"
not a behavior of a car just because it's hidden under the hood where you
can't see it? Why would you do it for a human?

You can limit your study of the behavior of any machine to any subset of the
full behavior of the machine for the purpose of understanding that one
aspect of the behavior, but no one in their right mind would assume the rest
of the behavior was not there, or that it was not "behavior".

Anyway, most of what Hawkins writes I agree with. His use of the word
"behavior" and his comments about it were just a bit silly however.

When you finish the book, you might find the white paper on the numenta web
site interesting as well if you have not already read it....

http://www.numenta.com/Numenta_HTM_Concepts.pdf

The other thing Hawkins misses is that his model fails to define the
agent's purpose - it fails to define why an agent built using his HTM would
choose to perform one action over another - which of course is the prime
question that must be answered about all intelligent agents: how do they
answer the question of what to do next? Until you answer that question,
the agent won't have any intelligence. He seems to assume that if you
built an HTM, it would just naturally "act intelligent", even though it has
no purpose, simply because it would do such a good job "understanding" the
environment. He seems to be making the assumption that it has purpose
without explaining where it comes from. I of course argue that we give these
machines purpose by making them reinforcement learning machines. If you
add reinforcement learning to Hawkins' ideas, you basically have what I see
as the answer to general machine intelligence.
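
To make that last point concrete, here is a minimal sketch (my own, not
anything from Hawkins or the Numenta paper) of the kind of reinforcement
learning layer meant here: the predictor supplies a state, and a learned
value table plus a reward signal decides what to do next. The state labels,
actions, and parameters are invented for illustration.

    import random

    # Minimal tabular Q-learning sketch: a reward signal is what gives the
    # agent a reason to prefer one action over another.  The "state" would,
    # in this proposal, be whatever representation an HTM-like model
    # produces; here it is just an opaque label.

    def choose_action(q, state, actions, epsilon=0.1):
        # epsilon-greedy: mostly pick the action with the highest learned value
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: q.get((state, a), 0.0))

    def update(q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
        # one-step Q-learning update toward reward plus discounted best next value
        best_next = max(q.get((next_state, a), 0.0) for a in actions)
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

    q = {}
    a = choose_action(q, "saw-dog", ["approach", "flee"])
    update(q, "saw-dog", a, reward=1.0, next_state="dog-closer",
           actions=["approach", "flee"])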

--
Curt Welch http://CurtWelch.Com/
cu...@kcwc.com http://NewsReader.Com/

J.A. Legris

Dec 14, 2006, 9:57:01 PM
Curt Welch wrote:
> "Alpha" <OmegaZ...@yahoo.com> wrote:
>
> > BTW, Hawkins had an interesting take on behavior vs intellignece, arguing
> > that one can be intelligent without exhibiting any behavior. He says to
> > reflect on that and it is apparent that it is true. One can be thinking
> > of writing another book and what concepts one wants to include. And other
> > than that, the person is doing nothing else besides maintaining
> > (autonomically) a physiological milieu. Yet that person is clearly
> > intelligent - intelligencing - without behavior. Lots of examples of
> > that.
>
> That's just a stupid use of the word "behavior". It's so stupid I had to
> do a double take on that section of the book to understand what his point
> was. It's as if brain behavior isn't behavior! How can that be? why is
> "lip behavior" behavior but "brain behavior" is not part of our behavior?
> It's as if the brain wasn't part of our body so anything it does can't be
> counted as part of our human behavior. How stupid.
>
> It's like trying to pretend a computer isn't behaving if you turn off the
> monitor.
>
> No on with any intelligence limits their use of the word "behavior" to only
> the behavior we can externally observe. All the behavior of a body is part
> of the behavior of a body.
>

You've been seduced by the merological fallacy. Remember, whole
organisms behave, not their constituent parts - brains only mediate
behaviour.

Your behaviourist cadet standing is hereby reduced to probational
status until further notice.

--
Joe Legris

Michael Olea

Dec 14, 2006, 10:02:54 PM
J.A. Legris wrote:

> You've been seduced by the mer[e]ological fallacy. Remember, whole


> organisms behave, not their constituent parts - brains only mediate
> behaviour.
>
> Your behaviourist cadet standing is hereby reduced to probational
> status until further notice.

Can he get time off for good behavio[u]r?

focus...@yahoo.com

Dec 15, 2006, 12:13:41 AM

Hi Curt,

First of all, I would like to congratulate you on some quite interesting
attempts to create a general-purpose AI system. It is very difficult to
find a person who can grasp some of these ideas unless that person has
really tried building the system himself. As I recall, the second person
who had a similar idea was Louis Savain.

I think you correctly identified some of the key issues of an AI system.
One of them is of course time. This can equivalently be put as the action
or behavior of the system. It is clear that all approaches lacking time
will eventually fail and will not be logically consistent - just think
about the frame problem or symbol grounding.

Another important element to consider is of course
the system's motivation - related to the RL paradigm.
A little about it follows below.

OK, after this let me go to the point of your network solution, as well as
Jeff Hawkins's. I don't think it has a chance of working. One of the
biggest problems is the lack of STM functionality. What does STM have to
do with all this? Well, it has a lot to do with it, and it is very
important. From this perspective, Jeff's system has the same problem as
yours and, in my opinion, no chance. To be more specific, just ask yourself
what STM is and what the purpose of STM is in our brain. There are many
more problems, but this one alone is sufficient to render the whole model
unworkable.

Secondly, I'm very pleased to notice the inclusion of RL functionality.
However, the model you are trying to apply will fail for the same reason
for which GOFAI failed. I can just add to this that the RL models described
in books are simply laughable; forget about them and start from scratch.

Finally, you may ask yourself what that guy is talking about; you may think
I'm a crackpot; it's up to you.

Focus

Curt Welch

Dec 15, 2006, 12:26:59 AM

:)

Glen M. Sizemore

Dec 15, 2006, 12:37:39 AM

"J.A. Legris" <jale...@sympatico.ca> wrote in message
news:1166151421.6...@16g2000cwy.googlegroups.com...

Whereas you remain a dickhead in good standing.


>
> --
> Joe Legris
>


Curt Welch

Dec 15, 2006, 1:15:43 AM
focus...@yahoo.com wrote:
> Curt Welch wrote:

> OK, after this let me go to the point of your network
> solution as well as Jeff's Hawkins. I don't think it has
> a chance of working. One of the biggest problem is lack
> of the STM functionality. What the STM has to do with
> all this stuff? Well it has a lot and is very important.
> From this perspective Jeff's system has the same
> problem as yours and in my opinion no chance.
> To be more specific, just ask yourself what is STM
> and what is the purpose of STM in our brain.
> There are many more problems but just this one is
> sufficient enough to render the whole model unworkable.

Both my networks and Jeff's have short term memory (I assume STM means
short term memory - if not, everything I'm about to write can be ignored...
:).

What I'm trying to build with my networks is a machine that produces
reactions to the current context of the environment. But the "context"
cannot be defined by current sensory inputs alone. It could if the inputs
had the Markov property and gave the agent a 100% accurate picture of the
total environment (which is typical for RL algorithms applied to toy
environments - but not possible in the real environment). So, in order to
produce a better internal representation of the current context which it
should react to, the system must define the context based on some
combination of current and recent past inputs. And that's where STM comes
into the picture. The system must have a memory of recent past events so
that its current behavior can be a function of recent past events.

Ideally, it would produce reactions based on all past inputs - but that of
course is never practical, so it must use as much of the recent past inputs
as possible, and it must use some system to allocate data to its finite
short-term memory.

In my network, every node in the network has short-term memory built in.
Each node remembers the last pulse to pass through the node - it remembers
when it happened (temporal STM). The STM of the entire network is simply
the combination of all this memory. Though the memory of a single node is
very small, the combined STM of the entire network, made up of millions or
billions of these simple nodes, becomes substantial.

In addition, the new design I've been exploring uses a pulse-sorting
algorithm that includes feedback from downstream nodes back to upstream
nodes. This extends the temporal length of the STM that already existed in
the nodes, because once a pulse is sorted down a given path, it can cause
other pulses to follow. In effect, once a pulse is sorted down a path, the
network has determined that some feature currently exists in the
environment (say a dog, to use a very high-level concept). With the
feedback, the probability that other pulses will be classified the same way
(as a dog) increases, because the system already has a short-term memory
(lasting for the length of a single pulse) that it just saw a dog. And if
it just saw a dog, then some small feature in the network is more likely to
be classified as "more dog". So once it "sees" a dog, it's likely to keep
classifying following pulses as "more dog". This extends the STM of DOG
from the single pulse (built into the node) to many pulses.

So yes, I agree, STM is very important. It is what triggers our
behavior. We react not only to the current environment; instead, we
react to what is currently in our STM - we react to what has recently
happened.

Now, the way I'm attempting to build a reaction machine with STM with my
pulse-sorting networks may or may not turn out to be very useful, but I do
very much believe you cannot produce a temporal reaction machine without
STM, and I have very much included it.
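
To make the "STM lives in the nodes" idea concrete, here is a rough sketch
(my own illustration, not my actual network code - the class names and the
time window are made up) of nodes that each remember only their last pulse
time, with the network's short-term memory being just the union of those
timestamps:

    import time

    # Distributed STM: each node's entire short-term memory is the timestamp
    # of the last pulse it passed; the network's STM is the union of them.

    class Node:
        def __init__(self, name):
            self.name = name
            self.last_pulse_time = None      # the node's whole STM

        def pass_pulse(self, now=None):
            self.last_pulse_time = time.time() if now is None else now

    class Network:
        def __init__(self, names):
            self.nodes = {n: Node(n) for n in names}

        def recent_activity(self, now, window):
            # which nodes fired within the last `window` seconds
            return [n for n, node in self.nodes.items()
                    if node.last_pulse_time is not None
                    and now - node.last_pulse_time <= window]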

Now, Jeff's system also includes STM in a slightly more roundabout way -
but the end result is very similar. It performs sequential temporal
pattern matching. It creates invariant representations by performing
temporal pattern prediction. For a simple abstract example, if his network
sees the temporal pattern A B C D repeated, his network gives it a name,
"DOG", and its output is transformed from the A B C D raw data to DOG DOG
DOG DOG as the invariant representation. Indirectly, this means the
system, when it sees the D input and knows it's been tracking the DOG
pattern, has an STM of A B C. It has an STM that tells it there is a "dog"
in the environment.
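
A toy version of that A B C D -> "DOG" renaming (just an illustration, not
the Numenta algorithm - the learned table and window are invented):

    # A node that has learned a sequence emits the sequence's name for every
    # element of it, producing an invariant output stream.

    LEARNED = {("A", "B", "C", "D"): "DOG"}

    def pool(stream, window=4):
        out = []
        for i in range(len(stream)):
            for name_seq, name in LEARNED.items():
                start = max(0, i - window + 1)
                # if recent inputs are a prefix of a learned sequence, emit its name
                if tuple(stream[start:i + 1]) == name_seq[:i + 1 - start]:
                    out.append(name)
                    break
            else:
                out.append(stream[i])
        return out

    print(pool(["A", "B", "C", "D"]))   # ['DOG', 'DOG', 'DOG', 'DOG']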

The lower levels of the hierarchy have a very short memory, whereas higher
levels of the hierarchy will have longer memories, because as you get
higher in the hierarchy, the system responds to larger, more complex causes,
which tend to be more persistent in the environment.

I don't remember how obvious the STM was in his description in the book,
but it's very obvious in the Numenta white paper that outlines the actual
technology they are working on.

> Secondly, I'm very pleased to notice the inclusion
> of RL functionalities. However, the model you are
> trying to apply will fail for the same reason for which
> GOFAI failed. I can just add to this that RL models
> described in books are simply laughable forget
> about them and start from scratch.

The standard ones in books are only able to solve toy problems with small
state spaces and where the sensory inputs have the Markov property - they
tell the algorithm the exact and complete current state of the environment.
Those types of algorithms can't solve any of the interesting problems,
which all must deal with sensory signals that give only partial awareness
of the environment, and where the environments are way too large to be
represented as individual states in the algorithm. All my work is looking
at how we can solve the interesting problems.

One of the big differences between the simple book RL algorithms and the
hard problems is that book algorithms typically don't need, and don't use,
STM. To solve the hard problems, with huge state spaces and partial sensory
awareness, you have to include STM to create a better context, so that the
system can build the best understanding possible of the current state of
the environment given the limitations.
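
One common way to bolt that kind of STM onto an otherwise memoryless RL
learner (a generic sketch, not my actual approach - the window length is
arbitrary) is to treat a short sliding window of recent observations as the
state the agent reacts to:

    from collections import deque

    # When single observations aren't Markov, a fixed-length window of recent
    # observations can serve as the "context" the agent actually reacts to.

    class HistoryState:
        def __init__(self, length=4):
            self.window = deque(maxlen=length)   # the finite short-term memory

        def observe(self, observation):
            self.window.append(observation)
            return tuple(self.window)            # hashable context for a value table

    ctx = HistoryState(length=3)
    for obs in ["A", "B", "C", "D"]:
        state = ctx.observe(obs)
    print(state)                                 # ('B', 'C', 'D')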

A lot of what Jeff's ideas are about is how such a system will go about
compressing as much state information as possible into a given-size STM by
using both spatial and temporal prediction. But as I said, what he seems
to fail to do is give the machine a purpose, so it has something useful to
do with the information once it has gathered it.

> Finally, you may ask yourself what that guy is
> talking about, you may think I'm a crackpot, it's up to you

:) lots of crackpots here. I'm one of the biggest! Just ask anyone! I
even just had my junior cadet behavior club card taken away!

So now that I've told you what I think STM is for, and why we have it, what
do you think it's for, and how do you suggest it be implemented in an RL
algorithm?

focus...@yahoo.com

Dec 15, 2006, 10:29:28 AM
Curt Welch wrote:
> focus...@yahoo.com wrote:
> > Curt Welch wrote:
>
> > OK, after this let me go to the point of your network
> > solution as well as Jeff's Hawkins. I don't think it has
> > a chance of working. One of the biggest problem is lack
> > of the STM functionality. What the STM has to do with
> > all this stuff? Well it has a lot and is very important.
> > From this perspective Jeff's system has the same
> > problem as yours and in my opinion no chance.
> > To be more specific, just ask yourself what is STM
> > and what is the purpose of STM in our brain.
> > There are many more problems but just this one is
> > sufficient enough to render the whole model unworkable.
>
> Both my networks and Jeff's have short term memory (I assume STM means
> short term memory - if not, everything I'm about to write can be ignored...
> :).
>

Yes, I meant short term memory (STM)

> What I'm trying to build with my networks, is a machine that produce
> reactions to the current context of the environment. But the "context" can
> not be defined by current sensory inputs alone. It could if the inputs
> have the mark of property and gave the agent a 100% accurate picture of the
> total environment (which is typical for RL algorithms applied to toy
> environments - but not possible in the real environment). So, in order to

Here I would define context differently, but it
is not important for the STM discussion

> produce a better internal representation of the current context which it
> should react to, the system must define the context based on some
> combination of current and recent past inputs. And that's where STM comes
> into the picture. The system must have a memory of recent past events so
> that it's current behavior, can be a function of recent past events.
>

It is correct

> Ideally, it would produce reactions based on all past inputs - but that of
> course is never practical, so it must use as much of the recent past inputs
> as is possible, and it must use some system to allocate data to it's finite
> short term memory.
>

Hmm, and here we have a problem.

Many think that biological implementations of STM
involve localized, well-defined structures. They are
searching for those structures and trying to discover them.
Sadly, they couldn't be more wrong about it.
The STM within our brain, as well as within the networks
of a honey bee or other animals, is a product of
a "3D field-like" effect that creates a memory trace.
In fact, the name STM is very unfortunate here,
and in my opinion the term memory trace (MT)
much better reflects the underlying processes.
The purpose of this "3D field" is to provide a very
effective mechanism for correlating multidimensional
signals within the context of their past. The "brain"
didn't have much choice here; careful consideration
(reverse engineering) of the temporal binding problem
indicates that it is the only possible solution
available - neurons create a "temporal bind" with
each other within this field. Consequently,
the correlation outcome is the superposition
of past memory traces ("3D fields"),
where new neurons are "hired" and
incorporated/built into the network.

You can't partition this process, you can't simplify
this process, you can't localize this process.
Your model has to capture the complete "temporal bind"
of the field, with many of its distribution properties.


> In my network, every node in the network has short term memory built into.

And that is why there is a problem

> Each node remembers the last pulse to pass through the node - it remembers
> when it happened (temporal STM). The STM of the entire network is simply
> the combination of all this memory. Though the memory of a single node is
> very small, the combined STM of the entire network made up of millions or
> billions of these simple nodes, becomes substantial.
>

Yeah, see above

> In addition, the new design I've been exploring uses a pulse sorting
> algorithm that includes feedback from down stream nodes back to upstream
> nodes. This extends the temporal length of the STM that already existed in
> the nodes because once a pulse is sorted down a given path, it can cause
> other pulses to follow. In effect, once a pulse is sorted down a path, the
> network had determined that some feature currently exists in the
> environment (say a dog to use a very high level concept). With the feed
> back, the probability that other pulses will be classified the same (as a
> dog) increases because the system already has a short term memory (lasting
> for the length of a single pulse) that it just saw a dog. And if it just
> saw a dog, then some small feature in the network is more likely to be
> classified as "more dog". So once it "sees" a dog, it's likely to keep
> classifying following pulses as "more dog". This extends the STM memory of
> DOG from the single pulse (built into the node) to many pulses.
>

In theory, yes :); in practice you will not extract those features.

> So yes, I agree, STM memory is very important. It is what triggers our
> behavior. We react not only to the current environment, but instead, we
> react to what is currently in our STM - we react to what has recently
> happened.
>
> Now, the way I'm attempting to build a reaction machine with STM with my
> pulse sorting networks may or not turn out to be very useful, but I do very
> much believe you can produce a temporal reaction machine without STM, and I
> have very much included it.
>

No, you will not be able to fully correlate temporal
signals without the sort of "3D field-like" properties I have
described above.

> Now, Jeff's system also includes STM in a slightly more round about way -
> but the end result is very similar. It performs sequential temporal
> pattern matching. It creates invariant representation by performing
> temporal patten prediction. For a simple abstract example, if his network
> sees the temporal pattern A B C D repeated, his network gives is a name
> "DOG", and it's output is transformed from the A B C D raw data, to DOG DOG
> DOG DOG as the invariant representation. Indirectly, this means the
> system, when it sees the D input, and knows it's been tracking the DOG
> pattern, has a STM of A B C. It has a STM that tells it there is a "dog"
> in the environment.
>
> The lower levels of the hierarchy have a very short memory, where as higher
> levels of the hierarchy, will have longer memories because as you get
> higher in the hierarchy, the system responds to large, more complex causes,
> which tend to be more persistent in the environment.
>

Well, not really, however I will leave fixing up his
network to him :)

> I don't remember how obvious the STM was in his description from the book,
> but it's very obvious in the numental white paper that outlines the actual
> technology they are working on.
>
> > Secondly, I'm very pleased to notice the inclusion
> > of RL functionalities. However, the model you are
> > trying to apply will fail for the same reason for which
> > GOFAI failed. I can just add to this that RL models
> > described in books are simply laughable forget
> > about them and start from scratch.
>
> The standard ones in books are only able to solve toy problems with small
> state spaces and where the sensory inputs have the Markov property - they
> tell the algorithm the exact and complete current state of the environment.

Assuming that you are talking about Markov chains,
it is clear that you can't represent it that way, since
the network depends on its past and the Markov simplification
doesn't hold here.


> Those types of algorithms can't solve any of the interesting problems which
> all must deal with sensory signals that give only spatial awareness of the
> environment and where the environments are way too large to be represented
> as individual states in the algorithm. All my work is looking at how we
> can solve the interesting problems.
>

:)

> One of the big difference from the simple book RL algorithms and the hard
> problems, is that book algorithms typically don't need, and don't use, STM.
> To solve the hard problems with huge state spaces and partial sensory
> awareness, you have to include STM to create a better context for the
> system to create the best understanding possible of the current state of
> the environment given the limitations.
>


However, this is not the main problem with those
models, nor is the fact that they are simple.
The main problem lies in the way in which the
goal and reward are defined. It is an illusion to think
you can define a goal or reward within such a complex,
time-varying system. It is analogous to the hard-coded
definition of a dog attempted by many in our GOFAI systems.
So the moment you use the word critic,
or goal, or sub-goal, you have created a wall that sooner
or later will stop you.

Just for fun, try to model a "curiosity mechanism"
with your RL framework.


focus

feedbackdroid

Dec 15, 2006, 10:34:34 AM

Curt Welch wrote:

> The other thing Hawkins misses is that his model fails to define the
> agent's purpose - it fails to define why an agent built using his HTM would
> choose to perform one action over another - which of course is the prime
> question that must be answered about all intelligent agents - how do they
> answer the question about what to do next? Until you answer that question,
> the agent won't have any intelligence. he seems to assume that if you
> built an HTM, it would just naturally "act intelligent" even though it has
> no purpose simply because it would do such a good job "understanding" the
> environment. He seems to be making the assumption that it has purpose
> without explaining where it comes from.
>
>


As I remember, Hawkins wasn't trying to build fully autonomous systems,
but rather systems that could perform certain limited tasks, such as
flight-controller functions, etc. IOW, at the time of writing, his
theories encompassed only a few operations analogous to the brain's. The
human decided which problems the system would work on.

Alpha

Dec 15, 2006, 11:20:22 AM

"Curt Welch" <cu...@kcwc.com> wrote in message
news:20061214190624.855$b...@newsreader.com...

> "Alpha" <OmegaZ...@yahoo.com> wrote:
>
>> BTW, Hawkins had an interesting take on behavior vs intellignece, arguing
>> that one can be intelligent without exhibiting any behavior. He says to
>> reflect on that and it is apparent that it is true. One can be thinking
>> of writing another book and what concepts one wants to include. And other
>> than that, the person is doing nothing else besides maintaining
>> (autonomically) a physiological milieu. Yet that person is clearly
>> intelligent - intelligencing - without behavior. Lots of examples of
>> that.
>
> That's just a stupid use of the word "behavior". It's so stupid I had to
> do a double take on that section of the book to understand what his point
> was. It's as if brain behavior isn't behavior!

Behavior there being constrained to observable, overt behavior - like the
type present in/as the Chinese Room (of course, that was/is behavior
without understanding!).

You fail to see that calling everything behavior is like calling everything
a process. It is, but only in the most trivial and uninteresting sense.


>How can that be? why is
> "lip behavior" behavior but "brain behavior" is not part of our behavior?
> It's as if the brain wasn't part of our body so anything it does can't be
> counted as part of our human behavior. How stupid.
>
> It's like trying to pretend a computer isn't behaving if you turn off the
> monitor.
>
> No on with any intelligence limits their use of the word "behavior" to
> only
> the behavior we can externally observe. All the behavior of a body is
> part
> of the behavior of a body.
>
> Would you call "wheel spinning" a car behavior but call "engine running"
> not a behavior of a car just because it's hidden under the hood where you
> can't see it? Why would you do it for a human?
>
> You can limit your study of behavior of any machine to any subset of the
> full behavior of the machine for the purpose of understanding that one
> aspect of the behavior, but no one in the right mind would assume the rest
> of the behavior was not there, or that it was not "behavior".

It is behavior just like it is a process. Mostly bereft of explanatory
value.


>
> Anyway, most of what Hawkins writes I agree with. His use of the word
> "behavior" and his comments about it were just a bit silly however.
>
> When you finish the book, you might find the white paper on the numenta
> web
> site interesting as well if you have not already read it....

Thanks I'll check it out.

>
> http://www.numenta.com/Numenta_HTM_Concepts.pdf
>
> The other thing Hawkins misses is that his model fails to define the
> agent's purpose - it fails to define why an agent built using his HTM
> would
> choose to perform one action over another - which of course is the prime
> question that must be answered about all intelligent agents - how do they
> answer the question about what to do next? Until you answer that
> question,
> the agent won't have any intelligence. he seems to assume that if you
> built an HTM, it would just naturally "act intelligent" even though it has
> no purpose simply because it would do such a good job "understanding" the
> environment. He seems to be making the assumption that it has purpose
> without explaining where it comes from. I of course argue we give these
> machines purpose by making them reinforcement learning machines. If you
> add reinforcement learning to Hawkins ideas, you basically have what I see
> as the answer to general machine intelligence.

And a bunch of other ideas as well; there are many ideas related to
brain/mind/intelligence that have merit. But none of them is
all-inclusive!

--

JGCASEY

Dec 15, 2006, 2:36:32 PM

On Dec 15, 1:57 pm, "J.A. Legris" <jaleg...@sympatico.ca> wrote:
> Curt Welch wrote:

> > "Alpha" <OmegaZero2...@yahoo.com> wrote:
>
> > > BTW, Hawkins had an interesting take on behavior vs intellignece, arguing
> > > that one can be intelligent without exhibiting any behavior. He says to
> > > reflect on that and it is apparent that it is true. One can be thinking
> > > of writing another book and what concepts one wants to include. And other
> > > than that, the person is doing nothing else besides maintaining
> > > (autonomically) a physiological milieu. Yet that person is clearly
> > > intelligent - intelligencing - without behavior. Lots of examples of
> > > that.
>
> > That's just a stupid use of the word "behavior". It's so stupid I had to
> > do a double take on that section of the book to understand what his point
> > was. It's as if brain behavior isn't behavior! How can that be? why is
> > "lip behavior" behavior but "brain behavior" is not part of our behavior?
> > It's as if the brain wasn't part of our body so anything it does can't be
> > counted as part of our human behavior. How stupid.
>
> > It's like trying to pretend a computer isn't behaving if you turn off the
> > monitor.
>
> > No on with any intelligence limits their use of the word "behavior" to only
> > the behavior we can externally observe. All the behavior of a body is part
> > of the behavior of a body.
>
> You've been seduced by the merological fallacy. Remember, whole
> organisms behave, not their constituent parts - brains only mediate
> behaviour.
>
> Your behaviourist cadet standing is hereby reduced to probational
> status until further notice.
>
> --
> Joe Legris

How was what Curt was saying a mereological fallacy?

The mereological fallacy I would understand in this case
to be an attribution of the properties of the whole (person)
to a part (the brain). An analogy might be to suggest that a
logic gate by itself has the ability to add numbers,
whereas this is a property of a group of logic gates
that are wired up correctly. However, each logic gate has
its own way of behaving independently of any group effect.

I never really understood "behaviorism" as such because,
as far as I can see, everything changes in time and thus
"behaves", and in that sense I am a behaviorist, just as
I might say every piece of matter is made of atoms, thus
making me an atomist. It doesn't in itself explain much
to me at all.

We have all sorts of behaviors, just as we have all sorts
of tunes. We can talk about learning behaviors, adaptive
behaviors, intelligent behaviors, or stupid behaviors, which
refer to the whole person, or move down through groups of
neural behaviors until we hit the behaviors of different
kinds of neurons. The word "behavior" seems to me to be
without any kind of explanatory power in itself. It is
simply a description of how some set of observations
changes over time.

--
JC

Glen M. Sizemore

Dec 15, 2006, 3:08:30 PM

"JGCASEY" <jgkj...@yahoo.com.au> wrote in message
news:1166211392.6...@79g2000cws.googlegroups.com...

But the real question is: "How many times must you be instructed in this
regard?" Not by Curt (no slur on Curt), but by me, or someone else who
could credibly be called a behaviorist. What have I said on this issue? And
who ever claimed that "behavior was an explanation" (see your statement
below)? It (behavior) is the dependent variable (though this term is
troublesome when we start talking about dynamic systems - the term is most
useful when we are talking about exogenous variables as the "causes"). The
issue is not whether or not you agree; the issue is whether you can even
paraphrase what you have been told dozens of times. I just don't get this
kind of somnambulism.

PeskyBee

Dec 15, 2006, 3:29:33 PM
<focus...@yahoo.com> wrote in message
news:1166196568.3...@73g2000cwn.googlegroups.com...

That leaves open the definition of what a "memory trace" is.
Also, "correlating multidimensional signals in the context of
its past" is an attribution that LTM also fits. STM would
perhaps be better associated with a particular state of
activation (a transient state) of groups of neurons which
decays very fast, not having enough time (or "strength") to
"embed" its information definitively in the synapses of
participating neurons.
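
As a toy illustration of that picture (just a sketch; the threshold, time
constant, and "consolidation" rule here are invented), a transient
activation that decays exponentially never stays strong enough, for long
enough, to get written into synaptic weights:

    import math

    # Transient-activation view of STM: the trace decays exponentially and
    # "consolidates" into synapses only if it stays above a threshold long enough.

    def activation(initial, t, tau=2.0):
        return initial * math.exp(-t / tau)

    def consolidates(initial, threshold=0.5, hold_time=5.0, tau=2.0):
        # is the trace still above threshold after the required hold time?
        return activation(initial, hold_time, tau) >= threshold

    print(consolidates(1.0))   # False: the trace fades before it can be embedded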

*PB*

focus...@yahoo.com

Dec 15, 2006, 4:20:46 PM

Not much, really.

It is simply a change in the state of the physical
3D medium between neurons that represents their
mutual activation histories.

> Also, "correlating multidimensional signals in the context of
> its past" is an attribution that LTM also fits. STM would

Yes, with one exception: STM creates the conditions for
temporal binding; LTM is a process
that uses those created conditions.

> perhaps be better associated to a particular state of
> activation (a transient state) of groups of neurons which
> decay very fast, having not enough time (or "strength") to

It is just a byproduct of the working network.
By itself it would create nothing.

focus

JGCASEY

Dec 15, 2006, 4:31:56 PM

On Dec 16, 7:08 am, "Glen M. Sizemore" <gmsizemo...@yahoo.com> wrote:
> "JGCASEY" <jgkjca...@yahoo.com.au> wrote in messagenews:1166211392.6...@79g2000cws.googlegroups.com...

> > The mereological fallacy I would understand in this case
> > to be an attribution of the properties of the whole (person)
> > to a part (the brain). An analogy might be to suggest a
> > logic gate having by itself the ability to add numbers
> > whereas this is a property of a group of logic gates
> > that are wired up correctly. However each logic gate has
> > its own way of behaving independently of any group effect.
>
> > I never really understood "behaviorism" as such because
> > as far as I can see everything changes in time and thus

> > "behaves".


> But the real question is: "How many times must you be
> instructed in this regard"? Not by Curt (no slur on Curt),
> but by me, or someone else that could credibly be called a
> behaviorist? What have I said on this issue? And who ever
> claimed that "behavior was an explanation" (see your
> statement below)? It (behavior) is the dependent variable
> (though this term is troublesome when we start talking
> about dynamic systems - the term is most useful when we
> are talking about exogenous variables as the "causes").


Although I am fascinated by the idea of artificial intelligence
and like talking and reading about it, I am no expert. Also, I
don't have your level of education, so many of your terse and
technical replies simply don't register.

Looking up "exogenous variable", I see it is one that affects
the value of other variables (dependent variables) but is not
itself affected by those variables.

Thus crop output is dependent on rainfall but rainfall is
not dependent on the crop output.

You manipulate the independent variable (e.g. dosage) and
observe the dependent variable (symptoms). From the above,
I take it that behavior means how the dependent variable changes.

I may never get a full understanding of your viewpoint,
any more than I would ever get a full understanding of the
theory of Relativity or Quantum Mechanics. I do have some
idea about both from books written by experts for the
layman, but I would never claim the in-depth understanding
a mathematician or physicist would have of the subject.

> The issue is not whether or not you agree, the issue is
> whether you can even paraphrase what you have been told
> dozens of times.


Clearly if I don't correctly get what is being written I
can't express the ideas correctly in my own words.


> I just don't get this kind of somnambulism.


And I thought you were an expert in behavior analysis :)

J.A. Legris

Dec 15, 2006, 4:32:51 PM

The prototypical example in the mereological fallacy fracas is that a
person thinks, not a brain. This is a valid criticism if it is not
understood that thinking has different meanings in the two contexts.
For example, it might refer to a set of external behaviours in the first
instance and a set of neurological behaviours in the second.
Analogously, using the word behaviour to cover both the activities of a
whole person and the events in a person's brain attracts the same
criticism unless it is understood that the meaning of the term shifts
with the context. This ambiguity in the unqualified use of the term
suggests, for example, that the brain exhibits self-similarity across
levels - that it is a fractal object - which is an empirical matter, not
a terminological one.

--
Joe Legris

Alpha

Dec 15, 2006, 5:32:23 PM

"JGCASEY" <jgkj...@yahoo.com.au> wrote in message
news:1166211392.6...@79g2000cws.googlegroups.com...
<snip>

> The word "behavior" seems to me to be
> without any kind of explanatory power in itself.

Exactly; and behaviorism is similarly bereft of explanatory value. As such,
behaviorists do little to advance understanding or explanation of anything.

> It is
> simply a description of how some set of observations
> change over time.
>
> --
> JC
>

--

JGCASEY

Dec 15, 2006, 6:13:35 PM

On Dec 15, 5:15 pm, c...@kcwc.com (Curt Welch) wrote:

> In my network, every node in the network has short term memory built into.
> Each node remembers the last pulse to pass through the node - it remembers
> when it happened (temporal STM). The STM of the entire network is simply
> the combination of all this memory. Though the memory of a single node is
> very small, the combined STM of the entire network made up of millions or
> billions of these simple nodes, becomes substantial.

This could be an example of the mereological fallacy mentioned in the
other posts. You are extending the STM of a node to the STM of the
whole system. In fact, I think that with the nets you explained to me
it all combines into one meaningless STM mess.

An example I might give is short-term memory in the form of soil
fertility with regard to spinifex, a spiky bush found in the arid
countryside of Australia. Each part of the bush uses up the fertility
of the soil, and the plants propagate outward as an ever-expanding
circle. The expanding circles of plants soon meet other expanding
circles, and the whole field becomes a textured pattern of a
recognizable kind.

http://www.diamantina-tour.com.au/outback_info/land_sys/spinifex/spinifex_page.htm

I could claim the soil fertility is an STM of the recent existence
of a piece of spinifex grass, and claim that it all combines into
a substantial STM of the field. Just as your nodes keep a time stamp,
so too does the soil, in that it slowly regains its fertility: the next
time a seed (pulse) appears, it will once again grow or not grow
depending on how long it has been since the last plant was there and
the rate at which fertility is being returned to the soil. I wrote a
simulation of this, and like your nets it produces ever-changing but
recognizable patterns, which as a whole amount to nothing amazing.
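
For anyone who wants to play with the idea, something along these lines
reproduces the general behaviour described above (a guess at the shape of
such a simulation, not JC's actual code; the grid size, depletion, and
recovery rates are invented):

    import random

    # Soil fertility as a distributed "short-term memory" of recent plant
    # occupancy: plants spread to fertile neighbours, deplete the soil,
    # and the soil slowly recovers.

    SIZE, STEPS = 40, 200
    fertility = [[1.0] * SIZE for _ in range(SIZE)]
    plants = {(SIZE // 2, SIZE // 2)}                        # start from one plant

    for _ in range(STEPS):
        new_plants = set()
        for (r, c) in plants:
            fertility[r][c] = max(0.0, fertility[r][c] - 0.5)    # deplete the soil
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):    # spread outward
                nr, nc = (r + dr) % SIZE, (c + dc) % SIZE
                if fertility[nr][nc] > 0.6 and random.random() < 0.5:
                    new_plants.add((nr, nc))
        plants = new_plants
        for row in fertility:                                    # soil slowly recovers
            for i in range(SIZE):
                row[i] = min(1.0, row[i] + 0.01)

    print(len(plants), "plants after", STEPS, "steps")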


--
jc

Curt Welch

Dec 15, 2006, 6:36:27 PM
"JGCASEY" <jgkj...@yahoo.com.au> wrote:
> On Dec 15, 5:15 pm, c...@kcwc.com (Curt Welch) wrote:
>
> > In my network, every node in the network has short term memory built
> > into. Each node remembers the last pulse to pass through the node - it
> > remembers when it happened (temporal STM). The STM of the entire
> > network is simply the combination of all this memory. Though the
> > memory of a single node is very small, the combined STM of the entire
> > network made up of millions or billions of these simple nodes, becomes
> > substantial.
>
> This could be an example of the mereological fallacy mentioned in the
> other posts.

No it's not.

> You are extending the stm of a node to the stm of the
> whole system.

How can memory work any other way? Do you think human short term memory
all exists in one location in the brain - the STM neuron in the center?

And if it's distributed, is not the human STM nothing more than the sum of
the distributed memory?

Is not computer memory nothing more than a collection of individual memory
cells, each of which "stores" a very small amount of information (one bit)?

Memory can't work any other way. It's physically impossible.

> In fact I think with the nets you explained to me it all
> combines into one meaningless stm mess.

It's not a meaningless mess. It's a meaningful mess. I think I need to
build a few examples for you to look at so you can understand what these
"mess" networks can already do with their STM.

> An example I might give is the short term memory in the form of soil
> fertility with regards to spinifex, a spiky bush found in the arid
> countryside of Australia. Each part of the bush uses up the fertility
> of the soil and the plants will propagate out as an ever expanding
> circle.
> The expanding circles of plants soon meet other expanding circles and
> the whole field becomes a textured pattern of a recognizable kind.
>
> http://www.diamantina-tour.com.au/outback_info/land_sys/spinifex/spinifex
> _page.htm
>
> I could claim the soil fertility is a stm of the recent existence
> of a piece of spinifex grass and claim that it all combines into
> a substantial stm of the field.

And you would be correct in doing so. Of course, the context of "short" is
relative. If the ground regains its fertility automatically over time
(aka causing the memory to be lost), then the use of the term STM is
applicable because the memory fades in a short time. But if it takes 5
years for the ground to recover, then "short" of course is a completely
different time scale than human STM (but on a geologic or biological time
scale, short seems to fit).

> Just as your nodes keep a time stamp
> so to the soil in the form of the soil slowly regains its fertility so
> the next
> time a seed (pulse) appears it will once again grow or not grow
> depending on how long since the last plant was there and the rate at
> which the fertility is being returned to the soil. I wrote a simulation
> of this and like your nets it produces ever changing but recognizable
> patterns which as a whole amount to nothing amazing.

And why would you expect STM to be amazing? It's nothing more than what
you talked about above and what my networks do. What else would you expect
it to be?

N

unread,
Dec 15, 2006, 9:04:17 PM12/15/06
to

taking this thread by date, I'm a wondering what kinda 'time scapes' are
involved in all this, you know, a lot of what I'm taking in seems to be
governed by a descriptive finite analysis and no hint that the particles
involved can be part of an active synthesis in beings. At such a tiny
level of reaction between molecules we're forced to describe our
'consciousness' and 'entity' as a sequence of events and equations
(mathematical, hey! that's symbols ain't it?) These things happened
so Goddam fast in us they can't even have reached consciousness
can they? and where's the upper and lower limit of 'liminal'? s'cuse
me but if we're discussing STM and LTM there should be a time.
There should also be a time between molecular catalyses and even
those between so called 'perceptions' of photons and phonons within
a system shouldn't there? Basically, should we be describing molecular
events in terms of memory or consciousness?
Ta.

Glen M. Sizemore

unread,
Dec 16, 2006, 2:44:20 AM12/16/06
to

"Curt Welch" <cu...@kcwc.com> wrote in message
news:20061215184123.951$G...@newsreader.com...

> "JGCASEY" <jgkj...@yahoo.com.au> wrote:
>> On Dec 15, 5:15 pm, c...@kcwc.com (Curt Welch) wrote:
>>
>> > In my network, every node in the network has short term memory built
>> > into. Each node remembers the last pulse to pass through the node - it
>> > remembers when it happened (temporal STM). The STM of the entire
>> > network is simply the combination of all this memory. Though the
>> > memory of a single node is very small, the combined STM of the entire
>> > network made up of millions or billions of these simple nodes, becomes
>> > substantial.
>>
>> This could be an example of the mereological fallacy mentioned in the
>> other posts.
>
> No it's not.
>
>> You are extending the stm of a node to the stm of the
>> whole system.
>
> How can memory work any other way? Do you think human short term memory
> all exists in one location in the brain - the STM neuron in the center?
>
> And if it's distributed, is not the human STM nothing more than the sum of
> the distributed memory?


The notion that "memory is distributed" is based tacitly on the notion that
somehow the environmental events, that produce what we observe when we call
behavior "memory," are somehow "stored." An alternative view of STM is that
we learn to behave in particular ways when the to-be-remembered stimulus is
present, and we behave in that fashion during the retention interval, and it
is the behavior that exerts stimulus control over other behavior (the
behavior that occurs during the "choice" portion of the procedure). STM is
all about operant response classes and, no doubt, the physiology that
mediates operant behavior is "distributed" throughout the brain, but there
is no copy of the past that is somehow written across the brain.

Allan C Cybulskie

unread,
Dec 16, 2006, 8:32:17 AM12/16/06
to

Glen M. Sizemore wrote:
> "Curt Welch" <cu...@kcwc.com> wrote in message
> news:20061215184123.951$G...@newsreader.com...
> > And if it's distributed, is not the human STM nothing more than the sum of
> > the distributed memory?
>
>
> The notion that "memory is distributed" is based tacitly on the notion that
> somehow the environmental events, that produce what we observe when we call
> behavior "memory," are somehow "stored." An alternative view of STM is that
> we learn to behave in particular ways when the to-be-remembered stimulus is
> present, and we behave in that fashion during the retention interval, and it
> is the behavior that exerts stimulus control over other behavior (the
> behavior that occurs during the "choice" portion of the procedure). STM is
> all about operant response classes and, no doubt, the physiology that
> mediates operant behavior is "distributed" throughout the brain, but there
> is no copy of the past that is somehow written across the brain.

Memorize something and then recall it later -- even lines for a play --
and then tell me that the stereotypical forms of memory are not
properly described as storage and retrieval. Whether STM fits into
that is another matter, but adding numbers seems to be similar
behaviour to how a calculator adds numbers and no one can claim that a
calculator does not in some sense store the number while it waits for
the next number that you wish to add.

In short, your alternative view is only credible if you ignore the
cases of memory that can definitely be called memory.

Glen M. Sizemore

unread,
Dec 16, 2006, 8:52:38 AM12/16/06
to

"Allan C Cybulskie" <allan_c_...@yahoo.ca> wrote in message
news:1166275937.7...@l12g2000cwl.googlegroups.com...

OK. The stereotypical forms of memory are not properly described as storage

J.A. Legris

unread,
Dec 16, 2006, 9:29:45 AM12/16/06
to

Suppose a forgetful old codger has a tendency to misplace his bifocals,
so he makes a point of putting them in a special place. He stores and
retrieves his glasses!

But he's also got a sharp-witted wife, so sometimes he just gives them
to her, and she reliably produces them on command. Another way of
storing and retrieving glasses!

He also has trouble remembering phone numbers, so he writes them down.
He stores and retrieves phone numbers!

Sometimes he gives the numbers to his wife and she reliably produces
them when asked. Another way of storing and retrieving phone numbers!

Now here's the tricky part, so pay attention: He reads a practical book
about mitigating the effects of aging on memory. It works - he stores
and retrieves phone numbers!

--
Joe Legris

PeskyBee

unread,
Dec 16, 2006, 9:41:17 AM12/16/06
to
"J.A. Legris" <jale...@sympatico.ca> escreveu na mensagem
news:1166279385....@l12g2000cwl.googlegroups.com...

Joe, I liked very much your way of putting it. Unfortunately,
it will not be enough to counter Glen Sizefreak's rationale. His adherence
to behavioristic dogma is such that it is comparable to that of the
Holocaust-denier SOBs. How is one supposed to argue with such "chaps"?

*PB*


>
> --
> Joe Legris
>


Glen M. Sizemore

unread,
Dec 16, 2006, 10:13:26 AM12/16/06
to

"J.A. Legris" <jale...@sympatico.ca> wrote in message
news:1166279385....@l12g2000cwl.googlegroups.com...

Only if she writes them down and puts them somewhere. Otherwise you have
merely assumed what you are trying to prove. And that is what the storage
and retrieval metaphors are; assumptions. What is the experiment that tests
the "storage" versus "change" arguments? If there is one, it has never been
performed, because I know of no data that run counter to the notion that our
brain is simply changed by the conditions that produce "memories." And one
could certainly say that the Roediger/McDermott effect shows that storage
need not take place for one to "remember something," since what is
remembered was not present. Obviously you can BS an explanation that
involves "storage," but it is clear that all you are doing is trying to hang
on to your metaphor. It is true that I am hanging on to mine, but I don't
need to invent shit merely to save it. Know what I mean Ptolemy?

Alpha

unread,
Dec 16, 2006, 11:01:18 AM12/16/06
to

"Glen M. Sizemore" <gmsiz...@yahoo.com> wrote in message
news:4583a35e$0$31276$ed36...@nr2.newsreader.com...

Of course there is (a copy of the past written across the brain) dunce! You
are simply unable to understand how. The "learn to behave in particular ways
when the to-be-remembered stimulus is present" is done via memory.

--

Alpha

unread,
Dec 16, 2006, 11:01:50 AM12/16/06
to

"Glen M. Sizemore" <gmsiz...@yahoo.com> wrote in message
news:4583f9af$0$31292$ed36...@nr2.newsreader.com...

Actually, dunce, they are!

>
>>Whether STM fits into
>> that is another matter, but adding numbers seems to be similar
>> behaviour to how a calculator adds numbers and no one can claim that a
>> calculator does not in some sense store the number while it waits for
>> the next number that you wish to add.
>>
>> In short, your alternative view is only credible if you ignore the
>> cases of memory that can definitely be called memory.
>>
>
>

--

Alpha

unread,
Dec 16, 2006, 11:02:25 AM12/16/06
to

"J.A. Legris" <jale...@sympatico.ca> wrote in message
news:1166279385....@l12g2000cwl.googlegroups.com...

Glen will not understand what this means!

>
> --
> Joe Legris

Alpha

unread,
Dec 16, 2006, 11:07:58 AM12/16/06
to

"Glen M. Sizemore" <gmsiz...@yahoo.com> wrote in message
news:45840c9f$0$31262$ed36...@nr2.newsreader.com...

The facts about how our memory works are based on introspection as well as
conceptualization!

> What is the experiment that tests the "storage" versus "change" arguments?

The change is the storage, dunce.

> If there is one, it has never been performed, because I know of no data
> that run counter to the notion that our brain is simply changed by the
> conditions that produce "memories."

And it is that change that is the storage.

> And one could certainly say that the Roediger/McDermott effect shows that
> storage need not take place for one to "remember something," since what is
> remembered was not present.

We already told you over and over again (but of course you cannot
understand) that such an effect ONLY shows that memory is not veridical in
all cases. That we know already, because we know that the brain does a lot of
filling in of details (or even higher-level aspects of the memory) when they
are not necessarily present or when other stimuli may interfere with
veridical storage.

>Obviously you can BS an explanation that involves "storage," but it is
>clear that all you are doing is trying to hang on to your metaphor. It is
>true that I am hanging on to mine, but I don't need to invent shit merely
>to save it. Know what I mean Ptolemy?

You don't invent shit because you borrow decades old shit that has already
been shown false.


>
>
>>
>> Now here's the tricky part, so pay attention: He reads a practical book
>> about mitigating the effects of aging on memory. It works - he stores
>> and retrieves phone numbers!
>>
>> --
>> Joe Legris
>>
>
>

--

J.A. Legris

unread,
Dec 16, 2006, 11:27:01 AM12/16/06
to

In each example storage and retrieval occurred with respect to the
husband's observable behaviour, even when it was confined to his own
body. The mechanism by which it occurs is another issue. The only
assumption is a common understanding of storage and retrieval - i.e.
postponement of degradation.

The experimental test for storage versus change is that the former
allows retrieval of what was stored while the latter makes no commitments.
Putting your money in a shoe box or in a paper shredder both involve
change, but only one is storage. The Roediger/McDermott effect must
involve a degree of storage, because even false memories are not random
memories.

--
Joe Legris



JGCASEY

unread,
Dec 16, 2006, 2:52:36 PM12/16/06
to

On Dec 16, 10:36 am, c...@kcwc.com (Curt Welch) wrote:

> "JGCASEY" <jgkjca...@yahoo.com.au> wrote:
> > On Dec 15, 5:15 pm, c...@kcwc.com (Curt Welch) wrote:
>
> > > In my network, every node in the network has short term memory built
> > > into. Each node remembers the last pulse to pass through the node - it
> > > remembers when it happened (temporal STM). The STM of the entire
> > > network is simply the combination of all this memory. Though the
> > > memory of a single node is very small, the combined STM of the entire
> > > network made up of millions or billions of these simple nodes, becomes
> > > substantial.
>
> > This could be an example of the mereological fallacy mentioned in the
> > other posts.
>
> No it's not.

>
> > You are extending the stm of a node to the stm of the
> > whole system.
>
>
> How can memory work any other way? Do you think human short term memory
> all exists in one location in the brain - the STM neuron in the center?
>
> And if it's distributed, is not the human STM nothing more than the sum of
> the distributed memory?
>
> Is not computer memory nothing more than a collection of individual memory
> cells each which "store" a very small amount of information (one bit)?
>
> Memory can't work any other way. It's physically impossible.


Of course in order to manifest a short term memory on a high
level you need components that can store the parts of those
memory patterns. But just because a computer has those parts
doesn't mean it will demonstrate short term memory as a
consequence if it is not wired up to do so by a program.

Computers actually have perfect short term and long term
rote memories (as do tape recorders), which is one reason
we use computers.

Storing everything by rote is not the most intelligent thing
to do, because it is not what is stored but what we do with it
that matters. People with a computer-like rote memory might be
able to remember every formula and word in a book and yet not
understand any of it.

In order to understand a sentence you must remember the beginning
of the sentence while reading the end of the sentence. This short
term memory of the actual words must then be converted to the
meaning of the sentence to be retained as a medium or long term
memory of what the whole text was about.


> > In fact I think with the nets you explained to me it all
> > combines into one meaningless stm mess.

> It's not a meaningless mess. It's a meaningful mess. I think I need to
> build a few examples for you to look at so you can understand what these
> "mess" networks can already do with their STM.


Now that would be interesting, provided it is an objective, observable
behavior involving the input/output of the net, not hand-waving about
what is happening inside the net. Demonstrate short term memory in
your net using the same kind of tests you would use to demonstrate
short term memory in a rat.

> > An example I might give is the short term memory in the form of soil
> > fertility with regards to spinifex, a spiky bush found in the arid
> > countryside of Australia. Each part of the bush uses up the fertility
> > of the soil and the plants will propagate out as an ever expanding
> > circle.
> > The expanding circles of plants soon meet other expanding circles and
> > the whole field becomes a textured pattern of a recognizable kind.
>

> >http://www.diamantina-tour.com.au/outback_info/land_sys/spinifex/spinifex_page.htm


>
> > I could claim the soil fertility is a stm of the recent existence
> > of a piece of spinifex grass and claim that it all combines into
> > a substantial stm of the field.
>
>
> And you would be correct in doing so. Of course, the context of "short" is

> relative. If the ground regains its fertility automatically over time


> (aka causing the memory to be lost), then the use of the term STM is

> applicable because the memory fades in a short time. But if it takes 5


> years for the ground to recover, then "short" of course is a completely
> different time scale than human STM (but on a geologic or biological time
> scale, short seems to fit).
>
>
> > Just as your nodes keep a time stamp
> > so to the soil in the form of the soil slowly regains its fertility so
> > the next
> > time a seed (pulse) appears it will once again grow or not grow
> > depending on how long since the last plant was there and the rate at
> > which the fertility is being returned to the soil. I wrote a simulation
> > of this and like your nets it produces ever changing but recognizable
> > patterns which as a whole amount to nothing amazing.
>
>
> And why would you expect STM to be amazing? It's nothing more
> than what you talked about above and what my networks do. What
> else would you expect it to be?


I never said I expected STM to be amazing; what I said is that the
simulation didn't do anything amazing. And it did not show what
we call stm at the higher level even though it had stm at the lower
level. Feed patterns into the field and they would vanish. There
would be no short or long term retention of those patterns. The
spinifex simulation could not produce the behavior we call stm
even though I could claim it had stm components.


--
jc

feedbackdroid

unread,
Dec 16, 2006, 3:15:03 PM12/16/06
to

JGCASEY wrote:


>
>
> I never said I expected STM to be amazing what I said is that the
> the simulation didn't do anything amazing. And it did not show what
> we call stm at the higher level even though it had stm at the lower
> level. Feed patterns into the field and they would vanish. There
> would be no short or long term retention of those patterns. The
> spinifex simulation could not produce the behavior we call stm
> even though I could claim it had stm components.
>
>


If he actually wired it up, I think Curt would find that just cascading
a few feedback loops, each with a couple of synaptic delays, would not
automatically produce something on the order of biological STM, which
lasts for up to several seconds, at least.

He would either need to provide some special loop gain settings, which
might produce prolonged reverberations [and which I previously asked
him how he might accomplish - but I don't recall receiving an answer],

or he might add in some specialized circuitry for which STM was its
main function. In fact, the latter has already been found, as regards LTP
[long-term potentiation] and similar cellular phenomena.

Allan C Cybulskie

unread,
Dec 16, 2006, 4:50:06 PM12/16/06
to

So then explain why memorizing something and then recalling something
later is not properly called "storage and retrieval". The only way to
claim what you claimed is to completely ignore and fail to understand
what memorization and remembering in those cases IS.

Allan C Cybulskie

unread,
Dec 16, 2006, 4:52:32 PM12/16/06
to

Glen M. Sizemore wrote:
> "J.A. Legris" <jale...@sympatico.ca> wrote in message
> news:1166279385....@l12g2000cwl.googlegroups.com...
> Only if she writes them down and puts them somewhere. Otherwise you have
> merely assumed what you are trying to prove. And that is what the storage
> and retrieval metaphors are; assumptions. What is the experiment that tests
> the "storage" versus "change" arguments? If there is one, it has never been
> performed, because I know of no data that run counter to the notion that our
> brain is simply changed by the conditions that produce "memories."

That's because you are setting up a false dichotomy: of COURSE
something in the brain must change, just like something on a hard disk
must change to store text files or anything else. The question is if
that change can be described at a higher level as being storage and
retrieval. And it can. And so the question then becomes: how does the
brain change to produce the storage and retrieval behaviour of memory?

JGCASEY

unread,
Dec 16, 2006, 5:53:43 PM12/16/06
to

feedbackdroid wrote:

>> I never said I expected STM to be amazing what I said is that the
>> the simulation didn't do anything amazing. And it did not show what
>> we call stm at the higher level even though it had stm at the lower
>> level. Feed patterns into the field and they would vanish. There
>> would be no short or long term retention of those patterns. The
>> spinifex simulation could not produce the behavior we call stm
>> even though I could claim it had stm components.
>
>
> If he actually wired it up, I think Curt would find that just
> cascading a few feedback loops, each with a couple of synaptic
> delays, would not automaticaly produce something on the order
> of biological STM, which lasts for up to several seconds, at
> least.


If Curt knew exactly how it should be wired up I am sure he
would just do it. I certainly would. After playing with some
simple versions of his net ideas I cannot see any merit in
them over other types of nets. Indeed I think it just makes
doing some things that much harder.

Now I know he likes to make an analogy between the design of an
aeroplane and a bird but with regards to the part required for
flying they both use curved wings. And although spinning wings
may have replaced the flapping bird wings they are still wings!

Neurons have many inputs and one output that fans out to many other
neurons, and a similar pattern is used with logic gates. You need many
inputs to make a decision, and you need to fan that decision
out to as many other decision-making neurons or logic gates as may
be required, and that fan-out may be lateral or fed back to the
earlier layers.

Curt holds to the belief that all these high level behaviors
will form within a simple network because if he is wrong then
he feels, I think, that he will not achieve his goal to be the
first to figure out how to make human level intelligence. I
don't have such lofty ambitions. If I could get a simple RL
machine to learn anything interesting, like playing chess,
then I would be happy :)

Although I may seem like a negative Curt net critic I do wish
him well and would be happy to see him achieve something
of interest even if it isn't full blown human level AI.


--
jc

Curt Welch

unread,
Dec 17, 2006, 1:01:18 AM12/17/06
to
"JGCASEY" <jgkj...@yahoo.com.au> wrote:

> In order to understand a sentence you must remember the beginning
> of the sentence while reading the end of the sentence. This short
> term memory of the actual words must then be converted to the
> meaning of the sentence to be retained as a medium or long term
> memory of what the whole text was about.

Well, there's no need to convert the "memory" of the event into "meaning".

I assume it simply works as a recognition process in real time where the
state changes caused by the past events (the first words in the sentence)
cause the brain to react correctly to the last words. How it reacts to the
words _is_ the meaning - there's no conversion required. The only thing
such a system needs to "remember" is how it has recently reacted (aka,
which neurons recently fired, aka how long it's been since a neuron fired)
and the only thing such a system needs to do, is react differently
depending on how long it's been since various neurons have fired each time
a new pulse shows up at a neuron. In other words, whether a neuron fires
when it's stimulated is a function of how it's been stimulated recently in
the past. The short term memory of a neuron modeled as a leaky integrator
is stored in it's current activation level (which is constantly fading over
time).

The short term memory of a network of leaky integrators is maintained as
the fading activation level of all the neurons.
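
A minimal sketch of a single leaky-integrator node, just to make that
concrete (the decay constant and firing threshold are arbitrary
illustrative numbers, not parameters of any particular network):

    # Leaky-integrator node: its only "memory" is a fading activation
    # trace, so whether it fires depends on how recently it was stimulated.
    class LeakyNode:
        def __init__(self, decay=0.9, threshold=1.5):
            self.activation = 0.0
            self.decay = decay
            self.threshold = threshold

        def tick(self):
            self.activation *= self.decay    # the trace fades each time step

        def stimulate(self, weight=1.0):
            # an incoming pulse adds to the fading trace; the node only
            # fires if recent pulses have left enough activation behind
            self.activation += weight
            if self.activation >= self.threshold:
                self.activation = 0.0
                return True
            return False

    a = LeakyNode()
    print(a.stimulate(), a.stimulate())   # two pulses close together: False, True

    b = LeakyNode()
    b.stimulate()
    for _ in range(20):
        b.tick()                          # long gap - the trace fades away
    print(b.stimulate())                  # False: the earlier pulse is "forgotten"

Nothing in that node stores or retrieves anything; its response is simply
a function of how recently it was driven, which is all the STM being
claimed here.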

It's like a glow-in-the-dark sticker that you can hold your hand over and
then shine a light on. It has short term memory of the shadow caused by
your hand. The short term memory of the brain is bound to be very similar
with a constantly fading memory of recent past neural activity. The
difference of course is that how the brain reacts to a new stimulus, is a
function of the state of this constantly fading short term memory.

When you hear the first word in a sentence, the brain reacts to it based on
the current context of the network (what the state of the short term memory
currently is in). But how it reacts is also "burned" into the memory. So
when it hears the next word, how the brain reacts to that is again, a
function of the state of the system which is left over from the past words.

So when I hear "car" my brain reacts to it.

When that is followed by "toon" my brain reacts based on the state of the
fading memory. Because "car" was part of the fading memory, the reaction
is different than if "that" was in the memory. The way the higher level
neurons react to "toon" when "car" is in the fading memory is the
reaction that means "cartoon". The higher level neurons that fire in this
case are the neurons that mean "cartoon". And when a neuron that
has a fan-out of 10,000 fires, it changes the activation levels of 10,000
neurons; that entire pattern of changes across all those 10,000 neurons is the
STM of "cartoon".

So there's no "conversion to meaning" needed. It's simply a reaction
network that reacts differently based on the current state of the STM of
the network. And how it reacts is the "meaning". And since all reactions
are "remembered" by state changes in the neurons, all "meaning" is
inherently stored in STM automatically.

> > > In fact I think with the nets you explained to me it all
> > > combines into one meaningless stm mess.
>
> > It's not a meaningless mess. It's a meaningful mess. I think I need
> > to build a few examples for you to look at so you can understand what
> > these "mess" networks can already do with their STM.
>
> Now that would be interesting providing it is an objective observable
> behavior involving the input/output of the net not hand waving about
> what is happening inside the net. Demonstrate short term memory in
> your net using the same kind of tests you would use to demonstrate
> short term memory in a rat.

I don't need to see the level of test to know it's working (I can tell it's
working correctly with simpler tests) - but I understand that other people
need to see it.

Remember for example how your simple sorting net with a single input would
sweep the outputs from left to right when the frequency of the input
changed from low to high suddenly instead of the network instantly changing
to the high output? That sweeping effect caused a delay in the response of
the network - it didn't just instantly sort the high frequency pulses to
the other side of the network. That's because that network (as a whole)
had short term memory of multiple past pulses.

I think part of your problem in understanding how the network has short
term memory comes from the very problem that Glen keeps trying to show
people. At the higher levels of behavior, we understand how we can
remember a phone number in our short term memory long enough to be able to
dial it. Talking like this about what we do with phone numbers makes us
think of our memory as something that stores data that can be later
retrieved. And this type of thinking is what gets a lot of people confused
about how humans actually behave.

So, I suspect you think for an AI program to have STM, you expect to see a
"working memory module" where data that is received is somehow converted,
and then stored in the working memory module. And my networks don't seem
to have any sort of "memory module" so you don't believe my networks (or
anything designed something like that) could possibly have human like short
term memory.

But this is the reason looking at memory as operant behavior instead gives
us a very different view of how to build AI. What the system needs to
"remember" is past behavior (it must remember which neurons in the network
have recently fired). All current behavior is simply a function of past
behavior (aka memory of recent past neuron activity). This means the
behavior we produce when we dial the phone, is a function of what neurons
fired a few seconds in the past, when we read the phone number and recited
it to ourselves in our head.

Ok, that's fair.

> And it did not show what
> we call stm at the higher level even though it had stm at the lower
> level. Feed patterns into the field and they would vanish. There
> would be no short or long term retention of those patterns. The
> spinifex simulation could not produce the behavior we call stm
> even though I could claim it had stm components.

Sure it does. I think this is just the problem Glen keeps talking about
that many people in this group just can't understand.

People keep thinking our STM is something that allows us to store and
retrieve fixed information (like a phone number). Our STM is in fact very
different from that. The purpose of STM is the regulation of future
behavior. If a system's current behavior changes as a function of what has
happened in the recent past, then the system has STM.

A NAND gate for example has no STM. Its behavior (output) is not a
function of what has happened in the recent past (in this case I'm choosing
to ignore the 10 ns it takes for information to propagate through the
gate).

But a simple RC delay circuit that will produce a 1 output 500 ms after the
input changed to one does have STM, because its output is a function of
what has happened in the recent past.

An algorithm that implements a moving average is another example of a
system with STM. If its output is always an average of the last 10 input
values, this system has STM - but it's not the type of memory that "stores"
and "retrieves" past information.

Or, a better example than a moving average is one which produces an
exponentially weighted average of all past inputs. Something like:

avg = avg + (current_input - avg) * 0.1;

Where the avg value is the output, but it's also a memory of all past inputs
(if avg is a real value in mathematics its value will be a function of all
past inputs, but for real-world computer versions where it has limited
precision, its value will only be a function of a finite number of past
inputs).
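
A runnable version of that one-liner, just to show that the single stored
value really does act as a fading memory of every past input (the 0.1
smoothing factor is the same illustrative value as above):

    # Exponentially weighted average: "avg" is the system's entire STM.
    def make_ewma(alpha=0.1):
        avg = 0.0
        def update(current_input):
            nonlocal avg
            avg = avg + (current_input - avg) * alpha
            return avg
        return update

    ewma = make_ewma()
    for x in [10, 10, 10, 0, 0, 0]:
        print(round(ewma(x), 3))
    # the outputs after the drop to 0 still reflect the earlier 10s:
    # the behavior is a function of past inputs, not just the current one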

The fact that the output of this procedure is a function of all past
inputs, and not just a function of the current_input value, is what tells
us the system has short term memory. And this is exactly the type of short
term memory humans have - our behavior is regulated by recent past inputs,
as well as by our own recent past reactions to those past inputs.

Any system whose behavior is a function of not only current inputs, but
past inputs, is a system with STM similar to what the brain is using to
create our short term memory.

And again, I think by failing to understand the real role of STM, people
probably make the error of thinking we actively control our use of our STM
when we do things like recall a phone number. Nothing could be further
from the truth. All our conscious (operant) behavior is a function of our
STM. Without STM, none of our behavior would be possible. Our basic power
of perception would fail to work without the help of our STM. We wouldn't
be able to understand a single word (let alone an entire sentence) and we
wouldn't be able to speak a single word without using our STM.

Anyone that thinks of our STM as a system for "storage" and "retrieval" is
simply lost due to using a very bad model of human behavior.

If you look at a network like mine and don't instantly understand that it's
got STM, you don't understand what human STM is.

The question is not whether my networks have STM of roughly the type
needed; the question is whether the network is 1) reacting to its memory
in a valid way, and 2) whether the network is being trained correctly in
response to rewards (aka are its reactions being changed in a valid and
useful way in response to rewards).

Curt Welch

unread,
Dec 17, 2006, 2:07:40 AM12/17/06
to

Well, it's trivial to build a circuit with a 10 second short term memory
(as I know you know). So I'm not sure what you are actually asking.

In my old style networks, which only used feedback within the nodes, the
memory was limited to one pulse. But because pulses are preserved (don't
fork), a net with a fan-out topology had increasingly long memory as you
got deeper into the network.

So a single input, with an average pulse rate of 100 per second, when fed
into a network that has a 1 to 500 fan out, would create average pulse
rates on the order of one per every 5 seconds in the highest level nodes -
with some always lasting much longer since the pulses were not distributed
evenly over short periods meaning some high level nodes might not see a
pulse for a minute. So the STM of such a system (even though it has no
feedback) could last for minutes.

So even before adding feedback, such a network could easily have STM
lasting for many minutes - simply because the network is designed for nodes
to sit unused for many minutes at a time (or even hours).
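
To make that arithmetic explicit (these are just the illustrative figures
from the paragraph above, not measurements of any real network):

    # Back-of-envelope pulse rates in a pulse-preserving fan-out network.
    input_rate = 100.0                     # pulses per second at the single input
    fan_out = 500                          # each pulse is routed to 1 of 500 nodes

    rate_per_node = input_rate / fan_out   # 0.2 pulses per second per node
    mean_gap = 1.0 / rate_per_node         # ~5 seconds between pulses
    print(rate_per_node, mean_gap)
    # and since pulses are not spread evenly, some high level nodes will go
    # a minute or more without a pulse, so their "last pulse" memory
    # stretches over minutes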

With feedback however, a given high level node, once active, is more likely
to stay active for an extended period - leaving other high level nodes to sit
idle for even longer. It makes such a network more "sticky" - extending
the effective memory of the entire network in the process.

For example, lets say we have high level nodes that represent "cat" and
another one that represents "dog". Now, without feedback, the low level
features that show up when we see a dog, might sometimes look a bit like a
cat. So without feedback, the "dog" feature might be activated 80% of the
time, and the "cat" future might be activated 20% of the time - simply
because 20% of the time, some of the dog features look like they might be a
"cat".

But with feedback increasing the accuracy of the classification, once it
saw that the features were mostly "dog", it would classify the rest of the
features as "dog" instead of as "cat". We don't see the dog as being
"maybe cat" even though parts look cat-like because our feedback has forced
all the features to be "dog" features. Once we "believe" we are looking at
a dog, we just naturally see the whiskers as being dog-whiskers instead of
cat-whiskers because of the context established.

This happens because of feedback. And in my type of network, the same
thing should be happening - once a high level node activates, it
encourages more data to be routed to it. Once we think we are looking at a
dog, the features would have to become extremely cat-like before we could
believe it was a cat.

And if the dog walks behind a box, and the box starts to shake, we see the
shaking box as "dog behavior" instead of "box behavior" because the high
level "dog detector" will remain active. Had we seen the shaking box
before seeing the dog, the odds are we would not have classified it as "dog
behavior".

So the feedback is what creates this "short term memory" of "dog" that
allows us to see the shaking box as "more dog". And it's the type of thing
that would allow my type of network to continue to cause the same high
level node to remain active even as low level features constantly changed.
So even though each node only had a memory of the last pulse, the feedback
loops would allow the "memory" of "dog" to last much longer.

Above, you asked this question:

> He would either need to provide some special loop gain settings, which
> might produce prolonged reverberations [and which I previously asked
> him how he might accomplish - but I don't recall receiving an answer],

Which brings up the important point. It's easy to understand how we would
want such a system to keep the high level "dog" node active, even as the
low level features constantly changed. We don't want it to constantly
switch to "cat" and them back to "dog" just because the dog walked out a
door and the last part of the tail we saw was more "cat-like" than
"dog-like". Even if the tail looked like a cat tail, we don't see it as a
cat tail when we still have a short term memory of dog being there.

So, how do you build such a feedback network to create these invariant
loops that stimulate themselves but don't get locked into infinite
feedback?

Well, if I knew the complete answer, I would have solved AI. (or wait, I
did solve AI didn't I? :)). But the short answer is simple. The loops
must all be tuned to have less than unity gain so that without external
stimulation, they all die out. They only stay active, as long as the
incoming stimulus data is attracted stronger to the active loops, than to
the inactive loops.

They work just like Schmitt triggers (since I know you understand
electronics). They classify input into different output behaviors, but
once an output behavior becomes active, its classification range widens so
that a wider range of stimulus signals will be classified the same way -
data has to be further outside the range to get it to switch to a different
classification.

And Schmitt triggers do it the exact same way - they use positive feedback
with less than unity gain (combined with range clipping of the output).
Schmitt triggers have no problem with infinite feedback do they? The
feedback loops in our classification networks are tuned to work exactly the
same way for exactly the same reason we use Schmitt triggers in electronic
circuits (to add noise immunity to our classifiers - we don't want a little
noise to make us think the dog we were just looking at has suddenly turned
into a cat).
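
A toy hysteresis classifier along those lines, just to show the
stickiness such feedback buys (the two thresholds and the dog/cat labels
are purely illustrative):

    # Schmitt-trigger-style classifier: once a label is active, the evidence
    # has to move further the other way before the label flips.
    def classify_stream(evidence, low=0.35, high=0.65):
        """evidence: values in [0, 1], where 1.0 means strongly dog-like."""
        label = None
        out = []
        for e in evidence:
            if label != "dog" and e > high:
                label = "dog"
            elif label != "cat" and e < low:
                label = "cat"
            # anything in between keeps the current label - that stickiness
            # is the short term memory the positive feedback provides
            out.append(label)
        return out

    print(classify_stream([0.9, 0.5, 0.4, 0.2, 0.5, 0.6, 0.8]))
    # ['dog', 'dog', 'dog', 'cat', 'cat', 'cat', 'dog']

The ambiguous middle values get parsed as whatever was seen last, which is
exactly the kind of noise immunity being described.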

Another way to look at the role of the feedback in the classification
system is to understand how the system makes use of temporal prediction.
The feedback is used to improve temporal prediction. When the classifier
sees a whisker, how does it know whether it's a dog whisker or a cat
whisker? Let's assume dogs and cats have identical whiskers so there's
nothing about the whisker that tells the classifier which type it is. But
yet, the classifier is still forced to make a decision and parse this
whisker as a dog or a cat. It uses past experience to guide it. If 99% of
the time, whiskers are dog whiskers (this classifier hasn't been exposed to
many cats), then the "best guess" the classifier can make, is "dog-whisker"
- which is what it will do.

But if other features all add up to "cat" more than "dog" then the whisker
feature is going to end up as "cat-whisker".

Once a "cat" has been identified, then what happens when "whisker" is
identified 3 seconds later? The fact that "cat" was "seen" just 3 seconds
ago, and that "dog" hasn't been seen in a week, means the probability of
this feature being a "cat-whisker" is much higher now, than it normally is.
So, the short term memory of "cat" is used to change the bias of the
classifier. The temporal memory of "cat" is used to adjust the bias of the
classifier - the whisker will count more towards the "cat vote" than it
normally does.

But how much should the network change the bias of the whisker classifier to
see it as a cat-whisker instead of seeing it as a dog-whisker just because
it was recently classified as cat-whisker? How is the temporal bias which
is created with these feedback loops tuned? The answer is that they should
be tuned based on experience as far as I can tell. How typical is it for a
whisker to be a cat-whisker just because it was classified as cat-whisker a
minute ago? How often do cat-whiskers turn into dog-whiskers?

The answer to your "special loop gain" comment is hidden in here somewhere.
The feedback loops should 1) never be so strong they self resonate, and 2)
they should be tuned based on real world experience.
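
One crude way to picture that experience-tuned bias (every number here is
an arbitrary illustrative assumption, not a claim about the actual
mechanism):

    # Recency-biased whisker classifier: the ambiguous feature is credited
    # to whichever animal was seen most recently, and the size of the bias
    # and its fade rate are the "loop gain" knobs experience would tune.
    import math

    def whisker_p_cat(base_p_cat, secs_since_cat, secs_since_dog,
                      bias=0.3, fade=30.0):
        p = base_p_cat
        p += bias * math.exp(-secs_since_cat / fade)   # recent "cat" pulls up
        p -= bias * math.exp(-secs_since_dog / fade)   # recent "dog" pulls down
        return min(1.0, max(0.0, p))

    # cat seen 3 seconds ago, dog not seen for a week:
    print(whisker_p_cat(0.01, 3, 7 * 24 * 3600))   # well above the 0.01 baseline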

JGCASEY

unread,
Dec 17, 2006, 3:30:43 AM12/17/06
to

On Dec 17, 5:01 pm, c...@kcwc.com (Curt Welch) wrote:
> "JGCASEY" <jgkjca...@yahoo.com.au> wrote:
...

> > In order to understand a sentence you must remember the beginning
> > of the sentence while reading the end of the sentence. This short
> > term memory of the actual words must then be converted to the
> > meaning of the sentence to be retained as a medium or long term
> > memory of what the whole text was about.


> Well, there's no need to convert the "memory" of the event into "meaning".
>
> I assume it simply works as a recognition process in real time where the
> state changes caused by the past events (the first words in the sentence)
> cause the brain to react correctly to the last words. How it reacts to the
> words _is_ the meaning - there's no conversion required. The only thing
> such a system needs to "remember" is how it has recently reacted (aka,

> which neurons recently fired, aka how long it's been since a neuron fired)


Ok. Now you have to show how your net can read a short story and then
retell the story the way we do.

===================
...

> I don't need to see the level of test to know it's working (I can tell it's
> working correctly with simpler tests) - but I understand that other people
> need to see it.


Yup. Until you can get it playing chess or at least doing some of the
things other RL programs can do we can only assume the powers of your
net are all in your head :)

===================

> Remember for example how your simple sorting net with a single input would
> sweep the outputs from left to right when the frequency of the input
> changed from low to high suddenly instead of the network instantly changing
> to the high output? That sweeping effect caused a delay in the response of
> the network - it didn't just instantly sort the high frequency pulses to
> the other side of the network. That's because that network (as a whole)
> had short term memory of multiple past pulses.


I know exactly why the sweeping effect happens. It is not a short term
memory of any external event. It is due to the way the nodes work. It
is a flaw in the sense that you cannot train the net to behave any other way.
It is an innate behavior, not a learnt behavior.

===================

> I think part of your problem in understanding how the network has short
> term memory comes from the very problem that Glen keeps trying to show
> people. At the higher levels of behavior, we understand how we can
> remember a phone number in our short term memory long enough to be able to
> dial it. Talking like this about what we do with phone numbers makes us
> think of our memory as something that stores data that can be later
> retrieved. At this type of thinking is what gets a lot of people confused
> about how humans actually behave.
>
> So, I suspect you think for an AI program to have STM, you expect to see a
> "working memory module" where data that is received is somehow converted,
> and then stored in the working memory module. And my networks don't seem
> to have any sort of "memory module" so you don't believe my networks (or
> anything designed something like that) could possibly have human like short
> term memory.
>
> But this is the reason looking at memory as operant behavior instead gives
> us a very different view of how to build AI. What the system needs to
> "remember" is past behavior (it must remember which neurons in the network
> have recently fired). All current behavior is simply a function of past
> behavior (aka memory of recent past neuron activity). This means the
> behavior we produce when we dial the phone, is a function of what neurons
> fired a few seconds in the past, when we read the phone number and recited
> it to ourselves in our head.


I don't expect anything with regards to how it should be done. You can
call dialing a phone number a function of what neurons fired a few
seconds in the past if you like. I just want to see this theory in action.
Then maybe looking at the actual code I will see what is actually happening.

=====================


Such as the retrieval of a phone number?

=============

> If a system's current behavior changes as a function of what has
> happened in the recent past, then the system has STM.
>
> A NAND gate for example has no STM. It's behavior (output) is not a
> function of what has happened in the recent past (in this case I'm choosing
> to ignore the 10 ns it takes for information to propagate through the
> gate).
>
> But a simple RC delay circuit that will produce a 1 output 500 ms after the
> input changed to one, does have STM because it's output is a function of
> what has happened in the recent past.
>
> An algorithm that implements a moving average is another example of a
> system with STM. If it's output is always an average of the last 10 input
> values, this system has STM - but it's not the type of memory that "stores"
> and "retrieves" past information.
>
> Or, a better example than a moving average is one which produces an
> exponentially weighted average of all past inputs. Something like:
>
> avg = avg + (current_input - avg) * 0.1;
>
> Where the avg value is the output but it's also a memory of all past inputs

But this is not a memory of all past inputs, only a memory of the
average of all past inputs. There are many input patterns that might
give the same average.

0 5 1 8 5 4 3 8 9 7 average 5
1 6 4 3 0 6 7 9 5 9 average 5
...

> (if avg is a real value in mathematics it's value will be a function of all
> past inputs, but for real world computer versions where it has limited
> precision, it's value will only be a function of a finite number of past
> inputs).


Yes, a function, a computed value. Different functions will give a
different result for the same sequence of input patterns. If your
net is an averaging machine then that is what it is.

> The fact that the output of this procedure is a function of all past
> inputs, and not just a function of the current_input value, is what tells
> us the system as short term memory.


If I ask you to remember a list of digits and all you can do is
respond with the average of those digits, or some other function,
it is not stm, it is a computation. The short term memory bit is
the holding of, say, a number until another number is given so
you can add, multiply or divide them.

This is where there can be confusion because although I know what
you mean by stm in your nodes it is not what psychologists mean by
short term memory.

Let us say I read out a list of digits of varying lengths and
keep recycling them. At what length of list will a person no longer
see that there is a repetition? In order to know that there is
a repetition you must be able to remember the last n digits
that make up the sequence.

eg.

1212121212 easy,

293847293847293847293847 getting harder ...

The short term memory psychologists talk about is the ability
we have to hold and repeat an average of seven items.
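
That test is easy to state in code, for what it's worth (a purely
illustrative sketch that just finds the shortest cycle a listener would
have to hold in mind):

    # Shortest repeating period of a recycled digit string - roughly the
    # number of items a listener must hold in short term memory to notice
    # the repetition.
    def shortest_period(s):
        for p in range(1, len(s) + 1):
            if all(s[i] == s[i % p] for i in range(len(s))):
                return p
        return len(s)

    print(shortest_period("1212121212"))                 # 2 - easy
    print(shortest_period("293847293847293847293847"))   # 6 - getting harder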

===========================

> And this is exactly the type of short term memory humans have -
> our behavior is regulated by recent past inputs, as well as our
> own recent past reactions to those past inputs).
>
> Any system who's behavior is a function of not only current inputs, but
> past inputs, is a system with STM similar to what the brain is using to
> create our short term memory.
>
> And again, I think by failing to understand the real role of STM, people
> probably make the error of thinking we actively control our use of our STM
> when we do things like recall a phone number. Nothing could be further
> from the truth. All our conscious (operant) behavior is a function of our
> STM. Without STM, none of our behavior would be possible. Our basic power
> of perception would fail to work without the help of our STM. We wouldn't
> be able to understand a single word (let alone an entire sentence) and we
> wouldn't be able to speak a single word without using our STM.
>
> Anyone that thinks of our STM as a system for "storage" and "retrieval" is
> simply lost due to using a very bad model of human behavior.
>

> If you look at a network like mine and don't instantly understand that it's


> got STM, you don't understand what human STM is.
>
>
> The question is not whether my networks have STM of roughly the type
> needed, the question is whether the network is 1) reacting to it's memory
> in a valid way, and 2) whether the network is being trained correctly in
> response to rewards (aka are it's reactions being changed in a valid and
> useful way in response to rewards).


Describe it how you like, Curt. Your network would have to be able to
be given a list of words and then, when asked to repeat them back, would
have to produce the same kinds of behavior we do. If you really understand
it you should be able to code it.


--
jc

Glen M. Sizemore

unread,
Dec 17, 2006, 5:08:48 AM12/17/06
to

"Alpha" <OmegaZ...@yahoo.com> wrote in message
news:45840d20$0$15515$8826...@free.teranews.com...

Ohhhhhhhhh! Now I see; your "model" is correct, but the subject matter made
a mistake!

Allan C Cybulskie

unread,
Dec 17, 2006, 7:50:49 AM12/17/06
to

Glen M. Sizemore wrote:
> "Alpha" <OmegaZ...@yahoo.com> wrote in message
> > We already told you over and over again (but of course you cannot
> > understand), that such an effect ONLY shows that memory is not veridical
> > in all cases.
>
> Ohhhhhhhhh! Now I see; your "model" is correct" but the subject matter made
> a mistake!

Well, I looked up that "effect" again, and it seems obvious that the
effect is NOT showing anything about what memory IS, but how memory is
IMPLEMENTED. Memory can still be described as "storage and retrieval"
without ignoring the R-M effect.

The reason is that most of our experiments have shown that the way the
brain implements memory is not by storing a precise representation of
the thing to be remembered. Instead, it seems clear that what the
brain does -- the precise details of this are not necessarily known --
is "store" in some way (yes, yes, by changing) a set of details that
will allow it to REBUILD the thing to be remembered. This can be
clearly seen in recalling images or sense impressions, and the R-M
effect clearly shows it for remembering lists of words. The brain
"picks out" that a common thread between the words is how they are
related. But then when it tries to reproduce the list for "memory", it
uses that criteria primarily and misses out on the termination
condition, and so adds words that weren't there.

One of the keys here is that more detailed and precise memorization
SHOULD result in a limitation of this effect. We know that this is the
case for images, and I would be surprised if it was not the case for
the list of words as well (I'm sure SOMEONE will correct me if that
isn't true, but with references to the later experiments that showed
that that was not the case). This is because more extended
memorization builds more precise "keys" into the system which means
that the reproduction has to fill in fewer details and so gives a better
memory. Note that this is both consistent with Curt's nets and with
connectionism/neural nets so it does not imply an implementation based
on symbolic processing (and may, in fact, contradict that sort of
implementation).

Glen M. Sizemore

unread,
Dec 17, 2006, 8:23:42 AM12/17/06
to

"Allan C Cybulskie" <allan_c_...@yahoo.ca> wrote in message
news:1166359849....@t46g2000cwa.googlegroups.com...

>
> Glen M. Sizemore wrote:
>> "Alpha" <OmegaZ...@yahoo.com> wrote in message
>> > We already told you over and over again (but of course you cannot
>> > understand), that such an effect ONLY shows that memory is not
>> > veridical
>> > in all cases.
>>
>> Ohhhhhhhhh! Now I see; your "model" is correct" but the subject matter
>> made
>> a mistake!
>
> Well, I looked up that "effect" again, and it seems obvious that the
> effect is NOT showing anything about what memory IS, but how memory is
> IMPLEMENTED. Memory can still be described as "storage and retrieval"
> without ignoring the R-M effect.
>
> The reason is that most of our experiments have shown that the way the
> brain implements memory is not by storing a precise representation of
> the thing to be remembered. Instead, it seems clear that what the
> brain does -- the precise details of this are not necessarily known --
> is "store" in some way (yes, yes, by changing) a set of details that
> will allow it to REBUILD the thing to be remembered.

All this is simply gyrations designed to save what is prima facie false:
"memory" is not a matter of storing representations of what happened. Future
behavior is, in part, a function of what happened in the experiment, a
person's personal history, the current setting, and "motivational
variables." This is all consistent with the notion that we are changed by
exposure to certain events, and it is inconsistent with the notion that
memory is the storage of (representations of) the past.

Allan C Cybulskie

unread,
Dec 17, 2006, 8:41:49 AM12/17/06
to

Glen M. Sizemore wrote:
> "Allan C Cybulskie" <allan_c_...@yahoo.ca> wrote in message
> news:1166359849....@t46g2000cwa.googlegroups.com...
> >
> > Glen M. Sizemore wrote:
> >> "Alpha" <OmegaZ...@yahoo.com> wrote in message
> >> > We already told you over and over again (but of course you cannot
> >> > understand), that such an effect ONLY shows that memory is not
> >> > veridical
> >> > in all cases.
> >>
> >> Ohhhhhhhhh! Now I see; your "model" is correct" but the subject matter
> >> made
> >> a mistake!
> >
> > Well, I looked up that "effect" again, and it seems obvious that the
> > effect is NOT showing anything about what memory IS, but how memory is
> > IMPLEMENTED. Memory can still be described as "storage and retrieval"
> > without ignoring the R-M effect.
> >
> > The reason is that most of our experiments have shown that the way the
> > brain implements memory is not by storing a precise representation of
> > the thing to be remembered. Instead, it seems clear that what the
> > brain does -- the precise details of this are not necessarily known --
> > is "store" in some way (yes, yes, by changing) a set of details that
> > will allow it to REBUILD the thing to be remembered.
>
> All this is simply gyrations designed to save what is prima facie false:
> "memory" is not a matter of storing representations of what happened.

Learn to read. I clearly argued that the implementation of memory is
not by storing a precise representation of the thing to be remembered.

However, what I will argue is that in general the BEHAVIOUR of memory
is attempting to store and retrieve various events, images, and other
such "rememberable" things.

> Future
> behavior is, in part, a function of what happened in the experiment, a
> person's personal history, the current setting, and "motivational
> variables."

And what is the point of this? NO ONE argues that future behaviour is
not, in part, a function of those things. So you are vigorously attacking
a strawman. As Pesky tends to say, we are well beyond that. This
"fact" is not particularly interesting; it is HOW all of those things
interact to produce certain behaviours that is of interest.

> This is all consistent with the notion that we are changed by
> exposure to certain events, and it is inconsistent with the notion that
> memory is the storage of (representations of) the past.

Again, you miss the distinction between the implementation and the
classification of the behaviour. Humans implement memory behaviour in
one of the worst ways possible, as evidenced by the inaccuracies in it.
If they DID store it as a representation, they'd have much better
memories, as evidenced by the fact that a computer can keep perfect
memories of any event and humans cannot.

And saying that we are changed by certain events is in fact totally
consistent with what I said, and I in fact out and out said that. So,
again, learn to read.

And while you're at it, address the comment that more detailed and
precise memorization should mitigate the R-M effect. One way of
describing that would be that the indices that we use to reproduce the
memorized event become detailed enough to reproduce the memorized
event with more precision. What is YOUR explanation? The brain
changes some more?
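
Here is a toy sketch in Python of what I mean by keys/indices and rebuilding
(everything in it -- the word list, the "gist" key, the rebuild rule -- is
invented purely for illustration, not a claim about how the brain actually
does it):

ASSOCIATES = {"sleep": ["sleep", "bed", "rest", "pillow", "dream", "nap"]}

def memorize(words, detail):
    # store only a gist key plus the first `detail` items verbatim
    return {"gist": "sleep", "verbatim": words[:detail]}   # pretend the gist was extracted

def recall(trace, n):
    # rebuild an n-item list: verbatim items first, the rest filled in from the gist
    rebuilt = list(trace["verbatim"])
    for word in ASSOCIATES[trace["gist"]]:
        if len(rebuilt) >= n:
            break
        if word not in rebuilt:
            rebuilt.append(word)
    return rebuilt

studied = ["bed", "rest", "dream", "tired"]            # note: "sleep" is never shown
print(recall(memorize(studied, detail=1), 4))          # coarse key -> intrusions, incl. "sleep"
print(recall(memorize(studied, detail=4), 4))          # detailed keys -> veridical recall

With only a coarse gist, the rebuild pulls in related words that were never
presented (the R-M style intrusion); with more detailed keys the rebuild has
less to fill in and the recall is veridical.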

Glen M. Sizemore

unread,
Dec 17, 2006, 8:50:07 AM12/17/06
to

"Allan C Cybulskie" <allan_c_...@yahoo.ca> wrote in message
news:1166362909.5...@l12g2000cwl.googlegroups.com...

Learn to read. I said nothing about the "precision" of the representation.
But, of course, you will latch on to the notion of imprecision because then
you can simply assign to the representation any characteristics you need to
explain what is actually observed.


> However, what I will argue is that in general the BEHAVIOUR of memory
> is attempting to store and retrieve various events, images, and other
> such "rememberable" things.
>
>> Future
>> behavior is, in part, a function of what happened in the experiment, a
>> person's personal history, the current setting, and "motivational
>> variables."
>
> And what is the point of this? NO ONE argues that future behaviour is
> not, in part, a function of those things. So you vigorously attacking
> a strawman. As Pesky tends to say, we are well beyond that. This
> "fact" is not particularly interesting; it is HOW all of those things
> interact to produce certain behaviours that is of interest.


The point of this is that my description is what is observed, and it doesn't
require the notion of storage. That is an assumption which is, by
definition, not an hypothesis, theory, model, etc. etc. etc. etc. etc. One
could guess that it is an assumption, though, since it is clearly not
testable - if it were, the R&D effect would have caused its rejection.

Allan C Cybulskie

unread,
Dec 17, 2006, 9:38:13 AM12/17/06
to

Glen M. Sizemore wrote:
> "Allan C Cybulskie" <allan_c_...@yahoo.ca> wrote in message
> news:1166362909.5...@l12g2000cwl.googlegroups.com...

> >> All this is simply gyrations designed to save what is prima facie false:
> >> "memory" is not a matter of storing representations of what happened.
> >
> > Learn to read. I clearly argued that the implementation of memory is
> > not by storing a precise representation of the thing to be remembered.
>
> Learn to read. I said nothing about the "precision" of the representation.
> But, of course, you will latch on to the notion of imprecision because then
> you can simply assign to the representation any characteristics you need to
> explain what is actually observed.

Learn to read. I never claimed that you did. I use the term "precise
representation" to distinguish the implementation of memory in humans
from that of digital cameras or computers that DO store a precise,
bit-by-bit representation of the picture stored in memory. I append
"precise" to avoid issues over whether or not such keys or indices
would be representations themselves, since there is a meaningful way to
view those things as representations. But if the term bothers you so
much, we can drop it, and then I'll simply say that the implementation
of memory in humans is not to store representations of the memory. But
if you then turn around and attack the "keys/indices" theory on the
basis that those are representations, you will have simply proven that
you have an ideological bias and a complete lack of any ability to be
reasonable about this subject.

>
>
>
>
> > However, what I will argue is that in general the BEHAVIOUR of memory
> > is attempting to store and retrieve various events, images, and other
> > such "rememberable" things.
> >
> >> Future
> >> behavior is, in part, a function of what happened in the experiment, a
> >> person's personal history, the current setting, and "motivational
> >> variables."
> >
> > And what is the point of this? NO ONE argues that future behaviour is
> > not, in part, a function of those things. So you vigorously attacking
> > a strawman. As Pesky tends to say, we are well beyond that. This
> > "fact" is not particularly interesting; it is HOW all of those things
> > interact to produce certain behaviours that is of interest.
>
>
> The point of this is that my description is what is observed, and it doesn't
> require the notion of storage.

Well, here is my claim, and it is based on observation and not a
theoretical requirement or any assumption: what makes memory behaviour
memory behaviour is that it has storage/retrieval functionality. You
are required, then, to show how your description allows me to
distinguish between memory behaviour and non-memory behaviour WITHOUT
relying in ANY WAY on the fact that the functionality of memory is to
store and retrieve events.

This is one of the really frustrating things about your views, Glen --
you constantly mix levels of description. I agreed from the start that
describing how memory is IMPLEMENTED in humans is NOT by storing a
representation. But we know that memory CAN be implemented that way
because that is how it is done in computers and digital cameras. But
from this, you seem to try to argue that even a FUNCTIONAL description
of memory should not include the terms "storage and retrieval". And
that is ludicrous: the functionality that we are trying to achieve
really IS storage and retrieval, even if we don't need to implement it
using representations.

> That is an assumption which is, by
> definition, not an hypothesis, theory, model, etc. etc. etc. etc. etc. One
> could guess that it is an assumption, though, since it is clearly not
> testable - if it were, the R&D effect would hve caused its rejection.

And if you had read what I had said, you would have understood that the
R&D effect addresses the implementation, not the functionality. And the
"storage and retrieval" hypothesis is about the functionality, not how
that is implemented.
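
To make the two levels concrete, here is a minimal Python sketch (the classes
and names are hypothetical illustrations, not a model of the brain or of any
real system): the same functional contract -- store, then recall -- is
implemented once verbatim, computer-style, and once reconstructively.

class VerbatimMemory:
    # implements store/recall by keeping an exact copy (computer-style)
    def __init__(self):
        self._store = {}
    def store(self, key, item):
        self._store[key] = list(item)
    def recall(self, key):
        return list(self._store[key])

class ReconstructiveMemory:
    # same store/recall contract, but keeps only a summary and rebuilds at recall time
    def __init__(self):
        self._summaries = {}
    def store(self, key, item):
        self._summaries[key] = {"length": len(item), "sample": item[:2]}
    def recall(self, key):
        summary = self._summaries[key]
        rebuilt = list(summary["sample"])
        while len(rebuilt) < summary["length"]:
            rebuilt.append("<filled in>")        # reconstruction, not retrieval of a copy
        return rebuilt

for memory in (VerbatimMemory(), ReconstructiveMemory()):
    memory.store("movie", ["opening", "chase", "twist", "ending"])
    print(type(memory).__name__, memory.recall("movie"))

Both satisfy the functional description "storage and retrieval"; only the
second produces distortions at recall, and that is a fact about its
implementation, not a refutation of the functional description.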

Glen M. Sizemore

unread,
Dec 17, 2006, 9:57:07 AM12/17/06
to

"Allan C Cybulskie" <allan_c_...@yahoo.ca> wrote in message
news:1166366293.1...@j72g2000cwa.googlegroups.com...

>
> Glen M. Sizemore wrote:
>> "Allan C Cybulskie" <allan_c_...@yahoo.ca> wrote in message
>> news:1166362909.5...@l12g2000cwl.googlegroups.com...
>> >> All this is simply gyrations designed to save what is prima facie
>> >> false:
>> >> "memory" is not a matter of storing representations of what happened.
>> >
>> > Learn to read. I clearly argued that the implementation of memory is
>> > not by storing a precise representation of the thing to be remembered.
>>
>> Learn to read. I said nothing about the "precision" of the
>> representation.
>> But, of course, you will latch on to the notion of imprecision because
>> then
>> you can simply assign to the representation any characteristics you need
>> to
>> explain what is actually observed.
>
> Learn to read. I never claimed that you did. I use the term "precise
> representation" to distinguish the implementation of memory in humans
> from that of digital cameras or computers who DO store a precise,
> bit-by-bit representation of the picture stored in memory.

Why do you make such a distinction when it is 1.) either orthogonal to the
issue or, 2.) assumes what you are trying to prove?

Glen M. Sizemore

unread,
Dec 17, 2006, 1:33:58 PM12/17/06
to

"J.A. Legris" <jale...@sympatico.ca> wrote in message
news:1166286421.5...@j72g2000cwa.googlegroups.com...

Keep telling yourself that, Joe.

Allan C Cybulskie

unread,
Dec 17, 2006, 1:38:06 PM12/17/06
to

Glen M. Sizemore wrote:
> "Allan C Cybulskie" <allan_c_...@yahoo.ca> wrote in message
> news:1166366293.1...@j72g2000cwa.googlegroups.com...
> >
> > Glen M. Sizemore wrote:
> >> "Allan C Cybulskie" <allan_c_...@yahoo.ca> wrote in message
> >> news:1166362909.5...@l12g2000cwl.googlegroups.com...
> >> >> All this is simply gyrations designed to save what is prima facie
> >> >> false:
> >> >> "memory" is not a matter of storing representations of what happened.
> >> >
> >> > Learn to read. I clearly argued that the implementation of memory is
> >> > not by storing a precise representation of the thing to be remembered.
> >>
> >> Learn to read. I said nothing about the "precision" of the
> >> representation.
> >> But, of course, you will latch on to the notion of imprecision because
> >> then
> >> you can simply assign to the representation any characteristics you need
> >> to
> >> explain what is actually observed.
> >
> > Learn to read. I never claimed that you did. I use the term "precise
> > representation" to distinguish the implementation of memory in humans
> > from that of digital cameras or computers who DO store a precise,
> > bit-by-bit representation of the picture stored in memory.
>
> Why do you make such a distinction when it is 1.) either orthogonal to the
> issue or, 2.) assumes what you are trying to prove?

Huh?

Look, computers and digital cameras have "memory". So contrasting
their implementations to what seems to be going on in humans is clearly
not orthogonal to the issue. And since I argue that computers have as
their implementation of memory bit-by-bit representations -- which is
absolutely true, no assumption required -- as opposed to humans who
don't seem to do that -- based on YOUR own position and arguments --
where is the assumption of what I'm trying to prove? Hell, do you even
know WHAT I'm trying to prove?

Deal with the underlying issues: What are the characteristics of
behaviour that we can call "memory functionality" according to you?
For me, storage and retrieval behaviour is one of the key
differentiators. So either make a counter-claim about those
characteristics or show me that there is no such distinction between
those behaviours at all.

JGCASEY

unread,
Dec 17, 2006, 2:49:54 PM12/17/06
to

On Dec 18, 5:38 am, "Allan C Cybulskie" <allan_c_cybuls...@yahoo.ca>
wrote:
> Glen M. Sizemore wrote:
...


> Deal with the underlying issues: What are the characteristics
> of behaviour that we can call "memory functionality" according
> to you? For me, storage and retrieval behaviour is one of the
> key differentiators. So either make a counter-claim about
> those characteristics or show me that there is no such
> distinction between those behaviours at all.

...

This topic has raged on this newsgroup for a long time
and I don't see any resolution to it so I have learnt
just to ignore it.

In AI we are trying to duplicate the functioning we observe
in biological machines and have to invent the inside of our
machines whereas the exact makeup of the insides of biological
machines is still unknown.

All we can do is compare the behavior of our machines with
biological machines and see how they differ. Glen seems to
be saying that biological machines are not using any kind
of representations but doesn't come up with any useful
alternatives that can actually be tested in a program.

--
jc

Alpha

unread,
Dec 17, 2006, 4:21:43 PM12/17/06
to

"Allan C Cybulskie" <allan_c_...@yahoo.ca> wrote in message
news:1166359849....@t46g2000cwa.googlegroups.com...

>
> Glen M. Sizemore wrote:
>> "Alpha" <OmegaZ...@yahoo.com> wrote in message
>> > We already told you over and over again (but of course you cannot
>> > understand), that such an effect ONLY shows that memory is not
>> > veridical
>> > in all cases.
>>
>> Ohhhhhhhhh! Now I see; your "model" is correct" but the subject matter
>> made
>> a mistake!
>
> Well, I looked up that "effect" again, and it seems obvious that the
> effect is NOT showing anything about what memory IS, but how memory is
> IMPLEMENTED. Memory can still be described as "storage and retrieval"
> without ignoring the R-M effect.
>
> The reason is that most of our experiments have shown that the way the
> brain implements memory is not by storing a precise representation of
> the thing to be remembered. Instead, it seems clear that what the
> brain does -- the precise details of this are not necessarily known --
> is "store" in some way (yes, yes, by changing) a set of details that
> will allow it to REBUILD the thing to be remembered.

Yes - I posted a similar thought a while back. The brain re-represents, by
regeneration, the pattern "stored".

> This can be
> clearly seen in recalling images or sense impressions, and the R-M
> effect clearly shows it for remembering lists of words. The brain
> "picks out" that a common thread between the words is how they are
> related. But then when it tries to reproduce the list for "memory", it
> uses that criteria primarily and misses out on the termination
> condition, and so adds words that weren't there.
>
> One of the keys here is that more detailed and precise memorization
> SHOULD result in a limitation of this effect. We know that this is the
> case for images, and I would be surprised if it was not the case for
> the list of words as well (I'm sure SOMEONE will correct me if that
> isn't true, but with references to the later experiments that showed
> that that was not the case). This is because more extended
> memorization builds more precise "keys" into the system which means
> that the reproduction has to fill in less details and so gives a better
> memory. Note that this is both consistent with Curt's nets and with
> connectionism/neural nets so it does not imply an implementation based
> on symbolic processing (and may, in fact, contradict that sort of
> implementation).
>

--

Alpha

unread,
Dec 17, 2006, 4:24:10 PM12/17/06
to

"Glen M. Sizemore" <gmsiz...@yahoo.com> wrote in message
news:45854464$0$31275$ed36...@nr2.newsreader.com...

>
> "Allan C Cybulskie" <allan_c_...@yahoo.ca> wrote in message
> news:1166359849....@t46g2000cwa.googlegroups.com...
>>
>> Glen M. Sizemore wrote:
>>> "Alpha" <OmegaZ...@yahoo.com> wrote in message
>>> > We already told you over and over again (but of course you cannot
>>> > understand), that such an effect ONLY shows that memory is not
>>> > veridical
>>> > in all cases.
>>>
>>> Ohhhhhhhhh! Now I see; your "model" is correct" but the subject matter
>>> made
>>> a mistake!
>>
>> Well, I looked up that "effect" again, and it seems obvious that the
>> effect is NOT showing anything about what memory IS, but how memory is
>> IMPLEMENTED. Memory can still be described as "storage and retrieval"
>> without ignoring the R-M effect.
>>
>> The reason is that most of our experiments have shown that the way the
>> brain implements memory is not by storing a precise representation of
>> the thing to be remembered. Instead, it seems clear that what the
>> brain does -- the precise details of this are not necessarily known --
>> is "store" in some way (yes, yes, by changing) a set of details that
>> will allow it to REBUILD the thing to be remembered.
>
> All this is simply gyrations designed to save what is prima facie false:
> "memory" is not a matter of storing representations of what happened.
> Future behavior is, in part, a function of what happened in the experiment

Which is remembered...

>, a person's personal history,

which is represented and remembered...

> the current setting, and "motivational variables." This is all consistent
> with the notion that we are changed by exposure to certain events,

which *is* the represented pattern....

> and it is inconsistent with the notion that memory is the storage of
> (representations of) the past.

which is an inane conclusion but is to be expected from you.

>
>>This can be
>> clearly seen in recalling images or sense impressions, and the R-M
>> effect clearly shows it for remembering lists of words. The brain
>> "picks out" that a common thread between the words is how they are
>> related. But then when it tries to reproduce the list for "memory", it
>> uses that criteria primarily and misses out on the termination
>> condition, and so adds words that weren't there.
>>
>> One of the keys here is that more detailed and precise memorization
>> SHOULD result in a limitation of this effect. We know that this is the
>> case for images, and I would be surprised if it was not the case for
>> the list of words as well (I'm sure SOMEONE will correct me if that
>> isn't true, but with references to the later experiments that showed
>> that that was not the case). This is because more extended
>> memorization builds more precise "keys" into the system which means
>> that the reproduction has to fill in less details and so gives a better
>> memory. Note that this is both consistent with Curt's nets and with
>> connectionism/neural nets so it does not imply an implementation based
>> on symbolic processing (and may, in fact, contradict that sort of
>> implementation).
>>
>
>

--

Alpha

unread,
Dec 17, 2006, 4:27:44 PM12/17/06
to

"Allan C Cybulskie" <allan_c_...@yahoo.ca> wrote in message
news:1166362909.5...@l12g2000cwl.googlegroups.com...

Yet the representation is distributed, and the distributed parts are
apparently susceptible to interference and fading *in part*. That is why the brain
has filling-in functions to compensate for the not-so-good representation.
When the pattern is replayed (re-presented), the representation can become
attended to by the self process.

--

Alpha

unread,
Dec 17, 2006, 4:36:17 PM12/17/06
to

"Allan C Cybulskie" <allan_c_...@yahoo.ca> wrote in message
news:1166305806.3...@f1g2000cwa.googlegroups.com...

Glen is very good at failing to understand.

> what memorization and remembering in those cases IS.

I think memory processes are processes of storage and retrieval. And the change
that occurs as part of the storage process *is* a representation, perhaps
not veridical, but close enough such that a near-veridical re-presentation
(or retrieval) can be accomplished by the brain's filling-in mechanisms
(feedback management, hierarchies of representation etc.)

Curt Welch

unread,
Dec 18, 2006, 12:07:20 AM12/18/06
to
"Alpha" <OmegaZ...@yahoo.com> wrote:

> I think memory process are process of storage and retreival. And the
> change that occurs as part of the storage process *is* a representation,
> perhaps not veridical, but close enough such that a near-veridical
> re-presentation (or retreival) can be accomplished by brain's filling-in
> mechanisms (feedback management, hierarchies of representation etc.)

When you become aware of things happening in your head, do you believe that
some of these events are "memories" (data retrieved from storage) and other
events are something else? If so, how do you distinguish between memories
and those other things (when they happen in your own head and if you had
some brain scanning technology, when they happened in someone else's head)?
Do you think there are actually different brain processes at works in the
two cases?

How you might answer these questions might help clear up some points about
the message I think Glen is making here that I think people are failing to
understand. I think you guys are getting too hung up on the use of words
like storage and retrieval to really grasp the bigger issue here.

PeskyBee

unread,
Dec 18, 2006, 6:23:35 AM12/18/06
to
"JGCASEY" <jgkj...@yahoo.com.au> escreveu na mensagem
news:1166384994.8...@80g2000cwy.googlegroups.com...

You got that right, JC. We're trying to figure out how to build
things that perform intellectually like humans. But we're using
different stuff to do that: transistors instead of neurons,
silicon instead of meat. So we have to choose a level of
description where such implementation differences don't affect
the result. We must work on a *functional level*. This requires
that we work on a conceptual level *above* the one so vigorously
held by obsolete radical behaviorists.

Glen would be better off trying to say that intelligence cannot be
duplicated by non-biological organisms (as some nutcases like to say).
That's obviously ridiculous, but it is much more defensible
than the claim that we don't need conceptual structures such as
memory, representation, information, etc.

*PB*

>
> --
> jc
>

Allan C Cybulskie

unread,
Dec 18, 2006, 7:39:36 AM12/18/06
to

Curt Welch wrote:
> "Alpha" <OmegaZ...@yahoo.com> wrote:
>
> > I think memory process are process of storage and retreival. And the
> > change that occurs as part of the storage process *is* a representation,
> > perhaps not veridical, but close enough such that a near-veridical
> > re-presentation (or retreival) can be accomplished by brain's filling-in
> > mechanisms (feedback management, hierarchies of representation etc.)
>
> When you become aware of things happening in your head, do you believe that
> some of these events are "memories" (data retrieved from storage) and other
> events are something else?

Absolutely.

> If so, how do you distinguish between memories
> and those other things (when they happen in your own head and if you had
> some brain scanning technology, when they happened in someone else's head)?

Well, I certainly don't use brain scanning technology to make that
determination for my own memories, and I can guarantee you that no one
here can yet determine if something is a memory or not by scanning the
brains of others. Neuroscience is not so advanced.

So, how do I distinguish them? Well, take visual experiences. If the
visual experience I am having is clearly a direct correlation to an
event that occurred in my past, then it is a memory. If it is not,
then it is not.

> Do you think there are actually different brain processes at works in the
> two cases?

Well, in the case of current visual experiences versus remembered
visual experiences, that is absolutely true. The obvious difference is
that the first layer of information from the visual cortex is either
absent or ineffective: I don't see things while remembering things.

The problem with this question is that whether or not it is even
applicable depends on the implementation of the brain. If the brain is
implemented as a straight connectionist system, then it wouldn't be
possible to determine what was different about memory events versus
perception events because that difference would be encoded into
specific neural "weights" at various parts of the brain. So we
wouldn't be able to tell the difference not because there would be no
difference, but because the differences would be too similar to the
differences between other cases -- like between two different visual
experiences -- to make any such determination. If the brain is
modular, then we should see a module come into play that doesn't come
into play in the other cases, and thus we should have different "brain
processes".

Note that simply reducing it to the level of "well, it's all neural
firings and so it's all the same process" -- an argument that you are
famous for making -- doesn't work. ANY process can be reduced to a
level where everything is the same process, but this reductionism run
amok will not gain any scientific understanding about any topic.

>
> How you might answer these questions might help clear up some points about
> the message I think Glen is making here that I think people are failing to
> understand. I think you guys are getting too hung up on the use of words
> like storage and retrieval to really grasp the bigger issue here.

Funny ... that's actually the mistake I think Glen is making: that he
is worrying too much about certain terms to realize what's true at the
various layers of explanation.

Allan C Cybulskie

unread,
Dec 18, 2006, 7:48:24 AM12/18/06
to

JGCASEY wrote:
> On Dec 18, 5:38 am, "Allan C Cybulskie" <allan_c_cybuls...@yahoo.ca>
> wrote:
> > Glen M. Sizemore wrote:
> ...
> > Deal with the underlying issues: What are the characteristics
> > of behaviour that we can call "memory functionality" according
> > to you? For me, storage and retrieval behaviour is one of the
> > key differentiators. So either make a counter-claim about
> > those characteristics or show me that there is no such
> > distinction between those behaviours at all.
> ...
>
> This topic has raged on this newsgroup for a long time
> and I don't see any resolution to it so I have learnt
> just to ignore it.

I used to ignore it, but lately I've been seeing that crap more and
more and am seeing people getting misled by it, and so I feel required
to take it on. And it's an important issue. If Glen is right, then
it is in fact a critical idea that we need to take into account when
talking about memory and thus about "mind". But he isn't right.

>
> In AI we are trying to duplicate the functioning we observe
> in biological machines and have to invent the inside of our
> machines whereas the exact makeup of the insides of biological
> machines is still unknown.
>
> All we can do is compare the behavior of our machines with
> biological machines and see how they differ. Glen seems to
> be saying that biological machines are not using any kind
> of representations but doesn't come up with any useful
> alternatives that can actually be tested in a program.

Well, as I said in my reply to him it seems clear that humans aren't
using "precise representations" (leaving aside the question of whether
the keys or indices I talk about are themselves representations) and it
seems also clear that storing precise representations allows for better
memory functionality than what we do. Computers are better at
remembering things than we are. The only issue they may have at the
moment is relevance searching ...

PeskyBee

unread,
Dec 18, 2006, 9:02:11 AM12/18/06
to
"Allan C Cybulskie" <allan_c_...@yahoo.ca> escreveu na mensagem
news:1166446104.2...@48g2000cwx.googlegroups.com...

>
> JGCASEY wrote:
>> On Dec 18, 5:38 am, "Allan C Cybulskie" <allan_c_cybuls...@yahoo.ca>
>> wrote:
>> > Glen M. Sizemore wrote:
>> ...
>> > Deal with the underlying issues: What are the characteristics
>> > of behaviour that we can call "memory functionality" according
>> > to you? For me, storage and retrieval behaviour is one of the
>> > key differentiators. So either make a counter-claim about
>> > those characteristics or show me that there is no such
>> > distinction between those behaviours at all.
>> ...
>>
>> This topic has raged on this newsgroup for a long time
>> and I don't see any resolution to it so I have learnt
>> just to ignore it.
>
> I used to ignore it, but lately I've been seeing that crap more and
> more and am seeing people getting mislead by it, and so I feel required
> to take it wrong. And it's an important issue. If Glen is right, then
> it is in fact a critical idea that we need to take into account when
> talking about memory and thus about "mind". But he isn't right.

You are absolutely right about that. A lie told many times starts
to be seen as having "some truth" in it. We must occasionally
argue against those untenable things.

>>
>> In AI we are trying to duplicate the functioning we observe
>> in biological machines and have to invent the inside of our
>> machines whereas the exact makeup of the insides of biological
>> machines is still unknown.
>>
>> All we can do is compare the behavior of our machines with
>> biological machines and see how they differ. Glen seems to
>> be saying that biological machines are not using any kind
>> of representations but doesn't come up with any useful
>> alternatives that can actually be tested in a program.
>
> Well, as I said in my reply to him it seems clear that humans aren't
> using "precise representations" (leaving aside the question of whether
> the keys or indices I talk about are themselves representations) and it
> seems also clear that storing precise representations allows for better
> memory functionality than what we do. Computers are better at
> remembering things than we are. The only issue they may have at the
> moment is relevance searching ...

Agreed. Precise memory storage or precise representations are undesirable.
Human thought cannot function correctly on the basis of precise
exemplars, but only on the basis of generalizations over those
exemplars (in other words, we think using categories, not exemplars).
There's a cute little story from argentine writer Jorge Luis Borges
that deals with this subject. It's called "Funes, el Memorioso":

http://en.wikipedia.org/wiki/Funes,_the_Memorious

Borges knew about this stuff, even being a novel writer...

*PB*


J.A. Legris

unread,
Dec 18, 2006, 12:08:30 PM12/18/06
to

Ahem. This nutcase is becoming more convinced of it all the time.

Call it radical structuralism, the opposite of functionalism - the
thesis that structural differences always entail functional
differences.

Take the example of the computer program with unreachable code - it is
structurally different from the same program with the unreachable part
deleted, yet apparently functionally identical. But suppose you "ran"
the programs on another computer with a different instruction set - it
might be very difficult to predict the results, but there would almost
certainly be differences.
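
A toy version of that example, in Python (the functions and test inputs are
made up just to make the contrast concrete; run it from a file so inspect can
read the source):

import inspect

def f(x):
    return x + 1

def g(x):
    return x + 1
    print("never reached")        # unreachable: a purely structural difference

same_behaviour = all(f(n) == g(n) for n in range(100))
same_source = inspect.getsource(f) == inspect.getsource(g)
print(same_behaviour, same_source)   # True False

Of course the "same behaviour" verdict holds only over the inputs we tested
and at the level of description we stipulated - which is exactly the point.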

Suppose we stipulate that the programs must be run on the machines they
were designed for - then we get functional equivalence, right? Not
really. There are all sorts of other physical tests we could do to
determine which is which. We could go further and stipulate that the
only kind of test we can do is to look at the abstract states encoded
by inputs and outputs, ignoring everything else. Yes, that's it. In
other words, functionalism works only on paper. It is a property of
logic and mathematics, not matter.

So the question is, AI community, how close to natural is good enough?
As I've said before, I think we've already got AI, given the materials
we are working with. Toasters and roombas are just as good
approximations to natural intelligence (and just as bad) as cars and
airplanes are to natural locomotion. Why aren't they closer? Ultimately
it's because they're made of the wrong stuff.

--
Joe Legris

PeskyBee

unread,
Dec 18, 2006, 12:42:35 PM12/18/06
to
"J.A. Legris" <jale...@sympatico.ca> escreveu na mensagem
news:1166461710.2...@j72g2000cwa.googlegroups.com...

I understand what you say, but let's agree that the functionalism
you're portraying here is too strict to be of any value (other than
philosophical armchair conversation). In that strict sense, two Pentium
IV CPUs, same kind of motherboard, same amount of memory, same kind of
HD would not be functionally equivalent (they will differ in some
infinitesimally small and cumulative ways, given that their physical
properties are *not* exactly the same; think of the way a hard disk
responds differently in time to any request for information).

In defense of the functionalism I have in mind, consider Windows XP
running on a Dual Core compared with the same thing running on an old
Athlon: the two are functionally equivalent to a very good approximation,
with noticeable differences only in "reaction time". We design software
so that it maintains this kind of equivalence.

> So the question is, AI community, how close to natural is good enough?
> As I've said before, I think we've already got AI, given the materials
> we are working with. Toasters and roombas are just as good
> approximations to natural intelligence (and just as bad) as cars and
> airplanes are to natural locomotion. Why aren't they closer? Ultimately
> it's because they're made of the wrong stuff.

I'd say that nobody would complain (saying "ha, this is not AI") given
a machine capable of coming up with novelties that "make sense". In
other words, AI will demonstrate its existence once these machines
consistently surprise us with intellectually creative acts. Creativity
is, in my way of seeing things, the essential property intelligent
organisms must have.

*PB*


>
> --
> Joe Legris
>


Alpha

unread,
Dec 18, 2006, 1:26:24 PM12/18/06
to

"PeskyBee" <pesk...@gmail.com> wrote in message
news:12od7r0...@corp.supernews.com...

Not only categories, but the representations, I believe, are like what
Hawkins elaborates on in his On Intelligence book: invariant representations
at all levels in the various hierarchies. The members of the categories are
invariant exemplars.


> There's a cute little story from argentine writer Jorge Luis Borges
> that deals with this subject. It's called "Funes, el Memorioso":
>
> http://en.wikipedia.org/wiki/Funes,_the_Memorious
>
> Borges knew about this stuff, even being a novel writer...
>
> *PB*
>
>

--

feedbackdroid

unread,
Dec 18, 2006, 1:40:56 PM12/18/06
to

PeskyBee wrote:

>
> You are absolutely right about that. A lie told many times starts
> to be seen as having "some truth" in it. We must occasionally
> argue against those untenable things.
>
>


Actually, naive individuals will start believing what they hear after
only 3 repetitions. This is what behaviorism counts on. If I say it 3
times, they must believe.


>
> >
> > Well, as I said in my reply to him it seems clear that humans aren't
> > using "precise representations" (leaving aside the question of whether
> > the keys or indices I talk about are themselves representations) and it
> > seems also clear that storing precise representations allows for better
> > memory functionality than what we do.
>
>


When you argue with a behaviorist, you have to realize you are arguing
about "his" particular and peculiar, and usually straw-man, definition
of said topic. When it comes to representation, they take this to mean
a literal picture, and this therefore implies a homunculus to view it.
You get what you define, which is why such arguments just go round in
endless circles.

Bennett and Hacker systematized the approach in "Philosophical
Foundations of Neuroscience", which should properly be titled "Two
Behaviorists Say What's Wrongheaded About Neuroscientist Terminology".

J.A. Legris

unread,
Dec 18, 2006, 1:58:11 PM12/18/06
to

When you compare computers to computers you're doing what I described
above - comparing the abstract functionality. But, in general, we don't
have abstract descriptions of natural phenomena, unless they are
simplified, in which cases they no longer behave like the natural
systems they describe. Organisms are not computers, nor are they
equivalent to computer programs. The best we can do is to simulate
aspects of organisms, but we cannot expect these to be any more similar
to real organisms than, say, physicists expect numerical simulations of
nuclear decay to cause dangerous radiation.

So why doesn't a simulation of a proton act more like a real proton?
Modern physics has delineated a few quantum numbers that completely
specify the state of a subatomic particle. But of course, the models
presume unstated (and mostly poorly understood) properties of real
matter, the very properties that are the subject of physics research.
And even if we had a complete mathematical understanding of physics,
reproducing it numerically would be computationally intractable. The
only way to get at the properties of nature is to look at the real
thing. In that sense, the assumptions of AI are a denial of the
scientific method and a descent into guesswork and hopeful tinkering.

Well, natural selection got there by tinkering, and you know how long
that took.

--
Joe Legris

PeskyBee

unread,
Dec 18, 2006, 2:07:52 PM12/18/06
to

"feedbackdroid" <feedba...@yahoo.com> escreveu na mensagem
news:1166467256....@n67g2000cwd.googlegroups.com...

>
> PeskyBee wrote:
>>
>> You are absolutely right about that. A lie told many times starts
>> to be seen as having "some truth" in it. We must occasionally
>> argue against those untenable things.
>
> Actually, naive individuals will start believing what they hear after
> only 3 repetitions. This is what behaviorism counts on. If I say it 3
> times, they must believe.

On the other hand, those who have dealt with children are aware that
even repeating many times, they may refuse to believe certain things
we say to them...

>>
>> >
>> > Well, as I said in my reply to him it seems clear that humans aren't
>> > using "precise representations" (leaving aside the question of whether
>> > the keys or indices I talk about are themselves representations) and it
>> > seems also clear that storing precise representations allows for better
>> > memory functionality than what we do.
>
>
> When you argue with a behaviorist, you have to realize you are arguing
> about "his" particular and peculiar, and usually straw-man, definition
> of said topic. When it comes to representation, they take this to mean
> literal picture and, this therefore implies a homunculus to view it.
> You get what you define, which is why such arguments just go round in
> endless circles.

They also forget that a homunculus that "sees" something less complicated
than the incoming stimuli may eventually give rise to another simplification
capable of being seen by another homunculus of much less capacity,
until the thing becomes a simple "yes/no/maybe". Cognition in general
is about reducing the complexity in order to deal with regularities
that can be easily processed.
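
A tiny Python sketch of that reduction hierarchy (the stages, thresholds and
"pixels" are all invented for illustration):

def retina(pixels):
    # stage 1: raw "pixels" -> a single edge count
    return sum(1 for a, b in zip(pixels, pixels[1:]) if abs(a - b) > 10)

def feature_layer(edge_count):
    # stage 2: edge count -> a coarse label
    return "busy" if edge_count > 3 else "plain"

def decision_layer(label):
    # stage 3: coarse label -> yes/no/maybe
    return {"busy": "yes", "plain": "no"}.get(label, "maybe")

stimulus = [0, 50, 50, 120, 10, 10, 200, 5]
print(decision_layer(feature_layer(retina(stimulus))))   # each stage sees less than the last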

> Bennett and Hacker systematized the approach in "Philosophical
> Foundations of Neuroscience", which should properly be titled "Two
> Behaviorists Say What's Wrongheaded About Neuroscientist Terminology".

What could be worse than a critique of someone else's field with
no offer of how to achieve the same results? From the standpoint
of B&H, neuroscience would have made much less progress than it has.

*PB*

PeskyBee

unread,
Dec 18, 2006, 2:14:17 PM12/18/06
to
"Alpha" <OmegaZ...@yahoo.com> escreveu na mensagem
news:4586d09b$0$15518$8826...@free.teranews.com...

Yes, that's about it. But let's further notice that what are
invariant are some of the features in these exemplars. A category
thus formed will "subsume" the presence of many exemplars based
on the features they have in common, including features that are
abstract, and *not* perceptually salient (and also including features
that are, themselves, lower level categories). And the beauty of it
is that all this is variable according to the context. Here's a simple
exercise. I'll list some words and one might easily discover what
I'm talking about:

Category 1:
apple
banana
watermelon
cucumber
orange

Category 2:
apple
watermelon
basketball
orange
the Moon

Even though they share many exemplars, these categories pick out
different sets of invariant features. The dynamic nature of the
category-forming process is very important during conversation. Lawrence
Barsalou demonstrates the fluid nature of categories by describing one of
them: "things you would carry with you if your house was set on fire".
Ever since I heard that, I can't stop thinking about the stuff I would carry
in a hurry. One thing's absolutely certain: I would leave in the flames
my two books about behaviorism...
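
If you want it spelled out, here is a small Python sketch; the feature table
is invented for illustration, but it shows how the same exemplars can fall
under different categories depending on which shared features the context
picks out:

FEATURES = {
    "apple":      {"edible", "fruit", "round"},
    "banana":     {"edible", "fruit"},
    "watermelon": {"edible", "fruit", "round"},
    "cucumber":   {"edible"},
    "orange":     {"edible", "fruit", "round"},
    "basketball": {"round"},
    "the Moon":   {"round"},
}

def invariant_features(exemplars):
    # the features shared by every exemplar are what the category is "about"
    return set.intersection(*(FEATURES[e] for e in exemplars))

print(invariant_features(["apple", "banana", "watermelon", "cucumber", "orange"]))     # {'edible'}
print(invariant_features(["apple", "watermelon", "basketball", "orange", "the Moon"])) # {'round'}

Category 1 comes out as the edible things and category 2 as the round things,
even though apple, watermelon and orange sit in both.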

*PB*

PeskyBee

unread,
Dec 18, 2006, 2:23:15 PM12/18/06
to
"J.A. Legris" <jale...@sympatico.ca> escreveu na mensagem
news:1166468291.3...@l12g2000cwl.googlegroups.com...

Joe, I'm not sure I follow you. I agree that processes in nature are
complex and unpredictable in the long term. But what a mystery it is
that we can concoct some laws and models that reduce enormously this
unpredictability on the short term. It is this fact that allows us
to build technological artifacts such as cell phones, computers,
bridges, airplanes and buildings. So I'm extending this kind of
rationale to the possibility of building AIs. Sure, they may simply
be hogwash in the long term. But who wants something that works
perfectly for the next century or so? I want an AI for the time
frame of 10 years. Because in ten years, this AI will help me
produce an even better system, and thus this may follow the path
of progress that we, Homo sapiens, have followed since Gutenberg printed
the first book.

*PB*

>
> --
> Joe Legris
>


Glen M. Sizemore

unread,
Dec 18, 2006, 2:37:45 PM12/18/06
to

"Allan C Cybulskie" <allan_c_...@yahoo.ca> wrote in message
news:1166380685.9...@l12g2000cwl.googlegroups.com...

Yes, I know precisely what you, and the rest of the thugs here, are arguing.
What makes it orthogonal is that the issue is not whether there is such a
thing as storage and retrieval (think Grandma's Depression-inspired pantry),
it is whether there is, or could be, anything that warrants applying those
terms to what we call "memory" in animals. We do not witness anything that
requires some notion of "storing a representation" when we label behavioral
events in humans and animals as "memory." We simply see that exposure to
some set of environmental events makes it so the person or animal behaves in
some contexts in ways that it would not otherwise have behaved. At least in
the laboratory that is the case. In the ordinary world we are often
inferring that some past event is responsible; if Joe shows up with the beer
after we told him two days ago to bring it, we are likely to say: "Joe
remembered the beer," but in principle this is pretty much the same thing as
some laboratory demonstrations. Thus, the issue of computer "storage" is
orthogonal, and you have simply assumed that something is stored in human
and animal memory.

The things that must be witnessed to label things in the laboratory as the
different "kinds of memory" are clear; try to publish a paper calling
something "episodic memory" etc. if the operational definition has not been
met. We do not witness anything that necessitates the notion of "storage"
but we are, presumably, required to acknowledge that some aspect of the
animal has been changed, and physiological research suggests that it is the
brain.

My view is that memory involves either the simple establishment of stimulus
control (in the simplest case where memory simply equals conditioning or
habituation/sensitization) or it involves the establishment of complicated
behavioral repertoires whereby we manipulate our own behavior such that it
is likely that we will behave in a particular way in certain circumstances
that may arise. I believe that this behavior is often excruciatingly
complicated and mysterious (as in episodic memory and STM), but it (the
"non-storage" view) is, I believe, in keeping with parsimony in two ways:
1.) we do not invoke a term that we haven't been forced to invoke (we are
forced by the data to acknowledge "change in the animal" but we are not
forced to say that anything is stored) and 2.) the behavioral phenomena can
be explained by conditioning processes and, presumably, by the physiology
that mediates conditioning processes. The language of "memory" can be
replaced with the language of conditioning, and the latter language is not
full of unwarranted metaphor.


>
> Deal with the underlying issues: What are the characteristics of
> behaviour that we can call "memory functionality" according to you?

The term "memory" can, and is, invoked anytime some behaviorally relevant
environmental event changes future behavior. That is, any time we can say
"learning" we can invoke "memory." About the only thing that doesn't invoke
the term "memory" is the unconditioned elicited response.

> For me, storage and retrieval behaviour is one of the key
> differentiators. So either make a counter-claim about those
> characteristics or show me that there is no such distinction between
> those behaviours at all.

But you don't witness "storage," you simply witness that there is some
antecedent circumstance that changes behavior. This is consistent with the
notion that the animal is changed by the relevant event, but "stored the
event" goes beyond the facts.

Glen M. Sizemore

unread,
Dec 18, 2006, 2:42:40 PM12/18/06
to

"PeskyBee" <pesk...@gmail.com> wrote in message
news:12od7r0...@corp.supernews.com...
> "Allan C Cybulskie" <allan_c_...@yahoo.ca> escreveu na mensagem
> news:1166446104.2...@48g2000cwx.googlegroups.com...
>>
>> JGCASEY wrote:
>>> On Dec 18, 5:38 am, "Allan C Cybulskie" <allan_c_cybuls...@yahoo.ca>
>>> wrote:
>>> > Glen M. Sizemore wrote:
>>> ...
>>> > Deal with the underlying issues: What are the characteristics
>>> > of behaviour that we can call "memory functionality" according
>>> > to you? For me, storage and retrieval behaviour is one of the
>>> > key differentiators. So either make a counter-claim about
>>> > those characteristics or show me that there is no such
>>> > distinction between those behaviours at all.
>>> ...
>>>
>>> This topic has raged on this newsgroup for a long time
>>> and I don't see any resolution to it so I have learnt
>>> just to ignore it.
>>
>> I used to ignore it, but lately I've been seeing that crap more and
>> more and am seeing people getting mislead by it, and so I feel required
>> to take it wrong. And it's an important issue. If Glen is right, then
>> it is in fact a critical idea that we need to take into account when
>> talking about memory and thus about "mind". But he isn't right.
>
> You are absolutely right about that. A lie told many times starts
> to be seen as having "some truth" in it.

Like the "storage" and "retrieval" metphors? No, you will not say so, but
all that ois witnessed is the brain is changed, there is NO evidence that
anything is stored, and the R&M experiment shows that despite what morons
like you and Alpha say. Let's see, if the animal correctly responds to what
was presented before, it is evidence of "storage." Oh - and if the animal
responds AS IF some stimulus had been presented that really wasn't, that's
evidence of storage too. Pretty much looks like any outcome is consistent
with "storage;" how convenient.

Glen M. Sizemore

unread,
Dec 18, 2006, 2:50:24 PM12/18/06
to

"feedbackdroid" <feedba...@yahoo.com> wrote in message
news:1166467256....@n67g2000cwd.googlegroups.com...

>
> PeskyBee wrote:
>
>>
>> You are absolutely right about that. A lie told many times starts
>> to be seen as having "some truth" in it. We must occasionally
>> argue against those untenable things.
>>
>>
>
>
> Actually, naive individuals will start believing what they hear after
> only 3 repetitions. This is what behaviorism counts on. If I say it 3
> times, they must believe.
>
>
>>
>> >
>> > Well, as I said in my reply to him it seems clear that humans aren't
>> > using "precise representations" (leaving aside the question of whether
>> > the keys or indices I talk about are themselves representations) and it
>> > seems also clear that storing precise representations allows for better
>> > memory functionality than what we do.
>>
>>
>
>
> When you argue with a behaviorist, you have to realize you are arguing
> about "his" particular and peculiar, and usually straw-man, definition
> of said topic. When it comes to representation, they take this to mean
> literal picture and, this therefore implies a homunculus to view it.


This is simply a lie. It doesn't matter if the alleged representation is
"iconic" or "symbolic." The homunculism/mereological fallacy is raised
either way.

> You get what you define, which is why such arguments just go round in
> endless circles.
>
> Bennett and Hacker systematized the approach in "Philosophical
> Foundations of Neuroscience", which should properly be titled "Two
> Behaviorists Say What's Wrongheaded About Neuroscientist Terminology".

Neither Bennett nor Hacker would call himself a behaviorist. One is a
neuroscientist, and one is a sort of Wittgensteinian philosopher (you might
argue that Wittgenstein was a behaviorist of sorts, but my guess is that
neither one has read a page of Skinner).

>


Glen M. Sizemore

unread,
Dec 18, 2006, 3:14:10 PM12/18/06
to

"PeskyBee" <pesk...@gmail.com> wrote in message
news:12odpo5...@corp.supernews.com...

Perhaps you should inquire as to who MR Bennett is.

JGCASEY

unread,
Dec 18, 2006, 3:23:28 PM12/18/06
to

On Dec 19, 5:58 am, "J.A. Legris" <jaleg...@sympatico.ca> wrote:

[...]

> Organisms are not computers, nor are they equivalent to
> computer programs. The best we can do is to simulate
> aspects of organisms, but we cannot expect these to be
> any more similar to real organisms than, say, physicists
> expect numerical simulations of nuclear decay to cause
> dangerous radiation.

AI is about building machines, and machines are real things,
not simulations. A point I made to Glen Sizemore.

subject:
is conditioning sufficient?

JC: Also I don't see it as a mere simulation. Unlike the
simulation of, say, a tornado, an AI program can actually
physically interact with the real world.

GS: I don't know where I stand here. I'll have to think
about it some more. ...


Jun 26 2004 by Glen M. Sizemore - 244 messages - 18 authors


--
JC

PeskyBee

unread,
Dec 18, 2006, 3:33:36 PM12/18/06
to
"Glen M. Sizemore" <gmsiz...@yahoo.com> escreveu na mensagem
news:4586eeac$0$31246$ed36...@nr2.newsreader.com...

Convenient only in the eyes of those who don't know how science
works. Or do you think that the issue you raise cannot be addressed
by simple and straightforward statistical analysis? It's a question
so trivial that it is unworthy of sophomore attention.

*PB*

PeskyBee

unread,
Dec 18, 2006, 3:34:20 PM12/18/06
to
"Glen M. Sizemore" <gmsiz...@yahoo.com> escreveu na mensagem
news:4586f60e$0$31248$ed36...@nr2.newsreader.com...

Perhaps we should ask the ones (still alive) that B&H criticize
what they think of all this: Francis Crick, Gerald Edelman, Blakemore,
Young, Antonio Damasio, Richard Gregory and David Marr. Which side
do you think I will choose?

*PB*

Glen M. Sizemore

unread,
Dec 18, 2006, 3:49:29 PM12/18/06
to

"PeskyBee" <pesk...@gmail.com> wrote in message
news:12oduq8...@corp.supernews.com...

Who cares? The issue is that Bennett is not "outside" the field as you
imply. Accuracy is not really your strong suit, though, is it?

Glen M. Sizemore

unread,
Dec 18, 2006, 3:51:28 PM12/18/06
to

"PeskyBee" <pesk...@gmail.com> wrote in message
news:12oduou...@corp.supernews.com...

And how is the conceptual issue I raised "addressed" by statistical analysis?

Glen M. Sizemore

unread,
Dec 18, 2006, 3:59:41 PM12/18/06
to
<snip>

>
> http://en.wikipedia.org/wiki/Funes,_the_Memorious
>
> Borges knew about this stuff, even being a novel writer...

Borges never wrote a single novel to my knowledge. He was a short story
writer and a literary critic.

>
> *PB*
>
>


PeskyBee

unread,
Dec 18, 2006, 4:01:46 PM12/18/06
to
"Glen M. Sizemore" <gmsiz...@yahoo.com> escreveu na mensagem
news:4586fe56$0$31245$ed36...@nr2.newsreader.com...

A "neuroscientist" that rejects the conceptual structure chosen by
the majority of those who work in the field cannot be said to be
"inside" that field, don't you think?

*PB*

PeskyBee

unread,
Dec 18, 2006, 4:13:58 PM12/18/06
to
"Glen M. Sizemore" <gmsiz...@yahoo.com> escreveu na mensagem
news:4586fed0$0$31245$ed36...@nr2.newsreader.com...

Oh gosh.
Somebody chooses the name "bubery" to stand for a behavior
evoked by a particular stimulus. A nutcase comes along and says
that it is possible to see an organism bubbering without having first
been exposed to that particular kind of stimulus. The creator of the
name presents a statistical baseline showing clearly the typical
random occurrence of such "spontaneous bubery". Then, he shows the
results of some experiments where the presence of bubery, given the
presentation of that particular stimulus, cannot be explained by
random fluctuations (probability p < 0.00005). Would that be
enough for you to support the name bubery?
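
For the record, here is the whole "analysis" in a few lines of Python (the
counts and the baseline rate are invented; it is just an exact one-sided
binomial test done by hand, no particular stats package assumed):

from math import comb

baseline_rate = 0.02      # spontaneous "bubery" per observation window (invented)
trials, hits = 200, 31    # windows with the stimulus present, and buberies observed (invented)

# probability of seeing at least `hits` buberies if the stimulus made no difference at all
p_value = sum(comb(trials, k) * baseline_rate**k * (1 - baseline_rate)**(trials - k)
              for k in range(hits, trials + 1))
print(p_value)            # astronomically small: hard to blame random fluctuation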

*PB*

Alpha

unread,
Dec 18, 2006, 4:14:05 PM12/18/06
to

"Glen M. Sizemore" <gmsiz...@yahoo.com> wrote in message
news:4586ed86$0$31243$ed36...@nr2.newsreader.com...

Memory and memorization are not metaphors but words that refer to a
state/process that does happen (the change in the animal). That you do not
understand that is astonishing.

>
>
>>
>> Deal with the underlying issues: What are the characteristics of
>> behaviour that we can call "memory functionality" according to you?
>
> The term "memory" can, and is, invoked anytime some behaviorally relevant
> environmental event changes future behavior. That is, any time we can say
> "learning" we can invoke "memory." About the only thing that doesn't
> invoke the term "memory" is the unconditioned elicited response.
>
>> For me, storage and retrieval behaviour is one of the key
>> differentiators. So either make a counter-claim about those
>> characteristics or show me that there is no such distinction between
>> those behaviours at all.
>
> But you don't witness "storage," you simply witness that there is some
> antecedent circumstance that chages behavior.

No - you are wrong! I witness storage when I view a movie and then can
recall/retrieve scenes from that movie to my mind's eye's delight.

>This is consistent with the notion that the animal is changed by the
>relevant event, but "stored the event" goes beyond the facts.

Nope - the facts point to storage of invariant representations.

--
