Is Subsumption an Innate Human Cognitive Module?

Michael DeBellis

unread,
Jun 28, 2018, 12:52:30 PM
to ontolog-forum
This is purely a question of psychology, not computer science, so I know it's a bit OT here, but given the strong ties between AI and psychology I thought some people might have opinions and pointers to relevant papers. I've been wondering lately if subsumption may be an innate learning mechanism for humans. In the Evolutionary Psychology paradigm (and the work of people who helped create that paradigm, such as Chomsky and Fodor) there is speculation and research on what the ev-psych people call cognitive modules. Fodor helped popularize the term with his short book Modularity of Mind, and Chomsky also talked about modules early on in his 1984 book Modular Approaches to the Study of the Mind. However, almost everyone in the ev-psych community focuses on modules at what we would call the "application" level, i.e., modules for language, ethical reasoning, Theory of Mind, Living Things, etc. They seldom talk about any general-purpose modules shared by the other modules, and some prominent people, such as Cosmides and Tooby, are strongly against such an idea. I realize that many people would critique what I'm saying here as taking what should just be a metaphor (the brain is analogous to a computer) and treating it as if the brain just IS a computer. But while I recognize that it is an error to simply assume that the brain is designed the way a computer is, I think it's also an error to dismiss an argument out of hand because it takes a computer science technique and hypothesizes that the brain may work the same way.

If you look at some of the evolutionary psych research, there are taxonomies in many of the modules. Living Things is the best example. The book Folkbiology, edited by Douglas Medin and Scott Atran, as well as some of Atran's early papers on the topic, provides strong evidence that all humans, from pre-verbal infants to hunter-gatherers to professors of biology, seem to have a basic innate model of living things (Animals, Plants, etc.) and that learning for any specific environment essentially consists of taking those innate concepts and creating new subclasses. One of the interesting findings from Atran's work is that even professors of biology revert to the innate model when they talk about things like their gardens, rather than using the accepted biological model, which is different (e.g., according to Atran there is no concept of "Tree" in the biological model). The book Mapping the Mind: Domain Specificity in Cognition and Culture, edited by Hirschfeld and Gelman, also provides evidence regarding pre-verbal infants.

Frank Keil has also done some interesting work on learning in children, where he shows evidence that they often learn by creating new subclasses (what he calls Kinds or Natural Kinds, although I think he means it in a different way than the standard philosophical definition of natural kinds) and that they reason by using where a Thing falls in the taxonomy to predict how it behaves (e.g., living things can move on their own, but inanimate objects need some agent to move them). This is in his paper The Growth of Understanding of Natural Kinds as well as his book Concepts, Kinds, and Cognitive Development. Keil also showed that children seem to reason about how similar two different things are based on how far apart they are in the hypothesized innate taxonomy.
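
Keil's two findings, predicting behavior from taxonomic position and judging similarity by taxonomic distance, map naturally onto operations over a class tree. Here is a minimal Python sketch of that reading; the class names and properties are my own illustrative choices, not Keil's stimuli:

```python
# A toy "innate" taxonomy: each concept names its parent (None = root).
PARENT = {
    "Thing": None,
    "LivingThing": "Thing", "InanimateObject": "Thing",
    "Animal": "LivingThing", "Plant": "LivingThing",
    "Dog": "Animal", "Cat": "Animal",
    "Rock": "InanimateObject",
}

# Properties attached to general concepts, inherited by everything below.
PROPERTIES = {
    "LivingThing": {"grows"},
    "Animal": {"moves_on_its_own"},
    "InanimateObject": {"needs_agent_to_move"},
}

def ancestors(node):
    """The chain from a concept up to the root, node included."""
    chain = []
    while node is not None:
        chain.append(node)
        node = PARENT[node]
    return chain

def predict(node):
    """Keil-style inference: inherit the properties of every superclass."""
    return set().union(*(PROPERTIES.get(a, set()) for a in ancestors(node)))

def taxonomic_distance(a, b):
    """Edges between two concepts via their lowest common ancestor."""
    up_a, up_b = ancestors(a), ancestors(b)
    common = next(x for x in up_a if x in up_b)
    return up_a.index(common) + up_b.index(common)

print(predict("Dog"))                     # {'grows', 'moves_on_its_own'}
print(taxonomic_distance("Dog", "Cat"))   # 2: judged similar
print(taxonomic_distance("Dog", "Rock"))  # 5: judged dissimilar
```

On this reading, learning a new kind is a single insertion, e.g. PARENT["Wombat"] = "Animal" (a hypothetical addition), after which every inherited prediction comes for free.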

There are some precedents for the hypothesis that some basic reasoning skills are innate. Randy Gallistel and others have shown that basic arithmetic such as counting is innate (although not necessarily unbounded counting; many tribes seem to have one, two, three, and then just "many"), and Chomsky's Strong Minimalist Thesis (see his latest book with Berwick, Why Only Us) hypothesizes that recursion is innate in humans.

The more psychology I read, the more I keep coming across examples that make me think subsumption may be an innate capability: that humans are born with a few innate concepts and that a big part of learning consists of figuring out where things fall in this innate taxonomy and creating new subclasses in it. Is this taking the computer metaphor too literally, or might it be possible? Also, any references relevant to this topic would be appreciated.

Michael

John F Sowa

unread,
Jun 28, 2018, 3:04:27 PM
to ontolo...@googlegroups.com
On 6/28/2018 12:52 PM, Michael DeBellis wrote:
> I've been wondering lately if subsumption may be an innate learning
> mechanism for humans.

The word 'subsumption' is a technical term used in some systems.

I would recommend a more common pair of terms that are easy
to understand, and available as adjectives, verbs, and nouns:

Adjectives: general / special.

Verbs: generalize / specialize.

Nouns: generalization / specialization.

> I recognize that it is an error to simply assume that the brain is
> designed the way a computer is. I think it's also an error to
> dismiss some argument out of hand

My only recommendation is to dismiss the word 'subsumption'
(except in a citation of some historical precedent) and
replace it with the pair generalize/specialize.

If you use this pair of terms, you (and your readers or students)
can immediately see their application in every branch of science,
mathematics, philosophy, engineering, and everyday life.

Moral of the story: Don't use weird words that create more
confusion than enlightenment -- unless they have been used in
a historical document you're citing. If you do cite it, you
should immediately recommend the more intelligible replacement.

To answer the question above: generalization/specialization is
either innate or quickly learned by every living thing.
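
To make the equivalence concrete: if a concept is modeled as the set of properties every one of its instances must have, then "A subsumes B" and "A is a generalization of B" are literally the same subset test. A minimal Python sketch, with made-up concepts (an illustration, not anyone's published formalism):

```python
# A concept as the set of properties all of its instances must have.
ANIMAL = frozenset({"alive", "moves_on_its_own"})
DOG    = frozenset({"alive", "moves_on_its_own", "barks"})

def subsumes(general, special):
    """general subsumes special iff everything the general concept
    requires is also required by the special one (a subset test)."""
    return general <= special

print(subsumes(ANIMAL, DOG))  # True:  Animal is a generalization of Dog
print(subsumes(DOG, ANIMAL))  # False: Dog is a specialization of Animal
```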

John

Ferenc Kovacs

unread,
Jun 29, 2018, 7:51:04 AM
to ontolo...@googlegroups.com
Michael,

"subsumption may be an innate capability: that humans are born with a few innate concepts and that a big part of learning consists of figuring out where things fall in this innate taxonomy and creating new subclasses in it. ... Is this taking the computer metaphor too literally, or might it be possible?"

I believe that we are not born with innate concepts in the sense that concepts are tangible "objects" or ideas that exist on their own in a world of ideas. But if learning is creating associations and remembering is retrieving them (and associations work in both directions), then it is justified to assume that it might indeed be possible.

There are two important differences, however. First, the human brain works with "natural" (analog) wave forms, whereas computers use manipulated wave forms to make them digitally processable. Second, although both tools of intelligence work by performing operations on data, humans do not keep track of the number of operations (recursions) as precisely as computers do.

What we assume about the operations is that they are not induction, deduction, or abduction, but are organized in a different way to satisfy our need to predict or foresee the future.

Thus we have objects as a result of separation and isolation, and properties as a result of abstraction. They are connected in commutation, but they indicate concrete as well as abstract characters when thought of as one. Thus we have quality and quantity in one (a number) that does not exist in a single form: no property or attribute exists without an object, and vice versa, no object makes sense without a character that describes or identifies it with respect to others.

Thus you have an abstract one and a concrete one, where if one is concrete it may be a part of the abstract one, or a whole; and if one is abstract, it may have a list of elements brought under its umbrella by subsumption, because thinking in integers/countables requires them to be closed.

This works with nouns and adjectives alike, because basically what you have at hand is a containment/containing relation.
As for the grammar forms: 
    Adjectives:  general / special.
    Verbs:  generalize / specialize.
    Nouns:  generalization / specialization.
The verbs in the middle represent operations that result in the following senses:
Abstract: general, special, generalization, specialization
Concrete: generalization, specialization
Note that you need abstract forms in order to keep your description short and free of details.
Note also that all properties are abstract, because they are the product of abstraction (one of the recognized mental operations), and none are stand-alone.
Best
Ferenc

Jon Awbrey

unread,
Jul 1, 2018, 9:45:34 PM
to ontolo...@googlegroups.com, Michael DeBellis
Michael,

You mention a lot of people I haven't thought about for a long time,
except for Chomsky — I kept up with him as best I could a bit longer.
I remember Doug Medin from the year I was at Illinois and I recall
a colloquium talk Frank Keil gave at Michigan State that intrigued
me because he echoed ideas from Kant about the synthetic à priori,
but I didn't get a chance to ask him more about it.

As far as Fodor's line goes, I'm generally sympathetic to
faculty psychology, even if only because it comports with
the ways of mathematicians and programmers in analyzing and
synthesizing functions and structures, but it's my experience
that the faculties we need for modeling intelligence and inquiry
interact with each other and mutually recur far too intricately
to deserve the name “modules” in the strictest technical sense.

Still, if all we're talking about is some native knack or
natural instinct for latching onto subsumptions wherever
they may occur then I could go along with that for the
sake of further argument.

I agree with previous comments that “subsumption” suffers
from a surfeit of senses, but here are a couple of places where
I found it natural to use “subsumes” or one of its synonyms,
once in a logical sense and once in a grammatical sense.

• Aristotle's “Apagogy” : Abductive Reasoning as Problem Reduction

http://intersci.ss.uci.edu/wiki/index.php/Functional_Logic_:_Inquiry_and_Analogy#Aristotle.27s_.E2.80.9CApagogy.E2.80.9D_:_Abductive_Reasoning_as_Problem_Reduction

• Generalities About Formal Grammars
http://intersci.ss.uci.edu/wiki/index.php/Cactus_Language#Generalities_About_Formal_Grammars
Search for “subsume” and “subsumption” on this page.

There are reasons coming out of Peirce's logic and also
category theory for this usage but I'll have to save that
for another time.

Regards,

Jon

--

inquiry into inquiry: https://inquiryintoinquiry.com/
academia: https://independent.academia.edu/JonAwbrey
oeiswiki: https://www.oeis.org/wiki/User:Jon_Awbrey
isw: http://intersci.ss.uci.edu/wiki/index.php/JLA
facebook page: https://www.facebook.com/JonnyCache

Michael DeBellis

unread,
Jul 2, 2018, 5:35:48 PM
to ontolog-forum
"But there are two important differences. The human brain works with "natural" (analog) wave forms, computers use manipulated wave forms to make them digitally process-able.Second, although both tools of intelligence work by performing operations on data, humans do not keep track on the number of operations (recursions) as precisely as computers do. "

When I audited Introduction to Cognitive Neuroscience at Berkeley about a year ago, one of the first things the professor said was that the brain is an analog-to-digital converter. Neurons are digital: they either fire or they don't. The inputs to neurons often come in the form of analog sense data, but the processing is still digital. While I agree that humans don't keep track of the depth of recursion the way a computer can, I don't think that has any impact on what Chomsky says about recursion and why it must be essential to human understanding of natural language. It is a common misunderstanding of Chomsky (e.g., Searle does this often) to assume that humans must be conscious of the models that are hypothesized to explain things like our understanding of language. If you look at even the cognitive processing that insects do, such as dead reckoning, the models that explain insect behavior are fairly complex. No one assumes that an ant "understands" the Pythagorean theorem, but it clearly uses that theorem when it navigates.
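
To make the ant example concrete: dead reckoning (path integration) is just a running sum of displacement vectors, and the beeline home falls out of the Pythagorean theorem without any geometry being "understood" anywhere. A toy sketch; the foraging path is invented for illustration:

```python
import math

# Each leg of the outbound trip: (heading in degrees, distance traveled).
legs = [(0, 10.0), (90, 4.0), (45, 6.0)]  # an invented foraging path

# Path integration: keep a running sum of the x/y components of each leg.
x = sum(d * math.cos(math.radians(h)) for h, d in legs)
y = sum(d * math.sin(math.radians(h)) for h, d in legs)

# The home vector: length via the Pythagorean theorem, direction via
# arctangent, both reversed to point back at the nest.
distance = math.hypot(x, y)  # sqrt(x**2 + y**2)
bearing_home = (math.degrees(math.atan2(y, x)) + 180) % 360

print(f"beeline to nest: {distance:.1f} units at {bearing_home:.0f} degrees")
```

The classic displacement experiments work because the ant runs off this stored vector from wherever you set it down, not along any odor trail.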

Or if you look at the game-theoretic models that explain behavior from microbes to individual humans to organizations of humans, they can also be mathematically complex. Organisms will often interact with each other according to models that are what biologists call an ESS (Evolutionarily Stable Strategy), which is just the biological name for a Nash equilibrium. No one thinks that a primate deciding whether to escalate or back off in a conflict with another primate over food or a mate is doing game theory, but the game-theoretic models are still good explanations and predictions of their behavior.
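
The standard illustration is the hawk-dove game, where the mixed ESS comes out in a few lines even though no animal in the population computes anything. A sketch using the textbook payoff matrix; the values of V (resource) and C (injury cost) are arbitrary placeholders:

```python
# Hawk-dove payoffs: hawk vs hawk = (V - C) / 2, hawk vs dove = V,
# dove vs hawk = 0, dove vs dove = V / 2, with C > V.
V, C = 2.0, 6.0

def hawk_payoff(p):  # expected payoff to a hawk when fraction p play hawk
    return p * (V - C) / 2 + (1 - p) * V

def dove_payoff(p):  # expected payoff to a dove in the same population
    return (1 - p) * V / 2

# At the ESS both strategies earn the same; solving gives p* = V / C.
p_star = V / C
print(p_star)                                    # 0.333...
print(hawk_payoff(p_star), dove_payoff(p_star))  # equal: 0.666... each
```

At p* no mutant strategy does better, so the population settles there; that fixed point, not any reasoning by the animals, is what the ESS describes.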

In the same sense, we can hypothesize that tools like recursion or subsumption (or generalization/specialization, if you prefer) are used by all humans, even though few of them have any understanding of what they are or that they are using them.
 
"What we assume about the operations is that they are not induction and deduction or abduction, but organised in a different way to satisfy our needs to predict/foresee the future."

But that explanation is so vague that it is not very useful. It is worthwhile to hypothesize HOW humans model the world to predict the future, and what I'm proposing is that subsumption may be part of the answer to that question.

Michael

 

Michael DeBellis

unread,
Jul 2, 2018, 6:05:32 PM
to ontolog-forum
"but it's my experience
that the faculties we need for modeling intelligence and inquiry
interact with each other and mutually recur far too intricately
to deserve the name “modules” in the strictest technical sense."

This is a common criticism, and I have to admit that when I first started looking at the connections in things like the visual system, my reaction was that any concept of modules had to get thrown out. You can describe general flows of information: for example, information tends to flow from the Lateral Geniculate Nucleus to layers I-V in the Primary Visual Cortex (from I to II, from II to III, etc.), but all through that there are back connections that go the other way, connections that skip one layer and jump to another, etc. If there's a God, She is definitely a spaghetti programmer.

But when you compare all the possible connections (i.e., how connected a brain would be if every neuron connected to every other one) with the actual connections, the actual number is orders of magnitude less. I just read an interesting book by Damasio called Descartes' Error, which I wish I still had so I could give you the exact numbers, because he makes that point early on: the difference between possible and actual connections is several orders of magnitude.

Also, there almost has to be some useful concept of modules, or the brain would be overwhelmed with information and greater intelligence would be less rather than more evolutionarily adaptive. When I think I may see a lion, I want to focus on my visual system and get my legs ready to move, and block out things like thoughts of how cute some girl in the tribe looks.
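
Back to Damasio's possible-versus-actual point: not having his exact figures at hand, here is the same arithmetic with commonly cited order-of-magnitude estimates (my numbers, not his):

```python
# Commonly cited order-of-magnitude estimates, not Damasio's figures.
neurons = 8.6e10            # ~86 billion neurons in a human brain
synapses_per_neuron = 1e4   # ~10,000 synapses per neuron

possible = neurons * (neurons - 1)      # every neuron wired to every other
actual = neurons * synapses_per_neuron  # rough count of real connections

print(f"possible: {possible:.1e}")           # ~7.4e+21
print(f"actual:   {actual:.1e}")             # ~8.6e+14
print(f"ratio:    {possible / actual:.1e}")  # ~8.6e+06
```

Even on generous estimates the actual wiring falls about seven orders of magnitude short of the possible wiring, which is exactly the kind of constraint that makes some notion of modularity plausible.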

Also, regardless of what we call them, neuropsychologists already use the concept of a module when they analyze the brain. We study things like the visual system and the auditory system. These are well accepted as modules, even though people usually don't use that name for them; that is essentially what they mean by "visual system": a part of the mind that focuses on solving a specific problem and that is more or less encapsulated from the rest of the mind. I say mind because we can talk about things like the visual system without being specific about how or where in the brain certain problems (e.g., edge detection, surface detection, face detection) are solved.

We actually have a pretty good idea of where and how when it comes to vision, but that wasn't always the case, and the idea of separating things like algorithms and data structures from their implementation IMO makes as much sense for animal cognition as it does for software design. For example, there were experiments demonstrating that ants use dead reckoning rather than following odor trails to find their way back to their nest (see Gallistel's work), and that they use dead reckoning is now essentially universally accepted, even though I'm pretty sure no one has any idea how they implement it in their little ant brains. What the evolutionary psychology people are doing is simply extending that concept from processing sense data and navigation to solving behavioral problems such as selecting a mate, finding edible food, following tribal norms, etc.

Michael