Daniel Weiskopf --- The Architecture of Higher Cognition


New Waves in Philosophy of Mind

Dec 3, 2012, 11:50:46 AM
The Architecture of Higher Cognition

Daniel Weiskopf

Psychologists commonly talk about ‘higher’ cognitive faculties and processes, by which they mean to include such things as various forms of reasoning (deductive and inductive), planning and decision-making, theory construction, categorization, and so on. But there is little attempt to show what, if anything, these various processes might have in common. Attempts along these lines are typically uninformative or obviously defective. This gives the impression that the category is a fundamentally superficial one. Here I present an analysis of higher cognitive faculties in terms of three distinct properties: (1) their representational abstractness relative to perception-action systems; (2) their causal independence or detachability from ongoing perception and action; (3) their permitting free recombination of information across sources and domains. Classic higher cognitive processes, including those that involve conceptualized thought, display all three of these. But because the properties are independent, higher cognition can be viewed as assembled from capacities that might exist evolutionarily or developmentally earlier in different forms. Thus we can sketch a story about the origins of higher cognition that does not present it as a single monumental leap forward but rather as incrementally acquired. Having such a criterion, moreover, is of great practical use, since claims are often made about whether higher capacities, including conceptual capacities, are implicated in performance on various experimental tasks. I discuss how this criterion may be applied to several studies of higher thought in animals, especially primates, and in infants. Theorizing and experimental design in cognitive psychology, developmental psychology, and comparative psychology can thus benefit from clarifying this central notion in cognitive architecture.

A PDF of the paper is ready to view and download in the attachment below. 
A direct link to the PDF: http://goo.gl/9cpyE

Mark Sprevak

Dec 4, 2012, 3:26:59 PM

Hi Dan! This is a really nice paper! I very much enjoyed reading it, and I learnt a lot!

Here are three random comments on a terrific paper. Please feel free to ignore them if they are not useful.

  1. Talk of H-cognition as an informal gloss

    I found nearly everything that you say in the paper very convincing, but I confess that I was a bit less convinced by Sect 4. Sect 4 claims that the notion of H-cognition still serves as a useful psychological kind once your 3 dimensions of H-cognition are in hand.

    I guess I was inclined to think that once one has your distinctions in hand, talk of H-cognition begins to look more like an informal gloss or heuristic. Such talk should really be unpacked when one makes a precise claim about the nature or implementation of H-cognition. (In this respect, H-cognition talk begins to look like informal talk of a trait being ‘innate’.) Talk of H-cognition can be precisified in a number of different ways that pull apart in certain cases. I guess I wasn’t that convinced that the generalisations you cite in Sect 4 in support of H-cognition as a genuine kind couldn’t profitably be stated as generalisations over one or more of the dimensions of H-cognition.

  2. Objection to modularity view

    You raise a great objection in Sect 2: one cannot use the modular/non-modular distinction to distinguish L/H-cognition because one can have highly modular H-cognition. But if this is true, then isn’t it also a counterexample to your view in Sect 4, since ‘information integration’ would no longer be a necessary requirement on H-cognition? That is, the worry assumes the Sect 4 claim that a high score on all three dimensions is necessary for H-cognition; if one drops this requirement (as in (1) above), the worry is avoided.

  3. Low-level sensory inference

    You mention in Sect 2 that current models of low-level sensory processes treat them as fairly sophisticated inferences. I was worried that, even with your 3 dimensions of H-cognition in hand, models of these sensory systems—e.g. hierarchical Bayesian generative models (Hinton, Friston, Tenenbaum)—would still count as H-cognition. Hierarchical generative models often attribute abstract representations of hidden causes even to low-level sensory systems. The models can be run in generative ‘off-line’ mode without stimuli. The models also receive and integrate a wide range of top-down (and lateral) information. Maybe if these models of low-level processes are right, then rather surprisingly, even low-level processes can be H-cognition?

Bryce Huebner

Dec 4, 2012, 11:06:19 AM
to new-waves-in-ph...@googlegroups.com
Hey Dan, it looks like Mark beat me to the punch.

As with all of your papers, I learned a bunch from this one, and really enjoyed reading it. I hadn't really thought much about the higher-lower distinction, but I now see that I probably should, and that it's a really difficult problem!

I'll throw my support behind these worries, as I was thinking something similar. I too am worried about hierarchical Bayesian generative models for sensory systems (as well as predictive mechanisms in the midbrain, insular cortex, MPFC, DLPFC, etc.). If Wolfram Schultz is right, we have mechanisms that are computing poly-sensory, abstract representations even in the midbrain. Maybe these count as HC in some sense, but I'm not really sure that I'm comfortable with that claim. My guess is that you will respond by noting that these mechanisms are HC in some ways, but not HC simpliciter. I like that response, and I think that the real beauty of your account of the distinction is that it allows for multiple fine gradations. But, while I like that sort of response, I also worry that if you take that route, you run smack into Mark's first worry about HC becoming little more than a heuristic or informal gloss. The real work seems to be done by the three graded dimensions along which capacities can vary, not by the fact that some of these systems are 'higher' and others 'lower'.

I'm happy to be told that this response is misguided, if it is. Again, cool paper!

Justin Fisher

Dec 5, 2012, 1:17:47 PM
to new-waves-in-ph...@googlegroups.com
Hi Dan,

Very interesting paper -- on an important topic, very clear, and nicely crafted.  I agree with very much of it, and have only a few minor quibbles and suggestions.

One suggestion would be to phrase your conclusion as saying that you've identified three well-defined and scientifically interesting dimensions of variance, each of which is in the ballpark of loose talk of higher-vs-lower levels of cognition.  (I don't see this as much different from what you did say -- just perhaps a slightly more clear way of packaging it.)

A second minor suggestion would be to find some way of fitting the final two points you make in section 2 (involving afference hierarchy and phylogeny) into the organization of the remainder of that section (a series of named problems that any account of levels of cognition would need to solve).  I'd be inclined to merge these last two points in with your "problem of neutrality":  a good account needs to be neutral, not just with respect to how much modularity there will turn out to be at different levels, but also with respect to how many degrees of synaptic separation from the sensory periphery the different levels might lie at, and with respect to how recently evolutionary selection pressures might have significantly shaped or reshaped that level.

While we're on modularity, one worry I had was that your recombination dimension does seem to preclude strong sorts of modularity -- a fully encapsulated module likely won't be able to freely substitute in predicates from other parts of the mind.  E.g., my cheater-detector might be able to say "John's a cheater" but not able to say "John's an astrophysicist".  So this means that, at least on that dimension, you don't meet your own neutrality constraint.  (I don't think that means your view is wrong, just that you need to be careful in how you phrase the constraint and/or your claims to have met it.)  [Looks like Mark scooped me on this worry -- that's what I get for typing my comments before reading everyone else's.]

My biggest quibble was with your claim (on page 11) that concepts are individuated in part by what they represent.  I, at least, would like to remain open to the possibility that some concepts might shift referents over time.  E.g., (in an example adapted from Gareth Evans), suppose you meet Alice briefly at a party, and form an individual concept of her - call this concept C.  A week later, you encounter Betty, but think she is the woman you met at that party, so continue using C in application to her.  I think that when you first meet Betty, C is correctly applicable only to Alice, so you're mistaken in applying it to Betty.  But after many years of continuing to use C in application to Betty, C will clearly have shifted in reference so that it refers (if not entirely, at least primarily) to Betty, and at most only slightly to Alice.  And somewhere in the middle there is a period of semantic indeterminacy or at least ambiguity, where the concept partially refers to Alice and partially refers to Betty.

If a single concept C can shift in reference like this, then you can't be right in saying that concepts are individuated (in part) by what they represent.  Now, you may not think that semantic shifts like this are possible, but I don't see any reason why you should need to commit yourself to that in this paper.  Instead, why not remain neutral, and allow that concepts might not be individuated by what they represent?  I.e., leave open the possibility that I embrace, namely that each concept is a mental particular, individuated by the folder-like information-coordinating role that it plays within a cognitive economy over time.

Cheers,

-Justin

Dan Weiskopf

Dec 5, 2012, 11:52:24 PM
to new-waves-in-ph...@googlegroups.com
Hi Mark, thanks for the careful reading and the comments. Will take these in turn.

1. I'm not sure I see the sense in which H-cognition is a heuristic--that suggests a reasoning procedure or shortcut to me. But that's just terminology. The important issue you're raising is whether it's a kind, or more specifically, whether using the taxonomic distinction of higher and lower systems pays off in any way. I somewhat cagily said that "the hierarchy of higher cognitive faculties generally" can be seen as constituting psychological kinds. On one reading, that would be satisfied in the event that each type of H-cognition plays an interesting taxonomic and explanatory role. I suggested that there are such roles for these capacities to play, developmentally, comparatively, and in evolutionary terms. We can exploit these properties taken individually and in various combinations to serve a range of explanatory ends. This is enough to vindicate the utility of the hierarchy as a whole, i.e., the set of taxonomic distinctions I was aiming to draw.

However, the analogy with 'innateness' is interesting, and I'll need to consider it a bit further. I will agree with this: once we have in hand a multifunctional analysis of higher cognition, we will always need to keep an eye on precisely which of these capacities we are appealing to in any given context. This may mean that it is often more useful to talk about the particular capacities themselves than about the superordinate category to which they belong. But I'll give it some more thought.

2. I'm threading the needle on this one a bit, but I don't think I am violating my own requirements. I don't think H-cog should simply be defined in terms of nonmodularity. Defining it in terms of informational integration avoids this. After all, it *might* turn out that massively modular systems can achieve this sort of informational integration. As it happens I don't think they can (tough luck). But if this is right, they are ruled out on independent grounds, not simply ruled out as part of the definition, so to speak. And after all, I might be wrong and massively modular architectures *can* be informationally integrated in the right way. It's a debatable point.

3. This is a good point but a complex one to argue in the abstract, since one needs to attend carefully to the structure of these models. I will say that I think modelers are often far too quick to assume that their systems are representing properties like causation and other abstracta, frequently on *very* thin evidence. They don't do a lot to distinguish causation from various near-cousins that it resembles; and moreover, they don't consider that there might be various different forms of causation, some of which are tracked by these systems but others of which are not. (Susan Carey's studies of infant representations of causation show just how hard it is to establish that something is tracking a real causal relation.) My reading of how a few of these models operate also suggests that they can be influenced by some top-down factors, but it's not clear whether it's genuine informational penetration (as opposed to control signaling), and if it is, how widespread their access happens to be. My bet is that it's actually highly limited in this respect. As for whether they can be run offline, there is nothing prohibiting a system that often operates in a bottom-up or online fashion from also being run offline; when it's being run in the latter mode, it's part of a higher cognitive process (e.g., imagery), but not when it's being run in the former mode.

Dan Weiskopf

Dec 6, 2012, 12:05:16 AM
to new-waves-in-ph...@googlegroups.com
Hi Bryce, thanks for sharpening up Mark's worry. I've looked at some of their relatives before, but I will have a look at these particular models when I get a moment. I guess I should lay my cards out and say that I'm not sure about the proper interpretation of many Bayesian models--whether they are intended as process models, or whether they are a way of mathematically characterizing the function of various systems without being committal about what is represented and how it is manipulated. Many of them seem to fall into the latter category, whereas the distinctions I'm drawing are in terms of representations and processes. So it may be less of a conflict than it seems. But in the event that midbrain structures turn out to be relatively abstract, I'm comfortable with that. (I've long advocated this interpretation of the superior colliculus.) Remember that representational abstractness is characterized relative to the capacities of the sensory systems. It would be unsurprising to me if a lot of inbound processing involves representational augmentation of this kind, even if it occurs fairly early. So you correctly guessed my response here--on how I'd reply to the latter worry, see my above comments to Mark. What one wants out of a taxonomy is a way of drawing a set of theoretically useful distinctions. Rather than undermining it, then, I think this case actually nicely illustrates the potential of this kind of hierarchical approach.

Dan Weiskopf

Dec 6, 2012, 12:46:56 AM
to new-waves-in-ph...@googlegroups.com
Hi Justin, thanks for the kind words and the very detailed remarks. I like the first and second suggestions quite a bit. I think I've dealt with the third one above--I don't see any contradiction between saying that you shouldn't straightforwardly define the higher/lower distinction in terms of modularity, and saying that higher cognition requires informational integration. It may be that the latter is incompatible with some forms of modularity, but this needs an argument; indeed, a different argument for each style of modularity. And there is, at the end of the day, no guarantee that every sort of cognitive architecture can support higher cognition. To see this, think of the limits of subsumption architectures.

I didn't say a great deal about concepts themselves in the paper, since that wasn't the focus, but I'll say a few things about the case you raise. First, individuating concepts partially in terms of what they're about is compatible with their shifting reference. On some accounts, that just means that a vehicle starts out as being one concept and turns into another, due to a referential shift. No problem--although how we describe any particular case depends on the details.

So concepts are partially individuated by their content, but there can be other relevant individuative factors as well. In many cases, I think you should individuate them historically. To be a little more precise, if I have a token concept C that at its inception refers to Betty and over time it (i.e., the underlying vehicle of representation) comes to refer to Veronica instead, then in virtue of the vehicle playing a continuing historical role in my thought, this is a case where one concept I possess has shifted its reference. It started life as a Betty-concept and is now a Veronica-concept. There may well be ambiguous intermediate periods where its reference is either partial or indeterminate. What guarantees some continuity in my thought is that I deploy the same vehicle over time, its historical links allowing it to persist despite potential shifts in content. I think this is close to the story you told, which is, I guess, my long-winded way of agreeing. But let me emphasize that this is compatible with individuating concepts partially in terms of content. This token concept C falls under different content-types at various points in its history, but it can be individuated by other aspects than its content, which guarantees our ability to trace its continuity through these changes.

Mark Sprevak

Dec 6, 2012, 6:55:11 AM
to new-waves-in-ph...@googlegroups.com

Hi Dan,

Thanks very much for this!

I’m not sure I see the sense in which H-cognition is a heuristic

I just meant that talk of H-cognition is useful in helping us to get into the ballpark of making theoretically interesting claims (like talk of ‘innateness’). Talking in this way serves a useful heuristic purpose in guiding research. But once one starts doing serious work in the area, this informal talk should be dropped in favour of the more precise notions that you develop.

I somewhat cagily said that “the hierarchy of higher cognitive faculties generally” can be seen as constituting psychological kinds. On one reading, that would be satisfied in the event that each type of H-cognition plays an interesting taxonomic and explanatory role.

Thanks—that is really helpful. I didn’t pick up on that subtlety on my first reading of the paper.

I don’t think H-cog should simply be defined in terms of nonmodularity. Defining it in terms of informational integration avoids this.

I can see your point, but it still seems a fine line to tread. On many understandings of modularity, modules require informational encapsulation, and that, at least on the surface, seems to preclude informational integration.

Is your way out of this to say that although a single module would not satisfy information integration, there is nothing to stop a collection of interacting modules from satisfying it? Therefore, a MM architecture implementing H-cognition is not being ruled out by fiat. Is that the right way to think about it?

But if it is, then the old modularity proposal in Sect 2 still seems to have some life left:

A related way of drawing the higher/lower distinction says that lower cognition takes place within modular systems, while higher cognition is non-modular.

Suppose we understand ‘non-modular’ as taking place outside a single module—say, involving the interaction between multiple modules. If so, this proposal no longer rules out a MM architecture implementing H-cognition by fiat. L-cognition would take place inside modules, and H-cognition would be implemented in the interactions between multiple modules (as Carruthers describes). So your original objection to the modularity proposal—that it precludes an MM architecture from implementing H-cognition—no longer goes through?

Thanks very much for your response to (3) and comments to Bryce, really helpful!

Justin Fisher

Dec 6, 2012, 6:27:47 PM
to new-waves-in-ph...@googlegroups.com
Thanks for your reply Dan,

I can see two ways of going here -- call them "my way" and "your way".  My way is to identify "concepts" with what you are calling "vehicles" -- concepts are psychological particulars that play an active information-coordinating role in cognition and can potentially shift in reference over time.  Your way is to distinguish "concepts" from "vehicles", to hold that concepts themselves are semantically invariable, and to construe cases like the Alice/Betty case as cases where a single vehicle shifts which concept it expresses.  I see how this could provide a self-consistent account of the cases that I (but not you) would describe as concepts shifting in reference.

One worry that I have about "your way" is that I don't really understand what you think a "concept" is supposed to be, that is distinct both from the vehicle and the correct application conditions.  Is the concept some abstract entity dwelling in platonic heaven?  Is it something like a property?  We both agree that we need to talk about something like vehicles, and that we need to talk about something like reference or correct application conditions.  What further theoretical mileage do you get out of positing some weird abstract third thing as an intermediary between vehicles and reference/application conditions?

It might also help me to understand "your way" better if you could sketch what you would say about Hesperus/Phosphorus cases.  Do you say there is only one Venus-concept, and naive stargazers just unknowingly had two vehicles expressing that concept?  Or did you think naive stargazers had two separate concepts (and if so, what distinguished them)?

Regardless, I think all of this is tangential to this paper, so my recommendation (again) is to phrase that passage of your paper more neutrally, so that it doesn't get needlessly bogged down in these issues.

Cheers,

-Justin


Dan Weiskopf

Dec 7, 2012, 12:28:23 PM
to new-waves-in-ph...@googlegroups.com
Hi Mark,

On modularity, it may be that the notion requires informational encapsulation, although this is not a stable feature of many of its uses (Shallice, Barrett, and others all drop this requirement, and Carruthers adopts a different interpretation of it). Even so, the issue is: can a system of (perhaps) encapsulated modules realize the aspects of human thought that involve informational integration? This, it seems to me, is a question to be decided by looking at the properties of various architectures. To take just the most recent and best-developed instance, Carruthers does argue that a massively modular mind can do so, although both Edouard and I have argued against this in various places.

Does this mean that the modularity proposal for distinguishing higher and lower cognition is still viable? Well, the canonical way it is interpreted is as claiming that lower cognition is composed of various modules whereas higher cognition is composed of something essentially nonmodular--classical central cognition or the like. Massive modularists, after all, are supposed to be denying the existence of such a single unitary domain-general system. This is the sense in which I'm using the term 'non-modular'. My objection to the proposal was just that it seems we wouldn't want to prejudice the issue of whether a system implements higher cognition by tying it to the question of whether its organization is modular. Modularity, if you like, is a somewhat more fine-grained aspect of cognitive architecture. Or, put another way, whether you achieve the functions typical of higher cognition via modular or nonmodular means is simply irrelevant.

(I should add that I think these views that try to achieve informational integration by tying together bundles of modules in various ways may in the end just amount to a way of implementing classical central cognition; there are some wrinkles to iron out there, but if so, it turns out to be a wonderful irony.)

Dan Weiskopf

Dec 10, 2012, 1:31:45 PM
to new-waves-in-ph...@googlegroups.com
Justin:

Briefly, here's my view. I think of concepts as vehicle-content pairs (and I think of content as having two further components, but ignore that for now). Concepts are the vehicles of higher thought, but I wouldn't say a concept *just is* a vehicle, at least not if that implies that part of its individuation doesn't also appeal to content. So I'm not committed to any of the possibilities you mention in your second paragraph.

What it means for someone to have a concept that shifts its reference, on my view, is for them to have the same persisting vehicle acquire different content. There is a sense in which this involves both likeness and difference. The likeness is in the common vehicle--this is the historical thread that allows us to say that it is still, in an important respect, the same concept. The difference, obviously, is in content. Given these two possible dimensions of comparison, we may choose to emphasize one or the other (or both) in making individuative decisions. That is the normal situation when we are faced with a problem about change: we need to locate some stable aspect to 'pin' the unstable ones to. Thus from the fact that a concept is a vehicle-content pair it does not follow that we cannot say that we continue to have the same one if only the vehicle remains constant. (The facts about what something is do not necessarily determine the appropriate individuative standards to use in every circumstance.)

I hope this makes it clear why I want to continue to insist that concepts are individuated (inter alia) by their content, and also why I think it's a live option for a concept, as had by a particular subject at a particular time, to change its content or reference.

D

Robert O'Shaughnessy

Dec 14, 2012, 11:02:06 AM
to new-waves-in-ph...@googlegroups.com

Hi Dan

I really enjoyed the paper and I like your three dimensions for concepts a lot. I think they have the potential to be deployed in many useful ways. Clearly one can envisage how they can determine whether faculty A is higher than faculty B. But I guess the points made by the commenters above got me thinking about the practicalities of setting a bar for higher cognition simpliciter, as Bryce puts it, and just what would motivate that setting.

Clearly faculties using human style concepts would be HC as these score nearly maximum on your dimensions. And faculties in a subsumption architecture might make use of concepts/representations scoring nearly zero so these could be ruled out as engaging in higher cognition (although getting fans of such architectures to agree might be trickier!).

But how about in between? What might be rough guidelines for setting the bar? For example, would a score of, say, over 50% on all three dimensions be a useful first pass? Would a score of zero (or below a set minimum) on any one dimension rule it out for HC? Would the three dimensions carry equal weight or differ in importance for HC, as sometimes I felt you were suggesting?

The worry then becomes what would make even rough answers to these questions principled rather than arbitrary. Anyone whose favoured architecture got ruled out could simply claim that there was no motivation for setting the bar in that way. For example, a purely (as opposed to a partially) connectionist architecture might not exceed the threshold because it scored so badly on the combinability dimension, but its advocates could reply that we may yet discover that combination can be achieved without the vehicles themselves being combined.

On the other hand without some detail on these kinds of matters it seems we would only be sure about our taxonomizations in cases that were close to maximum/minimum scores on all three dimensions.

So I guess my question is whether you think there is a way to set the bar in a principled manner or, if not, why you don’t think there needs to be?

Once again, I really enjoyed the paper!
-Robert

Georg Theiner

Dec 15, 2012, 4:42:08 PM
to new-waves-in-ph...@googlegroups.com

Hi Dan,

Thanks for sharing your excellent paper, which has tremendously helped my own thinking on a bunch of closely related issues.  In particular, I think that your taxonomic scheme can be fruitfully employed to think about current debates over 4E-cognition, in addition to the applications you mention in Section 4.  So let me offer a few musings on your taxonomy from a 4E-perspective.

First, we can obviously use your scale to derive a much more fine-grained taxonomy of “ideal types” akin to Dennett’s (1995) “Tower of Generate and Test,” defined purely in terms of meta-architectural constraints (rather than any specific cognitive architectures that are used to meet those constraints).  Let’s say that Popperian creatures score relatively low on all 3 dimensions, Fregean creatures score fairly high on all 3 dimensions, and Gregorian creatures score extremely high on all 3.  One could view the resulting hierarchy of mental complexity as a trend towards increasingly greater amounts of “symbol un-grounding.”  From that perspective, the main question is to identify the main drivers in un-grounding and decoupling cognition from its embodied/embedded roots, including lower, non-conceptual forms of mental representations.  More specifically, what kinds of tweaks to the biologically basic architecture of our brains does it take to reach the upper echelon of H-cognition?

Clearly, the cogitations of a socially, culturally, and technologically scaffolded mind can score very high on the W-scale. [Note: we can stay neutral with respect to HEC vs. HEMC interpretations of scaffolding – Sterelny’s modest notion of “scaffolded minds” will do just fine.]  For example, we can rely on sophisticated mathematical symbolisms to represent Hilbert spaces.  We can use computer simulations of your tinkertoy model that we can start and pause, rewind, and re-run at will, with little cognitive effort, to engage in counterfactual reasoning about chemical compounds.  And we use fully and transparently compositional natural deduction systems to perform complex reasoning tasks.  In fact, we can use the W-scale to rank various types of scaffolding (cf. M. Wilson 2002).  (1) Rotating falling blocks in Tetris, or moving furniture around the room to generate possible solutions of where to put things, involves the exploitation of spatial relationships among elements in the world in order to solve spatial problems.  Since the elements do not represent anything other than themselves, they presumably score relatively low on H-cognition (“situated & concrete scaffolding”).  (2) Arranging tokens of army soldiers on a map for the purpose of military decision-making involves the exploitation of spatial relationships, but applied to a more abstract task; so those activities score higher (“situated & abstract scaffolding”).  (3) Arranging diagrammatic systems such as Venn diagrams to solve syllogistic reasoning problems exploits the correspondences between the spatial/topological properties of circles and the mathematical properties of sets (“spatially grounded & abstract scaffolding”).  (4) Being able to use natural deduction systems to solve inferential problems (“spatially ungrounded & abstract scaffolding”) would presumably be a very advanced form of H-cognition.

Many other current debates in this arena can naturally be framed in terms of this account.  For example, starting from a Popperian creature with a perceptually grounded symbol systems architecture (Barsalou, etc.), how Fregean can you get without natural language?  Then, how far can you get with the incorporation of natural language (which would itself be perceptually grounded, rather than translated into amodal symbols)?  Finally, how much further un-grounding work had to be accomplished by the cultural evolution of external representational systems, from the emergence of graphical art to the invention of written language?  The answers to those questions will potentially yield empirical generalizations that are couched in terms of H-cognition, thus providing further evidence for the robustness of your taxonomic kinds (e.g., “Human brains can’t become Fregean without natural language,” “Human talking brains can’t become Gregorian without such-and-such forms of external scaffolding,” etc.)

What other implications does a 4E-friendly application of your account have for the topics that have already been discussed in this forum?  First, consider the robustness of H-cognition as a psychological kind.  According to Dehaene’s (1997) “triple-code” model, our adult mathematical competence is a hybrid mental faculty that arises from the informational integration of three separate systems: a biologically basic analog system for understanding magnitude, the representation of elementary arithmetic facts in the verbal system, and our visually based acquaintance with purely symbolic numerical representations.  As an integrated faculty, our adult mathematical competence scores high on H-cognition; whereas each subsystem taken individually arguably does not.  As you point out, our intuitive grasp of magnitude may not even be H-cognition.  Verbal rote-learning of the multiplication table is not (obviously) compositional; it takes the child a while to realize that these operations can be arbitrarily applied to any given pair of natural numbers.  The ability to manipulate (e.g.) algebraic notations rides piggyback on the ability to read, which itself depends on the invasion and redeployment of areas of the visual cortex that were already well-adapted for certain aspects of visual shape recognition (Dehaene 2009).  The evolution of writing systems towards greater readability by our biological brains was no doubt an important prerequisite for the development of highly scaffolded H-cognition; however, without the proper informational integration with the other systems, the ability to manipulate meaningless squiggles would presumably not be H-cognition.  Does the emergence of hybrid mental faculties undermine the claim that H-cognition is a robust functional kind?
No: as long as we can make important empirical generalizations about our adult mathematical competence in terms of H-cognition, this is compatible with causal heterogeneity and, at least potentially, with large amounts of variation in H-cognition at the level of implementing mechanisms.

Next, the related issue of modularity.  According to your account, modular systems score low on the W-scale, and thus have to be considered as L-cognition.  Now consider the literature on situated problem-solving, which is replete with examples in which people rely on concrete, special-case-oriented, idiosyncratic strategies that cannot be generalized beyond a very narrow task environment (Kirsh 2009).  The classic example here is a dieter’s situated strategy for measuring out a lunch portion of cottage cheese amounting to three-quarters of the daily allotment of two-thirds of a cup.  What the person did was to turn out two-thirds of a cup of cottage cheese, pat it into a circle, criss-cross it to mark four quarters, remove one quarter, and then consume the remaining three, which of course came to exactly one half cup.  As Kirsh (ibid.) points out, this strategy works only in a very narrow, activity-specific environment: e.g., if the dieticians had decided that the daily allotment was 3/5 of a cup, and the lunch portion was 7/16 of that, such a strategy would fail, and dieters would have to fall back on the general-purpose but more error-prone strategy of calculating fractions.  However, this is not a problem, because our problem-solving strategies typically co-evolve with the task environments in which we tend to deploy them (e.g., if the dieticians’ recommendations were to change along the above lines, manufacturers would presumably change the tub size to make the calculations easier).  Is this H-cognition or L-cognition?  Or, on a much grander scale, consider the highly specialized, special-purpose, “modular” character of most iPhone apps.  For example, when I shazam a song that I hear in a bar to identify its singer, title, length, and album, is this an instance of L-cognition or H-cognition?
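(As an aside, the fraction arithmetic that the dieter’s situated strategy sidesteps is easy to check symbolically. Here is a minimal sketch using Python’s standard `fractions` module; the numbers are just the ones from the example above, nothing in Kirsh’s own discussion.)

```python
from fractions import Fraction

# The easy case: a lunch portion of 3/4 of a 2/3-cup daily allotment.
allotment = Fraction(2, 3)
portion = Fraction(3, 4) * allotment
print(portion)  # 1/2 -- the spatial quartering strategy yields exactly half a cup

# The hard case: suppose the allotment were 3/5 of a cup and the portion 7/16 of that.
hard_portion = Fraction(7, 16) * Fraction(3, 5)
print(hard_portion)  # 21/80 -- no simple spatial shortcut gets you here
```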

On the one hand, one could argue that episodes of soft-assembled, modular problem-solving are not cases of H-cognition, similar to the biologically basic, perhaps hardwired forms of modular cognition.  For example, Peretz and Coltheart (2006) have argued that our basic music faculty is highly modular, comprising a set of neurally isolable, functionally distinct processing components.  If they are right, your account implies that our biologically basic music faculty is clearly not a case of H-cognition.  On the other hand, there are important differences between the two cases.  The augmented music faculty that is constituted by the integrated User+Shazam system seems to deserve a significantly “higher” score on the W-scale than its biologically basic counterpart: it employs greater representational resources, as well as categories of greater abstraction; supports higher degrees of causal autonomy; and makes available inferences that take advantage of a great deal of what we know about these songs (though not necessarily known by the user herself before her use of Shazam).  If we accept this intuition, does this lead to a collapse of the difference between higher and lower mental faculties?  How can the “music faculty” receive a low score when it is located inside the head, yet receive a much higher score when it is partly located in the environment?

There is, of course, an important difference between biologically hardwired modules and culturally soft-assembled “modular” components.  Suppose it is true, as seems quite plausible to me, that only Gregorian minds that already score very high on the W-scale are able to enhance their repertoire with a practically unlimited variety of special-purpose skills, practices, and artifacts that can be used for specific types of problem-solving.  In other words, the problem of mental synthesis must already have been solved before extra layers of relatively modular (yet, at least in some cases, potentially knowledge-rich) forms of expertise can be gainfully added to the pre-existing cognitive repertoire of a Gregorian creature.  Perhaps MM-architectures could never get to this point courtesy of biological evolution alone.  But once our minds became sufficiently Gregorian, culturally advanced forms of “modular” problem-solving become capable of yielding relatively high forms of H-cognition, provided that they are informationally integrated with our overall cognitive architecture.  In that case, Shazam and its cognates would also provide evidence that your taxonomy of H-cognition does not discriminate “by fiat” against (partly) modular systems.

Let me know what you think.  Again, I really learned a great deal from engaging with your paper.

Cheers,

-Georg

Dan Weiskopf

Dec 16, 2012, 12:48:44 PM12/16/12
to new-waves-in-ph...@googlegroups.com
Robert:

Thanks for the comments. I do think there are endpoint cases that clearly show the total absence of these qualities, some of which you mentioned. And I do think that we have at least one example (human conceptualized thought) of the joint possession of all three in a relatively well-developed way. But I hadn't imagined there would be any simple way of quantifying or scoring architectures as to whether they are, overall, better exemplars of H-cognition than others. Hard for me to imagine where the numbers would come from, or how they should be weighted and combined.

The aim, rather, is to clear up what we should mean when we talk about H-cognition and provide a set of distinctions – a taxonomy – that will let us make clearer and more fine-grained comparisons among cognizers and their architectures, where we need to. I also hope that this will encourage some thought about the ways in which these categories might be made more fine-grained, and how they relate to one another in developmental and evolutionary terms. I do think that representational abstraction admits of a sort of rough ordering, but even there I see no reason to think of this as a uniformly rising tide – there can be creatures with fairly localized islands of representational skill, where there's no particular way to decide which is 'more advanced' in any interesting sense. The same goes for combinatorial abilities and autonomously-driven cognition. Rather than try to precisify these scoring-based metrics, I prefer to think of the proposal as offering a way to organize existing research (which has often run these various factors together) and suggesting hypotheses for future exploration. Does that clarify things? And more importantly, does it seem like a plausible and worthwhile goal?

D

Robert O'Shaughnessy

Dec 17, 2012, 6:33:33 AM12/17/12
to new-waves-in-ph...@googlegroups.com

Hi Dan

Thanks for your clarification. I absolutely do think what you set out is a worthwhile goal. I have myself been writing about the combination dimension (with more combination giving you greater cognitive power), so I found it fascinating to be thinking about the abstractness and autonomy dimensions at the same time.

-Robert

Dan Weiskopf

Dec 17, 2012, 3:58:04 PM12/17/12
to new-waves-in-ph...@googlegroups.com

Georg:

Thanks for the highly detailed and thoughtful suggestions. These will be useful in revising the latter sections of the paper. I particularly appreciate the cases illustrating how achieving certain kinds of H-cognition is only possible given certain preconditions—as I pointed out in response to Robert above, this is where some of the relevant explanatory power in the account should be coming from.

I did have something like Dennett’s model in mind when developing mine (as well as other exemplars, such as Sterelny and Liz Camp). You’re also correct that H-cognition generally involves ways of decoupling cognitive activity from various constraints: of the categories of perception and action, of the immediate causal impingements of the world, of the local informational context, etc. While I don’t doubt that our symbolic and cultural activities in the world play a massive role in amplifying what we can do, H-cognitively speaking, I personally am on record as opposing both embodied and extended cognition. Even so, a host of (by my reckoning) extra-psychological structures certainly play a role in the normal development of these capacities. It’s a nice point that the taxonomy can be used to cover ‘extended’ cognitive capacities as well as standardly interpreted intracranial ones; that’s exactly the kind of neutrality I was aiming at. I wouldn’t want, as you point out at the end, to have the view itself discriminate between extended and intracranial capacities. That takes a separate argument.

Your music recognition case is a bit of a puzzler. The reason it’s hard to place, I think, is that when I identify a piece of music by hearing it, it’s my own transducers and perceptual processes that are implicated in the process, whereas it’s odd to think of the phone’s mic as being one of *my* sensory surfaces. The phone’s processing seems to be a ‘sideways’ adjunct to my cognition, deployed in the event that I can’t myself pick out who sang that song. Playing along with the idea that this is really one of my extended cognitive capacities for a moment, devices like this function a bit like oracular voices. If I can’t figure something out, I can just ask the oracle for the answer, although how it does it is entirely obscure to me; I only see the results it dumps out. The relevant informational interface involves only my awareness of my musical ignorance, my decision to consult the oracle, and my knowledge of the oracle’s output—all of which are relatively H-cognitively mediated.

But I don’t think it should bother us that there is one L-cognitive and one H-cognitive way to recognize music. For one thing, there is a way of individuating the faculties according to which they take different inputs and outputs. More importantly, though, the nature of their interface with the rest of cognition is different. They are placed differently relative to the perceptual systems, are initiated by different cognitive acts, send output to different systems, and so on. These locational or relative facts matter. This highlights the point that there is no fact about whether a capacity is H-cognitive or L-cognitive outside of the context of a particular animal’s cognitive architecture. The distinction is senseless, or at least undefined, without such information.

So, does that seem convincing? I don’t see any reason a 4E theorist in particular should object to it, anyway.

D

 

Georg Theiner

Dec 19, 2012, 11:07:45 PM12/19/12
to new-waves-in-ph...@googlegroups.com

Hi Dan,

Thanks for these clarifications.  Again, I’m mostly sympathetic to the issues you’re raising, including your point that whether a capacity is H-cognitive or L-cognitive depends on the context of a particular animal’s cognitive architecture (although it’s likely that I have a more plastic vision of what “the” human cognitive architecture looks like).  At any rate, let me return to the point about the emergence of H-cognition from co-opting previously segregated L-cognitive components, and integrating them into one’s cognitive architecture.  I’m interested in what further generalizations we can potentially derive based on your taxonomy (cf. Section 4).  These are mostly empirical issues, but it’s worth speculating about them a bit.

First of all, I totally agree with the point that you made earlier in response to Robert, that it is certainly misguided to think that there would be/has been a uniform evolutionary trend towards greater degrees of H-cognition.  You already cited several problems with this idea in your discussion of evolutionary-based proposals; Mithen’s (1996) theory that during hominid evolution, selective advantages have oscillated between favoring specialized, hardwired, and modularized intelligence and favoring general intelligence would be another example of this sort.  Still, each time the pendulum swings towards cognitive fluidity, it seems to undergo a similar type of trajectory – a spiral leading to increasingly higher levels of mental complexity.

Example: Let’s suppose, for the sake of the argument, that Mithen is right and the “cultural explosion” that occurred 30,000–60,000 years ago happened when Homo sapiens sapiens crossed a certain threshold of cognitive fluidity, after reversing a trend towards increased modularity that dominated early hominid evolution.  This trend continued, leading to greater feats of mental synthesis concerning biologically basic forms of intracranial cognition.  With a certain temporal (cultural) delay, this pattern was recapitulated, at a higher level of complexity, by the way in which our tools evolved with us.  Consider the cultural evolution of ancient systems of writing, counting, and numbering, each of which followed the same pattern towards increased representational abstraction, context-independence, and potential for informational integration.  During the early (concrete, task-dependent, situated) stages of their development, who would have been able to predict the enormous amounts of H-cognition that you can squeeze out of biological brains – even those of our ancestors who had already achieved fairly high forms of biologically basic H-cognition! – if they dovetail with the right kind of symbolic structures?

Thus, if we limit your meta-architectural taxonomy to the description of particular types of advanced cognitive evolution, rather than trying to cover the entire realm of cognitive creatures, I can easily see how we could gainfully precisify the metrics for each of the three properties you mention.  You point out that the three properties are logically independent and empirically dissociable, and as far as the big picture goes, I agree.  But then again, there may be many more specific, local generalizations available once we restrict ourselves to charting the evolution of the “modern” human mind, especially the potential co-evolutionary dynamics between the three properties.  For instance, it might be nomologically impossible for creatures like us to score high on only two of those dimensions but not the third.  Theories that emphasize the co-evolution of brain, language, and symbolic thought, and how they “drag along” higher levels of meta-cognition, meta-cognitive awareness, and voluntary cognitive control would – if true – presumably support such generalizations.

[Ok, I gotta include one minor comment on your discussion of music cognition.  You claim that it’s odd to think of the phone’s mic as being one of *my* sensory surfaces, while you take it as unproblematic to speak of “my own transducers” (or any other sub-personal perceptual/cognitive mechanisms inside the brain).  There are a few things an externalist might say in response to this.  First, based on evidence about children’s naïve reasoning about property ownership, I’m not so sure about your intuitions; I would think that it’s more intuitive for a kid who grows up wearing glasses to consider those as *my* sensory surfaces, as opposed to speaking of “my” V1 or whatever.  Second, I guess the really controversial claim would be that they can become part of me (i.e., part of my extended perceptual/cognitive system), rather than the weaker claim that they can be mine.  If (BIG ‘if’!) the stronger claim is adjudicated in terms of the first-person phenomenology of one’s sense of agency or sense of ownership, then I would also be fairly confident that the boundaries not only of one’s body, but also of one’s self, do not always coincide with the biological boundaries of the organism (i.e., they sometimes include extra-corporeal structures, and sometimes exclude intracorporeal structures).  However, I take it that the points you make about the relational/locational individuation of cognitive and perceptual systems do not hinge on first-person intuitions about agency/ownership, so the above point may be moot; at least, it is not necessarily something an internalist has to deny.  Anyway, let’s not get into what seems to be a tangential issue at best.]

Best,

-Georg
