Re: [CYBCOM] Consciousness and cybernetics


Peter Cariani

unread,
Mar 7, 2023, 11:52:10 PM
to cyb...@googlegroups.com
Hi Everyone,

I'm new to this CYBCOM discussion. I heard that the topic of consciousness and cybernetics has come up here, a topic in which I have a passionate interest.

Several years ago I designed and taught an introductory undergraduate survey course on consciousness studies that focused on the psychological aspects and neuroscientific correlates of conscious awareness. The name of the course was Consciousness: Philosophy, Psychology, Neuroscience. Before that I had written papers on neural coding and anesthesia (some anesthetics may abolish awareness by scrambling signals rather than suppressing them, i.e. by disrupting the requisite regenerative organization of neural coding and processing needed to sustain both working memory and awareness).

I want to react to Francis Heylighen's remarks (hi Francis! it's been a while! I recently read your old paper with Cliff Joslyn, and I think it is one of the clearest, most concise expositions of cybernetics I have ever seen).

First, Chalmers's "Hard Problem," i.e. why conscious awareness exists in the first place, is an empirically unresolvable, metaphysical question. It is basically equivalent to asking "why is there something rather than nothing?" or "why is there gravity?" All questions about the existence of fundamental aspects of the universe take this form. What we can do, and are doing, is identify the material (neural) conditions under which each of us has conscious awareness. We can then examine the correlations between neurophysiological observables and our own subjective, private experiential observables, and build models of causal brain-experience relations that can be tested by altering neural activity patterns. The problem of the existence of conscious awareness is insoluble, but the problem of identifying the neural requisites of conscious awareness and its contents is eminently resolvable. Whether the funding powers that be decide to devote significant resources to it is a different matter, but sooner or later neuroscience will have an adequate grasp of those requisites.

I do agree with Francis that the neural global workspace theories are the best theories we currently have. (Francis, I think you should just stop there and avoid invoking physics, especially quantum mechanics, to explain consciousness. Although I don't think Hameroff's theory works on any level, at least he points to possible neuronal biophysical mechanisms that might serve as substrates.)

I think that Pask's theory of organizational closure is (ultimately) compatible with these theories, given some heavy translation into neural terms. When a regenerative loop is closed, then one has self-sustaining states and circular causality. I think of this in terms of a multistable "autopoiesis of neural signals" (apologies to Maturana and the language police). 

Here are two papers, one published and the other unpublished, that outline my views on both emergence and consciousness.

CarianiNYASRegen.pdf
CarianiEmergencePOV2007.pdf

Jason Hu

unread,
Mar 8, 2023, 9:03:29 AM
to cyb...@googlegroups.com

Hi Peter,

I can say "welcome," since I started CYBCOM in 1993 as a student of Stuart Umpleby at GWU. It was hosted at GWU as a listserv for many years, then migrated to Google Groups, and is now volunteer-managed. People and their conversations come and go on this platform. Club of Remy members also use it for "after-session follow-ups" sometimes.

Perhaps what you said here can be brought to a Club of Remy discussion session? I have already invited Francis and Shima to do a discussion; adding you, we will have four discussants, and I expect the result will be very helpful to our members and to us. If you all agree, let's decide on a date. I'm available to host these meetings on Wednesdays and Fridays. Currently only March 17 is booked, so you can pick any other Wednesday or Friday.

Also, thank you for sharing your papers.

Best regards - Jason


--
You received this message because you are subscribed to the Google Groups "CYBCOM" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cybcom+un...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cybcom/42B10168-D87B-4DCF-B476-0D81DF185B4A%40gmail.com.


I am generally extremely skeptical whenever people try to argue this or that on the basis of relativity and quantum mechanics, especially when attempting to explain conscious awareness. The "collapse of the wave function" is simply a description of a contingent event that occurs when an observer makes a measurement whose outcome could be one of several. The operation of measurement is necessarily a contingent process. There is no uncertainty reduction, no "information" in Ashby's sense, unless there are multiple possible (i.e. sometimes-observed) outcomes. I push back against the mystification of the measurement process.

I don't believe that consciousness has a purpose or function per se, such that it somehow evolved in response to natural selective pressures. I know the popular assumption is that every aspect of brain and mind must have some evolutionary purpose vis-a-vis survival or it wouldn't be there.

[Before we tackle this question, I want to say that my working hypothesis is that conscious awareness is not restricted to humans -- I think any animal with a nervous system and a regenerative short-term memory (i.e. organizational closure of neuronal signals) likely has some basic form of conscious awareness (although I'm unsure about sponges). Speculation is fine, but I think theories and models of conscious awareness first need to be applied to human consciousness, where we have reliable first-person experiential reports, so that the theories can be tested empirically.]

Back to arguments against a functional role for awareness per se in coordinating behavior. We should not automatically conflate awareness (an experiential state) with working memory (a mental function). Both may supervene on organized neural activity, but the fact that they co-occur when we are awake and attentive does not mean that they are the same thing. Notably, there are cases of sleep-walking human subjects, unaware of their surroundings, who have carried out quite complex tasks that require memory functions. (In one famous case, a sleep-walking individual drove 20 miles and killed one of his in-laws, then showed up at a police station in a very confused state, covered with blood. He was subsequently diagnosed with a neurological sleep disorder and was ultimately acquitted by a jury.) All sorts of automatisms are commonplace. We carry out all sorts of tasks without awareness of the details of how we are doing what we are doing. Working memory certainly plays an essential functional, informational role in all sorts of activities, but it is possible to carry out complex actions without conscious awareness, which undermines the argument that awareness per se confers selective advantage.

As far as I can see, awareness and its contents are determined by (the organization of) microscopic brain (spiking) processes, but awareness by itself does not change those microscopic processes. As Jaegwon Kim would say, the behavior of the physical world in terms of public, physical observables is closed under physical laws. This makes our experience "epiphenomenal," which has had unfortunate interpretations. To hard-core eliminative materialists it means that conscious awareness can be ignored. I don't agree with this stance at all -- our subjective experience is as "real" to us as anything else, and it is amenable to scientific study. I take the position that awareness and its contents supervene on particular dynamic organizations of matter, i.e. particular patterns of neuronal activity (which can be reversibly disrupted by general anesthetics). The causality runs one way, from material process (the brain) to subjective experience. I don't think there is an ultimate explanation for why the universe is structured this way, but there it is.

Functional explanations likewise supervene on material process. A bicycle only functions as a means of effective human propulsion if its parts are organized in the right (appropriate-for-the-function) relations to each other. Whether the parts are assembled into a functioning bicycle does not affect the physics of the individual parts. While it is true that the wheels will exhibit circular motion around the axles when the bike is correctly assembled, and hence the organization of the parts will constrain their trajectories, there is no "downward causation" per se (unless one mixes variables of different types/levels). There is downward causation only in the sense that observing that ("higher level") organizational constraints are operative (Pattee would call these local constraints, as opposed to universal laws) lets one predict the relative trajectories of the parts. For example, if one observes a door on a hinge, then the predicted motion of the door can be constrained to one degree of freedom so long as the hinge remains intact and free to move. The hinge is an organizational, special constraint, whereas the parts continue to obey the general laws of physics (however construed).

Whereas functional explanations assume some measure of performance, awareness has no such measure (but working memory does! working memory is a functional concept involving the ability to retain informational distinctions over short periods of time). I am all for explanations of psychological capabilities in terms of both informational functions and what they do for the organismal lineage in terms of differential survival, development, and reproduction. I also think that the same brain mechanisms that subserve those mental functions, such as working memory, are also essential for conscious awareness. But, unlike working memory and attention, I don't see a direct functional role for awareness itself. One does not need to include awareness in neuropsychological models to account for informational functions. However, one does need to include awareness if one is going to predict our subjective experience in some situation. This is why a purely physical/biophysical description of a living brain, even if it were perfectly predictive of its next state, would be incomplete because it would leave subjective experience out of the picture. It would also leave out functional explanations -- such a state transition model by itself would not provide any functional descriptions that would tell us how the brain "works" to achieve its various functions (sensation, perception, cognition, emotion, goal-seeking, memory, anticipation, attention, orchestration of action).

I don't see how the proposed theory explains anything about the perception of inverted spectra. The inverted-qualia problem is one of those pseudo-problems that philosophers love to conjure up (zombies are a similar kind of idle philosophical rabbit hole). It is neither "correct" nor "incorrect," just badly posed. Any adequate theory of mind will need to explain the neural codes underlying percepts, working memory, and other functions. My working hypothesis, having spent several decades working on the problem of the neural basis of pitch perception, is that the relational structure of perceptual attributes (qualia) is isomorphic to the relational structure of the neural coding space. It is not a simple matter (to anyone but a philosopher) to invert color perception. There is more than one way to generate color percepts (using a particular wavelength of light, or a temporal flicker pattern such as Benham's top) -- you would need to remap more than simple wavelengths, such that all of the neural correlates of color were also remapped. And as you say, relations of colors to other concepts would also need remapping. The problem is a pseudo-problem because the comparison is artificially constrained to become tautological -- the definition of the problem excludes any empirical test and mandates a particular solution (I have similar issues with "parallel universes"). If we can look under the hood at the brain mechanisms involved, there will be differences in wiring and/or coding. I think we presently have some of the neurophysiological means (e.g. visual evoked potential latencies) to assess whether the color visual systems of two subjects are inverted with respect to their acquired linguistic labels.

I think a stronger example would be the experiential up-down inversion that occurs when wearing inverting spectacles. At first the scene appears upside down, but after some time (a few days) and a good deal of clumsy mucking around, the scene flips back to normal as appropriate percept-action/action-percept relations are re-established. Certainly from their behavior we can distinguish between individuals in the clumsy ill-adapted state vs. the normal right-side up state. But can we distinguish between someone who is wearing non-inverting glasses from someone who has adapted fully to inverted spectacles? Not on the basis of private experience, because qualia are inherently private observables, not on the basis of behavior because complete adaptation is possible, but perhaps on the basis of neuronal activity patterns.

This is all obviously a much longer discussion that we can continue at will.

take care, keep asking questions
Peter


Begin forwarded message:

From: cyb...@googlegroups.com
Date: March 6, 2023 at 14:19:25 EST
To: Digest recipients <cyb...@googlegroups.com>
Subject: [CYBCOM] Digest for cyb...@googlegroups.com - 2 updates in 1 topic
Reply-To: cyb...@googlegroups.com


Francis Heylighen <fhey...@vub.ac.be>: Mar 06 12:43PM +0100

Dear Jason,
 
You are correct that our "local field" theory of consciousness is based on the assumption that we can be aware of only a rather narrow range of phenomena, and this in an intrinsically subjective way colored by personal values and feelings.
 
I am even a bit hesitant to use the term "field", since many people inspired by New Age ideas seem to believe that individual, human consciousness is just a part of a global field of consciousness that pervades the cosmos. That, however, is incompatible with the locality principle in physics, which notes that no communication between different parts of the cosmos can go faster than the speed of light.
 
Many people wrongly assume that quantum non-locality has proven otherwise. But quantum entanglement is not sufficient for information transmission: it only allows "correlation", not "communication". That is why quantum mechanics and relativity theory are perfectly compatible. Quantum field theory, being relativistic, makes clear that nothing can travel faster than light. Therefore, a consciousness that extended from here to the Andromeda galaxy would need millions of years to grasp that something happened simultaneously here and in Andromeda. Not quite the thinking speed you would expect from a cosmic consciousness ;-)
 
Let's then indeed talk a bit more about this and arrange a zoom with me and Shima...
 
Best,
 
Francis
 
I have been thinking about your project here and consider it very important if, by "localness" in your local field theory, you meant something I expressed in a recent Club of Remy discussion as "a local observer observing a limited (thus incomplete) amount of information reaching to imperfect knowledge what's and why's and producing a non-optimized plan of how's for his/her action that sometimes does work but most of the times leading to unintended consequences."
 
The tongue-twister style of this funny expression is targeting a widespread original sin of academics, ie, assuming there exists ideal global "truth" that their ivory-tower thinking can eventually nail down, their construction of fancy theories and models that could attain "globality" and thus save the world. My perception of your brief introduction to your local field theory is that you and Shima Beiji might also be chasing the same rabbit that I have been hunting from a different path. If this is correct, let's talk more or arrange a Zoom meeting to chat more about it; then, I would like to join your team to work on this. If this is incorrect, please send me more of your writings on this line so I can find the distinctions.
 
Many thanks! - Jason
 
-----------------------------------
 
Jason Jixuan Hu, Ph.D.
 
Independent Research Scholar
 
Organizer: Club of REMY: www.clubofremy.org
 
General Partner: Wintop Group: www.wintopgroup.com
 
YouTube: https://www.youtube.com/channel/UCJumBT3J15xhAoNs9CnrSVg/videos
 
office: j...@wintopgroup.com
 
mobile: jasonth...@gmail.com
 
---------------------------------------------------
 
On Mon, Feb 6, 2023 at 11:22 AM Francis Heylighen <fhey...@vub.ac.be> wrote:
 
As several CLEA people have already heard us report enthusiastically, Shima and I have made a real breakthrough in our research, which potentially could make us famous :-). Before we start writing the paper, here is already a quick summary, for which we hope to get your feedback...
 
Francis
 
The local field theory of subjective experience:
 
a soft solution to the hard problem of consciousness
 
Francis Heylighen and Shima Beigi
 
After years of study, reflection and discussion, we have recently made a breakthrough in our understanding of consciousness, which we want to report here in an initial short form. This breakthrough in particular proposes a solution to the so-called "hard problem of consciousness", which some consider to be the most difficult problem in the whole of science. Our objective is twofold: to demystify consciousness, and to promote a more open-minded way of relating to the world.
 
The question of what constitutes consciousness can be subdivided into two questions:
 
1) the level of consciousness: what distinguishes conscious mental processes (eg thinking, observing, acting) from non-conscious ones (eg sleep, anesthesia, subliminal perception, subconscious intuitions)?
 
2) the content of consciousness: what precisely constitutes a subjective experience (also known as "phenomenal consciousness" or "quale")?
 
For (1), we assume that a plausible answer is provided by the global neuronal workspace theory of consciousness (Dehaene, Baars, Changeux), which is supported by a growing amount of empirical evidence. This theory posits that conscious experiences are "broadcast" across a global, interconnecting network of neurons in the brain, so that they can be examined, monitored and redirected by different, more specialized modules in the brain.
 
This broadcasting requires a strong, recurrent or resonant pattern of activation, thus maintaining the experience for a while in working memory, giving other processes the time to examine and redirect the experience. Subconscious processes, on the other hand, just move directly or automatically through the respective specialized neural networks (eg for visual recognition) in a "feedforward" manner, thus leaving no time for other modules to intervene.
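The contrast between feedforward (subconscious) processing and workspace broadcast might be caricatured in a few lines of code. This is a purely illustrative sketch: the module names, the activation threshold, and the labels are all invented here, not part of the theory's formal apparatus.

```python
# Toy contrast between subconscious feedforward processing and a
# conscious "broadcast" in a global-workspace-style architecture.
# Module names and the 0.5 threshold are invented for illustration.

MODULES = {"vision", "language", "planning", "memory"}

def feedforward(stimulus):
    """Subconscious: one specialized pathway processes the stimulus;
    no other module ever sees it."""
    return {"vision": f"recognized:{stimulus}"}

def broadcast(stimulus, strength):
    """Conscious: a strongly activated representation is held in the
    workspace and made available to every module for examination."""
    if strength < 0.5:  # weak activation never "ignites" the workspace
        return feedforward(stimulus)
    return {module: f"examining:{stimulus}" for module in MODULES}
```

The point of the sketch is only the structural difference: a weak activation stays confined to one specialized pathway, while a sufficiently strong one is made available to every module at once.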
 
For (2), our "local field theory" explains what subjective experience is, why we have it, and how it can be expanded. It thus provides a "soft" solution to the "hard" problem. The theory is based on two insights:
 
a) experience is intrinsically meaningful or affective: it touches us deeply, at an embodied level, pushing us towards good things and away from bad ones
 
b) consciousness provides us with a choice, ie with a range of possible thoughts, actions, or things to pay attention to.
 
a) The meaning aspect refers to the fact that subjective experiences are not just neutral observations: we feel them; we are "moved", "touched" or "affected" by them. This "raw feeling" can be understood as an implicit tension, drive or force, which pushes or pulls us in a certain direction. At the most primitive level of organisms such as bacteria or sea anemones, sensations trigger movement that is directed towards a goal, ie a fit state. That means away from dangers (aversive behavior) and/or towards opportunities (appetitive behavior).
 
But the tension does not need to result in movement: perhaps the feeling is one of pleasure or contentment that pressures you to stay in the same place, ie continue doing whatever you were doing rather than change course. What is important is that sensed conditions are evaluated or interpreted with respect to the organism's value system. They need to be made sense of, so that the organism knows how to react adequately.
 
For primitive organisms, there is only one possible reaction for each sensed condition; the reaction is deterministic: stimulus -> response, or condition -> action. This could be modeled as a dynamic system where for each state there is just a single next state. While such an organism can "sense" conditions in the cybernetic sense, we would not call it "conscious" in the human sense of the word. It behaves rather like an automaton, or perhaps like a "philosophical zombie" that is supposed to lack subjective experience. For true consciousness, we need a higher level of control, where the organism can consider different potential reactions and choose between them. That brings us to the next core idea of the local field theory: choice or freedom.
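Such a deterministic condition -> action organism can be written down almost trivially, which is the point: nothing in it leaves room for choice. The conditions and reactions below are hypothetical, chosen only for illustration.

```python
# Toy model of a deterministic stimulus -> response organism:
# each sensed condition maps to exactly one fixed reaction.
# Condition and action names are invented for the example.
REACTIONS = {
    "nutrient_gradient": "swim_toward",   # appetitive behavior
    "toxin_gradient": "swim_away",        # aversive behavior
    "neutral": "keep_tumbling",           # default exploration
}

def react(condition: str) -> str:
    """A zombie-like automaton: no prospect, no choice, one reaction."""
    return REACTIONS[condition]
```

Every state has exactly one successor, so the organism's entire behavioral repertoire is a lookup table; there is nothing for a "higher level of control" to intervene in.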
 
b) The choice aspect is what distinguishes conscious from subconscious processes. The latter happen automatically, in the background, so that you cannot examine, consider or intervene in them. Consciousness is what gives a person some degree of control over their thoughts and actions, so that they can decide to pursue one path rather than another. This is the aspect of consciousness that underlies what is known as agency, volition, or free will. To achieve such control, the person must not only make sense of the situation at hand, but also conceive a range of potential developments or courses of action (which we call a "prospect"). That prospect then guides the decision about which course of action to pursue.
 
Rather than as a deterministic dynamic system, such a prospect could be modeled as a local field of potential happenings weighted by their subjective probability and value. The weighting means that potential events or actions that are more likely, or more strongly desirable or undesirable, receive more attention or activation. Thus, they are primed to become the next focus of attention. Whether they actually become the focus depends on what happens next, in perception, action or thought: is the potential actualized or not?
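As a toy illustration of such a prospect, one might weight each potential happening by subjective probability times value and treat the normalized weights as attention priming. All the numbers and event names below are invented for the example; nothing here is taken from the authors' model.

```python
import random

# A "prospect": potential next happenings weighted by
# subjective probability x value (salience). Hypothetical numbers.
prospect = {
    "answer_phone": {"p": 0.6, "value": 0.5},
    "keep_reading": {"p": 0.3, "value": 0.8},
    "get_coffee":   {"p": 0.1, "value": 0.3},
}

def salience(entry):
    # More likely and more strongly valued potentials get more weight.
    return entry["p"] * entry["value"]

weights = {k: salience(v) for k, v in prospect.items()}
total = sum(weights.values())
attention = {k: w / total for k, w in weights.items()}  # priming

# Which potential is actualized depends on what happens next;
# here, a weighted draw stands in for that contingent "collapse".
focus = random.choices(list(attention), weights=attention.values())[0]
```

The normalized `attention` values play the role of priming: the highest-salience option is the most likely, but not the guaranteed, next focus.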
 
The process of actualizing one of the potential happenings can be modeled by analogy with the "collapse of the wave function" in quantum mechanics. The local field of prospect is similar to a wave or probability distribution, centered on the present focus of attention, while diffusing away from it in the directions of highest probability or desirability. The collapse recenters it on a new focus, determined by the last event that affected the conscious state. From there, it immediately starts diffusing again towards the most strongly associated potential developments, until a new event collapses it around that new focus of attention.
 
The collapse does not need to be discontinuous, like in quantum mechanics, although it can be. An example of a discontinuous collapse could be a Gestalt switch, an "Aha!" experience, or the appearance of a new phenomenon. A continuous "collapse" is more like the fast, but continuous, focusing of a camera on a particular object within its field of vision, or the narrowing of the beam of a flashlight from a wide angle to a more focused one.
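The continuous diffuse-then-collapse cycle described above can be visualized with a one-dimensional Gaussian "field" whose spread grows during diffusion and shrinks when a new focus is actualized. This is a geometric analogy only, not a neural model; all the parameters are invented.

```python
import numpy as np

# 1-D sketch: the field diffuses away from the current focus,
# then "collapses" (recenters and narrows) on a new focus.
x = np.linspace(-10, 10, 401)

def field(center, width):
    """Normalized Gaussian over x, standing in for the local field."""
    f = np.exp(-((x - center) ** 2) / (2 * width ** 2))
    return f / f.sum()

attention = field(center=0.0, width=1.0)   # focused on the present
diffused  = field(center=0.0, width=3.0)   # prospect spreading out
collapsed = field(center=4.0, width=0.8)   # new event recenters the field

def spread(f):
    """Standard deviation of the field: wide while diffusing,
    narrow after a collapse onto a new focus."""
    mean = (x * f).sum()
    return np.sqrt((((x - mean) ** 2) * f).sum())
```

The "collapse" here is just the transition from the wide `diffused` profile to the narrow `collapsed` one centered on the new focus; done gradually, it resembles the camera-focusing picture rather than the discontinuous quantum one.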
 
Neural dynamics
 
On the neural level, the local field or wave may be realized as a "resonant" or "reverberating" pattern of activation circulating across an assembly of neurons. That means that the process of circulating activation is self-maintaining, providing it with sufficient stability to persist for a short while in working memory or in the "global neuronal workspace". That allows other parts of the brain (eg incoming perceptions) to add their own activation (interpretation) to it. These perturbations will shift the self-maintaining pattern somewhat. This shifting could be modeled using our simulations, from the Templeton project, of how chemical organizations change under the influence of perturbations. The "collapse" then corresponds to the settling of a shifting pattern into a new attractor (resonance, self-maintaining organization). In between attractors, the shift is continuous.
 
Given that the overall dynamics is highly non-linear, and that "perturbations" come from a wide variety of independently functioning brain circuits, the result of the collapse is in general unpredictable, yet far from random or arbitrary. This looks like a realistic model of "free will", in the sense of a mechanism that makes non-deterministic, yet meaningful or intelligent, decisions. Note that in spite of the "quantum" character of unpredictable collapses, the process does not actually require quantum effects at the subatomic level: the chaotic dynamics inherent in the brain is sufficient to explain this kind of dynamics.
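The settling-into-an-attractor picture is close to classical attractor-network models. A minimal Hopfield-style sketch (a standard textbook construction, not the authors' Templeton simulations) shows a perturbed activity pattern relaxing back into a stored self-maintaining pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hopfield-style sketch: stored activity patterns act as attractors;
# a perturbed state "collapses" into the nearest one.
N = 64
patterns = np.sign(rng.standard_normal((3, N)))   # three +/-1 attractors
W = (patterns.T @ patterns) / N                   # Hebbian weights
np.fill_diagonal(W, 0)                            # no self-connections

def settle(state, steps=20):
    """Iterate the sign dynamics until the state settles."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Perturb pattern 0 by flipping a few units, then let the dynamics settle.
noisy = patterns[0].copy()
flip = rng.choice(N, size=6, replace=False)
noisy[flip] *= -1
recovered = settle(noisy)
```

Here the stored patterns play the role of resonant, self-maintaining organizations; the perturbation shifts the state, and the update dynamics "collapse" it back into the nearest attractor.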
 
Qualia inversion
 
As a concrete illustration of how the local field theory resolves the hard problem of consciousness, we will look at the thought experiment known as "spectrum inversion". If the zombie argument were correct, then subjective experience would not be necessary for acting in a human-like way. That would mean that two different people could in principle have completely different subjective experiences of the phenomena they encounter, and still behave in the same way.
 
For a simple version of the thought experiment, imagine that whenever I see the quale of "blue", you actually see a different quale, which is equivalent to my quale of "red". Vice versa, when I see "red" then you see "blue". As long as we agree about which things we designate as "blue", respectively as "red", it would seem that we can never find out that our subjective experiences are actually different.
 
In contradiction with this assumption, the local field theory says that the experience of "red" is not just an independent quale, but a field of associations that point towards a range of implications, expectations and actions. The field of "blue" is very different from the field of "red". For example, we interpret red as a warm (and therefore attractive) color, which however may signal a danger (and in that respect is aversive). Blue, on the other hand, we associate with cold (aversive), but also with calmness or serenity (attractive). Therefore, we are likely to react differently to red or blue qualia. Inverting these qualia would make an observable difference in behavior, in contradiction with the zombie assumption.
 
As a fun illustration, below are two poems generated by ChatGPT about each color, listing some of these common associations that are clearly pointing in different directions:
 
Red, the color of passion and fire,
 
A hue that inspires both love and desire.
 
It's the shade of a rose, in bloom and alive,
 
A symbol of beauty, that will always survive.
 
Red, the color of blood, a reminder of life,
 
A symbol of courage, in the face of strife.
 
It's the tint of a sunset, at the end of the day,
 
A reminder of beauty, that will never fade away.
 
Red, the color of danger, a warning to heed,
 
A signal to stop, before we proceed.
 
It's the tint of a flame, burning bright and hot,
 
A reminder of power, that should be respected a lot.
 
Blue, the color of the sky and the sea,
 
A hue that brings peace and tranquility.
 
It's the shade of a sapphire, shining bright,
 
A symbol of wisdom, that guides us through the night.
 
Blue, the color of sadness, a feeling we all know,
 
A reminder that sometimes, we need to let go.
 
It's the tint of a bruise, a sign of healing pain,
 
A reminder that we're alive, to feel again.
 
Conclusion
 
We have proposed a new theory of subjective experience that starts from its functional aspect: why it is good for biological organisms and for human individuals not only to have consciousness, but also to deepen and expand it as much as possible. The more accurately and intensely you sense, feel and evaluate phenomena, the better you will know which reaction may be appropriate. And the wider the range of potential reactions you are able to consider, the better the eventual choice you will make. Therefore, we can expect that consciousness will expand, both over the evolutionary history of life and over the personal history of individual development. In further research, we plan to apply our theory in order to help people expand their consciousness in the most effective way.
 
--
 
Prof. Francis Heylighen
 
Director Center Leo Apostel, Vrije Universiteit Brussel
 
https://clea.research.vub.be/en/FrancisHeylighen
 
Jason Hu <jasonth...@gmail.com>: Mar 06 06:22AM -0700

Dear Francis,
 
It's great that you agree with my conjecture! How about we schedule a Club of Remy discussion session with you, Shima, and me as discussants? (We schedule a session whenever we have three discussants focusing on the same topic, which needs to be "important and urgent" as a club criterion.) This topic certainly qualifies! I think we are targeting a principle as fundamental as Gödel's theorem. If you're hesitant to use the term "field", I agree with your hesitancy, since "field" implies infinite reach, which contradicts the concept of "locality." I think we are actually targeting a fundamental human limit, which may previously have been addressed or noted by Hayek, Popper, Gödel, etc., through different cognitive lenses (economics, philosophy of science, mathematics). Still, we focus on basic and general human cognitive activities of knowing-thinking-doing. (I had suggested in several meetings with colleagues that we should change the term "observer" to "OTA": Observer-Thinker-Actioner, all integrated.) Clarifying the limitation of the human OTA in a new Theory of Locality of Cognition helps to avoid "the Fatal Conceit" (per Hayek), to practice "Piecemeal Engineering" (per Popper), and to serve as a vaccine against "Abuse of Value" (per myself).
 
If you agree, then each of us prepares seven slides presenting the key points we want to bring to a Zoom discussion (see the "Magic Seven Slides Rule" on the clubofremy.org front page). We need to agree on a date. We have our regular topic discussions on Wednesdays and book-reading sessions on Fridays. Our regular meeting times (before the summer-time change) are listed below; we keep the timeslots relatively fixed for the convenience of members who are interested in attending our sessions.
- 7 AM US West Coast
 
- 8 AM Phoenix
 
- 9 AM Chicago
 
- 10 AM US East Coast
 
- 3 PM London, Lisbon
 
- 4 PM Amsterdam, Madrid, Rome, Paris, Maribor, Ljubljana
 
- 6 PM Moscow
 
Please let me know ASAP what date you would like for this session, since
I'll be sending out our next meeting reminders to CoR members; some of them
might be interested in this too.
 
All best - Jason
 
 

Louis Kauffman

unread,
Mar 8, 2023, 12:22:30 PM3/8/23
to cyb...@googlegroups.com
Dear Jason,
I would like to join a Club Remy discussion on "Sentience and AI” if we have one.
Best,
Lou Kauffman

Jason Hu

unread,
Mar 8, 2023, 12:40:29 PM3/8/23
to cyb...@googlegroups.com
Dear Lou, of course! You just need to initiate one by giving me a one-page TAO - Title/Abstract/Outline of your thoughts. I'll post it as a "Call for Discussants." If two or more members of CoR sign up, we schedule a session! Everyone prepares up to 7 slides (you once used just one, and it still worked) for about 20 minutes of presenting, and then you have your happy 2-hour conversation, recorded for future students. You should lead more of these sessions. Best - Jason

Joshua Madara

unread,
Mar 8, 2023, 1:55:49 PM3/8/23
to cyb...@googlegroups.com
Lou, I would be interested in that discussion, as well.

Peter, welcome to CYBCOM!

Jason, thank you for everything!

Sincerely,
Joshua

Louis Kauffman

unread,
Mar 8, 2023, 2:12:57 PM3/8/23
to cyb...@googlegroups.com
Can do. It would be early April. But the session on “consciousness” I could join whenever you are holding it.

Louis Kauffman

unread,
Mar 9, 2023, 2:06:01 AM3/9/23
to cyb...@googlegroups.com
Dear Jason,
Here is a suggestion for a Session. As I said early April can work for me.
Best,
Lou 


Title: Sentience and AI
Abstract: Recently, conversational programs such as ChatGPT have been made available for public experimentation.
These programs handle language well enough to make summaries of aspects of knowledge, prove elementary mathematical theorems, and appear to hold
conversations with the humans who operate them. The purpose of this meeting of Club Remy is to discuss the properties of such programs and how their introduction into
the conversational domain of human beings affects the structure of our thought and action. I suggest this theme rather than any attempt to discuss whether the programs are sentient.
They are not sentient. This is what makes the situation so interesting - we are likely to endow the behaviours of AI programs with qualities that they at best possess only in their
interaction with observers. This is a present-day experiment in observing systems.

Here is a sample from ChatGPT:

[ChatGPT sample image not preserved in the archive]

Miguel Marcos Martinez

unread,
Mar 9, 2023, 2:41:48 AM3/9/23
to cyb...@googlegroups.com
I’d love to attend as well. Could I ask also for key terms to be defined? I am in the camp that does not attribute sentience to any LLM or any other type of generative AI, but I think it serves a talk like this well to define ‘sentience’, for example, especially since the claim that ChatGPT is not sentient is explicit in the description of the talk, even if it is not the subject.

Defining terms like this is itself controversial, but setting a “local” benchmark against which to measure subsequent claims in the talk helps to contextualize a great deal and possibly avoid misunderstandings.

Thanks. 
Miguel 


Francis Heylighen

unread,
Mar 9, 2023, 5:28:18 AM3/9/23
to cyb...@googlegroups.com, Francis Heylighen, Shima Beigi, Francis Heylighen
I see two stubborn misunderstandings coming up again and again in discussions of the “hard problem of consciousness”, or more concretely the question of why we have subjective experience:

1) why is this experience subjective, i.e. why cannot we just objectively perceive our situation?
2) why do we actually experience the situation, i.e. why cannot we just process the available information the way a computer does, mechanically, without any accompanying feeling or experience?

The answer to question (1) should be obvious to cyberneticists and autopoiesis theorists: of course, an organism can only “know” the world from its local, subjective perspective. There is no such thing as objective knowledge or observer-independent information. 

What our “local field theory” of consciousness proposes is an answer to question (2). This is obviously needed since Peter Cariani (as quoted below), together with many others, still labors under the impression that experience is merely an epiphenomenon, i.e. something that does not have any survival function, and which we could just as well do without.

Forget about quantum mechanics and relativity theory, which are at best only inspirations for an eventual mathematical model of the process we are describing, but which are irrelevant for the actual physical mechanism.

Peter’s example of the sleepwalking murderer (i.e. someone performing a complex action without being conscious) is instructive because it is so obviously extreme and exceptional (and one could rightly wonder whether this really happened the way it is reported). Yes, there are plenty of things we can do without being conscious of our actions, but these normally do not include courses of action that require a complex sequence of decisions, such as murdering someone. Subconscious processing of information is the brain’s default mode for automated, routine activities, such as walking, seeing, or parsing speech. Conscious processing is much slower and more energy intensive. It will therefore only be used for activities where there is substantial uncertainty about the right course of action. In other words, consciousness is what allows intelligent, deliberate choice between a range of potential thoughts, interpretations or actions.

The choice-making process in the brain is of course not structured as an objective, algorithmic decision procedure with well-defined alternatives. The answer to question (1) already excludes that interpretation. Therefore, standard information processing or cognitive science accounts, such as measuring the capacity of working memory, are not sufficient to describe it. Our “local field” model instead considers a highly contextual, fuzzy, transient and continuous “wave” of potentialities. Moreover, that “wave” is not just a probability distribution (the way wave functions in quantum mechanics are usually interpreted), but a “field”, i.e. a system of drives or “forces” pushing and pulling in different directions, with the different drives emanating from different circuits in the brain.

There is no obvious or easy way to “measure” the performance of such a field in guiding our decision-making, unlike the storage capacity of working memory, as Peter correctly notes. But I am certain that indirect measures are possible, because there is a clear difference between people acting consciously and people acting in an automatic, unconscious manner. 

One measure we recently considered is the “perplexity” of language produced. Perplexity is a way to quantify the degree of unexpectedness of words and phrases. The simplest way to determine whether a text was written by ChatGPT or other AI programs rather than by a human is when the text has low perplexity. That is because ChatGPT generates text automatically on the basis of the recurrent patterns it has learned, thus generating highly predictable phrases. Our working hypothesis is that the more “conscious” you are when talking or writing, the wider the range of ideas and words you will consider, and therefore the higher the perplexity of the language you produce. It’s not clear yet whether this hypothesis can be empirically confirmed, but if it does not pan out, I am sure there will be other ways to operationally test our hypothesis.
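For concreteness, the perplexity measure can be sketched with a toy language model. This is a minimal illustration, assuming a Laplace-smoothed unigram model (real AI-text detectors score text with a neural language model's token probabilities); the function name and example corpora are hypothetical:

```python
import math
from collections import Counter

def unigram_perplexity(train_tokens, test_tokens):
    """Perplexity of test_tokens under a unigram model fit to train_tokens.

    perplexity = exp(-(1/N) * sum_i log p(w_i)); higher values mean the
    text is less predictable under the model. Laplace smoothing keeps
    unseen words from yielding zero probability.
    """
    counts = Counter(train_tokens)
    vocab = len(counts) + 1  # +1 slot for unseen words
    total = len(train_tokens)
    log_prob = 0.0
    for w in test_tokens:
        p = (counts[w] + 1) / (total + vocab)  # Laplace-smoothed probability
        log_prob += math.log(p)
    return math.exp(-log_prob / len(test_tokens))

corpus = "the cat sat on the mat the cat ran".split()
predictable = "the cat sat on the mat".split()
surprising = "quantum mat umbrella cat".split()

# The formulaic sentence scores lower perplexity than the unusual one.
print(unigram_perplexity(corpus, predictable) < unigram_perplexity(corpus, surprising))
```

On this toy model the formulaic sentence scores a lower perplexity than the one drawing on a wider range of words, which is exactly the contrast the hypothesis relies on.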

One other possible operationalization would be to observe a person’s behavior and measure the degree to which this behavior is predictable, using common, standard routines, rather than decided step by step. Imagine that you are walking to your destination in the usual manner, not paying attention to where you put your feet. On a regular surface, your walking gait is likely to be highly regular and predictable. Imagine now that you are a bored child, who has made it a game to walk according to a particular system, e.g. avoiding to step on the borders separating pavement stones, or aiming to touch every fallen leaf with your toes. Or imagine simply that the road is muddy and that you need to be very careful where you place your feet in order not to get wet. In the latter case, your walking behavior will be both very conscious and very non-routine (i.e. unpredictable for someone who does not know what you are paying attention to). I’m pretty sure that one could develop an operational procedure to distinguish the first type of unconscious, routine walk from the second, conscious one...
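One hedged way such an operational procedure could start is by discretizing the step sequence into symbols and comparing Shannon entropies. A minimal sketch under an assumed, purely hypothetical coding of stride lengths (the data are invented for illustration):

```python
import math
from collections import Counter

def sequence_entropy(symbols):
    """Shannon entropy (bits per symbol) of a discrete behavior sequence.

    A routine, automatic walk yields a near-constant symbol stream (low
    entropy); a step-by-step, consciously chosen walk yields a varied
    one (higher entropy).
    """
    counts = Counter(symbols)
    n = len(symbols)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical coding: each stride bucketed as Short, Medium, or Long.
routine_walk = list("MMMMMMMMMMMM")    # even gait on a regular surface
conscious_walk = list("MSLMLSSMLSML")  # picking a path through the mud

print(sequence_entropy(routine_walk))    # 0.0 bits: fully predictable
print(sequence_entropy(conscious_walk))  # about 1.58 bits: varied strides
```

A real study would of course need a defensible discretization and controls, but the contrast between the two regimes is what any such measure would have to capture.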

Another reason why I am convinced that subjective experience has a profound, concrete function is the observation of my own consciousness through meditation and mindfulness exercises. I have the impression that philosophers and theoreticians who see subjective experience merely as some weird, abstract epiphenomenon, which we could very well do without, are not in touch with their own bodily sensations. The closer you observe your own feelings and experiences, the clearer it becomes that these feelings are constantly anticipating, directing and guiding your further thoughts, feelings and actions. This happens through an ever-shifting dynamics of associations evoking further associations, which may or may not result in an externally observable change of behavior. 

The whole idea of a zombie without subjective experience, yet behaving indistinguishably from a normal human, for me is an aberration that only a philosopher alienated from his own body could conceive. (I’m using the masculine “his” on purpose, as I suspect women tend to be more in touch with their actual feelings).

Finally, I want to reply to the kind of criticism I expect to get from Peter and others, which is that the local field theory is probably reducible to the complex dynamics of activation in neuronal networks, and that it therefore can only at best explain the  “neural correlates” of consciousness. From this perspective, neuronal activity is the “objective”, “physical” aspect of consciousness. On the other hand, this perspective assumes that subjective experience cannot be reduced to neural correlates. My answer to this is the same as to question (1): of course, from a first-person perspective, experience is subjective, affective, felt, etc. From a third-person perspective, it may be modelled as an apparently objective pattern of neural activity. But from an autopoietic perspective, both perspectives are subjective. 

Both are merely simple models, constructed by an observer, of an infinitely complex external reality that we cannot directly access. None is intrinsically “more real” or more “fundamental” than the other. Understanding consciousness does not imply that you reduce the “subjective” model to the “objective” one: complete reduction of one model to another is almost never possible. Therefore, we should not be surprised that one model captures aspects that the other misses. The fact that certain aspects of experience are not captured well by the “neural correlates” model does not imply that these aspects are merely “epiphenomenal”, and that we therefore cannot explain their function. 

I see the local field theory as an “in-between” model, which clarifies the link between neuronal activity and the felt, subjective experience that accompanies consciousness. Being a model, it will of course be incomplete, and too simple to capture all the aspects of experience. But I hope it will lay to rest this enduring misconception that experience does not have any function or purpose…

Francis



 


Prof. Dr. Francis Heylighen
Director, Center Leo Apostel

Louis Kauffman

unread,
Mar 9, 2023, 3:15:59 PM3/9/23
to cyb...@googlegroups.com
I think that we can go with this announcement as written.
One of the challenges of the session will be to “define” sentience.
There is no way to define sentience in a formal way, as I can define a prime number.
Words in English have definitions, and they are ultimately circular, depending for grounding on our experience.

Peter Cariani

unread,
Mar 16, 2023, 4:54:05 PM3/16/23
to cyb...@googlegroups.com
My apologies -- I sent this out on March 9th via the wrong email server, so it did not make it to CYBCOM. I've edited it.

On Mar 9, 2023, at 2:51 PM, Peter Cariani <car...@icloud.com> wrote:

Hi Francis,


On Mar 9, 2023, at 5:28 AM, Francis Heylighen <fhey...@vub.ac.be> wrote:

I see two stubborn misunderstandings coming up again and again in discussions of the “hard problem of consciousness”, or more concretely the question of why we have subjective experience:

1) why is this experience subjective, i.e. why cannot we just objectively perceive our situation?

2) why do we actually experience the situation, i.e. why cannot we just process the available information the way a computer does, mechanically, without any accompanying feeling or experience?

The answer to question (1) should be obvious to cyberneticists and autopoiesis theorists: of course, an organism can only “know” the world from its local, subjective perspective. There is no such thing as objective knowledge or observer-independent information. 

I don't think this is an answer to my complaint that the Hard Problem of Chalmers is a problem of fundamental existence, i.e. why is there subjective experience in the first place. These questions themselves don't look like misunderstandings per se -- you might want to explain what exactly are the misunderstandings to which you are referring. Unpack that assertion. 

I am a hard-core empiricist, psychological constructivist, and non-realist so I entirely agree that there is no knowledge that is completely independent of observers. What we call "objectivity" is when there are intersubjectively verifiable observations that, by social agreement, involve observables (public, calibrated measuring processes) that will yield similar observations no matter who is the observer or what they believe. For example, if I measure the length of a board with a ruler and find it to be 1 meter long (within 1 cm), I expect that anyone else using a properly calibrated ruler will measure that board length to be 1 m. You do not need to adopt a realist metaphysics to have  measurement processes that are highly replicable across observers.


What our “local field theory” of consciousness proposes is an answer to question (2). This is obviously needed since Peter Cariani (as quoted below), together with many others, still labors under the impression that experience is merely an epiphenomenon, i.e. something that does not have any survival function, and which we could just as well do without.

I use the term epiphenomenon advisedly because it is apt to be misinterpreted. Here, for me, it just means that the causal relation between material, neural substrates and conscious awareness is one-way. Our phenomenal, subjective experience is produced as a concomitant of particular organizations of neuronal behavior, but the subjective experience itself does not modulate neural activity (the neural processes that generate the subjective experience in tandem with external stimuli DO cause subsequent neural states). It may be that I am using a more restrictive conception of "causal" linkage here.

As far as I know, all conscious experiences come after their neural correlates (e.g. readiness potentials). There is always a time lag between the neural processes and our awareness of them. The brain has made up its mind before we become aware of the decision. I know it seems counterintuitive, and operates against Cartesian and other theories. 

Do you have a different model of the relation between brain states and awareness? 
Can you explain how you think about that?


Forget about quantum mechanics and relativity theory, which are at best only inspirations for an eventual mathematical model of the process we are describing, but which are irrelevant for the actual physical mechanism.

OK, will do.


Peter’s example of the sleepwalking murderer (i.e. someone performing a complex action without being conscious) is instructive because it is so obviously extreme and exceptional (and one could rightly wonder whether this really happened the way it is reported). Yes, there are plenty of things we can do without being conscious of our actions, but these normally do not include courses of action that require a complex sequence of decisions, such as murdering someone. Subconscious processing of information is the brain’s default mode for automated, routine activities, such as walking, seeing, or parsing speech. Conscious processing is much slower and more energy intensive. It will therefore only be used for activities where there is substantial uncertainty about the right course of action. In other words, consciousness is what allows intelligent, deliberate choice between a range of potential thoughts, interpretations or actions.

I think we need to avoid conflating all informational processes in brains with awareness. There is an enormous amount of information processing going on that is subliminal -- all those aspects of brain activity and functions of which we are not aware.

Let's also be careful about conflating awareness with attention per se.

I should say that these ideas of which I am wary are commonly held, and I think my position is possibly a minority view. The assumptions are:

1) Everything we see in biology, brains, and behavior must have a survival-related function (this rules out structural imperatives out of hand). Lots of people think that brain rhythms must have functional significance because of their ubiquity. However, many physical systems have natural resonances, so oscillations may simply indicate which neuronal populations are excited or inhibited. The oscillations themselves don't necessarily have a causal, necessary role in information processing -- it is entirely possible that the neural codes involved in informational processes are quite independent of oscillations (in the auditory nerve, there are some neural resonances in the form of mildly preferred modulation frequencies, but the neural coding at that level appears to be quite independent of these resonances). The functional significance of oscillations remains an open question.

2) Conscious awareness must have survival value because humans and animals are most responsive during waking states (and therefore better perform survival-related informational functions). But awareness per se may depend on the same underlying neuronal mechanisms/processes as working memory, attention, perception, cognition, and other informational functions.

The choice-making process in the brain is of course not structured as an objective, algorithmic decision procedure with well-defined alternatives.

It appears to be a heterarchical competition between neuronal processes associated with different choices. There are examples of lesions to orbitofrontal cortical areas that can knock out the ability of brains to settle on one choice or another. In my class I used David Eagleman's six-part series The Brain, which aired on PBS and maybe also the BBC. It has lots of examples of the kinds of phenomena of which I speak. In it there is an example of a woman, formerly an engineer, who suffered orbitofrontal damage that left her incapable of deciding which vegetables to buy at the supermarket (e.g. do I buy onions or potatoes?).


The answer to question (1) already excludes that interpretation. Therefore, standard information processing or cognitive science accounts, such as measuring the capacity of working memory, are not sufficient to describe it.

The neural global workspace model that you endorse, like all neurally based theories, does (qualitatively) explain why experiences are individual and private, on the assumption that particular neural organizations of activity (global signal regeneration, which I interpret in terms of organizational closure) produce experiential concomitants. These models, however, don't explain at all why such experiences should exist in the first place, or why they should be produced as concomitants of global, regenerative activity. So they describe neural-phenomenal correlations, but cannot address the metaphysical questions of why these should exist. 

I have come to the conclusion that we must regard conscious awareness as a fundamental aspect of the universe that depends entirely on particular organizations of material processes. It's a weird state of affairs, but I don't see any way out of it.


Our “local field” model instead considers a highly contextual, fuzzy, transient and continuous “wave” of potentialities.

What does the wave consist of? What is the material substrate? I'm tempted to make a joke about "hand waving", but I won't, well probably not. . .


Moreover, that “wave” is not just a probability distribution (the way wave functions in quantum mechanics are usually interpreted), but a “field”, i.e. a system of drives or “forces” pushing and pulling in different directions, with the different drives emanating from different circuits in the brain.

OK, fine. 
But what are the fields and forces you are invoking? 
What pray tell is their relation to neuronal activity?


There is no obvious or easy way to “measure” the performance of such a field in guiding our decision-making, unlike the storage capacity of working memory, as Peter correctly notes. But I am certain that indirect measures are possible, because there is a clear difference between people acting consciously and people acting in an automatic, unconscious manner.

OK, well, medical people (neurologists) do have some practical tests that they use to assess someone's responsiveness and connection with external surroundings. They have estimated levels of consciousness going from awake down to deep coma. That's fine -- usually I agree that high levels of awareness co-occur with focused attention and enhancement of working memory. And I think that aspects of the same neuronal processes that subserve working memory and attention also produce our experience and its specific contents. This is a longer discussion.


One measure we recently considered is the “perplexity” of language produced. Perplexity is a way to quantify the degree of unexpectedness of words and phrases. The simplest way to determine whether a text was written by ChatGPT or other AI programs rather than by a human is when the text has low perplexity. That is because ChatGPT generates text automatically on the basis of the recurrent patterns it has learned, thus generating highly predictable phrases. Our working hypothesis is that the more “conscious” you are when talking or writing, the wider the range of ideas and words you will consider, and therefore the higher the perplexity of the language you produce. It’s not clear yet whether this hypothesis can be empirically confirmed, but if it does not pan out, I am sure there will be other ways to operationally test our hypothesis.

Contra Tononi's theory and "perplexity" of behavior, I don't think that awareness is based on a complexity threshold. Instead it involves closing a regenerative loop, such that one has global, sustained informational states defined by which particular sets of neuronal signals are persisting.

The complexity of our awareness (how many distinctions we hold in our minds at a given time, how complex the set of actively regenerated signals is) is orthogonal to awareness itself, which I think is a simple closure. When you abolish global signal regeneration, as in deep sleep, anesthesia, or seizure, awareness is abolished. This is basically how global neuronal workspace theories explain these different states.


One other possible operationalization would be to observe a person’s behavior and measure the degree to which this behavior is predictable, using common, standard routines, rather than decided step by step. Imagine that you are walking to your destination in the usual manner, not paying attention to where you put your feet. On a regular surface, your walking gait is likely to be highly regular and predictable. Imagine now that you are a bored child, who has made it a game to walk according to a particular system, e.g. avoiding to step on the borders separating pavement stones, or aiming to touch every fallen leaf with your toes. Or imagine simply that the road is muddy and that you need to be very careful where you place your feet in order not to get wet. In the latter case, your walking behavior will be both very conscious and very non-routine (i.e. unpredictable for someone who does not know what you are paying attention to). I’m pretty sure that one could develop an operational procedure to distinguish the first type of unconscious, routine walk from the second, conscious one...

I agree that one can quantify complexity of behavior under certain conditions (but all metrics of complexity are in the eye of the beholder -- they require some frame of expectations that then are confirmed or violated to some degree).


Another reason why I am convinced that subjective experience has a profound, concrete function is the observation of my own consciousness through meditation and mindfulness exercises. I have the impression that philosophers and theoreticians who see subjective experience merely as some weird, abstract epiphenomenon

As I tried to make clear, I DO NOT regard our experience as irrelevant or in any sense less "real" than physical process. I don't like eliminative materialists either.


, which we could very well do without, are not in touch with their own bodily sensations. The closer you observe your own feelings and experiences, the clearer it becomes that these feelings are constantly anticipating, directing and guiding your further thoughts, feelings and actions. This happens through an ever-shifting dynamics of associations evoking further associations, which may or may not result in an externally observable change of behavior. 

OK, but bodily experiences are in the same category as experiences in general.


The whole idea of a zombie without subjective experience, yet behaving indistinguishably from a normal human, for me is an aberration that only a philosopher alienated from his own body could conceive. (I’m using the masculine “his” on purpose, as I suspect women tend to be more in touch with their actual feelings).

I always try to avoid attributing mental characteristics to one biological sex or another, without strong accompanying evidence. There are obviously some sex-related differences due to genetic and hormonal differences, e.g. females are more susceptible to autoimmune diseases and ME/CFS. Some people focus more on their bodies, some live more in their emotions, some in their thoughts. Each of us has a different set of propensities. I know women and men of all sorts.


Finally, I want to reply to the kind of criticism I expect to get from Peter and others, which is that the local field theory is probably reducible to the complex dynamics of activation in neuronal networks, and that it therefore can only at best explain the  “neural correlates” of consciousness.

Well, you need to draw out the linkages. It is a long way to get from some set of neuronal activity to "fields", however you construe them. (BTW, the Gestaltists had a field theory of brains and minds.)


From this perspective, neuronal activity is the “objective”, “physical” aspect of consciousness.

Only if "objective" here means intersubjectively verifiable,
i.e. that we can make public measurements that will give us comparable results.
That means that we need to observe brains and make predictive models from observed
brain states.

This perspective is not necessarily realist -- you are projecting onto this perspective a realist metaphysics. 


On the other hand, this perspective assumes that subjective experience cannot be reduced to neural correlates.

Subjective experience can potentially be predicted if we have sufficient descriptions of neural correlates and the right bridge laws that then predict our experience on the basis of specific neuronal patterns. But this does not mean that subjective experience can be "reduced" to patterns of neuronal behavior. They are completely disjoint sets of observables -- this is why bridge laws are necessary.


My answer to this is the same as to question (1): of course, from a first-person perspective, experience is subjective, affective, felt, etc. From a third-person perspective, it may be modelled as an apparently objective pattern of neural activity. But from an autopoietic perspective, both perspectives are subjective.

All perspectives are subjective, but some involve public observables that are consensually adopted and calibrated so as to be able to replicate measurements.


Both are merely simple models, constructed by an observer, of an infinitely complex external reality that we cannot directly access. None is intrinsically “more real” or more “fundamental” than the other. Understanding consciousness does not imply that you reduce the “subjective” model to the “objective” one: complete reduction of one model to another is almost never possible.

I agree with that. I am a non-reductionist -- my working ontology is hylomorphism -- multiple aspects.

Some models explain and predict more phenomena more precisely than others.
Models can be judged and compared vis-a-vis some specified purpose.

Therefore, we should not be surprised that one model captures aspects that the other misses. The fact that certain aspects of experience are not captured well by the “neural correlates” model does not imply that these aspects are merely “epiphenomenal”, and that we therefore cannot explain their function. 

Again, you are projecting assumptions and conclusions that are not necessarily held by me.
It creates straw men and false dichotomies and muddles the discussion.

What specific aspects of experience do you think cannot be captured by neuronal models?

Again, I am sorry I used the e-word. It unfortunately evokes involuntary (and subliminal) knee jerk reflexes.

I see the local field theory as an “in-between” model, which clarifies the link between neuronal activity and the felt, subjective experience that accompanies consciousness.

So give us the neural part of it. Try to sketch all that out before you claim that you've solved the mind-body problem. Try to avoid over-hyping your theory. It is an idea in its formative stages, so the proper attitude is to be humble about it and also to be sure to acknowledge the history of similar ideas.


Being a model, it will of course be incomplete, and too simple to capture all the aspects of experience. But I hope it will lay to rest this enduring misconception that experience does not have any function or purpose…

As far as I can see, it has not laid to rest at all the questions of function or causality. Those questions were side-stepped. You are still making conflations that are not well justified. It all bears further discussion -- conversation.

take care, Peter

Miguel Marcos Martinez

unread,
Mar 16, 2023, 5:13:21 PM3/16/23
to cyb...@googlegroups.com
Peter, lots to chew on here but one thing immediately caught my eye.

“There is always a time lag between the neural processes and our awareness of them. The brain has made up its mind before we become aware of the decision.”

Who or what is this ‘our’ and ‘we’ you refer to in the above sentences? It appears to be a different entity from the brain/neuronal processes. Or am I reading this incorrectly?

Miguel

Shann Turnbull

unread,
Mar 16, 2023, 5:32:03 PM3/16/23
to cyb...@googlegroups.com, Francis Heylighen, Shima Beigi, Francis Heylighen, stur...@alumni.harvard.edu
Hi Francis
I think you have made good points.
But to be sure, might you agree to change the word “information”, which you used only four times, to the word “data”, except on the second occasion, where I think you mean “semantic communication”?
A problem in testing your hypothesis about the complexity of language is that it can be so context dependent on internal physiological, neurological, and/or external factors that it is not detected by ChatGPT. 
In my latest article, which presents six hypotheses, I cite on page 77 Kelso and Engstrøm, who reported: ‘Experiments show that the human brain is capable of displaying two apparently contradictory, mutually exclusive behaviours at the same time.’
GAP_Journal of BESS #4.2_v1 with changes.pdf