On the concept of consciousness

Ricardo Sanz

Aug 8, 2023, 9:13:27 AM
to ontolo...@googlegroups.com
Hi,

In the previous thread on consciousness conferences I was merely answering Alex's question, not trying to open a discussion. However, thanks to John's comments, I have changed my initial intention.

Consciousness is indeed a good, hot topic for an ontology discussion.

Indeed, I am involved in building ontologies of awareness and consciousness, both for humans and robots. I know pretty well the current status of "consciousness science" that John summarily described (quite negatively). 

John said:
"Nobody has the slightest clue about how to detect consciousness, even in human beings."
This is false. For example, I have clues that John was conscious when he wrote this particular sentence.
Doctors do that (act on clues of consciousness). Parents do that. Teachers do that. Chatters do that. Almost every animal does that.

A better sentence would be:
"Nobody has an absolute certainty about how to detect all states of consciousness, even in human beings."
As far as I know, this is true; e.g., considering the difficulties associated with abnormal states like the locked-in syndrome that John mentioned.

There are some reasonably reliable methods for detecting the healthy state of consciousness, but none of them is 100% reliable. False negatives are indeed what is most feared.

As John suggested, there is no falsifiable, globally accepted theory of consciousness yet. I think we are in an early stage of (pre)conceptualization and it is unclear what we are talking about.

----------

And then ... two "ontology" questions for the forum: 

A) Are "consciousness" and "awareness" the same thing? Do we need one concept or two?

Francis Crick said that he thought both were the same thing, and he said he always used "awareness" except when he wanted to shock the audience.

B) Do we need a wider variety of concepts because "consciousness" is a mongrel concept (e.g. considering qualia, arousal states, drugs, emotion, world-awareness, proprioception, self, etc.)?



Best wishes,
Ricardo




On Fri, Aug 4, 2023 at 4:49 PM John F Sowa <so...@bestweb.net> wrote:
Alex and Ricardo

Nobody has the slightest clue about how to detect consciousness, even in human beings.  There are many, many examples of people who had some kind of injury or altered state where they were unresponsive -- or more precisely, unable to make any voluntary motion.  The physicians in charge recommended removal of life support.  But for one reason or another, they continued life support until the patients "woke up".

Then the patients reported that they had heard all the discussion and were trying to say "No, no, no!"  But they couldn't.    If the best trained physicians can't reliably detect consciousness in a human being, there is ZERO reason to believe any programmer who makes any claims about his or her favorite program.

I certainly admit that consciousness is a very important issue for physicians, biologists, and neuroscientists.  But the best informed people in those fields admit that they have no reliable methods for detecting whether any animals other than humans are conscious.  They're willing to admit that higher mammals are probably conscious.  But they have no reliable criteria for distinguishing conscious decisions from knee-jerk reactions.

Furthermore, this is an ontology forum.  The citations below have ZERO influence on any issue about ontology.  Anybody who has time to waste on idle speculation can read them.  But there are a huge number of important issues that could be discussed at noon on any particular Wednesday.

John

--

UNIVERSIDAD POLITÉCNICA DE MADRID

Ricardo Sanz

Head of Autonomous Systems Laboratory

Escuela Técnica Superior de Ingenieros Industriales

Center for Automation and Robotics

Jose Gutierrez Abascal 2.

28006, Madrid, SPAIN

John F Sowa

Aug 8, 2023, 11:06:45 AM
to ontolo...@googlegroups.com
Ricardo,

Consciousness is a feeling that people have, but nobody can define.  They normally attribute consciousness to other people who behave like themselves under similar conditions.  But they have no definition.

For many years, articles on consciousness were not published in psychological journals because nobody could state a clear definition of consciousness.  Even today, there is no universally accepted definition.  See https://en.wikipedia.org/wiki/Hard_problem_of_consciousness .

People like to think that their pets are conscious because they behave like people in similar circumstances.  But what about a pet turtle, or a pet frog, or a pet insect?  An octopus has a human-like eye, and it is very active.  But an octopus is related to clams and oysters, which don't have eyes and don't resemble humans.  Are they conscious?  There is no universally accepted definition that can answer that question.

Ricardo>  John said: "Nobody has the slightest clue about how to detect consciousness, even in human beings."  This is false. For example, I have clues that John was conscious when he wrote this particular sentence.  Doctors do that (act on clues of consciousness). Parents do that. Teachers do that. Chatters do that. Almost every animal does that.

But expert physicians don't have clear guidance about patients who are in a coma or in various altered states.

They frequently need to distinguish conscious and unconscious states.  And they use the services of anesthesiologists to cause people to become unconscious.  But their only criterion is to ask people "Did you feel any pain?"  And there are huge numbers of patients who are unable to respond, and physicians have no way of detecting whether those patients are conscious.

Please read that Wikipedia article on the hard problem of consciousness.  I have read and studied many publications about neuroscience, and I have an interest in these issues.  But I also know something about knowledge representation and reasoning.  And I realize that there is no issue about consciousness that would be relevant to anything we do in representing and reasoning about ontology.

If the experts can agree on a definition of consciousness, we can translate that definition to some notation for ontology.  Until then, there is nothing useful that we can do.

John

Ricardo Sanz

Aug 8, 2023, 12:24:36 PM
to ontolo...@googlegroups.com
John,

The status of "consciousness" is similar to the status of "life" concerning the availability of definitions and of publications in scientific journals. Obviously, behaviourism made life hard for researchers on consciousness (as it did for researchers on mental representation), but it is no longer the case that there are no scientific publications on the topic. In fact, during the last 20 years or so, I have seen a publication explosion (with both physical and metaphysical profiles :-).

Obviously, one of the main problems in making this endeavour a bit more scientific was the difficulty of externally observing the phenomenon of inner experience. Only verbal report is available for this (as it was in the past concerning memory, map making, and cognitive navigation). Rats cannot tell us about their mind maps, but current scientific theory holds that mental maps are there [1]. "Inability to respond" is a big difficulty, as you say, and hence it is a major thread of medical research (see, for example, Massimini's "zap and zip" approach to clinical measurement of consciousness using TMS).

Although the ineffability of experience is a major problem, the "hard problem of consciousness" is not this problem. The hard problem of consciousness is the problem of "something qualitatively different emerging from physical neurons". Emergence again :-) It is the metaphysical problem of how physics can produce consciousness at all (in the qualia sense), because for many people the mental phenomenon of inner experience is non-physical (this idea of duality is a remnant of Cartesian theories of mind). It is a problem for metaphysicians, not for physicians. In this sense "consciousness is a miracle", using the term that you used in the thread on emergence.

However, consciousness is a multifaceted phenomenon. It also has a purely cognitive, representational aspect (what philosophers call "access consciousness"). This is the aspect of major interest in AI developments (in real-time intelligent controllers, e.g. in robotics). It is from this perspective that "consciousness" and "awareness" seem to be the same thing: the continuous update of mental representations from sensory flows.

Maybe we can keep the term "consciousness" for "qualia-laden awareness". Maybe not. Maybe we also need to introduce the "self" somehow. Maybe when a solid, system-architectural theory of consciousness/awareness is proposed and consolidated, the necessary concepts will be clarified. A major problem for this clarification is the unavoidable anthropomorphism that pervades all things "mental". As Dennett said, "the whole iguana". In this sense, I have the hope that the work on "conscious" AI will help deal with this bias; as it helped in clarifying the relations between human language and human conceptualizations.

Best,
Ricardo

[1] K. Dhein, “The cognitive map debate in insects: A historical perspective on what is at stake,” Stud. Hist. Philos. Sci., vol. 98, no. October 2021, pp. 62–79, 2023, doi: 10.1016/j.shpsa.2022.12.008.

--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info/
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/ontolog-forum/7c4ecd2e53154201af5b05f21b33d12e%40bestweb.net.

Alex Shkotin

Aug 8, 2023, 12:36:31 PM
to ontolo...@googlegroups.com
Ricardo, 

In Anokhin's lecture it was said that there are about 10 theories of consciousness, each with its own definitions. Teams from the two most popular theories decided to conduct a series of experiments: certain properties of consciousness are predicted by both theories, and then one looks at what happens in practice, to see who is closer to the truth.
The recent shift in general opinion has been towards the presence of consciousness in all animals, not just the higher ones.
Well, I state this very approximately, as I remember it.
I agree with John (if I understood him correctly): we are engaged in the formalization of theories, and the construction of a theory itself is the business of the relevant professionals.
An ontologist can collect the definitions from 10 theories into a table and analyze them logically. This is also helpful. That is, a kind of logical comparative analysis of theories gets done.
So the first task may be to compile the list of existing theories.
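The tabular comparison sketched above could start as something as simple as an overlap check on each theory's primary terms. Here is a minimal sketch in Python; the theory names are real, but the term sets below are invented placeholders, not the theories' actual vocabularies:

```python
# Illustrative sketch: collect each theory's primary terms and compare
# them logically. Theory acronyms are real (IIT, GWT, HOT); the term
# sets are invented placeholders, NOT the theories' actual vocabularies.
theories = {
    "IIT": {"experience", "substrate", "integration", "information"},
    "GWT": {"workspace", "broadcast", "attention", "access", "information"},
    "HOT": {"representation", "higher-order state", "access", "experience"},
}

# Pairwise overlap of primary terms: a first, purely logical comparison.
names = sorted(theories)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        shared = theories[a] & theories[b]
        print(a, "&", b, "->", sorted(shared))
```

Nothing here depends on the placeholder terms; the point is only that once primary terms are tabulated, overlap and divergence between theories become mechanically checkable.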

Alex

Tue, Aug 8, 2023 at 16:13, Ricardo Sanz <ricardo.s...@gmail.com>:

Alex Shkotin

Aug 8, 2023, 12:57:16 PM
to ontolo...@googlegroups.com
Ricardo,

My way is to begin from the commonsense meanings of words and terms. That is where we are.
And let me picture it:
[image attachment not shown]
And this is also interesting:
[image attachment not shown]

Alex

Tue, Aug 8, 2023 at 16:13, Ricardo Sanz <ricardo.s...@gmail.com>:
Hi,

--

Ricardo Sanz

Aug 8, 2023, 1:00:37 PM
to ontolo...@googlegroups.com
Thanks Alex,

This is a good proposal: make a catalog of theories and identify the concepts they use. I think there are (many) more than 10 theories, so this will take time :-)

However, my interest in ontology is not just in cataloguing the terms in a domain or text corpus. It is the formalization of, and consistency between, concepts that attracts me more.
I think that the work of the ontologist is essential in the formalization of theories, not just a post-hoc activity. In my view, "the very construction of a theory" is also the business of ontology professionals, especially when the theory is about minds.

Besides this, as the previous posts show, this domain is still in an initial conceptualization phase; the body of concepts is in flux, and the cataloging effort may help clarify the chaos. I remember that 20+ years ago I was participating in a conference on consciousness, and a philosopher said to me: "Now that we have engineers here, maybe things will get clearer". Unfortunately, this is not happening :-). Maybe we didn't work hard enough :-)

Best,
Ricardo

Leo Obrst

Aug 8, 2023, 1:26:07 PM
to ontolo...@googlegroups.com
Ricardo, I agree, and I direct readers to two good overviews of aspects of consciousness, for thorough summaries (up to the date of their writing):

Thanks,
Leo



--
Leo Obrst, lob...@gmail.com

Alex Shkotin

Aug 9, 2023, 3:55:07 AM
to ontolo...@googlegroups.com
Ricardo,

I am glad we think alike; this is an opportunity to unite our efforts (if any ;-):
- One creates a list of theories.
- Another takes one theory from the list and works out its primary terms (concepts, attributes, relations...) and the definitions of the others, including axioms; i.e., looks at this particular theory as an axiomatic one.
- A third, for the same theory, looks for the math structures it uses.
Etc.
But the first step, I think, should be to join https://seminar.math-consciousness.org/

Alex



Tue, Aug 8, 2023 at 20:00, Ricardo Sanz <ricardo.s...@gmail.com>:

Anatoly Levenchuk

Aug 9, 2023, 5:36:58 AM
to ontolo...@googlegroups.com

Alex,
and start with configuration management, to keep track of versions. E.g. IIT (Integrated Information Theory, one of the contemporary theories of consciousness) already has version 4.0.

Integrated information theory (IIT) 4.0: Formulating the properties of phenomenal existence in physical terms

Larissa Albantakis, Leonardo Barbosa, Graham Findlay, Matteo Grasso, Andrew M Haun, William Marshall, William GP Mayner, Alireza Zaeemzadeh, Melanie Boly, Bjørn E Juel, Shuntaro Sasai, Keiko Fujii, Isaac David, Jeremiah Hendren, Jonathan P Lang, Giulio Tononi

 

This paper presents Integrated Information Theory (IIT) 4.0. IIT aims to account for the properties of experience in physical (operational) terms. It identifies the essential properties of experience (axioms), infers the necessary and sufficient properties that its substrate must satisfy (postulates), and expresses them in mathematical terms. In principle, the postulates can be applied to any system of units in a state to determine whether it is conscious, to what degree, and in what way. IIT offers a parsimonious explanation of empirical evidence, makes testable predictions, and permits inferences and extrapolations. IIT 4.0 incorporates several developments of the past ten years, including a more accurate translation of axioms into postulates and mathematical expressions, the introduction of a unique measure of intrinsic information that is consistent with the postulates, and an explicit assessment of causal relations. By fully unfolding a system's irreducible cause-effect power, the distinctions and relations specified by a substrate can account for the quality of experience.

To me, in all knowledge-related problems with multiple theories (including connectionist ones, e.g. LLMs), configuration management is crucial. You should somehow have not a "web of explanations" but an "SoTA subweb of explanations". This is true of theories of consciousness. IIT now has an explicit version 4.0, but the same holds for other theories of consciousness; e.g. GWT (global workspace theory, which is very popular in AI research) also has multiple versions and variants, e.g. GWD (global workspace dynamics) for brains, and there are new developments there -- https://www.frontiersin.org/articles/10.3389/fpsyg.2021.749868/full

And maybe it would be more productive to start with a review of several contemporary theories of consciousness, gathering the common notions from the text of the review rather than from the texts of the theories (e.g. a review like https://www.lesswrong.com/posts/8FuFepryeWbSYgqyN/an-introduction-to-current-theories-of-consciousness that emphasizes the common features of several contemporary theories of consciousness).

But before all of this, you should say how you will use the results of your work. E.g. AST (the attention schema theory of consciousness) is now used in the building of artificial agents. An artificial agent, with a simple version of a moving spotlight of visual attention, benefitted from having an updating representation of its attention. The difference was drastic. With an attention schema, the agent learned to perform. Without an attention schema, the machine was comparatively incapacitated. -- https://www.pnas.org/doi/10.1073/pnas.2102421118

 

What are you going to do with the results of all this ("list of theories, primary terms of each theory, math structures it uses")? Even if you have the most current results in the best versioning system available, and these versions correspond not only to versions of your ontology but to the freshest versions of the original theories? After the first step (joining the seminar), what will the next step be, i.e., the actual usage of the results of this work?

Best regards,
Anatoly

 

Anatoly Levenchuk

Aug 9, 2023, 5:52:23 AM
to ontolo...@googlegroups.com

Sorry, the link to the first paper was missing; here it is: https://arxiv.org/abs/2212.14787

 

Best regards,
Anatoly

Alex Shkotin

Aug 9, 2023, 7:26:29 AM
to ontolo...@googlegroups.com
Anatoly,

Super!

And let me say it shortly: aim number one is to persuade theoreticians to keep the formal framework of each theory available online and collectively developed (maybe on GitHub), similarly to the HoTT collective book but more formal.

Alex

Wed, Aug 9, 2023 at 12:36, Anatoly Levenchuk <ai...@asmp.msk.su>:

Alex Shkotin

Aug 9, 2023, 8:21:44 AM
to ontolo...@googlegroups.com
Anatoly,

To get a feeling for what a theory framework is, have a look here, where the framework of undirected graph theory is a work in progress. Mostly it contains definitions in Russian (sometimes translated to English to talk with Claude 2) and their formalization.
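As a toy illustration of what one entry of such a framework might look like (this is a hypothetical sketch, not the actual framework being discussed), two defining axioms of a simple undirected graph, irreflexivity and symmetry of the edge relation, can be written as checkable predicates:

```python
# Toy sketch (not the actual framework): encoding two defining axioms
# of a simple undirected graph as checkable predicates over a model.
def is_simple_undirected(vertices, edges):
    """edges is a set of ordered pairs over `vertices`."""
    no_loops = all(u != v for (u, v) in edges)             # irreflexivity
    symmetric = all((v, u) in edges for (u, v) in edges)   # symmetry
    closed = all(u in vertices and v in vertices for (u, v) in edges)
    return no_loops and symmetric and closed

V = {1, 2, 3}
E = {(1, 2), (2, 1), (2, 3), (3, 2)}
print(is_simple_undirected(V, E))  # True: this model satisfies the axioms
```

The point is that once definitions live in a framework in this form, any proposed model can be checked against the axioms mechanically, rather than by reading prose.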

Alex


Wed, Aug 9, 2023 at 12:36, Anatoly Levenchuk <ai...@asmp.msk.su>:

Alex,

Anatoly Levenchuk

Aug 9, 2023, 8:55:09 AM
to ontolo...@googlegroups.com

OK, there are 100500 books about consciousness, and you want to write one more, a collective one!
https://xkcd.com/927/



What is the similarity between using the HoTT collective book and your consciousness book? Learning multiple theories at once (then why not a textbook/handbook; but what student skills would be supported by this super-formalized book)? History for the museum?

By the way, you can feed all the source papers about consciousness to ChatGPT and question them all, if you have some questions. This is easy. No special formalization work is needed (all the formalization is available in the source texts; see the links in my previous letter, there is a lot of math there!). For me this is ontology work (work about modeling the world with regard to the phenomena of consciousness).

Best regards,
Anatoly

 


Anatoly Levenchuk

Aug 9, 2023, 9:06:25 AM
to ontolo...@googlegroups.com

Alex,
I have done many such formalization endeavors in my life, and built half a dozen editors for such work. My question is still here: how do you use your theory framework (in your case, the 100500th variant of a formalization of graph theory)? If for educational purposes, better simply to write a textbook about it (the 100500th textbook about graphs).

Yes, the same question applies to any ontology engineering work. If you have to pay $100 for the formalization effort and then earn $1 from the usage of the resulting ontology, better not to do the formalization.

If you already had a formalization of the consciousness theories, what of value would you do with it?

Anatoly

 

Alex Shkotin

Aug 9, 2023, 12:32:50 PM
to ontolo...@googlegroups.com
A theory framework is not a book. It is mostly a system of definitions. And it is a knowledge concentrator: all the definitions are there in one place, and we need 10-20 frameworks instead of 100500 books :-)

When the undirected-graph theory framework is ready, I'll write a report. And, for example, any textbook should refer to a framework for its definitions, not keep its own.

Alex

Wed, Aug 9, 2023 at 15:55, Anatoly Levenchuk <ai...@asmp.msk.su>:

Alex Shkotin

Aug 9, 2023, 12:46:49 PM
to ontolo...@googlegroups.com
Anatoly, 

I am not just talking about formalization; we have a lot of that in formal ontologies, for example in the OBO Foundry. We are talking about axiomatic theories of a particular subject area. If you have any, please post them here.
Any book on graph theory, for example an author's textbook, should properly refer to the framework where the verified definitions, theorems, and their proofs are fixed.
Such a concentration of theory in one place for general use is the way to go in the age of the Internet.

AL: If you already had a formalization of the consciousness theories, what of value would you do with it?
Having many theories of the same domain is interesting even without formalization. But formally, we can compare primary terms and axioms first of all.
It is like axiomatic formalization: we need to make the theory very well formed :-)

Alex
  

Wed, Aug 9, 2023 at 16:06, Anatoly Levenchuk <ai...@asmp.msk.su>:

Anatoly Levenchuk

Aug 9, 2023, 12:52:16 PM
to ontolo...@googlegroups.com

How will you use your framework? It will not be understandable or usable without a book with textual explanations. Moreover, it will be borrowing definitions from textbooks and monographs; textbooks and other helpful books will not be borrowing definitions from it. This is one more instance of the configuration management problem: where do you take your definitions from, and why would a third party need to go to your framework and not directly to the source (maybe via ChatGPT N or Claude N)? I need a realistic scenario of what you will do with your ontology, how many times it will be used, and why you cannot take the knowledge directly from the source.

The same question applies to all other formal ontologies that are not a database schema or connectionist knowledge representations like LLMs. If a formal representation of a theory of consciousness is used in some software as data types, that is completely OK (provided anybody uses this software for any purpose that validates its development). Or simply go to an LLM with plugins over the source papers about consciousness and ask questions about all these theories (and validate the answers the same way as answers from human consciousness researchers, through the normal scientific process).


Anatoly

 


Anatoly Levenchuk

Aug 9, 2023, 12:57:41 PM
to ontolo...@googlegroups.com

I gave you a reference to IIT. It is an axiomatic theory of consciousness. It also has math. It is at version 4.0 now. What will you do with it?

Here it is one more time (https://arxiv.org/abs/2212.14787):


Integrated information theory (IIT) 4.0: Formulating the properties of phenomenal existence in physical terms

Larissa Albantakis, Leonardo Barbosa, Graham Findlay, Matteo Grasso, Andrew M Haun, William Marshall, William GP Mayner, Alireza Zaeemzadeh, Melanie Boly, Bjørn E Juel, Shuntaro Sasai, Keiko Fujii, Isaac David, Jeremiah Hendren, Jonathan P Lang, Giulio Tononi

 

This paper presents Integrated Information Theory (IIT) 4.0. IIT aims to account for the properties of experience in physical (operational) terms. It identifies the essential properties of experience (axioms), infers the necessary and sufficient properties that its substrate must satisfy (postulates), and expresses them in mathematical terms. In principle, the postulates can be applied to any system of units in a state to determine whether it is conscious, to what degree, and in what way. IIT offers a parsimonious explanation of empirical evidence, makes testable predictions, and permits inferences and extrapolations. IIT 4.0 incorporates several developments of the past ten years, including a more accurate translation of axioms into postulates and mathematical expressions, the introduction of a unique measure of intrinsic information that is consistent with the postulates, and an explicit assessment of causal relations. By fully unfolding a system's irreducible cause-effect power, the distinctions and relations specified by a substrate can account for the quality of experience.


Once more: it fits all your criteria ("We are talking about axiomatic theories of a particular subject area. If you have any, please post them here"). I already posted it. What will you do with it?

Anatoly

 

Alex Shkotin

Aug 9, 2023, 1:01:40 PM
to ontolo...@googlegroups.com
Anatoly, I will try to answer these comments in the report. And I hopefully expect from you one or another axiomatic theory of a part of reality.

Wed, Aug 9, 2023 at 19:52, Anatoly Levenchuk <ai...@asmp.msk.su>:

Alex Shkotin

Aug 9, 2023, 1:05:43 PM
to ontolo...@googlegroups.com
First of all, I will put it into my collection of axiomatic theories. Second, after my framework is ready, I'll propose that they create a framework for their axiomatic theory.

Wed, Aug 9, 2023 at 19:57, Anatoly Levenchuk <ai...@asmp.msk.su>:

John F Sowa

Aug 10, 2023, 1:05:59 AM
to ontolo...@googlegroups.com
Leo, Alex, Anatoly,

I strongly recommend the two articles that Leo cited.  They should be required reading for anybody who plans to do any further work in studying, writing, or applying any theories or ideas about consciousness.

The conclusion of the first article that Leo cites summarizes the complexity of the issues and the reason why nobody has ever presented a theory of consciousness that can be used for any project that Ontolog Forum could usefully discuss or apply:

"A comprehensive understanding of consciousness will likely require theories of many types. One might usefully and without contradiction accept a diversity of models that each in their own way aim respectively to explain the physical, neural, cognitive, functional, representational and higher-order aspects of consciousness. There is unlikely to be any single theoretical perspective that suffices for explaining all the features of consciousness that we wish to understand. Thus a synthetic and pluralistic approach may provide the best road to future progress."

As for the theory that Alex and Anatoly suggested,  the authors are far better qualified to develop a theory of consciousness than any of  us or any of the thousand subscribers to Ontolog Forum.  But the theory they produced does not imply any practical applications or experimental tests that would be useful for anything related to ontology.

Therefore, all three of these references confirm my previous opinion:  There are far more important topics for Ontolog Forum to address.  Any time wasted on discussing consciousness would have no practical value for any applications of ontology.

John


Alex Shkotin

Aug 10, 2023, 3:44:41 AM
to ontolo...@googlegroups.com
John,

Anatoly gave a link to an article about the axiomatic theory of consciousness, IIT 4.0, where on page 5 we have:
[image attachment not shown]
My interest in discovering any axiomatic theory of a part of reality is to formalize it and create a framework.
Formalization of theoretical knowledge is what we mainly do when we are not formalizing facts.

Alex

Thu, Aug 10, 2023 at 08:05, John F Sowa <so...@bestweb.net>:

Ricardo Sanz

Aug 10, 2023, 4:54:31 AM
to ontolo...@googlegroups.com
Hi John,

The sentence "Any time wasted on discussing consciousness would have no practical value for any applications of ontology" sounds a bit disrespectful to the people who wrote the 100500 books about consciousness that Anatoly mentioned.

An "ontology of consciousness" has indeed practical value (e.g. in searching bibliographic databases on anesthesia and analgesia experiments, or for builders of self-aware robots). I am interested in some practical uses in autonomous-systems engineering, but, for sure, there are many other uses that I can at least imagine (in medicine, biology, psychology, philosophy, or AGI).

Building a "Theory of Consciousness" is, however, too far-reaching an effort. I concur with you that it may be too early and that this is not the place for that effort.

However, the formalization of the concepts already used by authors (e.g. the concept of phi-structure that Alex mentioned) could be of major value.
Conceptual analysis and ontological modelling in this domain are badly needed.

Best wishes,
Ricardo

 




Ricardo Sanz

Aug 10, 2023, 5:06:32 AM
to ontolo...@googlegroups.com
In addition to Leo's links, I suggest also this one:

https://en.wikipedia.org/wiki/Artificial_consciousness

It is a bit old and biased, but it gives a gist of what is being done on the artificial systems side.

Best,
Ricardo


alex.shkotin

Aug 10, 2023, 5:44:34 AM
to ontolog-forum
Leo, 

Thanks for the link to the article, from which it is clear that there is no shortage of theories of consciousness; rather, there is an excess. This makes the job of compiling the list extensive and painstaking. Indeed, for each theory on the list, which can now be estimated at about 20 items, it is necessary to indicate where this particular theory is stated: what is the corpus of texts where it is presented in full? Indeed, in order to create a single framework for a theory, all these texts, roughly speaking, must be concentrated into one structure (not a text).
The creation of a theory framework is done only together with its developers, and it will be successful only if they themselves begin to consider the framework a working tool for maintaining and developing the theory: each new element of the theory (definition, hypothesis, proof) is added to the framework after careful discussion and verification.

Alex

Tuesday, August 8, 2023 at 20:26:07 UTC+3, lob...@gmail.com:

Alex Shkotin

Aug 10, 2023, 7:33:16 AM
to ontolo...@googlegroups.com
Ricardo,

Very interesting, especially the projects mentioned. So we have not only plenty of theories but also R&D implementations.
A situation is possible here where they need no formalization because they use math directly.
Formalization is still possible, but when the main knowledge is in math, the math level is responsible for accuracy.

Alex

Thu, Aug 10, 2023 at 12:06, Ricardo Sanz <ricardo.s...@gmail.com>:

John F Sowa

unread,
Aug 10, 2023, 11:59:11 AM8/10/23
to ontolo...@googlegroups.com
Alex and Ricardo,

Your notes remind me of the importance of vagueness and the limitations of precision in any field -- especially science, engineering, and formal ontology.  Rather than a series of Ontolog sessions about consciousness, I recommend a series of sessions about ***vagueness***.  Issues about consciousness could be discussed in one of the sessions.  That is why I changed the subject line.  For a summary of the issues, see below for an excerpt from an article I'm writing.

Alex> So we have not only plenty of theories [of consciousness], but R&D implementations.  Here a situation is possible that they need no formalization because they use math directly.  The formalization is still possible but when the main knowledge is in math, the math level is responsible for accuracy.

Right!  Plenty of theories and some implementations, but no consensus on the theories, and nothing useful for any theoretical or practical applications of ontology.

Furthermore, every formal theory is stated in some version of mathematics.  Every version of logic -- from Aristotle to today -- is considered a branch of mathematics.  Formalization is ***always*** an  application of mathematics.  The notation used for the math is irrelevant.  Aristotle's syllogisms are the first version of formal logic, and he invented the first controlled natural language for stating them. 

Ricardo> In addition to Leo's links, I suggest also this one: https://en.wikipedia.org/wiki/Artificial_consciousness   It is a bit old and biased, but gives a gist of what is being done in the artificial systems side.

Thanks for recommending that article.  It is an excellent overview with well over a hundred references to theory and implementations from every point of view, including Google's work up to 2022. 

But I would not call it "old and biased".  Although it does not include anything about the 2023 work on GPT and related systems, it cites Google's work on their foundations.  GPT systems, by themselves, do not do anything related to consciousness.

Ricardo, quoting from a note by JFS> The sentence "Any time wasted on discussing consciousness would have no practical value for any applications of ontology." sounds a bit disrespectful to the people who wrote the 100500 books about consciousness that Anatoly mentioned.

Please read what I wrote above.  I show a high respect for the ongoing research and publications.  But I make the point that none of that work is relevant to the theory and applications of ontology.   

Following is an excerpt from an article I'm writing.  Note the term 'mental model'.  I propose the following definition of consciousness:  the ability to generate, modify, and use mental models as the basis for perception, thought, action, and communication.  That definition is sufficiently vague to include normal uses of the word 'consciousness'.  It can also serve as a guideline for more detailed research and applications.  It could even be used to define artificial consciousness if and when any AI systems could "generate, modify, and use mental models as the basis for perception, thought, action, and communication."

John
____________________________

Excerpt from a forthcoming article by J. F. Sowa:

Natural languages can be as precise as a formal language or as vague as necessary for planning and negotiating.  The precision of a formal language is determined by its form or syntax together with the meaning of its components.  But natural languages are informal because the precise meaning of a word or sentence depends on the situation in which it’s spoken, the background knowledge of the speaker, and the speaker’s assumptions about the background knowledge of the listeners. Since no one has perfect knowledge of anyone else’s background, communication is an error-prone process that requires frequent questions and explanations.  Precision and clarity are the goal not the starting point.  Whitehead (1937) aptly summarized this point:
Human knowledge is a process of approximation.  In the focus of experience, there is comparative clarity.  But the discrimination of this clarity leads into the penumbral background.  There are always questions left over.  The problem is to discriminate exactly what we know vaguely.
A novel theory of semantics, influenced by Wittgenstein’s language games and related developments in cognitive science, is the dynamic construal of meaning (DCM) proposed by Cruse (2002). The basic assumption of DCM is that the most stable aspect of a word is its spoken or written sign; its meaning is unstable and dynamically evolving as it is used in different contexts or language games. Cruse coined the term microsense for each subtle variation in meaning. This is an independent rediscovery of Peirce’s view: sign types are stable, but each interpretation of a sign token depends on its context in a pattern of other signs, the physical environment, and the background knowledge of the interpreter.
For the purpose of this inquiry a Sign may be defined as a Medium for the communication of a Form.  It is not logically necessary that anything possessing consciousness, that is, feeling of the peculiar common quality of all our feeling, should be concerned.  But it is necessary that there should be two, if not three, quasi-minds, meaning things capable of varied determination as to forms of the kind communicated.    (R793, 1906, EP 2:544)
These observations imply that cognition involves an open-ended variety of interacting processes. Frege’s rejection of psychologism and “mental pictures” reinforced the behaviorism of the early 20th century. But the latest work in neuroscience uses “folk psychology” and introspection to interpret data from brain scans (Dehaene 2014). The neuroscientist Antonio Damasio (2010) summarized the issues:
The distinctive feature of brains such as the one we own is their uncanny ability to create maps...  But when brains make maps, they are also creating images, the main currency of our minds.  Ultimately consciousness allows us to experience maps as images, to manipulate those images, and to apply reasoning to them.
The maps and images form mental models of the real world or of the imaginary worlds in our hopes, fears, plans, and desires.  They provide a “model theoretic” semantics for language that uses perception and action for testing models against reality.  Like Tarski’s models, they define the criteria for truth, but they are flexible, dynamic, and situated in the daily drama of life. 

Alex Shkotin

unread,
Aug 10, 2023, 12:37:25 PM8/10/23
to ontolo...@googlegroups.com
John,

Just before reading your excerpt: a single definition is just an idea, and maybe a very nice one. I am looking forward to reading your variant of the theory of consciousness, if you have one in mind.
Just a question: do flies or bees have mental models?

Alex

Thu, Aug 10, 2023 at 18:59, John F Sowa <so...@bestweb.net>:
--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info/
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.

Alex Shkotin

unread,
Aug 10, 2023, 12:49:27 PM8/10/23
to ontolo...@googlegroups.com
The excerpt is very interesting, and mostly philosophical.
The main question is: can we create a device (now these are autonomous robots) capable of studying the outside world and then itself?
The progress in this direction is one of the main topics in robotic news.
And this progress is significant.

Alex

Thu, Aug 10, 2023 at 18:59, John F Sowa <so...@bestweb.net>:
Alex and Ricardo,


John F Sowa

unread,
Aug 10, 2023, 1:47:21 PM8/10/23
to ontolo...@googlegroups.com
Anatoly,

Thanks for your comment and the diagram.  But it was too small.  I put a larger copy in Standard.png.

In any case, it's too late to write book #100,501.  I'm sure that N more must have appeared by now.  I suggest a book on the importance of vagueness.  That topic is less popular for publishing, but much, much more frequent in practice.

John
 


From: "Anatoly Levenchuk" <ai...@asmp.msk.su>

OK, there are 100500 books about consciousness, you want to write one more, collective one!

Standard.png

John F Sowa

unread,
Aug 10, 2023, 3:29:46 PM8/10/23
to ontolo...@googlegroups.com, Peirce List
Alex,

The answer to your question below is related to your previous note:  "Just a question: do flies or bees have mental models?"

Short answer: They behave as if they do. Bees definitely develop a model of the environment, and they go back to their nest and communicate it to their colleagues by means of a dance that indicates (a) the direction to the source of food; (b) the distance; and (c) the amount available at that source.

That is very close to my definition of consciousness: "The ability to generate, modify, and use mental models as the basis for perception, thought, action, and communication."   The bees demonstrate generating and using something that could be called a mental model for perception, action, and communication.  The only question is about the amount and kind of thinking.  

In the quotation by Damasio, he wrote "Ultimately consciousness allows us to experience maps as images, to manipulate those images, and to apply reasoning to them."    It's not clear how and whether the bees can "manipulate those images and apply reasoning to them."

Flies aren't as smart as bees.  They may have simple images that may be generated automatically by perception and used for action.  But flies don't  use them for communication.

I admit that my definition is based on philosophical issues, but so is any mathematical version.  And the issue of vagueness is related to generality.  An image that can only be applied to a single pattern is not very useful. 

Alex> The main question is: can we create a device (now these are autonomous robots) capable of studying the outside world and then itself?

The application to bees and flies can be adapted to designing devices "capable of studying the outside world and then itself".    Every aspect of perception, thinking, action, and communication is certainly relevant, and those four words are easier to explain and to test than the complex books that Anatoly cited.  The most complex issues involve the definition of mental models and methods of thinking about them and their relationship to the world, to oneself, and to the future of oneself in the world.

And the issues about vagueness are extremely important to issues about similarity, generality, and changes in the world and oneself in the future. Those are fundamental issues of ontology, and every one of them involves vagueness or incompleteness in perception, thinking, action, and communication.  

As for mathematical precision, please note that Peirce, Whitehead, and Wittgenstein all had a very strong background in logic, mathematics, and science.   That may be why they were also very sensitive to issues about vagueness.  I'll also quote Lord Kelvin:  "Better an approximate answer to the right question than an exact answer to the wrong question."

John
 


From: "Alex Shkotin" <alex.s...@gmail.com>

Nadin, Mihai

unread,
Aug 10, 2023, 3:44:20 PM8/10/23
to ontolo...@googlegroups.com

Dear and respected colleagues,

Always impressed by the level of dialog between the two of you. Sometimes amused, when the limits of knowledge are reached. Will only quote from a recent publication (of course, I remain focused on anticipatory processes, a subject which, so far, did not make it into your conversations):

Fruit flies 'think' before they act, a study by researchers from the University of Oxford's Centre for Neural Circuits and Behaviour suggests. The neuroscientists showed that fruit flies take longer to make more difficult decisions.

In experiments asking fruit flies to distinguish between ever closer concentrations of an odour, the researchers found that the flies don't act instinctively or impulsively. Instead they appear to accumulate information before committing to a choice.

Gathering information before making a decision has been considered a sign of higher intelligence, like that shown by primates and humans.

'Freedom of action from automatic impulses is considered a hallmark of cognition or intelligence,' says Professor Gero Miesenböck, in whose laboratory the new research was performed. 'What our findings show is that fruit flies have a surprising mental capacity that has previously been unrecognised.'


doug foxvog

unread,
Aug 10, 2023, 9:23:43 PM8/10/23
to ontolo...@googlegroups.com
On Thu, August 10, 2023 15:27, John F Sowa wrote:
> Alex,

> The answer to your question below is related to your previous note: "Just
> a question: do flies or bees have mental models?"

> Short answer: They behave as if they do, Bees definitely develop a model
> of the environment, and they go back to their nest and communicate it to
> their colleagues by means of a dance that indicates (a) direction to the
> source of food; (b) the distance; and (c) the amount available at that
> source.

This depends on one's definition of "mental model". Is there some kind of
model of the external world in an insect mind? Sure -- the insect uses
such a model to find its way back "home".

But does the insect have a model of its own mind? Probably not. If a
"mental model" is a model in the mind, the insect has one; but if a
"mental model" is a model of one's own mind, the insect most likely does
not.

We can create an ontology of models such that "mental model" could
designate either #$ModelOfExternalityInAMind or #$ModelOfOnesOwnMind.
These would be different concepts.
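For what it's worth, the distinction can be kept machine-checkable. A minimal Python sketch (all class names are hypothetical, loosely echoing the CycL-style constants above):

```python
# Minimal sketch of an ontology distinguishing the two senses of
# "mental model" discussed above.  All class names are hypothetical.

class Model:
    """A structure that represents some subject."""

class ModelInAMind(Model):
    """Any model held in a mind (the weak sense of 'mental model')."""

class ModelOfExternalityInAMind(ModelInAMind):
    """A model, in a mind, of the external world (e.g. an insect's route home)."""

class ModelOfOnesOwnMind(ModelInAMind):
    """A model, in a mind, of that mind itself (a reflective self-model)."""

# Both specialize "model in a mind", but neither specializes the other:
assert issubclass(ModelOfExternalityInAMind, ModelInAMind)
assert issubclass(ModelOfOnesOwnMind, ModelInAMind)
assert not issubclass(ModelOfOnesOwnMind, ModelOfExternalityInAMind)
```

The point of the sketch is only that the two readings are disjoint siblings under a common genus, not one concept.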

> That is very close to my definition of consciousness: "The ability to
> generate, modify, and use mental models as the basis for perception,
> thought, action, and communication." The bees demonstrate generating
> and using something that could be called a mental model for perception,
> action, and communication. The only question is about the amount and kind
> of thinking.

Agreed. My concept of consciousness would be an awareness of part of
one's thoughts and ability to reason about it. There are many related
concepts -- an ontology of types of consciousness could make such
discussions clearer.

> In the quotation by Damasio, he wrote "Ultimately consciousness allows us
> to experience maps as images, to manipulate those images, and to apply
> reasoning to them." It's not clear how and whether the bees can
> "manipulate those images and apply reasoning to them."

Well, bees can use these internal images to guide their actions so they
can fly to object types identified in the maps at appropriate times.

> Flies aren't as smart as bees. They may have simple images that may be
> generated automatically by perception and used for action. But flies
> don't use them for communication.

I'm not sure if there is some type of communication. Groups of flies
congregate around things that flies "like".

> I admit that my definition is based on philosophical issues, but so is any
> mathematical version. And the issue of vagueness is related to
> generality. An image that can only be applied to a single pattern is not
> very useful.

Agreed.

-- doug foxvog


> Alex> The main question is: can we create a device (now these are
> autonomous robots) capable of studying the outside world and then itself?
>
> The application to bees and flies can be adapted to designing devices
> "capable of studying the outside world and then itself". Every aspect
> of perception, thinking, action, and communication is certainly relevant,
> and those four words are easier to explain and to test than the complex
> books that Anatoly cited. The most complex issues involve the definition
> of mental models and methods of thinking about them and their relationship
> to the world, to oneself, and to the future of oneself in the world.
>
> And the issues about vagueness are extremely important to issues about
> similarity, generality, and changes in the world and oneself in the
> future. Those are fundamental issues of ontology, and every one of them
> involves vagueness or incompleteness in perception, thinking, action, and
> communication.
>
> As for mathematical precision, please note that Peirce, Whitehead, and
> Wittgenstein all had a very strong background in logic, mathematics, and
> science. That may be why they were also very sensitive to issues about
> vagueness. I'll also quote Lord Kelvin: "Better an approximate answer to
> the right question than an exact answer to the wrong question."
>
> John
>
> ----------------------------------------
> From: "Alex Shkotin" <alex.s...@gmail.com>
>
> Excerpt is very interesting and mostly philosophical. The main question
> is: can we create a device (now these are autonomous robots) capable of
> studying the outside world and then itself? The progress in this
> direction is one of the main topics in robotic news. And this progress is
> significant.
>
> Alex
>


Alex Shkotin

unread,
Aug 11, 2023, 4:58:50 AM8/11/23
to ontolo...@googlegroups.com
John, 

As I understood from your answer to Anatoly, you are not going to write a book on the theory of consciousness. I think this is right. That is an old way of developing a theory (as old as Aristotle's notes), and it is not suitable in the age of the Internet. A theory is, firstly, a system of definitions based on identified primary terms (concepts, attributes, relationships, and so on).
You already have one essential definition, for consciousness.
And perhaps you will continue to give definitions for the terms used, or it will turn out that some of these terms are primary, and their properties must then be listed through axioms.
So ToC_JFS
def consciousness
eng:The ability to generate, modify, and use mental models as the basis for perception, thought, action, and communication.
Now we need definitions or they should be recognized as primary:
def mental model
eng:???
def perception
eng:???
def thought
eng:???
def action
eng:???
def communication
eng:???

If you would like to continue this way, from time to time, I'll be happy to create a framework (aka skeleton) for your ToC. It would be open for comments from others.
This is the way to develop a theory these days :-)
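A skeleton like this can even be kept machine-readable from the start, so that the "eng:???" slots are computed rather than maintained by hand. A minimal Python sketch (the data structure is only a hypothetical illustration, not a proposal for the actual framework):

```python
# Hypothetical sketch: a theory framework as a table of terms, where a
# term is either defined (with the terms its definition uses) or still
# primary/undefined (definition is None, i.e. the "eng:???" slots).

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Term:
    name: str
    definition: Optional[str] = None      # None => primary or not yet defined
    uses: List[str] = field(default_factory=list)

toc_jfs = {
    "consciousness": Term(
        "consciousness",
        definition="The ability to generate, modify, and use mental models as "
                   "the basis for perception, thought, action, and communication.",
        uses=["mental model", "perception", "thought", "action", "communication"],
    ),
}

# Every term used but not yet defined becomes an open slot in the framework:
for used in toc_jfs["consciousness"].uses:
    toc_jfs.setdefault(used, Term(used))

undefined = sorted(t.name for t in toc_jfs.values() if t.definition is None)
print(undefined)  # ['action', 'communication', 'mental model', 'perception', 'thought']
```

Adding a definition for, say, "perception" would fill its slot and contribute its own `uses`, so the list of open slots shrinks or grows as the theory develops.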

I will add that I have no intention of developing a ToC on my own; to get the philosophical background, my way is to take [1] off the shelf (in paper or electronic form) ;-)

Alex


Thu, Aug 10, 2023 at 22:27, John F Sowa <so...@bestweb.net>:

Alex Shkotin

unread,
Aug 11, 2023, 5:34:35 AM8/11/23
to ontolo...@googlegroups.com
Dear and respected Mihai Nadin,

Thanks for the interesting information. What relationship exists between consciousness and anticipatory processes?
For example, take a type of weapon such as a mine. It is cocked and buried. Is it possible to say that a process of anticipation takes place inside it?
I think not, if we go by "Anticipation is the act of using information about the past and present to make predictions about future scenarios." here
But the author finds anticipation where there is no question of consciousness, as far as I understand.

Alex

Thu, Aug 10, 2023 at 22:44, Nadin, Mihai <na...@utdallas.edu>:

Ricardo Sanz

unread,
Aug 11, 2023, 6:00:28 AM8/11/23
to ontolo...@googlegroups.com
Hi John, all,

I like the excerpts of your forthcoming article a lot. As Alex says, it is mostly philosophical, but it is so in the Quinean sense of philosophy being just the part of science that deals with more abstract entities.

From what you say, and what you write, I see that you are very interested in consciousness and that you see some ontological utility of it :-)

I think this is so because the definition that you propose addresses what philosophers would call the functional aspect of consciousness. I like your definition but I think it needs some simplification work to reach the very fundamental core of the concept. This is not easy because our analyses are always set in the context of human mental processes and all high-level human mental activity seems related to consciousness (for example note that "communication" is not necessary for consciousness; we can be conscious and silent).

I am indeed highly aligned with your views. This is one of the design principles for conscious machines that we proposed many years ago [1]:

Principle 6 (System Awareness). A system is aware if it is continuously perceiving and generating meaning from the continuously updated models.
  
As you see, the maintenance of the mental model is a central aspect. This was the following principle:

Principle 7 (System Self-awareness/Consciousness). A system is conscious if it is continuously generating meanings from continuously updated self-models in a model-based cognitive control architecture.

Here we thought of consciousness as a form of self-awareness.
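Read as a control loop, Principles 6 and 7 can be caricatured in a few lines. A toy Python sketch (all names are hypothetical; this is in no way the architecture of [1]):

```python
# Toy caricature of Principles 6-7: awareness = continuously updating a
# world model from percepts and generating meaning from it; consciousness
# = doing the same with a self-model that includes the agent's own state.
# All names here are hypothetical.

def awareness_step(world_model, percept, evaluate):
    world_model.update(percept)        # continuously updated model ...
    return evaluate(world_model)       # ... from which meaning is generated

def consciousness_step(self_model, world_model, percept, evaluate):
    world_model.update(percept)
    self_model.update({"world": dict(world_model)})  # the modeller models itself
    return evaluate(self_model)

world, me = {}, {}
meaning = consciousness_step(me, world, {"odour": 0.7}, evaluate=len)
print(world, me)
```

The difference between the two principles is then just which model the meaning is generated from: the world model, or a self-model that contains it.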

For many years these ideas were a no-go for research funding. This has changed, however, and now there is funding available for this thread [2].

The elaboration and deep understanding of these concepts is essential for the construction of autonomous machines that make sense of the world (as we do). In many artificial systems these concepts do manifest themselves, but only implicitly through the system architecture, and not explicitly as engineering design traits. E.g., I see this as strongly related to the discussion concerning syntax/semantics in ChatGPT. We must understand them rigorously to build better machines.

BTW, I agree that "vagueness" is an essential, necessary aspect to deal with. But it is not the central one. The central one is "the agent models its reality". And this modeling shall be real-time, tuned to reality, tolerant of vagueness, learnable, shareable, etc. The importance of vagueness follows from the uncertainty of the world. It is a secondary aspect, not the primary one. First the foundation; then the walls and the windows in them.

Best,
Ricardo

[1] R. Sanz, I. López, M. Rodríguez, and C. Hernández, “Principles for consciousness in integrated cognitive control,” Neural Networks, vol. 20, no. 9, pp. 938–946, 2007, doi: http://dx.doi.org/10.1016/j.neunet.2007.09.012.

PS: My initial question about the difference between "consciousness" and "awareness" is still there .... :-) 



Alex Shkotin

unread,
Aug 11, 2023, 6:05:29 AM8/11/23
to ontolo...@googlegroups.com
Doug,

" My concept of consciousness would be an awareness of part of
one's thoughts and ability to reason about it."
If you write the same idea in the form of definition of consciousness we will have two to compare.
And we can keep two definitions in the skeleton of the ToC of JFS, hoping that in the future we will have a theorem that they are equivalent. 

Alex

Fri, Aug 11, 2023 at 04:23, doug foxvog <do...@foxvog.org>:

Nadin, Mihai

unread,
Aug 11, 2023, 2:24:58 PM8/11/23
to ontolo...@googlegroups.com

Dear and respected Alex Shkotin,

Dear colleagues,

  1. What relationship exists between consciousness and anticipatory processes?

None. The current state of an anticipatory system depends upon past states, the current state, and possible future states.

This is the definition reflecting my  view of anticipation.  It is empirically founded.

Regarding the definition you mentioned (and linked to): in providing my feedback—peer review process—I rejected Carrie Deans' formulation. By the way, her views have changed—see her article in Epigenetics and Anticipation (https://link.springer.com/book/10.1007/978-3-031-17678-4).

  1. Anticipation is always expressed in action. It is an autonomic process. ANTECAPERE (the etymology of the notion of anticipation) suggests: action before (ANTE) understanding. Consciousness is the expression of understanding. Awareness is the lowest level of consciousness.
  2. Your example: a programmed behavior. No anticipation of any kind. By the way, anticipation is a defining characteristic of the living (at all its levels, from the single cell to the human being).

On matters of consciousness: while not directly addressed in the text I am going to point to, it provides enough definitions (which might be of interest to you as the axiomatist of this group)-- https://www.nadin.ws/wp-content/uploads/2023/01/epigenetics-and-the-spiritual-EN.pdf

 

I hope that I addressed your questions.

 

Mihai Nadin

 

From: ontolo...@googlegroups.com <ontolo...@googlegroups.com> On Behalf Of Alex Shkotin
Sent: Friday, August 11, 2023 4:34 AM
To: ontolo...@googlegroups.com

John F Sowa

unread,
Aug 11, 2023, 4:19:08 PM8/11/23
to ontolo...@googlegroups.com, Peirce List
Dear All,

This thread has attracted too many responses for me to save all of them.  But Mihai Nadin cited intriguing experimental evidence that fruit flies "think" before they act (copy below).  I also found a web site that says more about the experimental methods: https://www.ox.ac.uk/news/2014-05-22-fruit-flies-think-they-act . See excerpts at the end of this note.

Ricardo Sanz> My initial question about the difference between "consciousness" and "awareness" is still there.

The distinction between consciousness and awareness is very clear:  Awareness can be detected by experimental methods, as in the experiments with fruit flies.  Thinking (or some kind of mental processing) can be detected by a delay between stimulus and response.  But nobody has found any experimental evidence for consciousness, not even in humans.  

We assume consciousness in our fellow humans because we all belong to the same species.  But we have no way to detect consciousness in humans who have suffered some kinds of neural impairment.  We suspect that animals that behave like us may be conscious, but we don't know.  And there is zero evidence that computer systems, whose circuitry is radically different from human brains, can be conscious.

Ricardo> I agree that "vagueness" is an essential, necessary aspect to be dealt with. But it is not the central one. The central one is "the agent models its reality". 

Those are different topics.  A model of some subject (real or imaginary) is  a structure of some kind (image, map, diagram, or physical system) that represents important aspects of some subject.  Vagueness is a property of some language or notation  that is derived from the model.   What is central depends on the interests of some agent that is using the model and the language for some purpose.

Furthermore, vagueness is not a problem "to be dealt with".  It's a valuable property of natural language.  In my previous note, I mentioned three logicians and scientists -- Peirce, Whitehead, and Wittgenstein -- who recognized that an absolutely precise mathematical or logical statement is almost certain to be false.  But a statement that allows some degree of error (vagueness) is much more likely to be true and useful for communication and application.

Mathematical precision increases the probability that errors will be detected.  When errors are found, they can be corrected.  But if no errors are found, it's quite likely that nobody is using the theory for any practical purpose.
 
Jerry Chandler> You may wish to consider the distinctions between the methodology of the chemical sciences from that of mathematics and whatever the views of various “semantic” ontologies might project for quantification of grammars by algorithms. 

Chemistry is an excellent example of the issues of precision and vagueness, and it's the field in which Peirce learned many of his lessons about experimental methodology.  Organic chemistry is sometimes called "the science of side effects" because nearly every method for producing desired molecules also produces a large number of unwanted molecules.  And minor variations in the initial conditions may have a huge effect on the yield of the desired results.  Textbooks that describe the reactions tend to be vague about the percentages because they can vary widely as the technology is developed.

Jerry> What are the formal logical relationships between the precision of the atomic numbers as defined by Rutherford and logically deployed by Rutherford and the syntax of a “formal ontology” in this questionable form of artificial semantics? 

For any subject of any kind, a good ontology should be developed by a collaboration of experts in the subject matter with experts in developing and using ontologies.  The quality of an ontology depends on the expertise of both kinds of experts.

Doug Foxvog>  Is there some kind of model of the external world in an insect mind?  Sure -- the insect uses such model to find its way back "home".  But does the insect have a model of its own mind?  Probably not.

A Tarski style model may be represented by predicates, functions, and names of things in the subject matter and two kinds of logical operators:  conjunction (AND) and the existential quantifier (There exists an x such that...).

For most  applications, subject matter experts typically add images and diagrams.  For people, those images and diagrams make the model easier to understand.   For formal analysis and computing, those images and diagrams would  be mapped to predicates, functions, and names, which are related by conjunctions and existentially quantified names.
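A model restricted to those ingredients (names, predicates, conjunction, and the existential quantifier) is easy to make concrete. A toy Python sketch, with a hypothetical domain and hypothetical predicates:

```python
# Toy Tarski-style model: a domain of named things, predicates as sets of
# tuples, and evaluation of a formula built from conjunction (and) and the
# existential quantifier (any).  The domain and predicates are hypothetical.

domain = {"hive", "flower", "bee1"}
predicates = {
    "FoodSource": {("flower",)},
    "Knows":      {("bee1", "flower")},
}

def holds(pred, *args):
    """True iff the named predicate holds of the given arguments in this model."""
    return tuple(args) in predicates[pred]

# "There exists an x such that FoodSource(x) AND Knows(bee1, x)"
exists = any(holds("FoodSource", x) and holds("Knows", "bee1", x) for x in domain)
print(exists)  # True: x = "flower" satisfies both conjuncts
```

The truth criterion is exactly as in the paragraph above: a formula is true when some assignment of domain elements to the existentially quantified names makes every conjunct hold.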

Doug> We can create an ontology of models such that "mental model" could designate either #$ModelOfExternalityInAMind or #$ModelOfOnesOwnMind.  These would be different concepts.

If you consider minds as things in the world, this reduces to the previous definition.  The psychologist Philip Johnson-Laird wrote a book and many articles about mental models.  I cite him frequently in my writings, and I use the term 'mental model' in the same sense as his publications.

Alex Shkotin> What relationship exists between consciousness and anticipatory processes?

As Mihai Nadin wrote, "None."  I agree with his discussion and references.

Alex>  My concept of consciousness would be an awareness of part of one's thoughts and ability to reason about it.

But that would only enable the researcher to detect his or her own consciousness.  That method would be useless for a theory about non-human animals or robots.

Alex>  def consciousness.  The ability to generate, modify, and use mental models as the basis for perception, thought, action, and communication.

That definition would enable humans to develop theories about human consciousness.  And they do that.  But it does not enable humans to observe and develop theories about consciousness in any non-human things. 

You might make a conjecture about consciousness in apes, since they are very closely related to humans.  You might extend that conjecture to other animals, but you can't be certain.  And there is no way that you could extend that conjecture to computer systems, which have no resemblance whatever to human thinking processes.

John


From: "Nadin, Mihai" <na...@utdallas.edu>

Dear and respected colleagues,

Always impressed by the level of dialog between the two of you. Sometimes amused, when the limits of knowledge are reached. Will only quote from a recent publication (of course, I remain focused on anticipatory processes, a subject which, so far, did not make it into your conversations):

Fruit flies 'think' before they act, a study by researchers from the University of Oxford's Centre for Neural Circuits and Behaviour suggests. The neuroscientists showed that fruit flies take longer to make more difficult decisions.

In experiments asking fruit flies to distinguish between ever closer concentrations of an odour, the researchers found that the flies don't act instinctively or impulsively. Instead they appear to accumulate information before committing to a choice.

Gathering information before making a decision has been considered a sign of higher intelligence, like that shown by primates and humans.

'Freedom of action from automatic impulses is considered a hallmark of cognition or intelligence,' says Professor Gero Miesenböck, in whose laboratory the new research was performed. 'What our findings show is that fruit flies have a surprising mental capacity that has previously been unrecognised.

___________________________________

The researchers observed Drosophila fruit flies make a choice between two concentrations of an odor presented to them from opposite ends of a narrow chamber, having been trained to avoid one concentration.

When the odor concentrations were very different and easy to tell apart, the flies made quick decisions and almost always moved to the correct end of the chamber.

When the odour concentrations were very close and difficult to distinguish, the flies took much longer to make a decision, and they made more mistakes.

The researchers found that mathematical models developed to describe the mechanisms of decision making in humans and primates also matched the behaviour of the fruit flies.

The scientists discovered that fruit flies with mutations in a gene called FoxP took longer than normal flies to make decisions when odours were difficult to distinguish – they became indecisive.

The researchers tracked down the activity of the FoxP gene to a small cluster of around 200 neurons out of the 200,000 neurons in the brain of a fruit fly. This implicates these neurons in the evidence-accumulation process the flies use before committing to a decision.

Dr Shamik DasGupta, the lead author of the study, explains: 'Before a decision is made, brain circuits collect information like a bucket collects water. Once the accumulated information has risen to a certain level, the decision is triggered. When FoxP is defective, either the flow of information into the bucket is reduced to a trickle, or the bucket has sprung a leak.'

Fruit flies have one FoxP gene, while humans have four related FoxP genes. Human FoxP1 and FoxP2 have previously been associated with language and cognitive development. The genes have also been linked to the ability to learn fine movement sequences, such as playing the piano.

'We don't know why this gene pops up in such diverse mental processes as language, decision-making and motor learning,' says Professor Miesenböck. However, he speculates: 'One feature common to all of these processes is that they unfold over time. FoxP may be important for wiring the capacity to produce and process temporal sequences in the brain.'

Professor Miesenböck adds: 'FoxP is not a "language gene", a "decision-making gene", even a "temporal-processing" or "intelligence" gene. Any such description would in all likelihood be wrong. What FoxP does give us is a tool to understand the brain circuits involved in these processes. It has already led us to a site in the brain that is important in decision-making.
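The "bucket" analogy in the quoted study corresponds to a leaky evidence-accumulation (drift-diffusion) model, the same family of models used for human and primate decision making. A minimal sketch, with all parameter values hypothetical:

```python
import random

def decision_time(drift, leak=0.0, threshold=1.0, noise=0.1,
                  dt=0.01, max_steps=100_000, seed=0):
    """Accumulate noisy evidence ('water into a bucket') until a threshold
    triggers the decision; return the number of time steps taken.
    A reduced drift (a trickle) or a positive leak lengthens the decision."""
    rng = random.Random(seed)
    x = 0.0
    for step in range(1, max_steps + 1):
        x += (drift - leak * x) * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        if abs(x) >= threshold:
            return step
    return max_steps

normal = decision_time(drift=1.0)   # easy discrimination: strong evidence inflow
mutant = decision_time(drift=0.3)   # weak inflow, as with a defective FoxP gene
print(normal < mutant)  # True: weaker evidence inflow means a slower decision
```

With the same random seed, the only difference between the two runs is the drift, so the weaker inflow necessarily takes longer to reach the threshold, mirroring the indecisive FoxP mutants.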

alex.shkotin

unread,
Aug 12, 2023, 4:01:05 AM8/12/23
to ontolog-forum

Dear and respected Mihai Nadin,


Thanks for the clear answer. If you have a system of definitions for the theory of anticipatory systems, I can try to make an appropriate framework / skeleton.

After all, this approach is simple in its own way: the definition of the term is separated from the text in which it is present in a free form, and is drawn up as a separate paragraph of a well-defined structure that is not related to the text.

For example,

def anticipatory system

A system is anticipatory if and only if the current state of the system depends upon past states, current state and possible future states.


It's a bit strange and incomprehensible what "the current state of the system depends upon… current state…" means

But that's what careful definition work is all about.


Alex



Friday, August 11, 2023 at 21:24:58 UTC+3, Mihai Nadin:

Alex Shkotin

unread,
Aug 12, 2023, 4:50:16 AM8/12/23
to ontolo...@googlegroups.com
John,

Briefly, between the lines. Just to remove two possible misunderstandings:

Alex>  My concept of consciousness would be an awareness of part of one's thoughts and ability to reason about it.

AS: I never wrote this.
 
But that would only enable the researcher to detect his or her own consciousness.  That method would be useless for a theory about non-human animals or robots.

Alex>  def consciousness.  The ability to generate, modify, and use mental models as the basis for perception, thought, action, and communication.
 
AS: This is your definition.

That definition would enable humans to develop theories about human consciousness.  And they do that.  But it does not enable humans to observe and develop theories about consciousness in any non-human things. 

You might make a conjecture about consciousness in apes, since they are very closely related to humans.  You might extend that conjecture to other animals, but you can't be certain.  And there is no way that you could extend that conjecture to computer systems, which have no resemblance whatever to human thinking processes.

Alex

John F Sowa

unread,
Aug 12, 2023, 11:51:12 AM8/12/23
to ontolo...@googlegroups.com
Alex,

If we're trying to compare human intelligence to the intelligence of other animals and computer systems, it's important to define the terminology in a way that can be tested by experimental evidence.  If a definition uses non-observable terms, those terms must have previously been defined in a way that can be tested by experiments.

Alex>  My concept of consciousness would be an awareness of part of one's thoughts and ability to reason about it. 

Any human could read that statement and test  it against his or her own awareness, thoughts, and reasoning ability.  As humans, we can safely assume that other members of our species have similar abilities.   But we cannot use it to determine whether any other beings -- life forms or robots -- have those abilities.

Alex>  def consciousness.  The ability to generate, modify, and use mental models as the basis for perception, thought, action, and communication. 

The terms 'perception', 'thought', and 'mental models' raise many questions.  For humans, perception depends on our five senses for external observations, neurons connected to our internal organs, and a brain for interpreting those observations and responding with actions that could also be used for communicating with animals of any species.

Those properties could be attributed to fruit flies. In fact, they could also be attributed to any animals descended from bilateral worms about 600 million years ago.  They had two light-sensitive spots in their front ends and a fat ganglion for interpreting the signals.  But we can't be sure about one-celled animals or about jellyfish and sponges.

Attributing thoughts to all those animals is problematical.  The authors of the article about fruit flies explicitly said that they defined thinking by fruit flies as whatever neural processing caused the delay between perception and action.  One might debate whether the word 'thought' by itself could be applied to that delay, but the phrase 'fruit fly thought' would be acceptable.

The term 'mental model' is much more complex than 'fruit fly thought'.  Some AI researchers adopted that term (or something similar) from psychologists such as Johnson-Laird, but the neural network gang doesn't use that term.  I believe that the psychologists have found sufficient evidence to support that term or something like it.

As for the publications on artificial consciousness, I admit that there are many competent researchers working on projects that use that term.  I also admit that some of them use sophisticated mathematics to define their terms precisely.

But unless they define their terminology in ways that can be tested by experiments and observations, their theories are pure speculation.  I have ZERO confidence in any claims they make.

John

Alex Shkotin

unread,
Aug 12, 2023, 12:26:18 PM8/12/23
to ontolo...@googlegroups.com
John,

It is possible that there are no definitions for mental models, perception, thought, action and communication at all. Then these would be the primary concepts of some axiomatic theory.

Alex

Sat, Aug 12, 2023 at 18:51, John F Sowa <so...@bestweb.net>:
--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info/
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.

John F Sowa

unread,
Aug 12, 2023, 3:46:39 PM8/12/23
to ontolo...@googlegroups.com
Alex> It is possible that there are no definitions for mental models, perception, thought, action and communication at all. Then these would be the primary concepts of some axiomatic theory. 

But any concept that has physically observable conditions and effects can be defined in terms of those conditions and effects.  For perception, action, awareness, and communication, there are observable physical conditions and effects that can be used as the basis for a definition.

But the distinction between awareness and consciousness depends on private observations by the person who is conscious.  Nobody has discovered any method that is independent of asking somebody "Are you conscious?" 

There have been many cases where people are unable to speak or give any kind of sign, yet they are aware of what is happening near them and to them.   If they are lucky, they may "wake up" before the doctors and nurses turn off their life support systems.  If not, no one will ever know.

John

Alex Shkotin

unread,
Aug 13, 2023, 4:56:09 AM8/13/23
to ontolo...@googlegroups.com

John,


Exactly! And this is for me a way to develop a system of definitions for the theory. And this is why I asked you, and maybe our community, to continue defining:

"So ToC_JFS

def consciousness

eng:The ability to generate, modify, and use mental models as the basis for perception, thought, action, and communication.

Now we need definitions or they should be recognized as primary:

def mental model

eng:???

def perception

eng:???

def thought

eng:???

def action

eng:???

def communication

eng:???

" https://groups.google.com/g/ontolog-forum/c/nXEQWq7fSmU/m/HZKRBBB7BAAJ


And we have at least two kind of systems to observe:

-biological systems of cells

-robotics (autonomous)

I'm not rushing anyone, but why not give a couple of definitions over the weekend?

Remember we got a lot of definitions for just one concept "service": (t)"service". the word meanings, term definitions; theories involved (-:PUBLIC:-)

Why not create a small system of definitions for ToC?


Alex



Sat, Aug 12, 2023 at 22:46, John F Sowa <so...@bestweb.net>:

Nadin, Mihai

unread,
Aug 13, 2023, 4:50:51 PM8/13/23
to ontolo...@googlegroups.com

Dear and respected Alex Shkotin,

Dear and respected colleagues,

Thank you. System of definitions:

  1. https://www.nadin.ws/wp-content/uploads/2017/03/predictive-and-anticipatory-computing_encyclopaedia.pdf

In this text the distinction between anticipation, prediction, guessing, conjecture, forecasting, expectation, etc. is made.

  2. A book (accepted by Springer) defines anticipatory processes in detail. I will let you know when it is issued.
  3. Current state = function of previous state, current state, possible future state is indeed at first view confusing. Anticipatory actions are the expression of holistic processes. Current state is the generic expression for each of the elements that make up the living. The state of the heart and the action (motoric expression—jump if you want to avoid catching fire—as an example) are interrelated. I could go into more detail… Here I am only trying to address briefly what you describe as a bit strange and incomprehensible.

The thought of holistic processes is not trivial. I am not even sure I have the mathematics for describing it.

 

YOUR willingness  to make an appropriate framework / skeleton (your words) is appreciated.

 

Best wishes.

 

Mihai Nadin

 

From: ontolo...@googlegroups.com <ontolo...@googlegroups.com> On Behalf Of alex.shkotin
Sent: Saturday, August 12, 2023 3:01 AM
To: ontolog-forum <ontolo...@googlegroups.com>
Subject: [ontolog-forum] Re: anticipation and consciousness

 

Dear and respected Mihai Nadin,

 

Thanks for the clear answer. If you have a system of definitions for the theory of anticipatory systems, I can try to make an appropriate framework / skeleton.

After all, this approach is simple in its own way: the definition of the term is separated from the text in which it is present in a free form, and is drawn up as a separate paragraph of a well-defined structure that is not related to the text.

For example,

def anticipatory system

A system is anticipatory if and only if the current state of the system depends upon past states, current state and possible future states.

 

It's a bit strange and incomprehensible what "the current state of the system depends upon… current state…" means

Alex Shkotin

unread,
Aug 14, 2023, 4:58:14 AM8/14/23
to ontolo...@googlegroups.com

Dear and respected Mihai Nadin,


Thanks for a very interesting answer. In your encyclopedia we have an example of a theoretical statement on natural language p.2:
"In an anticipatory system, the current state depends not only on the past state but also on possible future states."

And its formalization:

current_state = f(past_state, current_state, possible_future_state)
There are two ways to harmonize the English text and formalization: add "current state" to English or remove "current state" from the formula.

What do you think?


Best regards,


Alex



Sun, Aug 13, 2023 at 23:50, Nadin, Mihai <na...@utdallas.edu>:

John F Sowa

unread,
Aug 14, 2023, 9:23:01 AM8/14/23
to ontolo...@googlegroups.com
There is nothing wrong with writing x = f(x, y).

But I agree that it would be better to say that the state at time t depends on the history of all states up to t plus an ontology that characterizes which states are possible.

Unfortunately, consciousness is a property that can only be observed directly by the individual who is conscious.  Outside observers may infer consciousness by observing the behavior (or other signs) associated with that individual.

But if the individual happens to be motionless, there is insufficient data for any outside observer to draw any conclusions of any kind.

In any case, there is much more to say about these issues.

John
 


From: "Alex Shkotin" <alex.s...@gmail.com>

Alex Shkotin

unread,
Aug 14, 2023, 12:27:50 PM8/14/23
to ontolo...@googlegroups.com
John,

I am trying to understand MN's theory of anticipatory systems and the math used.
The theoretical statement "In an anticipatory system, the current state depends not only on the past state but also on possible future states." is formalized this way:
current_state = f(past_state, current_state, possible_future_state)
My guess is that current_state is redundant as an argument to f. And here the answer of the author of the theory and notation is interesting.

As for the theory of consciousness, of course, many important and interesting things can be said for sure, and by the way, they have already been said.
But the definitions are interesting.

Alex

Mon, Aug 14, 2023 at 16:23, John F Sowa <so...@bestweb.net>:

Nadin, Mihai

unread,
Aug 15, 2023, 4:51:41 PM8/15/23
to ontolo...@googlegroups.com

Dear and respected Alex Shkotin,

Dear and respected colleagues,

The even more challenging observation that within the living there are many "times" (different clocks, some faster, some slower) suggests that the definition I submitted to you needs even more work. Usually when we write X(t) we describe a value at time t. Therefore X(t-1) describes a past value; X(t+1) a future value. When we have several rhythms, i.e. different times (fast, slow, etc.), the notion of past, present and future is subject to re-interpretation.

With this in mind, present state is a configuration—values within the holistic system, some interrelated, some not.

If your desire to capture the definition is still on, think about the type of formalization you might need. I am all set to learn from you and my colleagues regarding such a mathematics.

Alex Shkotin

unread,
Aug 16, 2023, 5:16:18 AM8/16/23
to ontolo...@googlegroups.com

Dear and respected Mihai Nadin,


First of all, I'd like to note that the formula you wrote is an equation.

Is it possible to look at this formula as follows? Some system has a characteristic X that changes over time t. At the same time, time is the time in the reference system of the observer, who has a clock and other measurement tools. And this observer makes tables of the change in time of the value X. And suddenly, after some time, as a result of analyzing various measurement results, he finds that they all fit into the formula:

X(t) = f(X(t-alpha), X(t), X(t+beta))
Where alpha, beta are some constants, and f is some well-defined function.

But here the question arises: Doesn't it follow from this that there is a dependence of X(t) only on X(t-alpha) and X(t+beta)?

Is it possible to give an example of such a function f for which such a dependence does not exist?

But of course now the main thing is to understand how to read the equation itself.

So far, it still seems to me that X(t) on the right side of the equation is superfluous. Consider this way. I know X(t-alpha)=c1 and X(t+beta)=c2, in your case I need to solve equation x=f(c1, x, c2) to get X(t).
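Alex's reading can be made concrete: when f is a contraction in its middle argument, the equation x = f(c1, x, c2) can be solved by simple fixed-point iteration. A sketch with a hypothetical f (the weights are invented purely for illustration):

```python
def solve_current_state(f, past, future, x0=0.0, tol=1e-9, max_iter=1000):
    """Solve x = f(past, x, future) by fixed-point iteration.
    Converges when f is a contraction in its middle argument."""
    x = x0
    for _ in range(max_iter):
        x_next = f(past, x, future)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no fixed point found")

# Hypothetical f: the current state blends the past, itself, and an
# anticipated future; the weight 0.2 on x is below 1, so iteration converges.
f = lambda past, x, future: 0.5 * past + 0.2 * x + 0.3 * future
print(round(solve_current_state(f, past=1.0, future=2.0), 6))  # 1.375
```

So the middle argument is not necessarily redundant: it makes the definition implicit, and the current state becomes the solution of an equation rather than a direct function of past and future values.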

And an example of the characteristic X, the functions f and alpha, beta would be useful.


Alex



Tue, Aug 15, 2023 at 23:51, Nadin, Mihai <na...@utdallas.edu>:

Ricardo Sanz

unread,
Aug 16, 2023, 5:19:45 AM8/16/23
to ontolo...@googlegroups.com
Hi Mihai,

Besides these more precise renderings, there is a major issue in this formulation. 

The left side of the equation being x(t) implies the possibility of instantaneous state change.
While this is mathematically sound, it is not physically possible.

Shouldn't it be a derivative of x(t), such as dx/dt?
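One way to honor this objection while keeping the anticipatory term is to move the dynamics into a derivative, dx/dt = g(x, x_hat), where x_hat is the system's own prediction of a future state. A minimal Euler sketch; the extrapolation model and coefficients here are assumptions for illustration, not anyone's published formulation:

```python
def simulate(x0, dt=0.01, steps=1000, horizon=0.5):
    """Euler integration of dx/dt = -x + 0.5 * x_hat, where x_hat is the
    system's prediction of its state 'horizon' seconds ahead, produced by
    a naive internal model (linear extrapolation from the last rate)."""
    x, rate = x0, 0.0
    for _ in range(steps):
        x_hat = x + horizon * rate   # anticipated future state
        dxdt = -x + 0.5 * x_hat      # dynamics react to the anticipation
        x += dxdt * dt
        rate = dxdt
    return x

print(0.0 < simulate(1.0) < 1.0)  # the state decays smoothly toward zero
```

In this formulation the state changes at a finite rate, and the "possible future state" enters only through the system's internal prediction of it.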

Best,
Ricardo





Alex Shkotin

unread,
Aug 16, 2023, 7:00:39 AM8/16/23
to ontolo...@googlegroups.com
Hi Ricardo,

There is more math to add https://youtu.be/B1J6Ou4q8vE :-)

Alex

Wed, Aug 16, 2023 at 12:19, Ricardo Sanz <ricardo.s...@gmail.com>:

alex.shkotin

unread,
Aug 17, 2023, 6:31:00 AM8/17/23
to ontolog-forum
To continue the definitions of the terms in the defining part of the JFS definition, I decided to bring in Claude 2.

Alex

Sunday, August 13, 2023 at 11:56:09 UTC+3, alex.shkotin:

Nadin, Mihai

unread,
Aug 17, 2023, 7:54:12 PM8/17/23
to ontolo...@googlegroups.com
  1. The equation you are discussing describes one aspect of the living dynamics. Neither α nor β are constants; if that were the case, the dynamics would be clear cut. Example: the clock of the heart beat or that of blinking have variable rhythms. Your blood pressure is maintained when you go to sleep although the position of the body changed (think about the physics involved). During sleep there is blinking, but related to other functions than those of visual perception.
  2. The equation is illustrative of anticipatory processes.
  3. In reality, we have a functional and a relational aspect—the equation is limited to the functional.
  4. Bringing up the equation was part of my indirect suggestion to fellow ontologists: defining a living entity entails many aspects which are usually left aside.
  5. No, x(t), which can be represented in a time sequence, depends on past, present and possible future—but in a holistic manner.

Thank you to everyone for suggestions and observations.

Alex Shkotin

unread,
Aug 18, 2023, 4:09:14 AM8/18/23
to ontolo...@googlegroups.com
Thank you for the clarification. 

Fri, Aug 18, 2023 at 02:54, Nadin, Mihai <na...@utdallas.edu>:

alex.shkotin

unread,
Aug 26, 2023, 2:28:28 PM8/26/23
to ontolog-forum
We seem to have decided not to discuss definitions of the term consciousness, because we need a specialist in the theory of consciousness. Perhaps this article is an introduction to the theories of consciousness.
"Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories we derive "indicator properties" of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators."

Alex

Tuesday, August 8, 2023 at 16:13:27 UTC+3, Ricardo Sanz:

On Fri, Aug 4, 2023 at 4:49 PM John F Sowa <so...@bestweb.net> wrote:
Alex and Ricardo

Nobody has the slightest clue about how to detect consciousness, even in human beings.  There are many, many examples of people who had some kind of injury or altered state where they were unresponsive -- or more precisely, unable to make any voluntary motion.  The physicians in charge recommended removal of life support.  But for one reason or another, they continued life support until the patients "woke up".

Then the patients reported that they had heard all the discussion and were trying to say "No, no, no!"  But they couldn't.    If the best trained physicians can't reliably detect consciousness in a human being, there is ZERO reason to believe any programmer who makes any claims about his or her favorite program.

I certainly admit that consciousness is a very important issue for physicians, biologists, and neuroscientists.  But the best informed people in those fields admit that they have no reliable methods for detecting whether any animals other than humans are conscious.  They're willing to admit that higher mammals are probably conscious.  But they have no reliable criteria for distinguishing conscious decisions from knee-jerk reactions.

Furthermore, this is an ontology forum.   The citations below have ZERO influence on any issue about ontology.  Anybody who has time to waste on idle speculation can read them .  But there are a huge number of important issues that could be discussed at noon on any particular Wednesday.

John

John F Sowa

unread,
Aug 26, 2023, 4:08:56 PM8/26/23
to ontolo...@googlegroups.com
Alex and Ricardo,

I read that article.  I agree that the authors have a good background.  But they also have an agenda:  They want to show that the LLM community is making progress toward some mysterious phenomenon that people recognize in themselves, but nobody can define in a way that is sufficiently precise even to observe it in other people.

As for theories of ontology, the word 'consciousness' is a highly specialized term for which no computer applications of any kind require an answer.  Until somebody can show any useful purpose of a formal definition, it is just one more irrelevant item in a dictionary.

Summary:  The LLM community wants bragging rights.  That is their business, not ours.  It's a waste of time to debate their hot button issue in Ontolog Forum.

John


Ravi Sharma

unread,
Aug 27, 2023, 3:00:18 AM8/27/23
to ontolo...@googlegroups.com
Agree, we can park the definition of consciousness, which will hopefully be described someday soon as double helix was described in the 1960's.
Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect




Nadin, Mihai

unread,
Aug 27, 2023, 3:11:34 PM8/27/23
to ontolo...@googlegroups.com

Dear and respected Dr. Sharma,

Dear and respected colleagues,

https://arxiv.org/abs/2308.08708?fbclid=IwAR1WoFKyn5nerA39NMUk3pAitPrGRT9iTilPzGiHh2y7Km0c7iaRBWh0jlk

keep in mind: Insights from the Science of Consciousness—not my title.

The double helix is a description in the decidable domain of chemistry. Consciousness pertains to the undecidable domain of biology. I respect your optimism. Science is NOT possible without an optimistic grounding. But it takes a different perspective than that of determinism to describe phenomena that cannot be reduced to physics or chemistry. (By the way, mathematics is a good example—a subject for another time.)

Mihai Nadin

 

From: ontolo...@googlegroups.com <ontolo...@googlegroups.com> On Behalf Of Ravi Sharma
Sent: Sunday, August 27, 2023 2:00 AM
To: ontolo...@googlegroups.com

Ravi Sharma

unread,
Aug 27, 2023, 3:24:24 PM8/27/23
to ontolo...@googlegroups.com
Mihai. John, Colleagues
Thanks for honorifics and regards.
Also, the arXiv treatise offers great thoughts for some other time, but it is relevant to the topic of "consciousness" as it relates to AI. Appreciated.
I was talking about the Double Helix as an analogy to the origin of biology from chemistry.
Similarly, when the Universe or matter came into being, how consciousness originated is a subject I will describe in a couple of months. It is metaphysics.
Regards.
Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect


John F Sowa

unread,
Aug 27, 2023, 4:02:02 PM8/27/23
to ontolo...@googlegroups.com
Ravi and Alex,

There are many different hypotheses about consciousness, but no evidence to support any of them.  The worst examples are the people who are paralyzed and can't move or speak.  But they can hear every word that is spoken about whether to "pull the plug".  If the medical experts can't detect consciousness in a human being, I have ZERO trust in any non-medical person who claims to have a definition of consciousness.

I admit that the arXiv article was written by competent authors, but they had no definition or suggestions about how to define the word in sufficient generality to apply to both humans and computers.  Furthermore, they had a hidden agenda:  demonstrate that certain AI systems are getting close to the goals of AGI.

That tendency has plagued AI from the earliest days:  Researchers need to get funding.  To get funding, they need to show progress.  Doing something that truly matches human ability is extremely difficult.  Therefore, they rename what they're currently doing to make it sound as if they are making progress.

I am not claiming that anybody is doing that in a sneaky, underhanded, or even corrupt way.  But respectable researchers have been making exaggerated claims about AI since the 1950s.  

Furthermore, such claims are made in every field where anybody is trying to get funding.   Conclusion:  Never trust any fantastic new verbiage or definitions that are being made by anybody who needs funding for any purpose.

People blame businesses that are trying to sell something.  But academics are no better.  In fact, they are even worse than businesses because (1) they have less money, and (2) they are desperate to get more.

Net result:  I recommend that *ALL* attempts to define consciousness be put in the large bins that are being collected by the sanitation department.  There is no proposed definition that has any relevance whatever to ontology.

John

Ravi Sharma

unread,
Aug 27, 2023, 7:24:16 PM8/27/23
to ontolo...@googlegroups.com
John
I agree with many of your observations, but I request humility in allowing others to express their opinions, while you retain the right to comment as you have here.
I am not for or against these postings.
My message is that I propose to provide a different definition and set of concepts for "consciousness", defined in a new way, with references, when I present it. Since it is metaphysics, it may or may not be relevant for computing.
Regards.
Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect


--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info/
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.

John F Sowa

unread,
Aug 27, 2023, 9:35:32 PM8/27/23
to ontolo...@googlegroups.com
Ravi,

Everybody has the right to express their opinions.  But Wednesday noontime Zoom sessions are a finite resource.

Any topic on which the experts in the field (medicine, neuroscience, psychology, and philosophy) have never found any agreement is not a useful topic for a bunch of amateurs -- you, me, Alex, and 99% of Ontolog subscribers -- to waste any time debating.

The word 'conscious' is just one simple word in any English dictionary.  If anybody wants to use it for any purpose, just take the definition in the first dictionary you find.  That definition is guaranteed to be no better or no worse than anything that anybody on Ontolog Forum might suggest.

As I have said, the only reason why those LLM guys want to define computer consciousness is for funding purposes.  They want to brag about how smart their computer software is.  They have every right to do so.  But I do not consider their funding to be anything that Ontolog forum should support.

John
 



Ravi Sharma

unread,
Aug 28, 2023, 2:00:14 AM8/28/23
to ontolo...@googlegroups.com
John
Thanks for clarifying that this topic, namely "computational or AI consciousness," is not relevant for our Forum, especially the attempt to make it appear relevant in terms of computing, AI, or ontology. I agree with that point of view for now.
Regards.
Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect



Alex Shkotin

unread,
Aug 28, 2023, 5:04:08 AM8/28/23
to ontolo...@googlegroups.com

John, 


My position is simpler and calmer:

- A single definition is not very interesting; what is interesting is a theory of a part of reality: the structure and modes of change of certain objects, including certain processes.

- Even your definition of consciousness, which was highly appreciated by Claude 2 [1], could be formalized on its own, but what needs formalizing is the theory, not a separate definition.

- Anyone who has mastered the technique of thinking in concepts can give a more or less reasonable definition of any term from the field of common sense. We saw this with the term "service" [2].

- There are about twenty theories of consciousness; six of them are selected in the report. Whether any of them needs to be formalized, I do not know.

I absolutely agree with you that discussing the definition of consciousness without specialists in the theory of consciousness, i.e. outside any theory, is simply either entertainment or a waste of time.

The review, however, concerns the state of affairs in the field of theories of consciousness and is interesting as general information about the study and modeling of this part of reality: the external observation of consciousness.


Regarding LLMs, one should consider how much they imitate consciousness, not whether they have it.


And my favorite: if we take any formal ontology from the OBO Foundry [3], we can divide it into two parts:

- theoretical knowledge (approximately the T-box)

- facts presented using the terms of the theory (approximately the A-box)

Somehow, this kind of research will have to be done.
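The T-box/A-box division can be sketched in code. This is a minimal, hypothetical illustration, not tied to any real OBO Foundry file: the predicate names are standard RDFS/OWL terms, but the `ex:` terms are invented for the example, and a real ontology would be loaded with a proper RDF parser such as rdflib rather than hand-written triples.

```python
# Minimal sketch: partition an ontology's (subject, predicate, object)
# triples into an approximate T-box (schema-level, "theoretical knowledge")
# and A-box (instance-level, "facts stated in the theory's terms").
TBOX_PREDICATES = {"rdfs:subClassOf", "owl:equivalentClass",
                   "rdfs:domain", "rdfs:range"}
SCHEMA_TYPES = {"owl:Class", "owl:ObjectProperty"}

def split_tbox_abox(triples):
    """Return (tbox, abox) lists from an iterable of triples."""
    tbox, abox = [], []
    for s, p, o in triples:
        # Schema axioms, or typing a term as a class/property, go to the T-box;
        # everything else (facts about individuals) goes to the A-box.
        if p in TBOX_PREDICATES or (p == "rdf:type" and o in SCHEMA_TYPES):
            tbox.append((s, p, o))
        else:
            abox.append((s, p, o))
    return tbox, abox

# Invented example data (ex: terms are hypothetical):
triples = [
    ("ex:Neuron", "rdf:type", "owl:Class"),          # T-box
    ("ex:Neuron", "rdfs:subClassOf", "ex:Cell"),     # T-box
    ("ex:n1", "rdf:type", "ex:Neuron"),              # A-box: a fact about n1
    ("ex:n1", "ex:partOf", "ex:cerebellum"),         # A-box
]
tbox, abox = split_tbox_abox(triples)
```

The heuristic here (classify by predicate) is only approximate, which matches Alex's "approximately T-box / approximately A-box" wording; real reasoners draw the line from the OWL semantics, not from predicate names.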


Alex


[1] https://www.linkedin.com/pulse/claude-2-system-definitions-alex-shkotin/

[2] (t)"service". the word meanings, term definitions; theories involved(-:PUBLIC:-)

[3] http://obofoundry.org/


On Sun, Aug 27, 2023 at 23:02, John F Sowa <so...@bestweb.net> wrote:

Ricardo Sanz

unread,
Aug 28, 2023, 6:28:40 AM8/28/23
to ontolo...@googlegroups.com
Hi John,

The article's author list does not include anyone who has done any work on machine consciousness in the past.
They are mostly people from the new DNN wave (maybe with the hidden agenda that you mention; I do not know). 

What I know is that, in your arguments against discussing consciousness, you are committing the common fallacy of anthropomorphic mind studies: the assumption that it all comes together.

However, the quest for mechanisable approaches to consciousness need not be tied to qualia, emotion, the metaphysics of experience, or macroscopic quantum-coherence phenomena in axon microtubules.

Set aside humans, AGI, and LLMs. Consciousness has another property of maximal value for autonomous machines: the capability of timely understanding of perceptual situations (the ground for meaningful action).

These are some of the terms that need suitable and properly scoped definitions: state, measure, sense, percept, mental model, meaning, awareness, agent, value, self, etc.

-------

BTW, I am quite aware of at least one academic who is not "desperate to get more money".
This falsifies what you said. Please refrain until you fully check your personal T-box.

Best wishes,
Ricardo



John F Sowa

unread,
Aug 28, 2023, 3:15:38 PM8/28/23
to ontolo...@googlegroups.com
Ricardo,

I am not restricting anybody's right to free speech about anything.  I am just pointing out that (1) 99% of the subscribers to Ontolog Forum have little or no qualifications for making any useful contribution to a definition of consciousness (and that includes me), (2) nobody has ever discovered any use for the term or idea of consciousness in any application of ontology, and (3) the people who do have knowledge of and need for a clear definition of consciousness are physicians and undertakers -- and none of them are proposing any definitions for our consideration.

When I said that prominent AI people have been making extreme claims in order to get funding, some of the most prominent among them have admitted that point.  But that does not imply that everybody working in AI is guilty.

And finally, the most complex reasoning in the human brain is done by the cerebellum, which has almost 5 times as many neurons as the cerebral cortex.  And *every* aspect of reasoning in the cerebellum is unconscious.  Therefore, consciousness is unrelated to reasoning ability -- by humans or by computers.  Any claims that LLMs can be conscious would be a point against the claim that they are involved in the most complex reasoning that humans can do.

For more details, see the next note I'll post.

John
 



Alex Shkotin

unread,
Aug 29, 2023, 6:43:44 AM8/29/23
to ontolo...@googlegroups.com

Ricardo,


It is great that you mention the necessity of definitions for "state, measure, sense, percept, mental model, meaning, awareness, agent, value, self, etc." For me this means looking into the twenty theories of consciousness, or the best six, and finding these terms there. That is a big job.

And it may be done out of enthusiasm or as part of a project.

By the way, if it is difficult to get a definition, then we have a primary term, and we need axioms for it.

With primary terms, the situation is as follows: only a sufficiently trained person can reliably state whether a given object has a property such as consciousness (in our case).

And in [1] there are a lot of definitions but in Russian.


Alex 


[1] AGI-Definitions


On Mon, Aug 28, 2023 at 13:28, Ricardo Sanz <ricardo.s...@gmail.com> wrote:

Michael DeBellis

unread,
Aug 29, 2023, 11:24:34 AM8/29/23
to ontolog-forum
I was going to write a reply to this... actually I did anyway, but it's shorter because John Sowa already said what I was going to say. No one really has a clue, and virtually all the discussions I've ever seen on this end up going nowhere. IMO there are some questions that are amenable to scientific analysis and some (given our current knowledge) that aren't, and consciousness is one of those that currently aren't. You have extremes such as a paper I saw years ago by some leading neuroscientists that talked in depth about consciousness and defined it as the opposite of being asleep or in a coma. And on the other extreme, people like Christof Koch, who believes in panpsychism: that everything in the universe is conscious.

Many years ago I sat in on a Philosophy of Mind lecture series led by John Searle at Berkeley. One of my favorite classes was a guest lecture by Koch. Searle started out by lauding him as one of the most brilliant minds ever (and at the start of his talk I could see why; Koch really knows his neuroscience). Then Koch started getting into his panpsychist philosophy, and you could just see the color draining from Searle's face, until Searle finally said something like, "Wait, you are serious?! I thought you were talking about panpsychism as an example of a clearly wrong theory!" And it got more entertaining from there.

I don't agree with Patricia Churchland much, but there is a book called "This Idea Must Die" in which she discusses the Neural Correlate of Consciousness (NCC) as an idea that must die. Her reasoning was that there are many concepts we don't yet have coherent, falsifiable models of, such as the Language Faculty and Episodic Memory, and that whatever consciousness is, we can probably all agree it is closely tied to memory and language. So until we at least have decent theories of such more basic (but still barely understood) concepts, it is pointless to postulate theories about consciousness. It can be fun, but it is not something I expect to see any serious science on.

Michael

Ravi Sharma

unread,
Aug 29, 2023, 3:24:26 PM8/29/23
to ontolo...@googlegroups.com
On this subject I agreed with John Sowa's general direction that it was not directly relevant to our Forum. But I realize some subjects are difficult to resist, hence I welcome the thoughts from Michael DeBellis.
One summary message: even where developed life with a nervous system has not yet appeared, Nature keeps doing its work, opening opportunities in which the cognizer (us) observes the "Works". And the Works are happening, including consciousness and its evolution.
I tend to agree with Koch, as quoted in Michael's message, "that everything in the universe is conscious". I will also expand on the reasoning in a paper related to, but not devoted to, ontology.
Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect

