“Emergence” isn’t an explanation, it’s a prayer


John F Sowa

Jul 31, 2023, 10:40:20 AM
to ontolo...@googlegroups.com
Tensor calculus is the mathematics used to process LLMs.  It is defined very precisely.  But when you have a huge collection of LLMs derived by a huge amount of computing power that pumps an enormous amount of CO2 into the air, some people claim that emergent properties occur.  But the word 'emergent' explains nothing.  If you replace "emergent properties" with "miracle", there is no change of meaning.  The word "emergent" has no explanatory power of any kind.

And by the way, another synonym for 'emergent' is 'generative', which represents the G of GPT.  When people talk about "generative AI", you can get the same explanatory power with the phrase "miraculous AI".  Or if you delete the meaningless words, you could just say AI -- which is what most people do.

That is a brief summary of the article "'Emergence' isn't an explanation, it's a prayer: A critique of Emergentism in Artificial Intelligence", https://ykulbashian.medium.com/emergence-isnt-an-explanation-it-s-a-prayer-ef239d3687bf

Some excerpts below.

John
_______________________

In cognitive science, emergence is regularly used to “explain” the connection between two phenomena, when it is otherwise complex and difficult to predict: e.g. how neuronal firing gives rise to consciousness, or transformers to the appearance of language comprehension. Where there may be a connection, but nothing more is known or can be proved, emergence is a placeholder that fills the gap. The word gives weight and gravitas to what is essentially a blank space.

 Emergentist interpretations of human behaviour may also boast some recent wins—the proliferation of LLMs (e.g. ChatGPT) may be seen as one such success. This has reignited the discussion of whether emergence is the best way to frame intelligence. 

Despite emergence contributing nothing of substance to the discussion, as a concept it admittedly has a compelling intuitive appeal. There is a wonderful feeling about the notion of emergence. It does seem to be adding something valuable, as if you’ve discovered a magical ingredient by which you can explain mysterious phenomena. That’s the reason it continues to be popular, and gets inserted into scientific discussions. It convinces the listener that something has been explained with scientific rigour when all we’ve done is to say “it’s complicated”.

Besides the good feeling, however, emergence is void of any explanatory power. And so it has no scientific value in a predictive capacity. You can’t use it to say anything about what an unknown system will do; only what you hope it will do. When applied to pie-in-the-sky AI futurism, emergence has become synonymous with “I’m sure the system will work itself out”. It indicates that the author has a feeling that a complex system will align at some point, but no clear sense of how, why, or when. Insofar as intelligence does manifest in a specific instance, “emergence” doesn’t tell us anything interesting about how it happened. And insofar as intelligence hasn’t yet manifested, emergence doesn’t tell us when it will or what direction to take to get there.

In the field of AI development, emergence is invoked whenever someone encounters a phenomenon in the human mind and has no idea how to even start explaining it (e.g. art, socialization, empathy, transcendental aesthetics, DnD, etc). If said researcher already has a working theory of AI, this realization is disheartening. So they look deeper into the matter, find some point of overlap between the existing theory and the missing behaviour, and assume that with enough time and complexity the missing pieces will emerge.

Emergence is attractive in such cases because it puts the author’s mind at ease, by making it seem like they have a viable mechanism that only needs more time to be vindicated. It placates their inner watchdog, the one that demands concrete, scientific explanations. Emergence, being related to complexity and superficially validated by experiments such as Conway’s Game of Life, is enough to lull that watchdog back to sleep.
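An aside on the Game of Life reference above: the "superficial validation" is easy to reproduce. The sketch below (helper names and coordinates are my own) implements only the cell-level rules, yet the five-cell glider pattern reappears shifted one cell diagonally after four steps.

```python
from collections import Counter

def step(live: set) -> set:
    """One Game of Life generation over a set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# The cell-level rules say nothing about motion, yet after four steps
# the five-cell pattern has translated itself one cell diagonally.
assert state == {(x + 1, y + 1) for x, y in glider}
```

Whether that counts as evidence for "emergence" or merely as system-level behaviour is, of course, exactly what this article disputes.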

This justifies continuing to ignore any shortcomings in a theoretical model, and persisting on the current path. Like the proverbial man who searches for his lost keys under the lamplight, because that is where the light is, he hopes that with enough persistence his keys will “emerge”. The only other alternative is to admit failure, and to give up any hope of accomplishing what you want within this lifetime.

Scientists, it seems, can have superstitions too. And emergence has a powerful narcotic effect: it feels so reasonable and credible on a gut level². There are many factors that prevent a given researcher from investigating emergence too deeply and realizing that it lacks any substance. First, there appears to be a lot of external evidence to back it up in the natural world. This, as was pointed out, equivocates between retrospective and prospective uses of the term, and so legitimate uses are being conscripted to justify the illegitimate ones. Secondly, the fact that emergence exclusively concerns itself with intractably complex systems means anything behind its curtain by definition can’t be studied. So it conveniently excludes itself from exactly that analysis which would reveal it to be hollow.

In the end emergence isn’t an explanation; it’s an observation combined with a recognition of ignorance. Wherever emergence shows up there is an implicit acceptance that everyone involved is at a loss for how to approach the topic. It’s not that properties like intelligence won’t emerge from neural activity, it’s that emergence is a placeholder that justifies and promotes a lack of interest in exploring the details behind the connection. It discourages investigation. By invoking the term, we are merely thanking the nature gods for granting us this emergent property (aka property), and trying not to examine their gifts too profanely or with ingratitude. This impulse is understandable, since we don’t think we’ll discover an answer if we were to dig in. But we shouldn’t allow our insecurities to masquerade as science, or else they may become ingrained to the extent that they are difficult to uproot. A false answer stands in the way of a true one.

Notes:

¹ This used to say ‘You can remove “emergent” from any sentence and it would mean the same thing’, but that has caused some confusion, so to clarify: the word “emergent” when used as an adjective doesn’t add new or useful information; you won’t know any more about the subject than you did before.

² A self-aware researcher should notice if they have a strong intuitive or emotional reason for holding on to the idea. If you ever feel that emergence is so self-evident that it can never be disproved, that should give you pause — perhaps you have strayed outside the bounds of scientific inquiry and into metaphysical expositions. Not that there’s anything wrong with the latter…
 

Barry Smith

Jul 31, 2023, 10:51:04 AM
to ontolo...@googlegroups.com
Every day and in every way I am pleased to see that I agree with what John has to say.
I would add today that ChatGPT is not even generative. It only draws conditional samples from a huge multivariate distribution of sequences. This is NOT a generative act.
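Barry's description of sampling can be sketched with a toy stand-in: a bigram table built from a tiny corpus, from which "generation" is nothing but repeated conditional draws. (The corpus and names below are invented for illustration; an LLM's distribution is vastly larger, but the act is the same.)

```python
import random

# Toy "model": a conditional next-token distribution P(next | prev),
# estimated as bigram counts from a tiny corpus.
corpus = "the cat sat on the mat and the cat slept".split()
table = {}
for prev, nxt in zip(corpus, corpus[1:]):
    table.setdefault(prev, []).append(nxt)

def sample_next(prev, rng):
    """Draw one token from the conditional distribution P(next | prev)."""
    return rng.choice(table[prev])

rng = random.Random(0)
token, out = "the", ["the"]
for _ in range(5):
    if token not in table:   # no continuation observed; stop
        break
    token = sample_next(token, rng)
    out.append(token)
print(" ".join(out))
```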
Barry
****************************************************************************** 

--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info/
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/ontolog-forum/da2a50f5ded442619a8ed36c793e4966%40bestweb.net.

Daniel Arista

Jul 31, 2023, 10:58:03 AM
to ontolo...@googlegroups.com

Mike Peters

Aug 1, 2023, 4:40:49 PM
to ontolog-forum
John

You are 100% wrong on emergence.

I suggest you read the research coming out of the Simons Foundation (some popular articles are available in Quanta Magazine), SFI (the Santa Fe Institute), and George F. R. Ellis.


Mike

doug foxvog

Aug 1, 2023, 6:27:48 PM
to ontolo...@googlegroups.com
Mike,

You made a claim, but provided no argument to back it up. This isn't the
way this forum is supposed to work.

Provide an argument and then a link to a paper that backs it up instead of
a generic suggestion to read research. Provide a single paper to back up
a claim that you make. Not "i can't express what i want to say, you
should read something and you'll come to my inexpressible conclusion."

The Wikipedia article about George Ellis gave no explanation of emergence.
Providing that wasted our time. Of course, a suggestion to wander around
some research institute's site to find something that might answer the
question is ridiculous.

A suggestion to watch a video is a turn-off to me. But that is all that
was left of your three links. I did watch the video since it was short.
And unsurprisingly it explained emergence just as John did: a property
that arises for unknown reasons in a complex system.

I will be polite enough, Mike, not to express that you are 100% wrong, but
i found nothing in your post that shed light on the subject you were
addressing or that was worth my time.

-- doug foxvog


Anatoly Levenchuk

Aug 1, 2023, 7:04:48 PM
to ontolo...@googlegroups.com

Mike,
you can add the mathematical proof of the emergence of random structures in graphs: https://www.quantamagazine.org/elegant-six-page-proof-reveals-the-emergence-of-random-structure-20220425/ (it was later simplified to a one-page proof: https://arxiv.org/abs/2303.02144).
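For readers who want the flavour of such threshold results without the proofs, here is a sketch on the classic Erdős–Rényi example (my choice, simpler than the setting of the linked papers): connectivity of G(n, p) appears abruptly around p = ln(n)/n.

```python
import math
import random

def is_connected(n, p, rng):
    """Sample one Erdos-Renyi graph G(n, p) and test connectivity by DFS."""
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen, stack = {0}, [0]
    while stack:
        for k in adj[stack.pop()]:
            if k not in seen:
                seen.add(k)
                stack.append(k)
    return len(seen) == n

rng = random.Random(0)
n = 200
results = {}
for factor in (0.5, 1.0, 2.0):
    p = factor * math.log(n) / n
    results[factor] = sum(is_connected(n, p, rng) for _ in range(30))
    print(f"p = {factor} * ln(n)/n: connected in {results[factor]}/30 trials")
# Below the threshold the graph is almost never connected; above it,
# almost always: a sharp "emergence" of a global property.
```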

And add works about LLMs, and compare them with the results from the links above:

Emergent Abilities of Large Language Models

Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus

https://arxiv.org/abs/2206.07682


Scaling up language models has been shown to predictably improve performance and sample efficiency on a wide range of downstream tasks. This paper instead discusses an unpredictable phenomenon that we refer to as emergent abilities of large language models. We consider an ability to be emergent if it is not present in smaller models but is present in larger models. Thus, emergent abilities cannot be predicted simply by extrapolating the performance of smaller models. The existence of such emergence implies that additional scaling could further expand the range of capabilities of language models.
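A toy numerical illustration of that definition (my own construction, not from the paper): if a task needs a chain of k correct steps and per-step accuracy improves smoothly with scale, the whole-task score stays near zero and then jumps, so extrapolating from small models fails.

```python
# Per-step accuracy grows smoothly with "scale" (arbitrary units), but a
# task requiring k = 20 correct steps in a row looks flat at small
# scales and then appears abruptly -- "emergent" by the paper's
# definition, even though every ingredient here is smooth.
k = 20
scales = [1, 2, 4, 8, 16, 32, 64]
per_step = [1 - 0.5 ** (s / 8) for s in scales]   # smooth improvement
task = [p ** k for p in per_step]                 # sharp transition

for s, p, t in zip(scales, per_step, task):
    print(f"scale {s:>2}: per-step {p:.3f}  whole-task {t:.5f}")
```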

 

One example of a study of emergence in LLMs is here (there are multiple works of this type):

Larger language models do in-context learning differently

Jerry Wei, Jason Wei, Yi Tay, Dustin Tran, Albert Webson, Yifeng Lu, Xinyun Chen, Hanxiao Liu, Da Huang, Denny Zhou, Tengyu Ma

https://arxiv.org/abs/2303.03846

 

We study how in-context learning (ICL) in language models is affected by semantic priors versus input-label mappings. We investigate two setups-ICL with flipped labels and ICL with semantically-unrelated labels-across various model families (GPT-3, InstructGPT, Codex, PaLM, and Flan-PaLM). First, experiments on ICL with flipped labels show that overriding semantic priors is an emergent ability of model scale. While small language models ignore flipped labels presented in-context and thus rely primarily on semantic priors from pretraining, large models can override semantic priors when presented with in-context exemplars that contradict priors, despite the stronger semantic priors that larger models may hold. We next study semantically-unrelated label ICL (SUL-ICL), in which labels are semantically unrelated to their inputs (e.g., foo/bar instead of negative/positive), thereby forcing language models to learn the input-label mappings shown in in-context exemplars in order to perform the task. The ability to do SUL-ICL also emerges primarily with scale, and large-enough language models can even perform linear classification in a SUL-ICL setting. Finally, we evaluate instruction-tuned models and find that instruction tuning strengthens both the use of semantic priors and the capacity to learn input-label mappings, but more of the former.

In systems thinking, emergence is a must for any system: a system has properties that its subsystems do not. E.g., a calculator can calculate but the transistors in it cannot, and a transistor can control electric current but the emitter, collector, and base that are its parts cannot.

An LLM is a very big and complex network that can self-organize during learning into a web of multiple subnetworks. These subnetworks give us emergent properties when they interact inside the whole ANN. This is like the way an engine, fuselage, wings, and landing gear give us a flying airplane, even though a wing cannot fly by itself. The property "can fly" exists only for the airplane as a whole. This is the systems-approach view of emergence as a property of multilevel systems. An LLM certainly has multiple levels of structure within it. These structures are not easy to observe, but they exist and give the LLM its emergent properties.
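Anatoly's parts-versus-whole point can be shown in a few lines (an illustration of the general claim, not of LLM internals): no single NAND gate computes XOR, but the standard four-gate composition does.

```python
def nand(a, b):
    """A single part: one NAND gate. By itself it cannot compute XOR."""
    return 0 if (a and b) else 1

def xor(a, b):
    """The whole: four interacting NAND gates (the textbook circuit).
    XOR exists only at the level of the composition."""
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
assert [xor(a, b) for a, b in inputs] == [0, 1, 1, 0]
# No single gate reproduces XOR's truth table:
assert [nand(a, b) for a, b in inputs] != [0, 1, 1, 0]
```

Whether one calls the circuit's behaviour "emergent" or simply "composed" is the terminological dispute of this thread.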

Best regards,
Anatoly

 

 


Mike Peters

Aug 1, 2023, 7:55:12 PM
to ontolog-forum
Hi Anatoly

Thanks for those reading references. I'll go and have a look.

Hi Doug

I wouldn't be offended if you said I was 100% wrong. I'm not an academic. I prefer people who are straight talkers. I am.

Mike

Alex Shkotin

Aug 2, 2023, 5:04:43 AM
to ontolo...@googlegroups.com, Mike Peters
Mike,

Doug is right that we need your own ideas about why John is wrong, not a reference to papers to read and draw conclusions from. And that way is not "straight talking" ;-)
John sometimes does the same thing, but he refers to his own papers :-) This is definitely a way to stop a conversation, as nobody except Doug will go and read an article instead of being given the idea directly.
So, why is John wrong?
Please, give your answer.

Alex 



Alex Shkotin

Aug 2, 2023, 5:40:58 AM
to ontolo...@googlegroups.com
Anatoly,

Thanks for the links. I like: "We consider an ability to be emergent if it is not present in smaller models but is present in larger models. Thus, emergent abilities cannot be predicted simply by extrapolating the performance of smaller models." But the idea of prediction is a bit redundant. Maybe it's better to say bluntly that some big models have abilities that no small model has?
After all, some of us learned the law of the transition of quantity into quality back in school :-)

As far as I know, there is no LLM theory, so predictions can be difficult.

Alex



Alex Shkotin

Aug 2, 2023, 5:53:27 AM
to ontolo...@googlegroups.com
John,

Let me add Claude 2's answer:
Q: What Emergent Abilities of Large Language Models do you know?
A: Here are some emergent abilities that large language models like myself can demonstrate:
  • Commonsense reasoning - Large models trained on massive amounts of text data tend to acquire basic commonsense knowledge about the world, allowing them to reason about everyday situations in a human-like manner. This includes making simple inferences, understanding social norms, and answering commonsense questions.
  • Multitask learning - Large models can learn to perform multiple different tasks simply by being trained on diverse datasets. For example, the same model can be trained to summarize text, answer questions, translate between languages, and more without needing specialized architectures for each task.
  • Few-shot learning - With their broad knowledge, large models can learn new concepts and skills from just a few examples. A model trained on enough text can answer questions about a topic it has barely encountered before.
  • Logical reasoning - Large language models have shown the ability to solve syllogisms, puzzles, and other tests of logical reasoning without being explicitly trained on them. Their broad knowledge helps them infer the logical connections between concepts.
  • Reading comprehension - Models trained on answering questions based on passages of text are able to answer questions that require understanding complex language, reasoning about implications, and synthesizing information.
  • Commonsense generation - Models can generate sentence completions that make sense and reflect commonsense knowledge about the everyday physical and social world.
  • Creativity - Large models can display human-like creativity in certain contexts, such as generating original stories, poems, jokes and continuation of prompts. They capture deeper patterns in training data.

The emergence of such abilities without specialized architectures or training is a remarkable demonstration of the general intelligence and learning capacity of large language models. Researchers are still studying the extent and limits of these capacities.

DONE

At least we have a list :-) Do we know more?

Alex




John F Sowa

Aug 2, 2023, 11:23:06 AM
to ontolo...@googlegroups.com
Alex,

I agree that you can find many examples of all the things on your list.  But you can also find many more examples where the answers are incomplete, inaccurate, misleading, false, or dangerous.  The worst cases are sufficiently accurate that people trust them -- but don't realize the missing or inaccurate aspects that destroy anything good that might have been contributed.
 
As I keep repeating, the most useful applications of LLMs are (1) machine translations that are just one step removed from the source text, (2) those that knowledgeable people use for getting interesting guesses that they check themselves,  and (3) those that are tested, evaluated, and extended by reliable programs that evaluate and build on whatever the LLMs produce.  As just one of many examples, look at the Wolfram system, which uses LLMs to translate English questions to its own mathematical software.

I have some tight time constraints and a large backlog of email.  So I won't reply to the many other notes in my email until late in the day (or night) when I'm too tired to do anything else.

John

PS:  Note the word 'emergent' in your question.  That was the topic of the article I cited at the end of my first note.
 



John F Sowa

Aug 2, 2023, 12:16:00 PM
to ontolo...@googlegroups.com
Doug F, Anatoly, Mike P,

I told Alex that I wasn't going to reply to the rest of my email until this evening.  But I thank Doug for refuting Mike's note. 

Every article that Mike or Anatoly cited shows that if you process more stuff, you get more of the good stuff that you want -- but they avoid or downplay the fact that you also get more of the bad stuff that you don't want.

That is further proof that the article with the title above is exactly correct:   the word 'emergence' is a meaningless buzzword, whose results are indistinguishable from saying a prayer. 

Repeating the fundamental principle:  LLMs are very useful for generating lots of stuff.  But you need to have reliable, dependable methods for testing and evaluating what they produce.   Then you can keep the good stuff, throw away the bad stuff, and revise or correct the mediocre stuff.

John

PS:  I don't want to discourage prayer.  Just the practice of meditating on something can provide important insights that may be as good as or better than generating more random data.
 


Alex Shkotin

Aug 2, 2023, 12:23:51 PM
to ontolo...@googlegroups.com
John,

Claude 2 is a very interesting creature.

Alex


dr.matt...@gmail.com

Aug 2, 2023, 1:11:10 PM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com, ontolog...@googlegroups.com, Onno Paap

SO SORRY to let you know Matthew West died on 1st August

 

Please send bereavement cards to 28 Connemara Crescent

 

     Whiteley

 

Fareham

PO15 7BE

 

Lydia Naomi David

dr.matt...@gmail.com

Aug 2, 2023, 1:14:06 PM
to ontolo...@googlegroups.com

So sorry to let you know Matthew West died on 1st August

Please send bereavement cards to

 

28 Connemara Crescent

Whiteley

 

Fareham

 

PO15 7BE

 

Thank you

 

 

Lydia West naomi and David


Steve Ray

Aug 2, 2023, 1:14:59 PM
to ontolo...@googlegroups.com
I was so saddened to hear this news. He was a valued colleague.

Steve




Ravi Sharma

Aug 2, 2023, 1:18:56 PM
to ontolo...@googlegroups.com
Extremely sorry to hear the sad news, predicted by our esteemed colleague Matthew West himself. Our condolences, prayers, and wishes for peace and strength for the family.
Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect



Doug Holmes

Aug 2, 2023, 1:38:51 PM
to 'Ron Wheeler' via ontolog-forum, ontolog...@googlegroups.com, ontolog...@googlegroups.com, Onno Paap
Very sad news. We’ll all miss him.
Doug

Leo Obrst

Aug 2, 2023, 1:54:12 PM
to ontolo...@googlegroups.com
I am very sorry to hear this. Matthew was an exceptional contributor to Ontolog, a good and generous colleague. Our deep condolences to his family and friends. Matthew participated in the first Ontology Summit of Ontolog Forum, in 2006: https://ontologforum.com/index.php/UpperOntologySummit/UosJointCommunique. He was also a co-author of the 2006 Upper Ontology Summit Joint Communique, which was published in the journal Applied Ontology.

We will miss Matthew very much.

Best regards,
Leo Obrst




--
Leo Obrst, lob...@gmail.com

maria evt

Aug 2, 2023, 1:57:08 PM
to ontolo...@googlegroups.com

Mike Bennett

Aug 2, 2023, 2:15:02 PM
to ontolo...@googlegroups.com

I'm so sorry to hear this. Matthew was a key part of Ontolog and a great guy to know.

Mike

-- 
Mike Bennett
Hypercube Limited
Gwynfryn, Spring Gardens, SA34 0HP
Tel 020 7917 9522  Mob. 07721 420 730
Twitter: @MikeHypercube
www.hypercube.co.uk

Nadin, Mihai

Aug 2, 2023, 2:45:47 PM
to ontolo...@googlegroups.com

Dr. Matthew West, OBE—as he told us to identify him. His contribution to the digital infrastructure of Great Britain is exemplary. He spoke about it to the Ontolog forum.

The best memory: dignified and respectful of others (even when indignant at incompetence). A lot to learn from how he treated each of us. In private notes: eager to confirm his respect for the community.

 

Mihai Nadin

Mike Peters

Aug 2, 2023, 3:01:26 PM
to ontolog-forum
I am sad to read that Dr Matthew West has died—my condolences to his family.

His exemplary work on 4Dism greatly influenced my thinking and database work. He made a crucial contribution to Digital Twin UK. His writings were very clear. I wish more people were like him.

Regards
Mike Peters
NZ

Azamat Abdoullaev

Aug 3, 2023, 4:08:59 AM
to ontolo...@googlegroups.com
I can understand John's concern with "emergence". It is not the best choice of term: it is shrouded in some mystery, and emergentism trends toward some sort of scientific religion.
The phenomenon is real and it was described by Aristotle as holism, totality or unity when the whole is different from its parts.
It corresponds to one of the three dialectical laws formulated by Engels:
the law of the transformation of quantity into quality and vice versa;
the interpenetration of opposites;
the negation of the negation.
P. Anderson concluded his antireductionist article in Science (1972), "More Is Different: Broken symmetry and the nature of the hierarchical structure of science", with "Marx said that quantitative differences become qualitative ones".
There is a good definition of it in the wiki article: 
"emergence occurs when a complex entity has properties or behaviors that its parts do not have on their own, and emerge only when they interact in a wider whole".
Emergence is a derivative concept of the fundamental concepts of holism and interaction. Complexity, scale, level, organization, life, mind, intelligence, consciousness, etc. are all due to this holistic nonlinear phenomenon. Emergence is not due to some mystical forces; it is the product of causal interactions and integration of the individual parts of any complex system or dynamic network, from quantum phenomena such as coherence and entanglement to cosmological phenomena such as gravitation, or even the emergence of the universe itself.
Emergence as quantitative changes in a system resulting in its qualitative changes in structure, properties and behavior is everywhere, in philosophy, mathematics and physics, chemistry and biology, neuroscience and psychology, economics and politics, engineering and technology.  
As to "Emergent Abilities of Large Language Models", it is an open question, in need of deep study. 
And things are not as simple as presented in the article:
"An ability is emergent if it is not present in smaller models but is present in larger models". https://arxiv.org/pdf/2206.07682.pdf  
By its nature, it is a system's total property or relationships caused by complex nonlinear causal interactions of its component elements. 
If your ML/AI/LLM model is a complex neural network system with billions of processing units (nodes, neurons) and trillions of interconnections, everything becomes unpredictable and unexpected, including hallucinations and vulnerability to adversarial attacks: small, imperceptible changes to input data, such as images or text, made in order to deceive the machine learning model.
So, emergence as the interaction effect phenomenon could be both constructive/positive and destructive/negative.
Say, such socio-political phenomena as totalitarianism or oligopoly capitalism are emergent ones, with their emergent properties and behaviors.
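The scale-threshold definition quoted above can be turned into a toy check. The sketch below is only an illustration: the is_emergent rule, the chance level, and all accuracy figures are invented, not measurements from the cited paper.

```python
# Toy check of the quoted definition: "an ability is emergent if it is not
# present in smaller models but is present in larger models".
# All numbers below are invented for illustration.

def is_emergent(scores, chance=0.25, margin=0.05):
    """scores: list of (parameter_count, accuracy) pairs.
    Flag an ability as 'emergent' if accuracy is near chance at the
    smallest scale but clearly above chance at the largest scale."""
    scores = sorted(scores)                       # order by model size
    near_chance = scores[0][1] <= chance + margin
    above_chance = scores[-1][1] > chance + margin
    return near_chance and above_chance

# Hypothetical accuracy-vs-scale curves for a 4-choice task (chance = 0.25):
arithmetic = [(1e8, 0.24), (1e9, 0.26), (1e10, 0.27), (1e11, 0.71)]  # sharp jump
smooth     = [(1e8, 0.35), (1e9, 0.45), (1e10, 0.60), (1e11, 0.75)]  # gradual

print(is_emergent(arithmetic))  # True: flat near chance, then a jump
print(is_emergent(smooth))      # False: already above chance when small
```

Of course, real emergence claims depend on the metric and on how densely the scales are sampled, which is part of why the question remains open.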

Alex Shkotin

unread,
Aug 3, 2023, 5:02:33 AM8/3/23
to ontolo...@googlegroups.com
Azamat,

The special task is to get how one or another community uses these particular terms. In many cases philosophical and commonsense understanding is misleading.
And in the case of "Emergence" for LLM and GPT I count on Claude 2's answer https://groups.google.com/g/ontolog-forum/c/voNRzl1jBOc/m/zopr4wcvAgAJ, keeping in mind that her authors could have trained her well.

Alex

Thu, 3 Aug 2023 at 11:08, Azamat Abdoullaev <ontop...@gmail.com>:

Ricardo Sanz

unread,
Aug 3, 2023, 10:39:00 AM8/3/23
to ontolo...@googlegroups.com
I am fully with John here. 
"Emergence" is (an epistemic ignorance wrap of) mere system-level behavior.

For any definition of "emergence" that tries to find some "magic", it is easy to find a trivial, debunking example. 
For example, Alex quoted this one:
 
"emergence occurs when a complex entity has properties or behaviors that its parts do not have on their own, and emerge only when they interact in a wider whole".

An example of "emergence" according to the definition:

Complex entity: my grocery bag
Property that parts do not have: "4.3 kilogram weight".
Interact with the whole: the handle of the shopping bag

Any system ALWAYS has properties that parts do not have (except in the degenerate case of a system with a single part, where the system is identical to the part). 

Best,
Ricardo





--

UNIVERSIDAD POLITÉCNICA DE MADRID

Ricardo Sanz

Head of Autonomous Systems Laboratory

Escuela Técnica Superior de Ingenieros Industriales

Center for Automation and Robotics

Jose Gutierrez Abascal 2.

28006, Madrid, SPAIN

John F Sowa

unread,
Aug 3, 2023, 11:46:35 AM8/3/23
to ontolo...@googlegroups.com
Azamat, Alex, and everybody else,

I agree with everybody on this list that emergence is very important.  But I also agree with the author I cited in my first note in this thread (reference below).   I'll repeat his major point:  Emergence is a very important phenomenon, and it is always a phenomenon that the author cannot explain.  For this reason, the word "emergence" is a synonym for the word "miracle".  

I also agree with Azamat's comments, especially the sentence " It is not the best choice of the terms, shrouded in some mystery and as emergentism trending to some sort of scientific religion."  Therefore, I recommend the word 'miracle' as a more accurate explanation for the various phenomena classified as emergence.  To be precise, I copied the definition of 'miracle' by Merriam Webster with just one minor change:  replacing the word 'divine'  or 'spiritual' with 'scientific'.  For the second definition, and the example cited, no change was required.

1 : an extraordinary event manifesting scientific intervention in human affairs.
2 : an extremely outstanding or unusual event, thing, or accomplishment.  Example:  The bridge is a miracle of engineering.
3 : Christian Science : a scientifically natural phenomenon experienced humanly as the fulfillment of scientific law.

This definition of the word 'miracle'  with that one minor change is exactly equivalent to the way that the word 'emergence' is used in every discussion about LLMs and their applications.  If anybody can find an example where emergence is explained in some way that is not miraculous,  please formulate a more precise definition and send it to Ontolog Forum.

John
___________________________________________
 
From: "Azamat Abdoullaev" <ontop...@gmail.com>

I could understand John's concern with "emergence". It is not the best choice of the terms, shrouded in some mystery and as emergentism trending to some sort of scientific religion.  The phenomenon is real and it was described by Aristotle as holism, totality or unity when the whole is different from its parts....

_____________________________________________

The original article:  “Emergence” isn’t an explanation, it’s a prayer: A critique of Emergentism in Artificial Intelligence, https://ykulbashian.medium.com/emergence-isnt-an-explanation-it-s-a-prayer-ef239d3687bf 

Alex Shkotin

unread,
Aug 3, 2023, 12:19:20 PM8/3/23
to ontolo...@googlegroups.com
Ricardo,

you cited Azamat :-)
I cited in another thread something like "a feature a small system does not have but a big one has".
Let me point out again: we can train our brains on emergence, but the only interesting thing is how the guys from the LLM+GPT community use it.
Let me cite Claude 2 (she is from there): 
"Q:What Emergent Abilities of Large Language Models do you know?
A:Here are some emergent abilities that large language models like myself can demonstrate:
  • Commonsense reasoning - Large models trained on massive amounts of text data tend to acquire basic commonsense knowledge about the world, allowing them to reason about everyday situations in a human-like manner. This includes making simple inferences, understanding social norms, and answering commonsense questions.
  • Multitask learning - Large models can learn to perform multiple different tasks simply by being trained on diverse datasets. For example, the same model can be trained to summarize text, answer questions, translate between languages, and more without needing specialized architectures for each task.
  • Few-shot learning - With their broad knowledge, large models can learn new concepts and skills from just a few examples. A model trained on enough text can answer questions about a topic it has barely encountered before.
  • Logical reasoning - Large language models have shown the ability to solve syllogisms, puzzles, and other tests of logical reasoning without being explicitly trained on them. Their broad knowledge helps them infer the logical connections between concepts.
  • Reading comprehension - Models trained on answering questions based on passages of text are able to answer questions that require understanding complex language, reasoning about implications, and synthesizing information.
  • Commonsense generation - Models can generate sentence completions that make sense and reflect commonsense knowledge about the everyday physical and social world.
  • Creativity - Large models can display human-like creativity in certain contexts, such as generating original stories, poems, jokes and continuation of prompts. They capture deeper patterns in training data.

The emergence of such abilities without specialized architectures or training is a remarkable demonstration of the general intelligence and learning capacity of large language models. Researchers are still studying the extent and limits of these capacities.

DONE"
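The "few-shot learning" item above can be made concrete with a minimal sketch. The example pairs and the prompt format below are hypothetical, and no real model is called; the point is only to show how a new task is specified by a handful of in-prompt examples.

```python
# Minimal sketch of few-shot prompting: the task (English-to-French
# translation) is never stated explicitly; it is implied by the examples.

def few_shot_prompt(examples, query):
    """Build a prompt from (input, output) example pairs plus a new query."""
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

examples = [("sea otter", "loutre de mer"), ("cheese", "fromage")]
prompt = few_shot_prompt(examples, "peppermint")
print(prompt)
# A sufficiently large model is expected to continue the pattern and
# translate "peppermint", although it was never trained on this prompt.
```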

As John mentioned about another topic: Claude 2 is trained by experts.

Alex

Thu, 3 Aug 2023 at 17:39, Ricardo Sanz <ricardo.s...@gmail.com>:

Dima, Alden A. (Fed)

unread,
Aug 3, 2023, 12:35:52 PM8/3/23
to ontolo...@googlegroups.com

Hi John,

 

How about this as a starting point?

 

A property is strongly emergent just in case it is a property of a composed individual that is realized and that (in addition to having same-level effects) non-productively determines the individual’s parts to have powers that they would not have given only the laws/principles of composition manifested in simpler collectives.

--https://plato.stanford.edu/entries/properties-emergent/

 

I interpret this as suggesting that emergence occurs when the lower-level properties of components interact to create unexpected higher-level properties in a system. The “unexpected” injects us into the situation – our reasoning was not strong enough to foresee the result. There’s no need for the miraculous except to describe the human experience of surprise at the outcome.

 

Alden

 

--

All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info/
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.

Philip Jackson

unread,
Aug 3, 2023, 1:09:55 PM8/3/23
to ontolo...@googlegroups.com
John,

How about this, as a step toward defining a non-miraculous definition of "emergence"?:

Emergence occurs when phenomena obeying natural laws observed at one level of reality are produced by (or comprised of) phenomena that obey natural laws observed at lower levels of reality.

For example, the life forms we observe on Earth obey a natural law of evolution.

These life forms are produced from (or comprised of) biochemical structures that obey the natural laws defining biochemical reactions.

Biochemical structures are comprised of atoms which obey laws of nuclear physics, as well as laws of motion, gravity, etc.

Phil


Sent: Thursday, August 3, 2023 11:46 AM
To: ontolo...@googlegroups.com <ontolo...@googlegroups.com>
Subject: [ontolog-forum] Emergence is a miracle (was “Emergence” isn’t an explanation, it’s a prayer
 

John F Sowa

unread,
Aug 3, 2023, 6:39:14 PM8/3/23
to ontolo...@googlegroups.com
Phil, Alden, and Alex,

I agree with all of you.  To summarize, you all agree that (1) emergence means that some property that is not observed in a smaller system appears (emerges) when the system is enlarged or developed further, and (2) there is no known explanation of what causes that appearance or enhancement.

The Merriam Webster definition #2 of miracle gives an example of the word 'miracle' applied to a major engineering development.  In the case of the engineering application, the engineers who designed the structure would be able to explain exactly how and why they achieved that "miracle".  But in the case of the LLMs, nobody has yet been able to find an explanation.

Phil> Emergence occurs when phenomena obeying natural laws observed at one level of reality are produced by (or comprised of) phenomena that obey natural laws observed at lower levels of reality. 

Yes.  That definition would cover cases that are explainable (such as the engineering example) as well as those cases for which there is no known explanation even by the experts in the field.  For the former, it's a miracle only for the non-experts.  For the latter, it's a miracle even for the experts.

Alden> A property is strongly emergent just in case it is a property of a composed individual that is realized and that (in addition to having same-level effects) non-productively determines the individual’s parts to have powers that they would not have given only the laws/principles of composition manifested in simpler collectives.--https://plato.stanford.edu/entries/properties-emergent/

That is a good definition that has more detail than Phil's.  But the same qualifications hold:  if there is no known explanation of what causes the new properties, it is a miracle.

Alex> I changed terminology to ask Claude 2's opinion about a remedy https://www.linkedin.com/pulse/llm-answer-verification-tools-alex-shkotin. Let me cite:  AS:  LLM may sometimes give the wrong answer. What tools can be used to check the LLM answer?

That is a very good list of tools and/or methods for checking the accuracy (I include the list below).

But note that they support the point that I have been making from the beginning (the talks by Arun Majumdar and me on May 31): some method (or person) must evaluate or control the LLM system in order to ensure accuracy and dependability.

John
______________________

Q2: Here are some suggestions for checking the accuracy of answers from an LLM:
  • Ask the LLM to explain its reasoning. This can help reveal faulty logic or misunderstandings.
  • Have the LLM provide references or sources for factual claims. Then check the references.
  • Try rephrasing the question in different ways and see if the LLM gives consistent answers. Inconsistencies may indicate errors.
  • For mathematical or factual questions, work through the problem yourself or consult other sources to verify the LLM's answer.
  • Ask a human expert in the relevant field to review the LLM's responses and identify potential errors.
  • Use common sense and your own background knowledge to critically evaluate whether the LLM's answers seem reasonable.
  • Look for hedging or low confidence in the LLM's responses, which may indicate it is unsure or making guesses.
  • Run responses through a fact-checking site or tool designed to detect inaccuracies.
  • Leverage adversarial techniques - provide two contradictory statements and see if the LLM can identify which is true.
The key is not to blindly trust the LLM, but to validate important responses through triangulation, expert reviews, and critical thinking. LLMs are powerful tools, but still faulty and biased at times. Vetting their outputs is an essential part of using them responsibly.
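The "rephrasing the question" item in the list above can be sketched in a few lines. The ask_llm argument below is a hypothetical stand-in for a real model call; the fake_llm stub exists only to demonstrate the check.

```python
# Self-consistency check: ask several paraphrases of the same question
# and flag the answer set if the paraphrases disagree.

def consistent(ask_llm, paraphrases):
    """Return True if all paraphrases of a question get the same answer."""
    answers = {ask_llm(q).strip().lower() for q in paraphrases}
    return len(answers) == 1

# Stub "model" for demonstration: its answer (wrongly) depends on phrasing.
fake_llm = {"What is 7 * 8?": "56",
            "Compute seven times eight.": "54"}.get

print(consistent(fake_llm, ["What is 7 * 8?",
                            "Compute seven times eight."]))  # False
```

A real deployment would also need to normalize answers more carefully (numbers written as words, units, etc.) before comparing them.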

Nadin, Mihai

unread,
Aug 3, 2023, 8:27:28 PM8/3/23
to ontolo...@googlegroups.com

Dear and respected colleagues,

Impressive conversation regarding emergence. My take is informed by Occam’s razor, the principle of parsimony.

If you need the concept, in whatever you are pursuing, use it. If not, leave it alone. Actually, C.S. Peirce—and John Sowa knows this—integrates this thought into the conception of pragmaticism. A community of scientists adopts the concepts that help in clarifying ideas.

 

For those who are realists (there is emergence in reality, therefore it is a necessary concept), the notion cannot be avoided. For the nominalists, it is yet another construct used as we try to better describe reality, in particular its dynamics.

 

Mihai Nadin


Philip Jackson

unread,
Aug 3, 2023, 10:31:27 PM8/3/23
to ontolo...@googlegroups.com
John,

Just to clarify, I was not saying that emergent phenomena are miracles, without known explanations. Rather, I was saying that emergent phenomena have explanations, based on phenomena at lower levels of reality. I agree that if we don't know how to explain something it might seem like a miracle.

Phil


Sent: Thursday, August 3, 2023 6:38 PM
To: ontolo...@googlegroups.com <ontolo...@googlegroups.com>
Subject: Re: [ontolog-forum] Emergence is a miracle (was “Emergence” isn’t an explanation, it’s a prayer
 

John F Sowa

unread,
Aug 3, 2023, 11:38:00 PM8/3/23
to ontolo...@googlegroups.com
Phil,

As I said, I agree with you, and with the author who wrote that emergence is a miracle.  His basic point is that if we don't have an explanation for the emerging complexity, the word 'emergence' is no more informative than the word 'miracle'.

There is an interesting book that explores the issues from many perspectives:  Emergence:  From Chaos to Order, by John Holland.  See the attached excerpt Holland.gif.  He explores the issues with the goal of searching for deeper explanations.  But he admits that they may be difficult to find.  Although he does not use the word 'miracle', he uses metaphors that aren't much different.

In any case, whether or not you can explain the complexity, it's important to develop methods for distinguishing the good complexity from the bad or dangerous cases.

Just telling people to be careful is not enough.

John
____________________________________________
Holland.gif

Alex Shkotin

unread,
Aug 4, 2023, 3:59:38 AM8/4/23
to ontolo...@googlegroups.com
John,

I have no doubt that we need tools to check LLM and GPT answers in many cases. But they must be studied, both outside and inside. And these are just tools, the same as your parser converting a document to CGs.
C2's first answer has a very wide range (though this point is nice: "For mathematical or factual questions, work through the problem yourself or consult other sources to verify the LLM's answer."), so I asked specifically:

 What about tools designed to detect inaccuracies?

Here are some tools that can help detect inaccuracies in language model outputs:

    Fact checking APIs - Services like Full Fact and Google Fact Check explore the web to find factual support for claims. They can automatically flag unsupported assertions.
    Inconsistency detection - Tools like Anthropic's Constitutional AI use self-consistency checks to identify contradictory statements within a model's outputs.
    Causal reasoning APIs - APIs like Causality Link check for plausible causal relationships in text, flagging unlikely or illogical causal claims.
    Commonsense reasoning tools - Systems like Anthropic's Constitutional AI test language model outputs against common sense, pointing out violations.
    Stylometry analysis - Comparing writing style statistics like vocabulary can catch situations where a model plagiarizes from sources rather than generating original text.
    Semantic similarity analysis - Measuring how semantically close model outputs are to training data can detect situations where models over-rely on memorization rather than generalization.
    Sensitivity auditing - Checking how much small perturbations to inputs alter outputs can reveal brittleness and overconfidence.
    Explanations - Requiring models to explain their reasoning allows humans to more easily identify flaws in their logic.

The key is combining smart tooling with human judgment - have humans skilled at spotting inaccuracies partner with tools optimized for that task. This provides oversight without needing to manually verify every single output. 
DONE
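The "sensitivity auditing" item in the list above can be illustrated with a toy measurement: perturb the input slightly and see how much of the output changes. The model here is a stand-in function, not a real LLM, and the perturbation is a deliberately artificial one-character edit.

```python
# Toy sensitivity audit: fraction of output tokens that change when the
# input is slightly perturbed. High values suggest brittleness.

def sensitivity(model, text, perturb):
    """Compare model(text) with model(perturb(text)), token by token."""
    base, shifted = model(text).split(), model(perturb(text)).split()
    changed = sum(a != b for a, b in zip(base, shifted))
    changed += abs(len(base) - len(shifted))      # count length mismatch too
    return changed / max(len(base), len(shifted), 1)

# Stand-in "model" that echoes its input; perturbation swaps one word.
echo = lambda s: s
flip = lambda s: s.replace("cat", "cab", 1)

print(sensitivity(echo, "the cat sat", flip))  # 1/3 of the tokens changed
```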

Here I got some information to continue personally :-)

Alex

Fri, 4 Aug 2023 at 01:39, John F Sowa <so...@bestweb.net>:

Philip Jackson

unread,
Aug 4, 2023, 8:19:16 AM8/4/23
to ontolo...@googlegroups.com
Thanks, John.

I think we are in basic agreement, and only differ about the semantics of which words to use.

I tend to limit the word 'miracle' to events that would break the laws of physics, such as the miracles described in the Bible: walking on water, the Resurrection, etc.

With this definition, emergent phenomena which do not break the laws of physics, and which do have explanations based on phenomena at lower levels of reality, are not miracles. This includes the ability of subatomic particles to form atoms, of atoms to form molecules, of molecules to form biological cells and life, etc. Rather than call these phenomena miracles, I'd use another term, like 'wonders'.

Yet this is just a matter of semantics: It is easy to consider Life as a miracle, and the wonders that have been achieved using science and technology would seem like miracles to people who lived 123 years ago.

Phil


Sent: Thursday, August 3, 2023 11:37 PM

To: ontolo...@googlegroups.com <ontolo...@googlegroups.com>
Subject: Re: [ontolog-forum] Emergence is a miracle

John F Sowa

unread,
Aug 4, 2023, 11:48:30 AM8/4/23
to ontolo...@googlegroups.com
Alex,

You keep agreeing with me, but you state your agreements in a way that seems to be a disagreement.

Alex>  but this point is nice: "For mathematical or factual questions, work through the problem yourself or consult other sources to verify the LLM's answer."

Yes, of course.  But the LLMs are derived from an immense amount of data on the WWW, and an immense amount more is being added every second.  Nobody is capable of "working through" more than a tiny fraction of that data.

Alex> Here are some tools that can help detect inaccuracies in language model outputs: 

I agree with the importance of the methods on that list.  And I agree that an educated human should be able to make the final decision.  But the amount of data is so enormous that computer help is needed to delete the obviously wrong or irrelevant instances.

Furthermore, there are billions of children and adults who are not so well educated, and they are making disastrous mistakes by following advice they find on the Internet.  And even people who are very well educated about a broad range of issues can make mistakes about topics that are outside their range of expertise.

And by those educated people, I include myself and everybody on this list.

John

Ravi Sharma

unread,
Aug 5, 2023, 4:57:58 AM8/5/23
to ontolo...@googlegroups.com
It would certainly be a WONDER if we could find a way to understand how molecules, especially those containing C, H, and N, became life-bearing!  Bio-scientists and astrobiologists are seeking answers to this fundamental question.
Hence it is not a straightforward progression such as atom to molecules! 
Just commenting on this thread; and certainly I talked about these aspects in the late 1960s with Melvin Calvin, Nobel Laureate, at Berkeley, and Cyril Ponnamperuma at U MD (funded by our NASA office then).

Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect



James Davenport

unread,
Aug 5, 2023, 6:23:38 AM8/5/23
to ontolo...@googlegroups.com

Indeed. But it seems (there’s no proof, as I don’t know a definition of ‘life’) that life requires multistationarity, i.e. the ability to be stable in several states, which is a prerequisite for memory; and multistationarity requires differential equation systems of a certain complexity: see https://doi.org/10.1016/j.jtbi.2018.09.024 .

 

James Davenport

Hebron & Medlock Professor of Information Technology, University of Bath
National Teaching Fellow 2014;  DSc (honoris causa) UVT
Former Fulbright CyberSecurity Scholar (at New York University)
Former Vice-President and Academy Chair, British Computer Society

 

From: ontolo...@googlegroups.com <ontolo...@googlegroups.com> On Behalf Of Ravi Sharma
Sent: 05 August 2023 09:58
To: ontolo...@googlegroups.com
Subject: Re: [ontolog-forum] Emergence is a miracle

 


 

Alex Shkotin

unread,
Aug 5, 2023, 12:32:17 PM8/5/23
to ontolo...@googlegroups.com
John,

I am simply studying LLMs, particularly Claude 2 now. And I am putting my findings on LinkedIn, following your advice to Azamat.
Have a look at the latest one, where I am talking with C2 about the conversion of two statements into different onto-languages. One of these statements is from your cogmem.pdf.
The important task is to evaluate how well it is trained.

Alex

Fri, 4 Aug 2023 at 18:48, John F Sowa <so...@bestweb.net>:

doug foxvog

unread,
Aug 5, 2023, 2:08:50 PM8/5/23
to ontolo...@googlegroups.com
On Sat, August 5, 2023 04:57, Ravi Sharma wrote:
> It would certainly be a WONDER if we could find a way to understand how
> molecules, especially those containing C H N became life-bearing!

This seems to be getting far from the issue of ontologies -- or even the
broader topic of AI.

> Bio-scientists and astrobiologists are seeking answers to this fundamental
> Question.

They are working on the details.

> Hence it is not a straightforward progression such as atom to molecules!

The basic progression seems clear. Amino acids naturally form from CHONSP
in heated water. This has been found in multiple experiments. Nucleic
acids have also been found to have formed in solutions without life forms.
According to current theories, such biochemicals would have been forming
for hundreds of millions of years in sterile situations, so life forms
would not have been ingesting and digesting them. Amino acids would form
chains, becoming random polypeptides and proteins. Nucleic acids would
also form chains, becoming strands of RNA & DNA. Biochemicals could form
into membranes.

How these biochemicals turned into life over hundreds of millions of
years is unclear. There were quintillions (probably a gross
underestimate) of "experiments" of chemicals combining and breaking apart
over the surface, oceans, pooled waters, and subsurface of the earth. Did
viruses appear from RNA strands before (other) forms of life? I wouldn't
be surprised. But it is unlikely (imho) that they would have formed
mechanisms for injecting their payload through cell membranes until such membranes existed
around cells. But who knows? There is a lot that is unknown.

Certainly, all this is WONDERful, but it is not miraculous.

> Just commenting on this thread- and certainly I talked about these aspects
> in the late 1960's with Melvin Calvin, Nobel Laureate Berkley and Cyril
> Ponnamperuma at U MD (funded by our NASA office then).

Back in the '60s it was unclear if Mars had ever had surface water,
although it was thought likely in some circles. The possibility of
conditions favorable to life under ice sheets of moons of gas giant
planets wasn't discussed then, as far as i know. A lot more is now known
about what chemicals could have formed in and from the Earth's
proto-atmosphere and conditions than was known back then.

-- doug foxvog


Alex Shkotin

unread,
Aug 6, 2023, 3:25:01 AM8/6/23
to ontolo...@googlegroups.com
Ravi,

In my studies your theme sounds like "Ask Claude 2!":
Q: how molecules, especially those containing C H N became life-bearing?
A:There are a few key steps in how molecules containing carbon, hydrogen and nitrogen became the basis for life:
  • Early Earth had an atmosphere containing compounds like methane, ammonia, water vapor and hydrogen. When energy sources like lightning, UV radiation or volcanic heat reacted with these molecules, it produced simple organic compounds like amino acids, nucleotides, sugars etc.
  • These small organic molecules could accumulate and concentrate in various environments on early Earth, like volcanic hot springs or tidal pools. The molecules could undergo further reactions and form longer chain polymers like proteins, RNA, lipids etc.
  • RNA and related self-replicating molecules arose somehow, possibly through chemical evolution in one of those environments. RNA could both store genetic information and catalyze chemical reactions as an early form of enzyme.
  • Primitive membranes formed, possibly from lipids, which allowed concentration of these biological molecules together in protocells. Protocells could grow and divide.
  • More complex proto-metabolisms developed within protocells, giving an ability to extract energy from the environment. This led to the first simple single-celled organisms.
  • From there, evolution through natural selection took over, leading to increasing complexity and diversity of early life. Key steps likely included the emergence of DNA to store genetic information, proteins to catalyze reactions, and cellular machinery like ribosomes.

So in summary, simple organic building blocks accumulated and polymerized into complex molecules like RNA, which could self-replicate. Encapsulation in membranes led to protocells with metabolisms, forming the first primitive lifeforms.

DONE

see https://claude.ai/chat/3d50eb6f-d7df-43c3-a6dc-67c627e02070

Alex


Sat, 5 Aug 2023 at 11:57, Ravi Sharma <drravi...@gmail.com>:

Alex Shkotin

unread,
Aug 6, 2023, 3:35:34 AM8/6/23
to ontolo...@googlegroups.com
IN ADDITION:
A more WONDERful idea is that proto-bio-cells were created during the Big Bang and have been traveling throughout the Universe until reaching a suitable planet.
And we may recall the bacteriophage.

Sat, 5 Aug 2023 at 11:57, Ravi Sharma <drravi...@gmail.com>:
It would certainly be a WONDER if we could find a way to understand how molecules, especially those containing C H N became life-bearing!  Bio-scientists and astrobiologists are seeking answers to this fundamental Question.

Kingsley Idehen

unread,
Aug 6, 2023, 3:15:11 PM8/6/23
to ontolo...@googlegroups.com

Kingsley Idehen

unread,
Aug 6, 2023, 3:20:34 PM8/6/23
to ontolo...@googlegroups.com

On 8/6/23 3:24 AM, Alex Shkotin wrote:


Alex,

Here’s a copy and paste from Claude 2, repeating the question you posed. I simply asked it to reply using Markdown, which I can copy and paste into my email client (Thunderbird) for Markdown rendering prior to sending the message out to this forum.

Q: how molecules, especially those containing C H N became life-bearing?

A: Here is a Markdown formatted reply:

How molecules, especially those containing C H N became life-bearing

There are a few key steps in how molecules containing carbon, hydrogen and nitrogen evolved to become the basis for life:

Formation of simple organic molecules

  • The early earth had an atmosphere of methane, ammonia, water vapor and hydrogen. When exposed to energy sources like lightning, UV radiation or volcanic heat, these molecules reacted to form more complex organic compounds like amino acids, nucleotides, fatty acids and simple sugars. This is called chemical evolution.

Self-replicating molecules

  • Some of these organic molecules, like RNA and DNA, developed the ability to make copies of themselves. This allowed helpful mutations to accumulate and complexity to develop through natural selection.

Protocells

  • The organic molecules could assemble into microscopic spheres called protocells with membranes. The membranes allowed protocells to separate their contents from the outside environment.

Metabolism

  • Protocells developed metabolic pathways that extracted energy from the environment to drive chemical processes. An early form of metabolism was glycolysis, which extracts energy from sugar molecules.

Information storage

  • RNA and later DNA evolved as information storage molecules. They carry the instructions to produce specific proteins and replicate themselves. Storing information allowed inheritance of traits.

Cellular machinery

  • Proteins evolved to catalyze biochemical reactions as enzymes and perform other cell functions like membrane transport. The coordination of metabolic pathways, DNA and proteins make up a living cell.

So in summary, the key steps were:

  1. Formation of organic molecules
  2. Self-replicating molecules
  3. Protocells
  4. Metabolism
  5. Information storage in RNA/DNA
  6. Cellular machinery

This progression from simple chemistry to complex, self-sustaining cells was the origin of life.
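For mail clients without a Markdown add-on like Thunderbird's, the conversion step in this workflow can be approximated in a few lines. A toy sketch — the `md_to_html` helper is hypothetical and handles only ATX headings and plain paragraphs; a real workflow would use pandoc or a full Markdown library:

```python
import re

def md_to_html(md: str) -> str:
    """Toy Markdown converter: ATX headings and plain paragraphs only."""
    out = []
    for line in md.splitlines():
        m = re.match(r"(#{1,6})\s+(.*)", line)
        if m:
            # A run of 1-6 '#' marks becomes <h1>..<h6>.
            level = len(m.group(1))
            out.append(f"<h{level}>{m.group(2)}</h{level}>")
        elif line.strip():
            # Any other non-blank line becomes a paragraph.
            out.append(f"<p>{line.strip()}</p>")
    return "\n".join(out)

html = md_to_html("# Origins of life\nSimple chemistry preceded complex cells.")
print(html)
```

The resulting HTML can then be pasted into any rich-text email composer.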

John F Sowa

unread,
Aug 7, 2023, 1:17:17 AM8/7/23
to ontolo...@googlegroups.com
Alex,

See the attached timeline of events: BigBang.png. Note that the first stars began to shine about 100 million years after the Big Bang. The only chemical elements in those stars were hydrogen and helium. Nuclear fusion began to form some of the smaller elements. But fusion cannot form any elements beyond iron (Fe). Heavier elements were formed in supernovae when huge stars exploded. But those processes took many more millions of years after stars began to form.

Those steps you summarized are reasonable guesses. Organic molecules have been found in comets, including some amino acids. But there is a huge gap between a few amino acids and RNA or DNA. And there is an even bigger gap between complex molecules and a protective envelope, such as a cell wall, that could protect the more complex processes of organic chemistry and then the even more complex biochemistry.

Many biochemists have made many kinds of speculations.  But nobody has found a way to implement and test any of them.

But there is one interesting guess:  Panspermia -- the idea that  life forms evolved on some planet somewhere; then that planet exploded in some cosmic explosion, and fragments have been spread across the universe.  Then some icy lump could have fallen into the earth's oceans when they were cool enough to avoid killing the early life forms.

But all of this verbiage involves a long string of guesses that cannot be verified.  Whether you call it emergence, a miracle, or a string of untested guesses is irrelevant.  Calling it "emergence" sounds scientific.  But saying that the Sun god did it is just as accurate -- the sun is the source of energy that supported the chemical reactions.   

John
_______________________________________
 
From: "Alex Shkotin" <alex.s...@gmail.com>

IN ADDITION:
A more WONDERful idea is that proto-bio-cells were created during Big Bang and then are traveling throughout the Universe  until reaching a suitable planet. 

BigBang.png

Ravi Sharma

unread,
Aug 7, 2023, 2:45:41 AM8/7/23
to ontolo...@googlegroups.com
John
Who is the author of this interesting BigBang.png?
All
Appreciate your comments.
With all scholars, respectful of your attempts using AI etc., still there is no "full" knowledge of the WONDER that is life from ingredients.
As I am deeply rooted in my own upbringing, there has to be more than the SUM: a simple addition or permutation of elements, such as mereology describes, is NOT the prescription for life. NOT YET, though agreed that the quest is intense and deep.
That is all I wanted to say for now. As I continue with my research this is the next topic; right now I am getting ready to unravel the physical universe's mysteries. By the way, my current understanding allows local Big Bangs but does not require a universal single big bang.
Regards, did not intend to divert your attention.
Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect


--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info/
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.

Alex Shkotin

unread,
Aug 7, 2023, 3:32:00 AM8/7/23
to ontolo...@googlegroups.com
Kingsley,

For me, this link is a direct way to continue the dialogue with Claude 2.
I don't know Anthropic's policy for users. I hope others have read-only access.
You tried? What result?

Alex

Sun, Aug 6, 2023 at 22:15, 'Kingsley Idehen' via ontolog-forum <ontolo...@googlegroups.com>:

Alex Shkotin

unread,
Aug 7, 2023, 3:38:52 AM8/7/23
to ontolo...@googlegroups.com
Kingsley,

Sorry, I did not get the idea of your repetition. What for?

Alex

Sun, Aug 6, 2023 at 22:20, 'Kingsley Idehen' via ontolog-forum <ontolo...@googlegroups.com>:

Alex Shkotin

unread,
Aug 7, 2023, 4:22:37 AM8/7/23
to ontolo...@googlegroups.com
John,

The idea that the first complex molecules appeared in space is appealing because it doesn't need planets. Moreover, if a certain molecule turns out to be able to build a shell, then when it gets to a suitable planet, it will begin to build it.
It is interesting to see how bacteriophage DNA/RNA is arranged in order to build a shell.
On the other hand, you are right: of course, it is worth delving into the Big Bang and seeing when and how carbon, hydrogen and nitrogen appeared.
According to you and Claude 2 [1], I should move the appearance of complex molecules to after the supernovae.
And of course it's all a mind game to build hypotheses.

Alex

Q: According to the big bang theory, when and how did carbon, hydrogen and nitrogen appear?
A: According to the standard Big Bang cosmological model, the first atoms of light elements like hydrogen, helium and traces of lithium formed when the universe was just a few minutes old. Here is a brief overview:
  • The Big Bang occurred 13.8 billion years ago, marking the beginning of the universe. The initial conditions were extremely high temperature and density.
  • In the first few seconds, the universe underwent extremely rapid exponential expansion known as cosmic inflation. Temperatures cooled rapidly.
  • When the universe was about 3 minutes old, conditions had cooled enough for protons and neutrons to bind together into the first atomic nuclei. Mostly hydrogen and helium nuclei formed, along with small amounts of deuterium, lithium and beryllium.
  • After about 380,000 years, the universe had expanded and cooled enough for electrons to bind to nuclei, producing the first stable neutral atoms. This is known as recombination.
  • The early universe at this point consisted of about 75% hydrogen, 23% helium and traces of lithium by mass. No heavier elements existed yet.
  • The first stars began forming a few hundred million years after the Big Bang. Inside stars, nuclear fusion processes converted the primordial hydrogen and helium into heavier elements like carbon, nitrogen, oxygen and so on up through iron.
  • These heavier elements were spread through space by supernovas and stellar winds, enriching interstellar gas clouds which formed the next generation of stars and planets.

So in summary, the Big Bang produced the lightest elements, and stellar nucleosynthesis inside the first generations of stars created the heavier elements essential for chemistry and life. Carbon, nitrogen and oxygen atoms would have started appearing in interstellar space a few hundred million years after the Big Bang.



Mon, Aug 7, 2023 at 08:17, John F Sowa <so...@bestweb.net>:

alex.shkotin

unread,
Aug 7, 2023, 5:38:10 AM8/7/23
to ontolog-forum
Ask Claude 2 be continued
Q: Is it possible that simple organic compounds were formed in space and not on planets?
A:Yes, it is quite possible that simple organic compounds formed in space rather than solely on planets. Here are some key points:
  • Space contains clouds of gas and dust called nebulae, containing elements like carbon, hydrogen, oxygen and nitrogen. Radiation from stars and high-energy events can drive chemical reactions in these clouds.
  • Over 150 different organic molecules, including amino acids, have now been detected in interstellar gas clouds by astronomical observations. Formaldehyde, methanol and even complex organics like polycyclic aromatic hydrocarbons have been found.
  • Comets and meteorites also contain organic compounds, suggesting they formed in primordial solar nebula. Analysis of the Murchison meteorite identified amino acids and nucleobases.
  • Experiments simulating early Earth conditions can produce organic molecules, but require methane and ammonia which may not have been abundant on early Earth. However, these gases were present in space.
  • Lab experiments show UV radiation, cosmic rays and stellar winds can induce complex chemical synthesis in space gas and dust cloud conditions.
  • Some theories speculate that organic chemicals formed in space may have seeded early Earth through impacts, contributing precursor chemicals for prebiotic synthesis.

So in summary, an abundance of evidence indicates abiotic organic synthesis occurs extensively in interstellar gas, comets and meteorites - environments with ingredients and energy to promote complex chemistry. This suggests that space played a key role in the production of prebiotic compounds.


Monday, August 7, 2023 at 11:22:37 UTC+3, alex.shkotin:

alex.shkotin

unread,
Aug 7, 2023, 8:51:33 AM8/7/23
to ontolog-forum
John,

The initial idea was that a certain number of carbon, nitrogen and oxygen atoms were formed during the first phase of nucleosynthesis. But as Wikipedia puts it: "The predicted abundance of CNO isotopes produced in Big Bang nucleosynthesis is expected to be on the order of 10−15 that of H, making them essentially undetectable and negligible."
https://en.wikipedia.org/wiki/Big_Bang_nucleosynthesis
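To put that 10^-15 ratio on a human scale, a back-of-the-envelope sketch (the one-mole framing is mine, not from the article):

```python
# Scale of primordial CNO abundance relative to hydrogen, using the
# Wikipedia figure of ~1e-15 CNO nuclei per H nucleus produced in
# Big Bang nucleosynthesis.
AVOGADRO = 6.022e23   # atoms per mole
CNO_PER_H = 1e-15     # predicted CNO/H number ratio

h_atoms = AVOGADRO               # one mole of hydrogen, about one gram
cno_atoms = h_atoms * CNO_PER_H  # expected primordial CNO nuclei in it
print(f"CNO nuclei per mole of primordial hydrogen: about {cno_atoms:.1e}")
```

So even a gram of primordial hydrogen would carry only a few hundred million CNO nuclei — negligible chemistry-wise, which is why the heavier elements had to wait for stars.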

Alex

Monday, August 7, 2023 at 08:17:17 UTC+3, John F Sowa:

Kingsley Idehen

unread,
Aug 7, 2023, 11:27:44 AM8/7/23
to ontolo...@googlegroups.com


On 8/7/23 3:31 AM, Alex Shkotin wrote:
Kingsley,

For me, this link is a direct way to continue the dialogue with Claude 2.
I don't know Anthropic's policy for users. I hope others have read-only access.
You tried? What result?

Alex


Alex,

My point is that the link doesn't work like the ChatGPT equivalent re conversation continuation.

Does it work for you, in that regard?

Kingsley

Kingsley Idehen

unread,
Aug 7, 2023, 11:29:09 AM8/7/23
to ontolo...@googlegroups.com


On 8/7/23 3:38 AM, Alex Shkotin wrote:
Kingsley,

Sorry, I did not get the idea of your repetition. What for?

Alex


Alex,

Since the link you shared doesn't work, and LinkedIn not being a canonical destination for content, I offered a Markdown example that you can just copy and paste into your email client (if it has Markdown processing capability).


Kingsley

Alex Shkotin

unread,
Aug 7, 2023, 12:18:45 PM8/7/23
to ontolo...@googlegroups.com
Kingsley,

Yes. This is my way to come back and continue.
OK. I see it was a bad idea to share this link.

Alex

Mon, Aug 7, 2023 at 18:27, 'Kingsley Idehen' via ontolog-forum <ontolo...@googlegroups.com>:

Alex Shkotin

unread,
Aug 7, 2023, 12:21:28 PM8/7/23
to ontolo...@googlegroups.com
Kingsley,

OK. Thank you. Maybe Markdown would help me to publish:-)

Alex

Mon, Aug 7, 2023 at 18:29, 'Kingsley Idehen' via ontolog-forum <ontolo...@googlegroups.com>:

Kingsley Idehen

unread,
Aug 7, 2023, 1:19:00 PM8/7/23
to ontolo...@googlegroups.com


On 8/7/23 12:21 PM, Alex Shkotin wrote:
Kingsley,

OK. Thank you. Maybe Markdown would help me to publish:-)

Alex


Alex,

Yes, your replies here end up being nicely formatted HTML docs :)

Kingsley

John F Sowa

unread,
Aug 7, 2023, 4:09:25 PM8/7/23
to ontolo...@googlegroups.com
Ravi and Alex,

Ravi> Who is the author of this interesting BigBang.png? 

That is a composite of a huge number of publications since Hubble discovered, from the red shift, that the universe is expanding. That particular graphic is several years old, and I'm sure that there are many more details and minor modifications since then. But any changes would be minor shifts of the dates of the various stages. If you're interested, do some searching with your favorite search engine.

The amount of research and publication about the Big Bang and later developments is immense.   And the amount of research and publications about the many, many complex issues about the origin of life is also immense.  The little summary by Alex is nothing new.  It's just a small fraction of many, many complex issues.  And none of them are conclusive.

Alex> The idea that the first complex molecules appeared in space is appealing because it doesn't need planets.

Stars are too hot, and empty space is almost a perfect vacuum. The chemical reactions would need something that has a concentration of material at a moderate temperature. It might be a comet, an asteroid, or a moon. But it would certainly require water. And it would also require some source of energy to promote the chemical reactions -- lightning or hot water, for example.

Alex> Moreover, if a certain molecule turns out to be able to build a shell, then when it gets to a suitable planet, it will begin to build it.

It is interesting to see how bacteriophage DNA / RNA is arranged in order to build a shell. 

Molecules don't build shells by themselves.  That is one of the most difficult research problems.  Some of the publications about early life have proposed some kind of pre-existing framework, such as holes in porous rocks.  Empty space does not have porous rocks.

One interesting hypothesis is that the earliest life forms were formed near hydrothermal vents at the bottom of an ocean, where hot volcanic gases meet rocks. That would provide the energy, chemicals, and porous rocks. One bit of evidence for that hypothesis is that some of the most primitive life forms can survive at the temperature of boiling water.

Alex> And of course it's all a mind game to build hypotheses.

All of science is a huge mind game of building hypotheses, but the hypotheses, by themselves, are nothing but idle speculation.  To write publishable articles, you must spend years of studying previous publications, more years of testing of hypotheses, and a lot of writing and studying -- the equivalent of getting a PhD in biochemistry.

If you're not willing and able to do that research, you won't have a publication for any serious journal.   Publications in blogs, social media, or semi-popular journals are ignored by the experts.

John

John F Sowa

unread,
Aug 8, 2023, 1:31:54 AM8/8/23
to ontolo...@googlegroups.com
Kingsley,

I typed "wikipedia origin of life" to Google and to Duck Duck Go.

Duck Duck Go quickly gave me an excellent selection of Wikipedia articles on the various stages, including the biochemical versions, the panspermia issues, and the evolutionary history.

And the Wikipedia articles contained timelines and references. References are absolutely essential.

In fact, Duck Duck Go is the excellent resource that Google used to be.  Today, Google has become YAMGWS -- Yet Another Money Grabbing Web Site that puts profits ahead of information.  Duck Duck Go also makes money, but they're satisfied with the level of profits that Google made in the good old days.

Microsoft's Bing is even worse. Their goal is to distract you with money-grabbing web sites at every click you make, every stroke you swipe, and every key you type.

Summary: I would never use Claude for any search for information. It's worse than YAMGWS.

John



 


From: "Kingsley Idehen' via ontolog-forum" <ontolo...@googlegroups.com>
Sent: 8/6/23 3:20 PM

To: ontolo...@googlegroups.com
Subject: Re: [ontolog-forum] Emergence is a miracle

Alex Shkotin

unread,
Aug 8, 2023, 3:45:00 AM8/8/23
to ontolo...@googlegroups.com
John, 

It's easy for me to correct you. In my letter, I gave the correct citation of Claude 2. And it would be great if, instead of "The little summary by Alex is nothing new.", you wrote "The little summary by Claude 2 is nothing new."

Alex

Mon, Aug 7, 2023 at 23:09, John F Sowa <so...@bestweb.net>:

Kingsley Idehen

unread,
Aug 8, 2023, 9:15:26 AM8/8/23
to ontolo...@googlegroups.com, John F Sowa

John,

When Alex and I share information regarding these tools, we are not endorsing their responses i.e., we are simply sharing their output.

In my case, I was trying to demonstrate the virtues of Markdown.

As you know, our positions are the same regarding the role of these language processing tools in the grand scheme of things -- re software solution production :)

-- 
Regards,

Kingsley Idehen	      
Founder & CEO 
OpenLink Software   
Home Page: http://www.openlinksw.com

John F Sowa

unread,
Aug 8, 2023, 9:51:11 AM8/8/23
to ontolo...@googlegroups.com
Alex and Ravi,

For Alex: I apologize for attributing that summary to you. Since you weren't the author, I feel better about saying that the answers by Claude 2 are not very impressive. I would never use it or recommend it as a source of information.

If you type "Wikipedia origin of life", you get a list of  pages that cover a full range of details.  For novices who don't want all the detail, just looking at the pictures and diagrams gives much more information at a glance than anything from Claude.  For people who need to dig into the details, the text of any article provides an excellent overview with detailed citations for each point.  (But I admit that there are many articles that are far below the standards set for the best.)

For the origins of life, see the article on unicellular life: https://en.wikipedia.org/wiki/Unicellular_organism
 
For Ravi: I now remember where I found that timeline in BigBang.png. It came from an older version of Wikipedia. The current version has an excellent article with much more detail. It also has citations for every point, a large bibliography, and links to many related web pages. See https://en.wikipedia.org/wiki/Chronology_of_the_universe

John
 


From: "Alex Shkotin" <alex.s...@gmail.com>

Dan Brickley

unread,
Aug 8, 2023, 11:00:11 AM8/8/23
to ontolo...@googlegroups.com

Given the interest in BigBang.png's origins, I downloaded a copy, and uploaded to an images.google.com search, to see how it shows up in the new "find image source" feature my Google colleagues recently published.

It did track the file to https://en.wikipedia.org/w/index.php?title=Graphical_timeline_of_the_Big_Bang&action=history&dir=prev pretty easily.

Turns out that it is semi-declaratively stored in MediaWiki using the EasyTimeline addon ( https://www.mediawiki.org/wiki/Extension:EasyTimeline ). I copy here FWIW that textual definition (you can see it by hitting the 'edit' button on any version of the page, e.g. follow the link above). To the authorship question, I suspect it had multiple contributors, since it is so much easier to contribute lines to such a definition than to work on a PNG image file with widely distributed strangers. The answer is probably nearby in the Wikipedia historical edit logs.

BTW I tried feeding this format to LLMs and it was promising, but it didn't generate good graphs, even with GPT Code Interpreter, which can write Python/Matlab code expressing the timeline. The output looks similar though; maybe others will have better luck.

cheers,

Dan

timeline markup follows:

<timeline>
ImageSize  = width:600 height:2000
PlotArea   = left:200 right:350 bottom:45 top:45
AlignBars = early

Colors =
  id:period1  value:rgb(1,1,0.7) # light yellow
  id:period2  value:rgb(0.7,0.7,1) # light blue
  id:events  value:rgb(1,0.7,1) # light purple

Period      = from:-43 till:13
TimeAxis    = orientation:vertical
ScaleMajor  = unit:year increment:1 start:-43

BarData =
  bar:Timelines
  bar:Periods
  bar:Events

PlotData=
  width:20  mark:(line,red)  textcolor:black

  bar:Events  color:events  align:left  shift:(15,0)
  at:-43  text:"[[Planck time]], the smallest observable unit of time and~the time before which science is unable to describe~the universe.  At this point, the force of [[gravity]]~separated from the [[electronuclear force]]."
  at:-36  text:"Separation of the [[strong force]] from the electronuclear~force."
  from:-35  till:-33  text:"[[Inflationary epoch]] - The universe expands exponentially."
  at:-33  text:"Quarks and anti-quarks begin forming."
  at:-12  text:"The weak force separates from the electromagnetic force~resulting in the four separate forces we know today."
  at:-6  text:"Electrons and positrons begin to annihilate each other."
  at:-5  text:"Quarks combine to form protons and neutrons.~Quark/anti-quark pairs combine into mesons."
  at:-4  text:"Lepton/anti-lepton pairs are annihilated by existing~photons. Neutrinos break free and exist on their own."
  at:0  text:"Formation of atomic nuclei (hydrogen). [[Nuclear fusion]]~begins to occur as they collide to form heavier elements."
  at:2  text:"Nuclear fusion ends after about 3 minutes."
  at:13  text:"About 300,000 years - Hydrogen nuclei capture electrons~to form stable atoms.  Photons are no longer able to~interact strongly with atoms."

  bar:Periods  align:left  shift:(15,0)
  from:2  till:13  color:period1  text:[[Big Bang nucleosynthesis|Epoch of Nucleosynthesis]]
  from:0  till:2  color:period2  text:Lepton Epoch
  from:-6  till:0  color:period1  text:Hadron Epoch
  from:-12  till:-6  color:period2  text:Electroweak Epoch
  from:-35  till:-12  color:period1  text:Grand Unification Epoch
  from:-43  till:-35  color:period2  text:Planck Epoch

  bar:Timelines  align:right  shift:(-75,0)

TextData =
  tabs:(210-left)

  pos:(50,1995)
  text:"Timelines"^"Events"
  text:"---------"^"------"

</timeline>
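The "semi-declarative" point can be demonstrated by pulling the period bars straight out of the markup above. A minimal sketch — the regex is a hypothetical helper handling only the `from:… till:…` period lines, and reading the axis as log10 seconds after the Big Bang is my interpretation (it fits the Planck-time and 3-minute annotations), not something the markup states:

```python
import re

# Extract (name, from, till) triples from EasyTimeline PlotData period lines.
PERIOD_RE = re.compile(r"from:(-?\d+)\s+till:(-?\d+)\s+color:\w+\s+text:(.+)")

plotdata = """\
from:2  till:13  color:period1  text:[[Big Bang nucleosynthesis|Epoch of Nucleosynthesis]]
from:0  till:2  color:period2  text:Lepton Epoch
from:-6  till:0  color:period1  text:Hadron Epoch
from:-12  till:-6  color:period2  text:Electroweak Epoch
from:-35  till:-12  color:period1  text:Grand Unification Epoch
from:-43  till:-35  color:period2  text:Planck Epoch
"""

periods = [
    (m.group(3).strip(), int(m.group(1)), int(m.group(2)))
    for m in map(PERIOD_RE.search, plotdata.splitlines())
    if m
]
for name, start, end in periods:
    # Print each epoch with its (assumed) log10-seconds bounds.
    print(f"{name}: 10^{start} s to 10^{end} s")
```

From such tuples it is a short step to a bar chart in matplotlib or any plotting library, which is presumably what the LLM-generated code was attempting.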
 




Kingsley Idehen

unread,
Aug 8, 2023, 11:39:02 AM8/8/23
to ontolo...@googlegroups.com

Hi Dan,

Here’s a GPT generated HowTo from you response.

Prompt:

Generate a HowTo and associated steps, in RDF, from the following.

Given the interest in BigBang.png’s origins, I downloaded a copy, and uploaded to an images.google.com search, to see how it shows up in the new “find image source” feature my Google colleagues recently published. It did track the file to https://en.wikipedia.org/w/index.php?title=Graphical_timeline_of_the_Big_Bang&action=history&dir=prev pretty easily.

Turns out that it is semi-declaratively stored in mediawiki using the Easy Timeline addon ( https://www.mediawiki.org/wiki/Extension:EasyTimeline ) . I copy here FWIW that textual definition (you can see it from hitting the ‘edit’ button on any version of the page, e.g. follow the line above). To the authorship question, I suspect it had multiple contributors, since it is so much easier to contribute lines to such a definition than to work on a PNG image file with widely distributed strangers. The answer is probably nearby in the Wikipedia historical edit logs.


@base <https://groups.google.com/d/msgid/ontolog-forum/6ebd9425e3dc405ba4261743dc888fca%40bestweb.net#> .
@prefix schema: <http://schema.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<InvestigatingBigBangPNGImage> a schema:HowTo ;
    schema:name "Investigating BigBang.png's Origins" ;
    schema:steps <DownloadBigBangPNG>,
                 <UploadToGoogleImageSearch>,
                 <FindImageSource>,
                 <DiscoverStorageMethod>,
                 <IdentifyAuthorship>,
                 <AttemptLLMsGraphGeneration> .

<DownloadBigBangPNG> a schema:HowToStep ;
    schema:name "Download BigBang.png" ;
    schema:text "Download a copy of BigBang.png to your local system." ;
    schema:position "1"^^xsd:integer .

<UploadToGoogleImageSearch> a schema:HowToStep ;
    schema:name "Upload BigBang.png to Google Image Search" ;
    schema:text "Upload the downloaded BigBang.png to images.google.com to find its origin." ;
    schema:position "2"^^xsd:integer .

<FindImageSource> a schema:HowToStep ;
    schema:name "Find Image Source using Google's 'find image source' feature" ;
    schema:text "Track the file to the Wikipedia page through the new 'find image source' feature." ;
    schema:url <https://en.wikipedia.org/w/index.php?title=Graphical_timeline_of_the_Big_Bang&action=history&dir=prev> ;
    schema:position "3"^^xsd:integer .

<DiscoverStorageMethod> a schema:HowToStep ;
    schema:name "Discover the Storage Method" ;
    schema:text "Identify that the image is stored using the Easy Timeline addon in mediawiki." ;
    schema:url <https://www.mediawiki.org/wiki/Extension:EasyTimeline> ;
    schema:position "4"^^xsd:integer .

<IdentifyAuthorship> a schema:HowToStep ;
    schema:name "Identify Authorship" ;
    schema:text "Investigate the authorship of the image, likely having multiple contributors, in Wikipedia's historical edit logs." ;
    schema:position "5"^^xsd:integer .

<AttemptLLMsGraphGeneration> a schema:HowToStep ;
    schema:name "Attempt Graph Generation using LLMs" ;
    schema:text "Try feeding the format to LLMs (e.g., GPT Code Interpreter) to generate graphs, although not successfully." ;
    schema:position "6"^^xsd:integer .

JSON-LD Rendition.


{
  "@context": {
    "@base": "https://groups.google.com/d/msgid/ontolog-forum/6ebd9425e3dc405ba4261743dc888fca%40bestweb.net#",
    "schema": "http://schema.org/",
    "xsd": "http://www.w3.org/2001/XMLSchema#"
  },
  "@id": "InvestigatingBigBangPNGImage",
  "@type": "schema:HowTo",
  "schema:name": "Investigating BigBang.png's Origins",
  "schema:steps": [
    {
      "@id": "DownloadBigBangPNG",
      "@type": "schema:HowToStep",
      "schema:name": "Download BigBang.png",
      "schema:text": "Download a copy of BigBang.png to your local system.",
      "schema:position": {
        "@value": "1",
        "@type": "xsd:integer"
      }
    },
    {
      "@id": "UploadToGoogleImageSearch",
      "@type": "schema:HowToStep",
      "schema:name": "Upload BigBang.png to Google Image Search",
      "schema:text": "Upload the downloaded BigBang.png to images.google.com to find its origin.",
      "schema:position": {
        "@value": "2",
        "@type": "xsd:integer"
      }
    },
    {
      "@id": "FindImageSource",
      "@type": "schema:HowToStep",
      "schema:name": "Find Image Source using Google's 'find image source' feature",
      "schema:text": "Track the file to the Wikipedia page through the new 'find image source' feature.",
      "schema:url": "https://en.wikipedia.org/w/index.php?title=Graphical_timeline_of_the_Big_Bang&action=history&dir=prev",
      "schema:position": {
        "@value": "3",
        "@type": "xsd:integer"
      }
    },
    {
      "@id": "DiscoverStorageMethod",
      "@type": "schema:HowToStep",
      "schema:name": "Discover the Storage Method",
      "schema:text": "Identify that the image is stored using the Easy Timeline addon in mediawiki.",
      "schema:url": "https://www.mediawiki.org/wiki/Extension:EasyTimeline",
      "schema:position": {
        "@value": "4",
        "@type": "xsd:integer"
      }
    },
    {
      "@id": "IdentifyAuthorship",
      "@type": "schema:HowToStep",
      "schema:name": "Identify Authorship",
      "schema:text": "Investigate the authorship of the image, likely having multiple contributors, in Wikipedia's historical edit logs.",
      "schema:position": {
        "@value": "5",
        "@type": "xsd:integer"
      }
    },
    {
      "@id": "AttemptLLMsGraphGeneration",
      "@type": "schema:HowToStep",
      "schema:name": "Attempt Graph Generation using LLMs",
      "schema:text": "Try feeding the format to LLMs (e.g., GPT Code Interpreter) to generate graphs, although not successfully.",
      "schema:position": {
        "@value": "6",
        "@type": "xsd:integer"
      }
    }
  ]
}
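
The JSON-LD rendition above can be consumed with ordinary JSON tooling. As a minimal sketch (using an abbreviated two-step copy of the document, since the full rendition is long), the HowTo steps can be recovered in order of their schema:position values:

```python
import json

# Abbreviated copy of the JSON-LD rendition above, deliberately listing the
# steps out of order to show that ordering comes from schema:position.
jsonld_text = """
{
  "@context": {
    "schema": "http://schema.org/",
    "xsd": "http://www.w3.org/2001/XMLSchema#"
  },
  "@id": "InvestigatingBigBangPNGImage",
  "@type": "schema:HowTo",
  "schema:name": "Investigating BigBang.png's Origins",
  "schema:steps": [
    {
      "@id": "UploadToGoogleImageSearch",
      "@type": "schema:HowToStep",
      "schema:name": "Upload BigBang.png to Google Image Search",
      "schema:position": {"@value": "2", "@type": "xsd:integer"}
    },
    {
      "@id": "DownloadBigBangPNG",
      "@type": "schema:HowToStep",
      "schema:name": "Download BigBang.png",
      "schema:position": {"@value": "1", "@type": "xsd:integer"}
    }
  ]
}
"""

doc = json.loads(jsonld_text)

# Sort the steps by the integer value wrapped in each typed-literal object.
steps = sorted(doc["schema:steps"],
               key=lambda s: int(s["schema:position"]["@value"]))

for s in steps:
    print(s["schema:position"]["@value"], s["schema:name"])
```

Note this treats the document as plain JSON with compact IRIs ("schema:...") as literal keys; a full JSON-LD processor would first expand them against the @context.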

Ravi Sharma

Aug 9, 2023, 11:15:02 AM
to ontolo...@googlegroups.com
John and others
Appreciate the help on the image about BigBang.
I will illustrate in a month or so how matter-energy came about without any big bang, or at least without the need for a universal or single big bang.
Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect


--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info/
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.

alex.shkotin

Aug 14, 2023, 1:23:23 PM
to ontolog-forum
Just news about emergence
Emergent Analogical Reasoning in Large Language Models https://arxiv.org/abs/2212.09196
Abstract The recent advent of large language models has reinvigorated debate over whether human cognitive capacities might emerge in such generic models given sufficient training data. Of particular interest is the ability of these models to reason about novel problems zero-shot, without any direct training. In human cognition, this capacity is closely tied to an ability to reason by analogy. Here, we performed a direct comparison between human reasoners and a large language model (the text-davinci-003 variant of GPT-3) on a range of analogical tasks, including a non-visual matrix reasoning task based on the rule structure of Raven's Standard Progressive Matrices. We found that GPT-3 displayed a surprisingly strong capacity for abstract pattern induction, matching or even surpassing human capabilities in most settings; preliminary tests of GPT-4 indicated even better performance. Our results indicate that large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.        
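
For readers unfamiliar with the letter-string analogies mentioned in the abstract, here is a toy sketch (mine, not the paper's code) of the simplest rule family, where exactly one character is replaced by its alphabetic successor:

```python
def solve_letter_analogy(source, target, probe):
    """Toy solver for one family of letter-string analogies:
    'if source changes to target, what does probe change to?'
    Assumes at most one character was replaced by its successor."""
    for i, (x, y) in enumerate(zip(source, target)):
        if x != y:
            if ord(y) != ord(x) + 1:
                raise ValueError("rule is not a successor substitution")
            # Apply the same positional successor rule to the probe string.
            return probe[:i] + chr(ord(probe[i]) + 1) + probe[i + 1:]
    return probe  # no change detected, so the probe is unchanged

print(solve_letter_analogy("abc", "abd", "ijk"))  # -> ijl
```

The point of the benchmark, of course, is that GPT-3 was never given such a rule explicitly; it had to induce the transformation zero-shot from the prompt alone.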

On Monday, July 31, 2023 at 17:40:20 UTC+3, John F Sowa wrote:
Tensor calculus is the mathematics used to process LLMs.  It is defined very precisely.  But when you have a huge collection of LLMs derived by a huge amount of computer power that pumps an enormous amount of CO2 in the air, some people claim that emergent properties occur.  But the word 'emergent'  explains nothing.  If you replace "emergent properties" with "miracle", there is no change of meaning.  The word "emergent" has no explanatory power of any kind.

And by the way, another synonym for 'emergent' is 'generative', which represents the G of GPT.  When people talk about "generative AI", you can get the same explanatory power with the phrase "miraculous AI".  Or if you delete the meaningless words, you could just say AI -- which is what most people do.

That is a brief summary of the article “Emergence” isn’t an explanation, it’s a prayer", A critique of Emergentism in Artificial Intelligence, https://ykulbashian.medium.com/emergence-isnt-an-explanation-it-s-a-prayer-ef239d3687bf

Some excerpts below.

John
_______________________

In cognitive science, emergence is regularly used to “explain” the connection between two phenomena, when it is otherwise complex and difficult to predict: e.g. how neuronal firing gives rise to consciousness, or transformers to the appearance of language comprehension. Where there may be a connection, but nothing more is known or can be proved, emergence is a placeholder that fills the gap. The word gives weight and gravitas to what is essentially a blank space.

 Emergentist interpretations of human behaviour may also boast some recent wins—the proliferation of LLMs (e.g. ChatGPT) may be seen as one such success. This has reignited the discussion of whether emergence is the best way to frame intelligence. 

Despite emergence contributing nothing of substance to the discussion, as a concept it admittedly has a compelling intuitive appeal. There is a wonderful feeling about the notion of emergence. It does seem to be adding something valuable, as if you’ve discovered a magical ingredient by which you can explain mysterious phenomena. That’s the reason it continues to be popular, and gets inserted into scientific discussions. It convinces the listener that something has been explained with scientific rigour when all we’ve done is to say “it’s complicated”.

Besides the good feeling, however, emergence is void of any explanatory power. And so it has no scientific value in a predictive capacity. You can’t use it to say anything about what an unknown system will do; only what you hope it will do. When applied to pie-in-the-sky AI futurism, emergence has become synonymous with “I’m sure the system will work itself out”. It indicates that the author has a feeling that a complex system will align at some point, but no clear sense of how, why, or when. Insofar as intelligence does manifest in a specific instance, “emergence” doesn’t tell us anything interesting about how it happened. And insofar as intelligence hasn’t yet manifested, emergence doesn’t tell us when it will or what direction to take to get there.

In the field of AI development, emergence is invoked whenever someone encounters a phenomenon in the human mind and has no idea how to even start explaining it (e.g. art, socialization, empathy, transcendental aesthetics, DnD, etc). If said researcher already has a working theory of AI, this realization is disheartening. So they look deeper into the matter, find some point of overlap between the existing theory and the missing behaviour, and assume that with enough time and complexity the missing pieces will emerge.

Emergence is attractive in such cases because it puts the author’s mind at ease, by making it seem like they have a viable mechanism that only needs more time to be vindicated. It placates their inner watchdog, the one that demands concrete, scientific explanations. Emergence, being related to complexity and superficially validated by experiments such as Conway’s Game of Life, is enough to lull that watchdog back to sleep.

This justifies continuing to ignore any shortcomings in a theoretical model, and persisting on the current path. Like the proverbial man who searches for his lost keys under the lamplight, because that is where the light is, he hopes that with enough persistence his keys will “emerge”. The only other alternative is to admit failure, and to give up any hope of accomplishing what you want within this lifetime.

Scientists, it seems, can have superstitions too. And emergence has a powerful narcotic effect: it feels so reasonable and credible on a gut level². There are many factors that prevent a given researcher from investigating emergence too deeply and realizing that it lacks any substance. First, there appears to be a lot of external evidence to back it up in the natural world. This, as was pointed out, equivocates between retrospective and prospective uses of the term, and so legitimate uses are being conscripted to justify the illegitimate ones. Secondly, the fact that emergence exclusively concerns itself with intractably complex systems means anything behind its curtain by definition can’t be studied. So it conveniently excludes itself from exactly that analysis which would reveal it to be hollow.

In the end emergence isn’t an explanation; it’s an observation combined with a recognition of ignorance. Wherever emergence shows up there is an implicit acceptance that everyone involved is at a loss for how to approach the topic. It’s not that properties like intelligence won’t emerge from neural activity, it’s that emergence is a placeholder that justifies and promotes a lack of interest in exploring the details behind the connection. It discourages investigation. By invoking the term, we are merely thanking the nature gods for granting us this emergent property (aka property), and trying not to examine their gifts too profanely or with ingratitude. This impulse is understandable, since we don’t think we’ll discover an answer if we were to dig in. But we shouldn’t allow our insecurities to masquerade as science, or else they may become ingrained to the extent that they are difficult to uproot. A false answer stands in the way of a true one.

Notes:

¹ This used to say ‘You can remove “emergent” from any sentence and it would mean the same thing’, but that has caused some confusion, so to clarify: the word “emergent” when used as an adjective doesn’t add new or useful information; you won’t know any more about the subject than you did before.

² A self-aware researcher should notice if they have a strong intuitive or emotional reason for holding on to the idea. If you ever feel that emergence is so self-evident that it can never be disproved, that should give you pause — perhaps you have strayed outside the bounds of scientific inquiry and into metaphysical expositions. Not that there’s anything wrong with the latter…
 

John F Sowa

Aug 14, 2023, 5:39:28 PM
to ontolo...@googlegroups.com
Alex,

That theory you cited explains all the good answers, bad answers, and mediocre answers generated by LLMs.  Another word for it is plagiarism.  Following is Google's definition:  "What is Plagiarism? to steal and pass off (the ideas or words of another) as one's own • to use (another's production) without crediting the source • to commit literary theft • to present as new and original an idea or product derived from an existing source. In other words, plagiarism is an act of fraud."

John
 


From: "alex.shkotin" <alex.s...@gmail.com>

John F Sowa

Aug 15, 2023, 12:55:42 AM
to ontolo...@googlegroups.com
Alex,

I made a serious mistake in my previous note (14 August 2023, 5:39 PM), copied below.

I thought that you had repeated the typical claims about the kinds of examples that GPT-like systems process.  I apologize for just reading the abstract of that article,  not the full text, which was published on 3 August 2023:  https://arxiv.org/pdf/2212.09196.pdf

I retract the note copied at the end.  Please ignore it.
But after reading the entire article, I realize that the results by GPT-3 were not just plagiarism of data that it found on the web.  It had indeed abstracted the patterns of analogy from its vast collection of examples.  Then it used those patterns to solve the four kinds of analogy tests: matrix reasoning, letter-string analogies, verbal analogies, and story analogies.  See the attached GPTtest.png, which compares the results by GPT-3 and humans on those tests.

First point:  The first three tests in GPTtest.png show that GPT does better than the average human on the kinds of questions used in IQ tests.  But on stories, which use the kinds of texts that GPT was trained on, humans do significantly better.

Second point:  The word 'emergence' does not explain anything.  The explanations in that article begin around page 12.  Following is a paragraph from p. 13:

"It must also be noted that, regardless of the extent to which GPT-3 employs human-like mechanisms to perform analogical reasoning, we can be certain that it did not acquire these mechanisms in a human-like manner. LLMs receive orders of magnitude more training data than do individual human beings (at least if we consider linguistic inputs alone),59 and so they cannot be considered as models of the acquisition of analogical reasoning over the course of human development. Nor can they be considered good models of the evolution of analogical reasoning, as their analogical abilities are derived entirely from being trained to predict human-generated text. Human natural language is replete with analogies; accurately predicting natural language therefore likely requires an ability to appreciate analogies. But there is no reason to suppose that the same system, absent human-generated inputs, would spontaneously develop a disposition to think analogically, as apparently happened at some point in human evolution.62 Thus, to the extent that large language models capture the analogical abilities of adult human reasoners, their capacity to do so is fundamentally parasitic on natural human intelligence. Nevertheless, the present results indicate that this approach may be sufficient to approximate human-like reasoning abilities, albeit through a radically different route than that taken by biological intelligence."

Neither this paragraph nor any other paragraphs in the entire article use the word 'emergence' to explain anything.  Words that begin with 'emerge' occur a total of 15 times, but none of the occurrences use those words to explain anything. 
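
A count like the one above is easy to reproduce once the paper's PDF has been extracted to plain text; a sketch, using a short stand-in string (John reports 15 occurrences on the full text):

```python
import re

# Hypothetical sample standing in for the extracted text of the paper.
text = ("Our results indicate an emergent ability to find zero-shot "
        "solutions. Emergence of analogical reasoning emerged gradually.")

# Match any word beginning with 'emerge', case-insensitively.
matches = re.findall(r"\bemerge\w*", text, flags=re.IGNORECASE)
print(len(matches), matches)
```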

John
 


GPTtest.PNG