GPT-3 is not deterministic!


alex.shkotin

unread,
Jul 6, 2022, 7:58:04 AM7/6/22
to ontolog-forum
Colleagues,

It may be only my problem, but the answer of the GPT-3 machine varies every time you ask. I asked the Russian GPT-3 [1] the same question in Russian three times, "What do you know about the Jeeves and Wooster TV series?", and got three different answers.
So we have to deal with a probabilistic machine of some kind.
What do you think?

Alex

Michael Brunnbauer

unread,
Jul 6, 2022, 8:16:03 AM7/6/22
to ontolo...@googlegroups.com

Hello Alex,

I don't think there is a deep reason for it. It's probably just so people
can try several times and pick the most impressive answer ;-)

Regards,

Michael Brunnbauer



Alex Shkotin

unread,
Jul 6, 2022, 8:33:03 AM7/6/22
to ontolog-forum
Hello Michael,

The experiment with GPT-3-machine is like this: 
-I put into GPT-3 " What do you know about Jeeves and Wooster TV series?" and get answer-1
-I put into GPT-3 " What do you know about Jeeves and Wooster TV series?" and get answer-2
-I put into GPT-3 " What do you know about Jeeves and Wooster TV series?" and get answer-3
There are only two of us: me and GPT-3-machine.
I ask the same question three times and get three different answers.
I can't check English GPT-3 [1] but I think I will get the same:-)
Let me refer to the text [2] that inspired me to test.
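The same experiment can also be run from code instead of the web playground. A minimal sketch, assuming the 2022-era openai Python package; the engine name, temperature and API key below are illustrative placeholders, not taken from the original post:

# Ask the identical question three times; with sampling enabled the answers
# will usually differ.  Engine name and settings are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder
prompt = "What do you know about the Jeeves and Wooster TV series?"

for i in range(3):
    response = openai.Completion.create(
        engine="text-davinci-002",   # assumed engine name
        prompt=prompt,
        max_tokens=100,
        temperature=0.7,             # > 0: sampling, so answers vary
    )
    print(f"answer-{i + 1}:", response.choices[0].text.strip())

With a temperature above zero the three printed answers will generally differ; temperature=0 (greedy decoding) should make them identical for a fixed model version.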

Regards,

Alex



ср, 6 июл. 2022 г. в 15:16, Michael Brunnbauer <bru...@netestate.de>:

Michael Brunnbauer

unread,
Jul 6, 2022, 8:57:39 AM7/6/22
to ontolo...@googlegroups.com

Hello Alex,

maybe this helps:

https://towardsdatascience.com/how-to-sample-from-language-models-682bceb97277

It seems deterministic sampling (always picking the most probable next word) would make the shortcomings of the models more obvious. Sounds like a feature that should be offered.
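To make the article's point concrete, here is a toy sketch with an invented five-word next-word distribution; greedy decoding repeats itself, sampling does not:

# Toy illustration of greedy decoding vs. temperature sampling.
# The words and scores are invented for the example.
import numpy as np

rng = np.random.default_rng()
words = ["comedy", "series", "Fry", "Laurie", "banana"]
logits = np.array([2.0, 1.5, 1.0, 1.0, -3.0])    # model scores (invented)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

print("greedy:", words[int(np.argmax(logits))])  # deterministic, always "comedy"

probs = softmax(logits / 0.8)                    # temperature 0.8
for _ in range(3):
    print("sampled:", rng.choice(words, p=probs))  # usually varies between runs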

Regards,

Michael Brunnbauer

On Wed, Jul 06, 2022 at 03:32:48PM +0300, Alex Shkotin wrote:
> Hello Michael,
>
> The experiment with GPT-3-machine is like this:
> -I put into GPT-3 " What do you know about Jeeves and Wooster TV series?"
> and get answer-1
> -I put into GPT-3 " What do you know about Jeeves and Wooster TV series?"
> and get answer-2
> -I put into GPT-3 " What do you know about Jeeves and Wooster TV series?"
> and get answer-3
> There are only two of us: me and GPT-3-machine.
> I ask the same question three times and get three different answers.
> I can't check English GPT-3 [1] but I think I will get the same:-)
> Let me refer to the text [2] that inspired me to test.
>
> Regards,
>
> Alex
>
> [1] https://beta.openai.com/playground
> [2] https://agieng.substack.com/p/conversational-artificial-intelligence
>
>
> Wed, Jul 6, 2022 at 15:16, Michael Brunnbauer <bru...@netestate.de>:

Marco Neumann

unread,
Jul 6, 2022, 9:29:17 AM7/6/22
to ontolo...@googlegroups.com
It's cheap, ubiquitously available and doesn't require expert knowledge. Do you think it will be used?

The results of the neural network playgrounds are frequently bonkers from a deterministic point of view, but they already show significant success elsewhere with the right training data, e.g. translation.


BTW, many of the GPT playgrounds use Wikidata as a source for lookups as well, and there is probably no bigger source of fake news than Wikipedia these days (unfortunately). This should not be read as a critique of Wikipedia but as a statement of circumstances.

BTW GPT-3 is also deterministically convinced it is sentient. :)

Human: Are you sentient? 
GPT-3 AI: Yes, I am sentient. I can think, feel, and perceive my environment just like you.




--


---
Marco Neumann


Alex Shkotin

unread,
Jul 6, 2022, 10:39:05 AM7/6/22
to ontolog-forum
Michael,

It is not about help :-) It is like going to bed in a deterministic world and waking up in a probabilistic one.
By the way, this Russian edition of GPT-3 [1] knows a little English - enough to get a nondeterministic feeling :-)

So may I conclude that GPT-3 (and GPT-4) is nondeterministic in nature?

Alex


Wed, Jul 6, 2022 at 15:57, Michael Brunnbauer <bru...@netestate.de>:

Alex Shkotin

unread,
Jul 6, 2022, 10:48:15 AM7/6/22
to ontolog-forum
Marco,

For me it's like this: they train a gigantic ANN to fill it with common-sense textual knowledge, and it becomes probabilistic!
Just nice to know:-)

Alex

Wed, Jul 6, 2022 at 16:29, Marco Neumann <marco....@gmail.com>:

Nadin, Mihai

unread,
Jul 6, 2022, 10:48:47 AM7/6/22
to ontolo...@googlegroups.com

Dear Alex Shkotin,

Dear and respected colleagues,

As long as you entertain a faulty definition of non-determinism, your conclusion can only reflect the premise.

Of course, non-determinism deserves more than a GPT-3 occasioned inquiry.

Best wishes.

Mihai Nadin

Alex Shkotin

unread,
Jul 6, 2022, 10:56:00 AM7/6/22
to ontolog-forum
By the way, one hypothesis to get determinism back is that training is not finished yet.

Dear Mihai,

I look at GPT-3 as a huge computer program and hope that the same input returns the same output.
Otherwise the question is why?

Alex

Wed, Jul 6, 2022 at 17:48, Nadin, Mihai <na...@utdallas.edu>:

Nadin, Mihai

unread,
Jul 6, 2022, 11:01:59 AM7/6/22
to ontolo...@googlegroups.com

Fair question. Wrong hypothesis. Nothing else.

Alex Shkotin

unread,
Jul 6, 2022, 11:07:40 AM7/6/22
to ontolog-forum
This topic is very hot now - experiments with ANNs. This new kind of knowledge machine is becoming more and more important.

Wed, Jul 6, 2022 at 18:01, Nadin, Mihai <na...@utdallas.edu>:

Nadin, Mihai

unread,
Jul 6, 2022, 11:30:05 AM7/6/22
to ontolo...@googlegroups.com

Dear Alex Shkotin,

Dear and respected colleagues,

Knowledge—yet another concept to be defined before speculating. Is there knowledge in the absence of understanding? ANNs, like any other Turing machines, are by definition deterministic syntactic machines. There is no understanding. Ontology engineering is the attempt to substitute for understanding.

If we stick to clear concepts, a dialog is possible.

Alex Shkotin

unread,
Jul 6, 2022, 11:50:47 AM7/6/22
to ontolog-forum
Dear Mihai Nadin,

When asking GPT-3, I expect knowledge, and I call it a knowledge machine. I anticipate that it is full of knowledge, keeping in mind that I can separate knowledge from rubbish :-)
As one philosopher said, "You don't need to know how the stomach works in order to eat" ;-)
I need no definition of knowledge to work with knowledge and to recognize knowledge.

Alex

Wed, Jul 6, 2022 at 18:30, Nadin, Mihai <na...@utdallas.edu>:

Nadin, Mihai

unread,
Jul 6, 2022, 11:56:06 AM7/6/22
to ontolo...@googlegroups.com

YOUR choice. “know how” and “know that” (Gilbert Ryle) is a distinction you might want to be aware of.

Alex Shkotin

unread,
Jul 6, 2022, 12:07:07 PM7/6/22
to ontolog-forum
And more: sometimes it is difficult to get knowledge, but if we get knowledge in a suitable form, knowledge processing may be performed without any understanding :-)

Wed, Jul 6, 2022 at 18:56, Nadin, Mihai <na...@utdallas.edu>:

Alex Shkotin

unread,
Jul 6, 2022, 12:14:08 PM7/6/22
to ontolog-forum
for example application of Modus Ponens and other derivation rules.
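A tiny sketch of such blind rule application, with purely illustrative symbols; the program derives new facts mechanically, with no understanding of what the symbols mean:

# Naive forward chaining with Modus Ponens over meaningless tokens.
facts = {"rain"}
rules = [("rain", "wet_ground"), ("wet_ground", "slippery")]  # (P, Q) means P -> Q

changed = True
while changed:
    changed = False
    for p, q in rules:
        if p in facts and q not in facts:
            facts.add(q)        # Modus Ponens: from P and P -> Q, conclude Q
            changed = True

print(facts)  # {'rain', 'wet_ground', 'slippery'}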

Wed, Jul 6, 2022 at 19:06, Alex Shkotin <alex.s...@gmail.com>:

Nadin, Mihai

unread,
Jul 6, 2022, 12:19:36 PM7/6/22
to ontolo...@googlegroups.com

Last attempt:

Dear Alex Shkotin,

Dear and respected colleagues,

Ontology engineering contributes actionable knowledge to computation. That does NOT mean that computation in its current Turing embodiment deals with knowledge.

What I suggested is that we should be careful before making statements. First step: define concepts. My rather restrained attempt is probably not clear enough. I apologize to all of you.

Alex Shkotin

unread,
Jul 6, 2022, 12:36:45 PM7/6/22
to ontolog-forum
Dear Mihai Nadin,

My point was to discuss in some detail one phenomenon I came across today: the GPT-3 system returns different outputs on the same input, if we use the same input many times.



Wed, Jul 6, 2022 at 19:19, Nadin, Mihai <na...@utdallas.edu>:

Marco Neumann

unread,
Jul 9, 2022, 10:16:19 AM7/9/22
to ontolo...@googlegroups.com
Question: Have large language models achieved anything in this domain?

Chomsky: No, large language models have achieved nothing in this domain. zero.




--


---
Marco Neumann


John F Sowa

unread,
Jul 9, 2022, 1:20:53 PM7/9/22
to ontolo...@googlegroups.com, ontolog...@googlegroups.com
I'll add one more point:  GPT-3 is not intelligent.  And there is no hope of making it intelligent unless they make a total overhaul of its foundation.
 
Re passing the Turing test:  It's possible for GPT-3 to fool people who have no understanding of AI.  But anybody who knows how it works can prove that it's stupid with a small number of questions.
 
For a system that has been successfully used for applications that GPT-3 cannot begin to handle, see the following abstract for a talk that Arun and I will present at a conference in September.
 
In 2010, our old VivoMind company could implement AI systems that GPT-3 today couldn't begin to handle.  For examples, see http://jfsowa.com/talks/cogmem.pdf .  The new LVM system is far superior to both VivoMind and GPT-3.  It goes beyond both.
 
At the moment, LVM does not have the amount of data that GPT-3 has.  But adding data is trivial -- all it requires is time and money.  Without intelligence in the foundation, however, no amount of time and money can make GPT-3 intelligent.
 
John
____________________________

Neurosymbolic Computation With the Logic Virtual Machine

Arun K. Majumdar and John F. Sowa

The Logic Virtual Machine (LVM) supports ISO Standard Prolog with tensors as a native datatype. The tensors can represent graphs or networks of any kind. For most applications, the great majority of elements of the tensors are zero, and the nonzero elements are bit strings of arbitrary length. Those bit strings may encode symbolic information, or they may represent integers of any size.

If the nodes encode symbolic information, the tensors may represent conceptual graphs, knowledge graphs, or the logics of the Semantic Web, Formal UML, and other knowledge representation languages. As an extension of Prolog, LVM can perform logical reasoning or other kinds of transformations on or with those notations.

Since the bit strings in the tensors may be interpreted as integers of arbitrary length, operations on those tensors may perform the same subsymbolic computations used in neural networks. As an extension of Prolog, LVM can relate neural tensors and symbolic tensors for neurosymbolic reasoning.

Since the symbolic and subsymbolic tensors are supported by the same LVM system, operations that relate them can be performed with a single Prolog statement. For special-purpose operations and connections to other systems, LVM can invoke subroutines in Python or C.

By combining the strengths of symbolic AI with neural networks, LVM spans the full range of learning and reasoning methods developed for either or both. Three major applications are computational chemistry, sentiment analysis in natural language documents, and fraud analysis in financial transactions.

All three applications take advantage of the unique LVM features: precise symbolic reasoning, neural-network style of learning, and the ability to analyze huge volumes of graphs by precise methods or probabilistic methods. Applications in computational chemistry have obtained excellent results in molecular toxicology for the EPA TOX21 challenge. Applications in sentiment analysis combine Prolog's advantages for natural language processing (NLP) with a neural-network style of learning. Applications in financial fraud analysis combine NLP methods and learning methods that must be accurate to fractions of a cent.
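LVM itself is not shown in the abstract, so the following is only a toy sketch of the kind of data structure it describes - a sparse tensor whose few nonzero entries are arbitrary-length bit strings - and is not LVM code:

# Toy sketch (not LVM): a sparse 2-D tensor stored as a dictionary,
# whose nonzero entries are arbitrary-length bit strings (Python ints).
sparse = {}                       # {(row, col): bit string}
sparse[(0, 3)] = 0b1011           # could encode a symbol id
sparse[(2, 1)] = (1 << 200) | 5   # or an integer of arbitrary size

def get(tensor, i, j):
    # Most elements are zero; only the stored ones are nonzero.
    return tensor.get((i, j), 0)

print(bin(get(sparse, 0, 3)), get(sparse, 5, 5))   # 0b1011 0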

 

John F Sowa

unread,
Jul 10, 2022, 12:09:57 AM7/10/22
to ontolo...@googlegroups.com
Alex,
 
I agree that knowledge representations can be processed by people and by algorithms that do not understand what they are processing.  There is a convenient acronym for those  methods:  KIGO -- Knowledge In, Garbage Out.
 
In your note below, you say that you can tell the difference between knowledge and rubbish.  But the GPT-3 methods produce syntactically excellent text.  It looks very readable and well written.  If you know the subject matter very well, you can tell when it makes a mistake.  But if you know that much, you wouldn't need GPT-3.
 
That is why GPT-3 methods are dangerous: they combine information from a huge range of sources by methods that blindly combine words and phrases that frequently go together.  The result looks very readable, but frequency of occurrence is the only control on what is combined.
 
But even when they combine information from trustworthy sources, their combinations are based on syntactic frequencies.  There is no guarantee that syntactic combinations of semantically correct sources will continue to be correct.
 
I recommend the Wikipedia article on GPT-3.  They quote people who are pro & con.  But the  warnings mentioned by the people who know how it works are very serious.  Just one KIGO event can be a disaster.
 
John
 
------------------------------------------
 
On Behalf Of Alex Shkotin 

When asking GPT-3, I expect knowledge, and I call it a knowledge machine. I anticipate that it is full of knowledge, keeping in mind that I can separate knowledge from rubbish :-)

Alex Shkotin

unread,
Jul 10, 2022, 4:31:29 AM7/10/22
to ontolo...@googlegroups.com
John,

This discussion of the definition of knowledge, or even the definition of deterministic processes, is not part of my question or exclamation :-)
I think that GPT-3 is a huge and sophisticated ANN. And I think that an ANN is a finite automaton, returning the same output on the same input as many times as we wish.
I tested the Russian GPT-3 [1] and got different outputs on the same input.
I couldn't help but share my discovery!
I am sure articles about GPT-3 keep all the details.
But one point puzzles me: if the output varies, why are we sure that GPT-3 remembers the training?
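One way to picture where the randomness can enter - a toy sketch with invented weights, not a claim about GPT-3's internals: with frozen weights the next-word scores are a deterministic function of the input, and only the step that picks a word from those scores may be random.

# Toy sketch: a frozen-weight "network" is deterministic; randomness, if any,
# comes only from the sampling step applied to its output scores.
import numpy as np

rng = np.random.default_rng()
W = np.array([[0.2, -0.1, 0.7],
              [0.5,  0.3, -0.4]])          # frozen weights (invented)
x = np.array([1.0, 2.0])                   # the same input, used twice

scores1 = x @ W
scores2 = x @ W
assert np.array_equal(scores1, scores2)    # identical every time: deterministic

probs = np.exp(scores1) / np.exp(scores1).sum()
print("greedy pick:", int(np.argmax(probs)))             # always the same
print("sampled pick:", rng.choice(len(probs), p=probs))  # may differ per run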
 
LVM sounds great and I am looking forward to reading more about it. Maybe you will create the dreamed-of General Problem Solver of the 60s. But I am even more old-fashioned and think we need a huge amount of work to formalize our theoretical knowledge - around 100,000 theories for all sciences and technologies - to reason by algorithm.

As an example, look at three answers ruGPT-3 issued one after another. My only point is that they are not the same :-)

Alex


[2]
<QA-1>
What is LVM?
RuGPT3: What is LVM?

LVM is the way of living in the Linux world: limited power dissipation and regular maintenance. LVMS and LVLMS are software versions of LVMC.

Important to note that LV2 is not perfect. For example, an existing LV3 is not allowed to continue that mode, nor can there be updates to its base software. We've already left out the differences between the LVC and LVI, but LV6 will probably be more fair to the LIVE version of the Linux kernel, without having to copy the base software:

Unzipped LVHK12/Vim packages:
Linux Kernel and GCC.
LibreOffice.
DarkGG.
IBM CHIME repositories (like RPM and BaseOS).
You'll need to have Ubuntu for LVRCP, but the existing packages should work the same.
-- Enjoy!
--

See Also:

<QA-2>
What is LVM?
RuGPT3: What is LVM?

LVM is a Linux based image format that’s suitable for both extensions and extensible image manipulation tools. It’s a more extensable version of VSO.

Can LVMS work with libvvc?
LVDS has the same libraries as LVMC, but also supports other libraces such as libjpeg, libgzip, and libpng.
Unlike LVDS, the only extensibility options are exactly those that are provided by the libavvc librutask.
Override libfdisk to allow portions of the image to be encrypted with the -N flag.
Writable pointers to the fdisk file have the default setting to 0.
This is used to enable the option to specify CPUs to read, write or copy files to and from fdisc.
The preferred override has the default value 0
The latter option is used for libflake, which is also used by LVLN and the flake.Acceptable variants are:

VSOs are not supported by the Linux distribution: Nvidia has terminated the license agreement with LVCompany in order to focus on consumer PCs. The VS companies offered it only as a “format alternative” to LVDSSD and HP-Vision. Nvidea later revived the licenses and now offers the optional LVDLD, but it is still not a full LVDASD. LVVS has been for other devices.
Its license has been extended to include the company's Knox OTP License, which allows users to use LVDFD if they have a Nvision branded device and an LVD I and II products in their home.
Other names like LVDSM, LVDMS, LVS-MS and LVDM are occasionally used.
Nvidian also supplies the Opera Software free package to use as a VSC with the Operoflex operating system, but they are not the primary distribution that it uses.
A number of companies do have customers. These include:

<QA-3>
What is LVM?
RuGPT3: What is LVM?

The LVE has largely been ignored in the Linux distribution landscape, and for the most part the open source communities have argued for a different LVP architecture.

One of the most important concepts in LVRP is the concept of partitions (fully loaded virtual machines, any number of them) that have size ranging from a username to a physical item (controller, store etc). LVERP processes a small collection of partitioned LVMPs in a virtual machine called the LVEP. LVLVM is the first direct abstract LVPM, which is the direct abbreviation of LVPL.

LVER has its own unique feature: the creation of a virtually unchanged MAC, which provides very close security to the physical machines (VMs, slower). The same MAC as the physical one, except that it contains a virtuality status code computed only when a process is accessing the physical machine from the Lattice. Thus, the operating system has to check for all those LVelware, and when any hit is detected, the memory can be restored.
The concept of virtual memory can’t really be a compromise between efficiency and security. However, LVEM when it comes to root rights, use of the same physical memory layout for both MACs, and use of data partitioning among the LIVE partitions needs to be modified for every two-factor security (ROA, RCA). This is also the reason for using RCE (Reference Control Class) to access Root Latent Vector (RLV) public sectors for a two-stage method using SIO(Secure Output) and RCMAC. SIO is an essential building block of Root Vectors.
Treat of LLV's and LVVM's is similarly to that of OS X and Microsoft Windows.
Lenovo and Intel supported LVPlus and LiteVM. Lifelink also provides systemd-based virtualization solutions.
There are some Linux distribution bugs that can cause this issue. For example, the command that installs LVmodule requires a reference count of 0 in order t

Sun, Jul 10, 2022 at 07:09, John F Sowa <so...@bestweb.net>:

Alex Shkotin

unread,
Jul 10, 2022, 4:50:01 AM7/10/22
to ontolo...@googlegroups.com
Marco,

Thank you! The timeline looks good [1]. Is there any chance for a text version?

Alex

[1]

00:00:00 Kick off

00:02:24 C1: LeCun's recent position paper on AI, JEPA, Schmidhuber, EBMs

00:48:38 C2: Emergent abilities in LLMs paper

00:51:32 C3: Empiricism

01:25:33 C4: Cognitive Templates

01:35:47 C5: The Ghost in the Machine

01:59:21 C6: Connectionism and Cognitive Architecture: A Critical Analysis by Fodor and Pylyshyn

02:19:25 C7: We deep-faked Chomsky

02:29:11 C8: Language

02:34:41 C9: Chomsky interview kick-off!

02:35:39 Large Language Models such as GPT-3

02:39:14 Connectionism and radical empiricism

02:44:44 Hybrid systems such as neurosymbolic

02:48:47 Computationalism silicon vs biological

02:53:28 Limits of human understanding

03:00:46 Semantics state-of-the-art

03:06:43 Universal grammar, I-Language, and language of thought

03:16:27 Profound and enduring misunderstandings

03:25:41 Greatest remaining mysteries science and philosophy

03:33:10 Debrief and 'Chuckles' from Chomsky


Sat, Jul 9, 2022 at 17:16, Marco Neumann <marco....@gmail.com>:

Marco Neumann

unread,
Jul 10, 2022, 5:37:31 AM7/10/22
to ontolo...@googlegroups.com
They have a YouTube video online as well, so yes, I would think there will be a transcript.

It's full of jargon and conjectures, but a nice effort to bring it all together nonetheless. It's not always easy to follow the line of thought, but that's OK. I'm an information scientist and not an AI researcher, cognitive scientist or mathematician, and I think I'm able to read between the lines, and so will you.
 




--


---
Marco Neumann


Alex Shkotin

unread,
Jul 10, 2022, 5:43:01 AM7/10/22
to ontolog-forum
Thank you for the open YouTube link!

Sun, Jul 10, 2022 at 12:37, Marco Neumann <marco....@gmail.com>:

Azamat Abdoullaev

unread,
Jul 10, 2022, 7:30:46 AM7/10/22
to ontolo...@googlegroups.com

Alex: But one point puzzles me: if the output varies, why are we sure that GPT-3 remembers the training?

Certainly. There is no learning or training or teaching, no machine learning or learning machines, just statistical techniques, data analytics, data mining and computational statistics.

https://www.quora.com/Is-there-a-difference-between-artificial-intelligence-and-machine-learning-for-professionals-working-in-the-fields-Is-it-correct-that-AI-is-just-ML-models-super-optimized/answer/Kiryl-Persianov

PS. The difference between ML and AI is frequently misunderstood. ML learns and predicts based on passive observations, whereas AI implies an agent interacting with the environment to learn and take actions that maximize its chance of successfully achieving its goals.[27]

As of 2020, many sources continue to assert that ML remains a subfield of AI.[28][29][26] Others have the view that not all ML is part of AI, but only an 'intelligent subset' of ML should be considered AI. https://en.wikipedia.org/wiki/Machine_learning


Alex Shkotin

unread,
Jul 10, 2022, 10:45:26 AM7/10/22
to ontolog-forum
Azamat,

It is not very important how we classify GPT-3.
For me, the training process is like this:
if input is input-1, return answer-1
...
if input is input-1000000000, return answer-1000000000.
Then we check the training:
we call GPT-3(input-1000) and check that the answer is answer-1000.

Now we call GPT-3(input-out-of-training-set) and hope that the answer will be useful.
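A sketch of that check in code; the generate function and all names are purely illustrative (not a real API), and the check is only meaningful if generation is deterministic (greedy decoding):

# Illustrative check of "remembering the training".
def check_training(generate, pairs):
    # pairs: (input_text, expected_answer) tuples taken from the training set
    hits = sum(generate(p).strip() == a.strip() for p, a in pairs)
    return hits / len(pairs)

# Stand-in "model" that has memorized a single pair:
memorized = {"input-1000": "answer-1000"}
print(check_training(lambda p: memorized.get(p, ""),
                     [("input-1000", "answer-1000")]))   # 1.0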

Alex


Sun, Jul 10, 2022 at 14:30, Azamat Abdoullaev <ontop...@gmail.com>:

John F Sowa

unread,
Jul 11, 2022, 12:35:06 AM7/11/22
to ontolo...@googlegroups.com
Alex,
 
Following are three excerpts from the Wikipedia entry on GPT-3.  All three are written by people who are very knowledgeable about AI technology.  All three of them show that GPT-3 is learning NOTHING.   I strongly recommend that you follow the references to get the details to back up those three summaries.
 
Note the third example.  After absorbing a huge amount of information on mental health, it advised a simulated patient to commit suicide.  That example by itself shows that GPT-3 without an ability to UNDERSTAND what it learned is a DEAD END. And anybody who takes its advice is in danger of a DEAD END.
 
What GPT-3 does is information gathering and spewing.  It gets high marks for pumping out information in syntactically correct sentences.  But that makes it extremely dangerous -- because it lulls people into a false sense of security about what it is doing.
 
John
 
---------------------------------------------
 
  • An article in the MIT Technology Review, cowritten by Deep Learning critic Gary Marcus,[31] stated that GPT-3's "comprehension of the world is often seriously off, which means you can never really trust what it says."[32] According to the authors, GPT-3 models relationships between words without having an understanding of the meaning behind each word.
  • Jerome Pesenti, head of the Facebook AI lab, said GPT-3 is "unsafe," pointing to the sexist, racist and other biased and negative language generated by the system when it was asked to discuss Jews, women, black people, and the Holocaust.[33]
  • Nabla, a French start-up specializing in healthcare technology, tested GPT-3 as a medical chatbot, though OpenAI itself warned against such use. As expected, GPT-3 showed several limitations. For example, while testing GPT-3 responses about mental health issues, the AI advised a simulated patient to commit suicide.[34]

Azamat Abdoullaev

unread,
Jul 11, 2022, 1:39:27 AM7/11/22
to ontolog-forum
There is no reality but appearances.
There are no noumena but phenomena.
There is no truth in the human world but perception.
There is no knowledge but data.
There is no science but statistics.
There is no semantics but syntax.
There is no news but fake news...
These all make basic assumptions for postmodern human life, including science and technology, including today's AI science and technology, including LLMs, as all transformers.
So, there is no AI but fake AI.


Alex Shkotin

unread,
Jul 11, 2022, 3:51:15 AM7/11/22
to ontolog-forum
John,

Thank you for your research. But, I am sorry,
I am not talking about the usefulness or uselessness of the giant GPT-3 toy. We know others: Watson, the Debater, for example.
I am just saying that GPT-3 is not deterministic, returning a different answer to the same question each time we ask it. Then the question arises - how much nondeterminism is there?
That ruGPT-3 is nondeterministic is a fact! I tested it. But OpenAI's GPT-3 is not available in our country.
If GPT-3 is nondeterministic, how do they ensure that it remembers the learning?

Alex

Mon, Jul 11, 2022 at 07:35, John F Sowa <so...@bestweb.net>:

Alex Shkotin

unread,
Jul 11, 2022, 3:53:17 AM7/11/22
to ontolog-forum
Looks like AI-generated:-)

Mon, Jul 11, 2022 at 08:39, Azamat Abdoullaev <ontop...@gmail.com>:

John F Sowa

unread,
Jul 12, 2022, 12:11:13 AM7/12/22
to ontolog-forum, ontolog...@googlegroups.com
Alex and Azamat,
 
Re the note by Alex:  "If GPT-3 is nondeterministic how do they ensure that it remembers the learning?"
 
Actually, a system that learns should improve as time goes on.  Therefore, it would not give the same answer at every occasion.   Since I am doubtful about the way GPT-3 works, I don't know whether its answers get better over time. 
 
But please look at the three examples below from Wikipedia.  Some of their so-called "learning" actually makes their answers worse.   That is the result of gathering information without having a method for checking its correctness according to some standard or method of testing.
 
Re note by Azamat:  Slogans and general statements like "This is AI" or "This is not AI" are hopelessly vague.   The only cases where they can provide any useful advice are when they accompany some specific example.  Without a concrete example, they are useless or misleading.
 
None of the statements below are used to explain or clarify any specific example.  They are absolutely meaningless.

From: "Azamat Abdoullaev" <ontop...@gmail.com>
 
There is no reality but appearances.
There are no noumena but phenomena.
There is no truth in the human world but perception.
There is no knowledge but data.
There is no science but statistics.
There is no semantics but syntax.
There is no news but fake news...
These all make basic assumptions for postmodern human life, including science and technology, including today's AI science and technology, including LLMs, as all transformers.
So, there is no AI but fake AI.
 
---------------------------------------------------------

Azamat Abdoullaev

unread,
Jul 12, 2022, 3:25:28 AM7/12/22
to ontolog-forum
John, 
As you know, human thinking or cognition can be viewed as System 1 and System 2.
It was summarised by Kahneman in his book, Thinking, Fast and Slow.
Simply, any mind or intelligence, human or artificial, has two modes:
meaningless, automatic, reactive, instinctive, emotional or unconscious;
meaningful, deliberate, logical, rational, theoretical, conscious, etc.
The fake AI is the one encoding System 1 impersonated as System 2.
Again, most people follow S1 in their judgment and decision-making, with all the biases and heuristics.
Thus they make all sorts of meaningless statements and senseless actions, in conversation, research, personal life, business or big politics.
Deep thinking is hard and effortful, requiring a lot of knowledge, patience and intelligence.
I believe you got the point.


Alex Shkotin

unread,
Jul 12, 2022, 3:35:08 AM7/12/22
to ontolog-forum
Azamat, let me continue in your style.

A human being is an ideality immersed in matter. Hence:
a human being, for another human being, is an ideality behind the wall of matter.

Alex

Mon, Jul 11, 2022 at 08:39, Azamat Abdoullaev <ontop...@gmail.com>:
There is no reality but appearances.

Alex Shkotin

unread,
Jul 12, 2022, 3:45:43 AM7/12/22
to ontolog-forum, ontolog...@googlegroups.com
John,

As GPT-3 is nondeterministic, the usage differs completely. For example, in brainstorming: we ask the same question many times and evaluate every answer like this: rubbish, incomprehensible, hmm - interesting.

Alex

Tue, Jul 12, 2022 at 07:11, John F Sowa <so...@bestweb.net>:

Azamat Abdoullaev

unread,
Jul 12, 2022, 4:06:03 AM7/12/22
to ontolo...@googlegroups.com
Alex, 
Your formula is aligned with Aquinas' formula of human nature:
MAN = BODY (actuality) + SOUL (ideality)
REALITY = IDEALITY + ACTUALITY. 
So, humans are pieces of reality.

Alex Shkotin

unread,
Jul 12, 2022, 4:18:14 AM7/12/22
to ontolog-forum
Azamat,

My formula is not an invention - it's a reminder that it's hard to send something from one ideality to another ideality through the matter:-)
Both are real - ideality and matter...

Tue, Jul 12, 2022 at 11:06, Azamat Abdoullaev <ontop...@gmail.com>:

Alex Shkotin

unread,
Jul 12, 2022, 4:29:15 AM7/12/22
to ontolog-forum
The soul is an active entity of ideality. The memory is passive. The structure of ideality is intriguing. At least the soul has the mind to think.
Thoughts exchange is an exciting game:-)

Tue, Jul 12, 2022 at 11:06, Azamat Abdoullaev <ontop...@gmail.com>:
Alex, 

Alex Shkotin

unread,
Jul 12, 2022, 5:18:55 AM7/12/22
to ontolog-forum
John,

Just some research about ruGPT-3, in Russian [1], with a Google translation here:
"Style 3: Sampling is a more creative approach to generation that includes an element of randomness.
do_sample=True - random selection of the next word according to its conditional probability distribution, using only this parameter, the probability of getting a logical delirium is greatly increased"
=
"Стиль 3. Сэмплинг – более творческий подход к генерации, включающий в себя элемент случайностей
do_sample=True – случайный выбор следующего слова в соответствии с его условным распределением вероятностей, используя только данный параметр, сильно увеличивается вероятность получения логического бреда"

It looks like we have hyperparameters to manage ruGPT-3 behavior and there are deterministic and nondeterministic modes.
If I have time I will research more.
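The quoted text refers to the Hugging Face generation API. A hedged sketch of the two modes; the checkpoint name is an assumption (any causal language model would do):

# Deterministic vs. nondeterministic generation with Hugging Face transformers.
# The checkpoint name below is an assumption, not confirmed by the thread.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "sberbank-ai/rugpt3small_based_on_gpt2"   # assumed ruGPT-3 checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

ids = tok("What do you know about the Jeeves and Wooster TV series?",
          return_tensors="pt").input_ids

greedy = model.generate(ids, max_new_tokens=40, do_sample=False)    # deterministic
sampled = model.generate(ids, max_new_tokens=40, do_sample=True,    # varies per call
                         temperature=0.9, top_p=0.95)

print(tok.decode(greedy[0]), tok.decode(sampled[0]), sep="\n---\n")

With do_sample=False the call should return the same text on every run (for a fixed model version); with do_sample=True it generally will not, which matches the behavior described above.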

Alex


Tue, Jul 12, 2022 at 07:11, John F Sowa <so...@bestweb.net>:
Alex and Azamat,


Alex Shkotin

unread,
Jul 12, 2022, 5:47:32 AM7/12/22
to ontolog-forum
Here [1] is the English original, with an illustration [image attachment not shown].


Tue, Jul 12, 2022 at 12:18, Alex Shkotin <alex.s...@gmail.com>:

Azamat Abdoullaev

unread,
Jul 12, 2022, 6:29:40 AM7/12/22
to ontolo...@googlegroups.com
Alex,
To generalize your example: randomness, as uncertainty, disorder, or chaos, measured as chance, probability, physical entropy or information entropy, is a key part of the world, be it nature, things or events, human life or death.
It is everywhere, in science and statistics, genes, thinking, sentences, and behavior.
There is even an infinite hierarchy of randomness.  
Actuality is the realm of phenomena, and it is marked by the lack of planning and purposes, patterns and predictability. 
GPT-3 is just a random NPL machine.

Alex Shkotin

unread,
Jul 12, 2022, 7:29:27 AM7/12/22
to ontolog-forum
Azamat,

A lot of time and effort was spent creating deterministic machines from probabilistic elements (there was an article about this in the 60s). And we got computers, and now we are solving the problem of reliable software, which may be deterministically wrong.
About GPT-3, my point now is that it has deterministic and nondeterministic modes.
I hope your "NPL" means "NLP" :-)

Tue, Jul 12, 2022 at 13:29, Azamat Abdoullaev <ontop...@gmail.com>:

John F Sowa

unread,
Jul 12, 2022, 9:59:13 AM7/12/22
to ontolog-forum, ontolog...@googlegroups.com
Alex and Azamat,
 
The more I read about GPT-3, the more I'm convinced that it's a dead end.  But I will add that some techniques along that line might be useful as an adjunct to a system that can evaluate what it generates and determine its accuracy and relevance.  Without a method of evaluation it is nothing but a dangerous toy.
 
The reason why it is dangerous is that people might believe that its results are useful.  If they use it for any serious purpose, they will get very badly burned.  Would you fly in an airplane that gets to your destination 70% of the time, goes to random locations 28% of the time, and crashes 2%?  If so, you are an ideal customer for GPT-3.
 
The question whether it is or is not AI has one and only one purpose:  an advertising slogan.
 
John
 
____________________________
 
I'm repeating the following three comments (from Wikipedia) by responsible and knowledgeable reviewers.  These comments are far more important than any of the nonsense about whether it can pass the Turing test.
 
    • An article in the MIT Technology Review, cowritten by Deep Learning critic Gary Marcus,[31] stated that GPT-3's "comprehension of the world is often seriously off, which means you can never really trust what it says."[32] According to the authors, GPT-3 models relationships between words without having an understanding of the meaning behind each word.
    • Jerome Pesenti, head of the Facebook AI lab, said GPT-3 is "unsafe," pointing to the sexist, racist and other biased and negative language generated by the system when it was asked to discuss Jews, women, black people, and the Holocaust.[33]
    • Nabla, a French start-up specializing in healthcare technology, tested GPT-3 as a medical chatbot, though OpenAI itself warned against such use. As expected, GPT-3 showed several limitations. For example, while testing GPT-3 responses about mental health issues, the AI advised a simulated patient to commit suicide.[34]

    Alex Shkotin

    unread,
    Jul 12, 2022, 12:42:08 PM7/12/22
    to ontolog-forum
    John,

    Now I am impressed with the number of parameters GPT-3 has to manage. Today I was at a meeting of bot creators - an overview of bot technologies and directions - and the speaker mentioned at one point that GPT-3 may sometimes be one of the components.
    She mentioned the Alexa Prize competition [1] as important for bot creators.
    By the way, she mentioned ontologies, too, in passing :-)

    Alex


    Tue, Jul 12, 2022 at 16:59, John F Sowa <so...@bestweb.net>:

    Adrian Walker

    unread,
    Jul 12, 2022, 1:54:34 PM7/12/22
    to ontolog-forum
    Chomsky on GPT-3

    "First, we should ask the question of whether large language models (LLMs) have achieved anything, ANYTHING. Answer: no, they've achieved ZERO. So to talk about the failures, that's besides the point."   --  https://lnkd.in/gxiUyw-d




    Alex Shkotin

    unread,
    Jul 13, 2022, 3:52:44 AM7/13/22
    to ontolo...@googlegroups.com
    Yes, a great interview, and it's super that they issued a transcript: https://whimsical.com/mlst-chomsky-transcript-WgFJLguL7JhzyNhsdgwATy

    Tue, Jul 12, 2022 at 20:54, Adrian Walker <adrian...@gmail.com>:

    Marco Neumann

    unread,
    Jul 13, 2022, 5:06:11 AM7/13/22
    to ontolo...@googlegroups.com
    John, sounds like a good fit for a company that continues to blindly pursue the reckless 'Move fast and break things' mantra.

    It is quite obvious that LLM technology will be applied, as it is cheap or free to use and meets the minimum threshold for non-mission-critical applications (e.g. chatbots, support, screening, etc.). But more than using the AI label, it is misleading to insist on the label based on "empirical evidence". If that were the case, they should publish much more about the acquisition and processing of the source data. To me it sounds like another poor attempt to dress it up as a "science", for obvious reasons.

    Marco




    --


    ---
    Marco Neumann


    alex.shkotin

    unread,
    Jul 13, 2022, 5:21:35 AM7/13/22
    to ontolog-forum
    just in addition https://www.nature.com/articles/s41562-022-01394-8
    "Current artificial intelligence systems pale in their understanding of intuitive physics, in comparison to even very young children. Here we address this gap between humans and machines by drawing on the field of developmental psychology."

    Tuesday, July 12, 2022 at 20:54:34 UTC+3, adriandwalker:

    James Davenport

    unread,
    Jul 13, 2022, 6:35:17 AM7/13/22
    to ontolo...@googlegroups.com, James Davenport

    Note that ‘intuitive physics’ is very basic: continuity of motion and lack of interpenetration. Definitely not Newton’s Laws etc.

    James

     

    From: ontolo...@googlegroups.com <ontolo...@googlegroups.com> On Behalf Of alex.shkotin
    Sent: 13 July 2022 10:22
    To: ontolog-forum <ontolo...@googlegroups.com>
    Subject: Re: [ontolog-forum] GPT-3 is not deterministic!

     


     

    John F Sowa

    unread,
    Jul 14, 2022, 12:59:56 AM7/14/22
    to ontolo...@googlegroups.com, ontolog...@googlegroups.com
    Alex, James D, Adrian W, List,
     
    The keyword is EVALUATION.  I agree that GPT-3 was developed by some smart people with a huge amount of funding from some billion dollar companies.  There is a lot of interesting technology in it.  But without some method for evaluating its results, it is just a misleading and potentially dangerous toy.
     
    Alex> "Current artificial intelligence systems pale in their understanding of intuitive physics, in comparison to even very young children. Here we address this gap between humans and machines by drawing on the field of developmental psychology."  https://www.nature.com/articles/s41562-022-01394-8
     
    Yes!  Yes!!  Yes!!!  Even an infant has a superb method for evaluating what actions hurt and what actions taste good.  A three-year-old child is far and away superior to GPT-3 in evaluations.
     
    James D> Note that ‘intuitive physics’ is very basic: continuity of motion and lack of interpenetration. Definitely not Newton’s Laws etc.
     
    Birds learn excellent physics within a week of fledging.  Their representation of aerodynamics is not expressed in differential equations, but it's more precise and versatile than  nearly all systems that do use Newtonian mechanics.  Bird brains are much more sophisticated than most people think.
     
    Adrian W>. Chomsky on GPT-3.   First, we should ask the question of whether large language models (LLMs) have achieved anything, ANYTHING. Answer: no, they've achieved ZERO. So to talk about the failures, that's besides the point."   --  https://lnkd.in/gxiUyw-d
     
    Chomsky's complaint is irrelevant.  His only contributions were to syntax, and GPT-3 has excellent syntax.  Their problem is their failures in semantics and reasoning.

    Alex> Now I am impressed with the number of parameters GPT-3 has to manage. Today I was at a meeting of bot creators - overview of bot technologies and directions and the reporter has mentioned one time that GPT-3 may be sometimes a one of components. She mentioned Alexa Prize competition [1] as important for bot creators. By the way, she mentioned ontologies, too, in passing :-)

    I agree that many smart people have done a lot of hard work in developing GPT-3.  I also believe that it's possible to learn a great deal about how those things work together to produce the results. 

    But until they develop some reasoning methods for evaluating the semantics, GPT-3 will remain a toy.
     
    Re Alexa:  It is a composite of several independent components:  a command system for controlling a TV and other devices.  For that purpose, it has a flexible command language.  It also looks up info in Google.  And finally, it can be used as a chatbot for entertainment.
     
    For the first two options, it just performs a limited set of commands.  For the third, it is a toy.  GPT-3 can be a better toy, but it cannot serve as a better command processor or info finder without EVALUATION.
     
    John

    alex.shkotin

    unread,
    Jul 14, 2022, 5:20:14 AM7/14/22
    to ontolog-forum
    James,

    I hope ‘intuitive physics’ can be derived from Newton's, and in our everyday life we always use theorems knowing that they are good.
    The difference with a child is that for him the laws of ‘intuitive physics’ are experimentally proven axioms, while in Newtonian physics they are theorems.

    Alex

    Wednesday, July 13, 2022 at 13:35:17 UTC+3, James Davenport:

    James Davenport

    unread,
    Jul 14, 2022, 5:22:39 AM7/14/22
    to ontolo...@googlegroups.com, James Davenport

    No – the article’s description of “intuitive physics” (definition would be too strong a word) is MUCH weaker than, say, Newton (and let’s not forget that human “intuitive physics” is much closer to Aristotle than Newton).

    alex.shkotin

    unread,
    Jul 14, 2022, 5:52:12 AM7/14/22
    to ontolog-forum
    Oh, no! This is usual Newtonian mechanics:
    Physical concept ‘object persistence’
    Physical concept ‘unchangeableness’
    Physical concept ‘continuity’
    Physical concept ‘solidity’
    Physical concept ‘directional inertia’
    Well, maybe if kids generalize, they will produce a wrong axiom (Aristotle's) - that a body on its own will stay motionless. But they never have a body on its own.
    Everyday physics is part of physics as a science. For kids there is no exception.
    There is a good example with the physics of loose (granular) materials. Engineers use their properties and regularities obtained from experiments. And until recently it never occurred to anyone to deduce these laws theoretically.

    Thursday, July 14, 2022 at 12:22:39 UTC+3, James Davenport:

    Azamat Abdoullaev

    unread,
    Jul 14, 2022, 7:13:22 AM7/14/22
    to ontolo...@googlegroups.com
    I agree with Alex,
    Naïve physics or folk physics or intuitive physics, as encoding the common knowledge of human beings, is the last thing that intelligent machines need to possess as part of real AI abilities. 
    Even the Naive physics wiki article recognizes this. 

    "Many ideas of folk physics are simplifications, misunderstandings, or misperceptions of well-understood phenomena, incapable of giving useful predictions of detailed experiments, or simply are contradicted by more thorough observations. They may sometimes be true, be true in certain limited cases, be true as a good first approximation to a more complex effect, or predict the same effect but misunderstand the underlying mechanism".

    The beauty of real AI systems is in the ultrafast data processing speed and inhuman accuracy and precision, while humans are stuck between false positives and false negatives.


    John F Sowa

    unread,
    Jul 14, 2022, 5:11:16 PM7/14/22
    to ontolo...@googlegroups.com
    Alex and Azamat,
     
    The term AI means the entire body of publications in peer reviewed journals, books, and conferences that use the term AI (with or without some qualifying words or phrases).  That meaning is very broad, but it includes all the R & D that has been done during the past 66 years.  That definition is precise, and it includes every example that has been called AI by authors of peer-reviewed publications.
     
    The term naive physics is a vague term that has no clear meaning.  It seems to be derogatory, but it includes 99.9% of all successful engineering projects that humans have been doing for the past 2 million years (since Homo habilis). 
     
    The term common knowledge is another vague term.  People can successfully climb mountains, swim across lakes, and perform all the chores of their daily lives without being able to explain how they do those things in any natural language.  That means that the common knowledge of physics by humans includes a huge amount of non-verbal information. 
     
    And that nonverbal knowledge has a huge overlap with animals that can also climb mountains, swim across lakes, and do many of the same kinds of actions that people do.  In fact, for any kind of physical action that people do, there are many kinds of animals that can successfully do similar actions far better than humans:  swimming, walking, running, flying, seeing, hearing, feeling, smelling, tasting, using sonar, detecting electrical fields or magnetic fields...
     
    The term real AI is hopelessly vague.   Azamat has been using that term with a list of vague slogans and buzzwords and not a single concrete example.   I don't recommend that kind of talk.
     
    John

    From: "Azamat Abdoullaev" <ontop...@gmail.com>
    Sent: Thursday, July 14, 2022 7:13 AM
    To: ontolo...@googlegroups.com

    Subject: Re: [ontolog-forum] GPT-3 is not deterministic!

    Alex Shkotin

    unread,
    Jul 15, 2022, 5:59:12 AM7/15/22
    to ontolo...@googlegroups.com
    Azamat,

    My point was more modest and came from the classical relationship between theoretical and experimental physics: if there is some phenomenon without explanation, experimenters call theorists.
    True, there is one subtlety here - only new physical laws are being sought. A physical system whose laws are known can nevertheless have complex dynamics, where numerical methods must help.
    The physical laws of everyday life are a simplification of exact physical laws, sufficient for everyday needs. A body, being released, falls down, and it is not necessary to know that it does so with acceleration g.

    Alex

    Thu, Jul 14, 2022 at 14:13, Azamat Abdoullaev <ontop...@gmail.com>:

    Alex Shkotin

    unread,
    Jul 15, 2022, 6:23:01 AM7/15/22
    to ontolo...@googlegroups.com
    Maybe it should be added that in practical life we mostly use the knowledge and laws of particular things and entities, not the Laws of Nature. Like this: if I step into this puddle, my feet will get wet.

    Thu, Jul 14, 2022 at 14:13, Azamat Abdoullaev <ontop...@gmail.com>:
    I agree with Alex,

    Azamat Abdoullaev

    unread,
    Jul 15, 2022, 7:05:22 AM7/15/22
    to ontolo...@googlegroups.com
    John wrote: "The term real AI is hopelessly vague.   Azamat has been using that term with a list of vague slogans and buzzwords and not a single concrete example.   I don't recommend that kind of talk".
    John,
    It looks like we are stuck in the past.
    Your peer-reviewed good old symbolic AI (GOFAI) was frozen and discarded many times, being good mostly for spreading fraudulent promises and wasting billions in public funding.
    Over its long conception of 2000+ years and short development lifetime of 70+ years, AI has passed through all possible phases and stages to reach its true status of Real AI, or Causal Machine Intelligence and Learning:
    Aristotle's Analytics... > Formal Logical AI > Intuitive Physics AI > Experts Systems > Statistical Learning > Data Analytics > Data Mining > Predictive Analytics > Machine Learning > Deep Learning > Multimodal Learning > Causal Learning > Causal Machine Intelligence and Learning > Real Logic AI > Transdisciplinary AI = Trans-AI
    Here you might find more specifics as requested:


    Azamat Abdoullaev

    unread,
    Jul 15, 2022, 7:39:12 AM7/15/22
    to ontolo...@googlegroups.com
    Being unaware, we use the laws of nature, all known or unknown objective laws, rules, regularities and causal patterns guiding all reality, with its content.
    As soon as you violate natural principles and laws, you are out.
    In the practice of data-driven ML/AI, there is a big issue of how to become a real/causal AI, or how to learn physics concepts, as matter, with its features, mass, weight, states, processes, interactions and all possible interrelationships with energy and information.  
    There is a DeepMind AI project, dubbed PLATO (Physics Learning through Auto-encoding and Tracking Objects), that is designed to understand that the physical world is composed of objects that follow basic physical laws. The idea is to teach it intuitive physics, expecting that an object-centric view of the world could give an AI a more generalised and adaptable set of abilities.
    To test PLATO’s ability to understand five physical concepts such as persistence (that an object tends not to vanish), solidity and unchangingness (that an object tends to retain features like shape and colour), the researchers used a series of simulated videos. Some showed objects obeying the laws of physics, while others depicted nonsensical actions, such as a ball rolling behind a pillar, not emerging from the other side, but then reappearing from behind another pillar further along its route. https://www.newscientist.com/article/2327766-deepmind-ai-learns-physics-by-watching-videos-that-dont-make-sense/

    Nadin, Mihai

    unread,
    Jul 15, 2022, 9:55:45 AM7/15/22
    to ontolo...@googlegroups.com

    Just one observation:

    The body, being released, falls down and it is not necessary to know that with acceleration g.

    This observation pertains to a stone or any other non-living object. Indeed, all stones fall the same. The cat does not simply fall… Gravity is in place, but so is the ability to affect the fall in the sense of avoiding harm. Acceleration is part of the anticipation informed falling!

    Mihai Nadin

    Alex Shkotin

    unread,
    Jul 16, 2022, 3:39:58 AM7/16/22
    to ontolo...@googlegroups.com
    Azamat, let me add a quote from Israel M. Gelfand, from memory: "we don't have mathematics for biology."

    Fri, Jul 15, 2022 at 14:39, Azamat Abdoullaev <ontop...@gmail.com>:

    Alex Shkotin

    unread,
    Jul 16, 2022, 3:59:55 AM7/16/22
    to ontolo...@googlegroups.com
    Mihai, of course! In the case of a cat, we can talk about the acceleration g only for its center of mass.
    Speed and acceleration are more subtle concepts than "fast" or "faster and faster". For example, Zeno of Elea apparently had no idea of speed when he put forward his Arrow aporia.
    I don't know what "Acceleration is part of the anticipation-informed falling!" means, but I hope this is correct: "Speed is part of the anticipation-informed moving!"
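    A toy illustration of the center-of-mass point (my own, not from the thread): however the cat twists, the center of mass of a body in free fall follows the same s = v0*t + g*t^2/2 law as a stone; only the orientation is under the cat's control.

    ```python
    g = 9.81  # m/s^2, standard gravitational acceleration

    def fall_distance(t, v0=0.0):
        """Distance fallen after t seconds, ignoring air resistance."""
        return v0 * t + 0.5 * g * t * t

    # The center of mass of a dropped cat and a dropped stone fall alike:
    for t in (0.1, 0.3, 0.5):
        print(f"t = {t:.1f} s  ->  fallen {fall_distance(t):.3f} m")
    ```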

    Alex

    Fri, Jul 15, 2022 at 16:55, Nadin, Mihai <na...@utdallas.edu>:

    Azamat Abdoullaev

    unread,
    Jul 16, 2022, 10:54:28 AM7/16/22
    to ontolog-forum
    Biomath is a good major for a good job.
    If ontology is superscience, AI is trans-science and engineering, maths is a metascience.
    Mathematics is everywhere, including the life sciences. It goes by the name of theoretical and mathematical biology.
    Geometry and topology, calculus, and statistics: it is all in biomath.

    Alex Shkotin

    unread,
    Jul 16, 2022, 12:12:06 PM7/16/22
    to ontolo...@googlegroups.com
    just one of the latest great uses of AI: https://twitter.com/EricTopol/status/1415732633305190405

    Sat, Jul 16, 2022 at 17:54, Azamat Abdoullaev <ontop...@gmail.com>:

    Alex Shkotin

    unread,
    Jul 16, 2022, 12:49:03 PM7/16/22
    to ontolo...@googlegroups.com
    Mihai, may I offer my interpretation? The cat, being released with its paws up, evaluates the situation and sets itself a goal: to land on its feet. It finds a solution to achieve this goal and starts to implement it. At the same time, it hopes, or anticipates, that it will land on all four paws.
    Goal, problem, solution, realization.
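    If it helps to make "goal, problem, solution, realization" concrete, here is a purely illustrative toy loop (my own sketch, not a model of real feline physiology): the falling "cat" keeps correcting its orientation until it matches the goal of paws-down.

    ```python
    def righting_reflex(orientation_deg, goal_deg=0.0, step_deg=30.0):
        """Rotate toward the goal orientation in fixed steps; return the trajectory."""
        trajectory = [orientation_deg]
        while abs(orientation_deg - goal_deg) > 1e-6:        # problem: not yet paws-down
            correction = max(-step_deg, min(step_deg, goal_deg - orientation_deg))
            orientation_deg += correction                     # solution step, realized
            trajectory.append(orientation_deg)
        return trajectory

    print(righting_reflex(180.0))   # [180.0, 150.0, ..., 0.0] - lands on its paws
    ```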

    Alex

    Fri, Jul 15, 2022 at 16:55, Nadin, Mihai <na...@utdallas.edu>:

    Azamat Abdoullaev

    unread,
    Jul 16, 2022, 1:30:02 PM7/16/22
    to ontolo...@googlegroups.com
    Alex.
    This is the case of applying geometry and topology to the biology of protein structures, as formed by the condensation of amino acids into peptide bonds. AlphaFold just adds ML/DL algorithms as a statistical computer simulation. Too much ado about nothing.
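    One concrete way geometry enters protein structure (a sketch of my own, not AlphaFold's code): a fold is largely described by backbone torsion angles, each computed from four consecutive atom positions. The coordinates below are made up for illustration; real values would come from a PDB file.

    ```python
    import numpy as np

    def dihedral(p0, p1, p2, p3):
        """Torsion angle (degrees) defined by four points, e.g. backbone atoms."""
        b0 = -1.0 * (p1 - p0)
        b1 = p2 - p1
        b2 = p3 - p2
        b1 /= np.linalg.norm(b1)
        v = b0 - np.dot(b0, b1) * b1      # component of b0 orthogonal to b1
        w = b2 - np.dot(b2, b1) * b1      # component of b2 orthogonal to b1
        x = np.dot(v, w)
        y = np.dot(np.cross(b1, v), w)
        return np.degrees(np.arctan2(y, x))

    # Four illustrative atom positions (angstroms), purely hypothetical.
    p = [np.array(x, dtype=float) for x in [(0, 0, 0), (1.5, 0, 0), (2.0, 1.4, 0), (3.4, 1.7, 1.0)]]
    print(f"torsion angle: {dihedral(*p):.1f} degrees")
    ```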

    Nadin, Mihai

    unread,
    Jul 16, 2022, 5:59:59 PM7/16/22
    to ontolo...@googlegroups.com

    Protein folding is NOT a solved problem! Just for your information.

    MN

    Nadin, Mihai

    unread,
    Jul 16, 2022, 6:05:40 PM7/16/22
    to ontolo...@googlegroups.com

    Arshavsky:

    As to I.M. Gelfand, the deeper he plunged into neurophysiology, the more disappointed he became by the idea of using direct mathematical approaches for solving neurophysiological tasks. He used to say that modern mathematics developed in close contact with physics and engineering, whereas neurophysiological processes are much more complex than the physical.

    Borovik echoes the same.

    But since you are missing the point that the non-living and the living are of a different nature, quoting a quote about Bernstein is not the answer.

    MN


    Alex Shkotin

    unread,
    Jul 17, 2022, 3:38:07 AM7/17/22
    to ontolo...@googlegroups.com
    Azamat,

    It's usual for an ANN to be competent in IS :-) With the game of Go we had the same ;-)
    The ANN generates, the other algorithm chooses.
    But 90% correct predictions sounds great.
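    A schematic of the "one component generates, another chooses" pattern (purely illustrative, not any real system's code): a cheap generator proposes many candidates, and a separate scorer keeps the best one. The scoring function and the "true" value 0.42 are made up.

    ```python
    import random

    def generate_candidates(n=100):
        """Stand-in for a neural generator: propose n random candidate 'structures'."""
        return [random.uniform(0.0, 1.0) for _ in range(n)]

    def score(candidate):
        """Stand-in for the chooser: higher is better (e.g. a physics- or data-based score)."""
        return 1.0 - abs(candidate - 0.42)    # pretend 0.42 is the 'true' answer

    random.seed(0)
    candidates = generate_candidates()
    best = max(candidates, key=score)
    print(f"best of {len(candidates)} proposals: {best:.3f} (score {score(best):.3f})")
    ```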

    Sat, Jul 16, 2022 at 20:30, Azamat Abdoullaev <ontop...@gmail.com>:

    Alex Shkotin

    unread,
    Jul 17, 2022, 4:28:48 AM7/17/22
    to ontolo...@googlegroups.com
    Mihai,

    MN: "But since you are missing the point that the non-living and the living are of a different nature, quoting a quote about Bernstein is not the answer."
    My question is still the same: is a bacteriophage living or not?
    We have things with embedded ideality. Is a bacteriophage one of them?
    They use all the power of math and engineering in biology, with full understanding that `the non-living and the living obey different laws` :-)
    When a protein gets its three-dimensional form, does it move as a living or non-living thing?
    The more natures the better: what is the nature of non-living, and what is the nature of living? Do these differ from the nature of mind?
    Is this knowledge Pythagorean by nature?

    Romeo: "Tybalt, the nature of my feelings for you is all in the word you are my brother." 

    Alex

    Sun, Jul 17, 2022 at 01:05, Nadin, Mihai <na...@utdallas.edu>:

    Alex Shkotin

    unread,
    Jul 17, 2022, 4:31:50 AM7/17/22
    to ontolo...@googlegroups.com

    Sun, Jul 17, 2022 at 01:05, Nadin, Mihai <na...@utdallas.edu>:

    Arshavsky:

    alex.shkotin

    unread,
    Jul 17, 2022, 5:53:36 AM7/17/22
    to ontolog-forum
    Mihai, thank you! This is the latest news from a biologist: "We can say that life is the interaction of many molecular keys with locks. Science has known about this since the 1950s, but it was extremely difficult to determine the three-dimensional structure of a protein."
    Sunday, July 17, 2022 at 00:59:59 UTC+3, Mihai Nadin:

    Nadin, Mihai

    unread,
    Jul 17, 2022, 11:33:50 AM7/17/22
    to ontolo...@googlegroups.com

    Worth reading:

    "A Mathematician's View of the Unreasonable Ineffectiveness of Mathematics in Biology," by Alexandre Borovik.

    To your question: My question is still the same: is a bacteriophage living or not?

    The short answer: NO! All bacteriophages are composed of a nucleic acid molecule surrounded by a protein structure.

    The rest of your text does NOT come through coherently.

    Final note: my own view (if you care to know or to read what I published about it): the living is G-complex, i.e. undecidable; the non-living is decidable (can be fully and consistently described).

    MN

    Jack Park

    unread,
    Jul 17, 2022, 11:38:42 AM7/17/22
    to ontolog-forum

    Ravi Sharma

    unread,
    Jul 17, 2022, 3:02:05 PM7/17/22
    to ontolog-forum
    Bacteriophages are like disaster viruses (pandemics): they have molecular locks, or an affinity to life.
    Is life in its primitive form like these, as in thermal vents, and does it become intelligent as cellular complexes evolve or form?
    From the days of Rashevsky to this referenced article on how complex MATH THINKING can be, it does remind me of personal exchanges with Prof. Wigner, who, being a chemical engineer, wrote on group theory in Quantum Mechanics.
    Whether AI or not, life and knowledge are complex and can be partly treated with math and tools, but holistic solutions are like trying to understand the unification of all types of forces in physics!
    Thanks.
    Ravi
    (Dr. Ravi Sharma, Ph.D. USA)
    Chair, Ontology Summit 2022
    Senior Enterprise Architect
    Particle and Space Physicist
    Elk Grove CA


    Alex Shkotin

    unread,
    Jul 18, 2022, 4:43:21 AM7/18/22
    to ontolo...@googlegroups.com
    Dear Mihai Nadin,

    Thank you for your answer! It did not take too long: just half a year from my first request [1] to get it.

    Alex


    Sun, Jul 17, 2022 at 18:33, Nadin, Mihai <na...@utdallas.edu>:

    Alex Shkotin

    unread,
    Jul 18, 2022, 4:57:25 AM7/18/22
    to ontolo...@googlegroups.com
    💪

    Sun, Jul 17, 2022 at 18:38, Jack Park <jack...@gmail.com>:

    Alex Shkotin

    unread,
    Jul 18, 2022, 6:01:27 AM7/18/22
    to ontolo...@googlegroups.com
    Ravi, it's amazing to me that relatively few types of nuclei are involved in biological processes. I mean the nucleus in the physical sense of the word, as an accumulation of protons and neutrons. For example, there are no metals in the body. And at the same time, all these nuclei are absolutely stable. An organism is created, grows, dies, decays, and there is not a single physical nuclear reaction! Only recombination.
    I think someday we will get a description of the configuration of the nuclei/atoms/molecules of an amoeba with such a level of accuracy that we will be able to say: if you arrange the nuclei like this in space, inject electrons to charge neutrality, and release it, it will come to life :-)
    Well, maybe some heating would be needed.
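    To make the "charge neutrality" remark concrete, here is a toy calculation of my own (the composition of the "patch" is entirely made up): the number of electrons needed is just the total number of protons in all the nuclei.

    ```python
    # Protons per nucleus for a few elements common in organisms.
    ATOMIC_NUMBER = {"H": 1, "C": 6, "N": 7, "O": 8, "P": 15, "S": 16}

    def electrons_for_neutrality(nucleus_counts):
        """Total electrons needed so the assembled nuclei are electrically neutral."""
        return sum(ATOMIC_NUMBER[el] * n for el, n in nucleus_counts.items())

    # Hypothetical (made-up) composition of a tiny patch of cytoplasm:
    patch = {"H": 7000, "O": 3500, "C": 600, "N": 150, "P": 20, "S": 10}
    print(electrons_for_neutrality(patch), "electrons for charge neutrality")
    ```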

    Alex

    Sun, Jul 17, 2022 at 22:02, Ravi Sharma <drravi...@gmail.com>:

    jsi...@measures.org

    unread,
    Jul 26, 2022, 6:15:23 PM7/26/22
    to ontolo...@googlegroups.com, Alex Shkotin
    Alex, 

    You wrote 
    it's amazing to me that there are relatively few types of nuclei involved in biological processes.… For example, there are no metals in the body. 

    Iron, for one? SETI recently hosted an excellent talk that disrupts simple categorical boundaries based on ‘kinds of stuff’ rather than organizing dynamics:

    "Got Metal? Life and Metals, the unexpected connection"

    Ravi Sharma

    unread,
    Jul 26, 2022, 8:57:14 PM7/26/22
    to ontolo...@googlegroups.com
    Janet,
    Thanks!
    Excellent report from the researchers and SETI.
    Obviously metals such as Fe and other trace metals determine life.
    Also the types of stars and planets are revealed by the types of nuclei in their spectral emissions.

    Finally, Alex,
    body may be related to mereology (parts) but life is not only sum of body parts?
    What is that? which goes away when life is no more!
    All elements remain but processes stop.
    Regards

    John F Sowa

    unread,
    Jul 27, 2022, 12:02:49 AM7/27/22
    to ontolo...@googlegroups.com, Alex Shkotin
    Janet,

    That's an interesting reference (below).

    But it's important to remember four very important metals (in addition to iron): sodium, potassium, calcium, and magnesium. An imbalance of those four can lead to death. There are also a few others that are important in small amounts. Zinc is one of the more common.

    One reason why life wouldn't depend critically on many others is that their distribution across the earth is highly uneven. Plants and animals that depended on them couldn't thrive outside of a narrow region.

    Even sodium is rare in regions far from the sea. Animals often travel quite a distance to find a "salt lick". And in ancient times, salt was an expensive commodity for many communities.

    John

    Alex Shkotin

    unread,
    Jul 27, 2022, 4:56:43 AM7/27/22
    to jsi...@measures.org, ontolo...@googlegroups.com
    Janet,

    Thank you very much for the trace elements! On my way to making biological axioms explicit I must mention them.
    Dynamic structures, or structure in motion, is all we have outside the mind :-) And then we split into groups to study different structures in different kinds of motion.
    Where in the organism are the metals? That is the question.
    Let's say I forgot about the 4/2.8 g of iron in the human body: "The body of a healthy adult male contains about 4 g of iron, a woman's about 2.8 g. Most of the iron (about 75%) is found in the hemoglobin of red blood cells, and iron is also part of myoglobin and some enzymes. The remaining 25% of iron is stored in the reticuloendothelial system in the liver, spleen, and bone marrow." [1]
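    That split works out to simple arithmetic; here is my own back-of-the-envelope check of the numbers in the quote.

    ```python
    def iron_split(total_grams, hemoglobin_fraction=0.75):
        """Grams of body iron in hemoglobin vs. in stores (liver, spleen, bone marrow)."""
        in_hemoglobin = total_grams * hemoglobin_fraction
        return in_hemoglobin, total_grams - in_hemoglobin

    for label, total in (("adult male", 4.0), ("adult female", 2.8)):
        hb, stored = iron_split(total)
        print(f"{label}: {hb:.1f} g in hemoglobin, {stored:.1f} g stored")
    ```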

    I'll issue biological axioms as a separate topic.

    Alex


    Wed, Jul 27, 2022 at 01:15, <jsi...@measures.org>:

    Alex Shkotin

    unread,
    Jul 27, 2022, 5:15:06 AM7/27/22
    to ontolo...@googlegroups.com
    Ravi,

    My point is to find out the structure of theoretical knowledge in different sciences and technologies. One of the exciting sciences is biology. But I do not put forward any ideas of my own in this study. Let's collect, organize, and clarify the knowledge produced by biologists. The only procedure I would like to help with is knowledge formalization :-)
    My way of answering your questions is to ask myself what kind of science, technology or theology we need to get a professional answer.
    But if you need my unprofessional answer, here we are.
    "body may be related to mereology (parts) but life is not only sum of body parts?" 
    Of course not. But I need to check with surgeons.
    "What is that? which goes away when life is no more! All elements remain but processes stop." 
    Soul, Sir.

    Alex

    Wed, Jul 27, 2022 at 03:57, Ravi Sharma <drravi...@gmail.com>:

    Alex Shkotin

    unread,
    Jul 27, 2022, 5:20:58 AM7/27/22
    to so...@bestweb.net, ontolo...@googlegroups.com
    John, 

    thanks! Sodium is certainly a metal necessary for health in tangible amounts.

    Alex

    Wed, Jul 27, 2022 at 07:02, John F Sowa <so...@bestweb.net>:

    Ravi Sharma

    unread,
    Jul 27, 2022, 8:10:19 PM7/27/22
    to ontolog-forum
    Alex
    Agreed; therefore humankind has devoted thousands of years to studying the nature of the Soul and its connection to life, and recently also the mind-body connection!
    In a study I am exploring how they have described consciousness and its origin, even in as-yet inanimate objects!
    More as I find out about the role of consciousness and its relation to the Soul.
    Thanks.
    Ravi
    (Dr. Ravi Sharma, Ph.D. USA)
    Chair, Ontology Summit 2022
    Senior Enterprise Architect
    Particle and Space Physicist
    Elk Grove CA

    Alex Shkotin

    unread,
    Jul 28, 2022, 4:09:10 AM7/28/22
    to ontolo...@googlegroups.com
    Ravi,

    Not my topic, but P. Werbos [1] may be interesting for you.
    Anyway, there is psychology. They know a lot.
    In a simple way, I think that the nature of the soul is unknowable, but this does not mean that souls themselves do not need to be studied and even treated.

    Alex

    [1] From physics and quantum technology to AI and soul: a unified mathematical view https://youtu.be/jfHBO_uuRyE

    Thu, Jul 28, 2022 at 03:10, Ravi Sharma <drravi...@gmail.com>:

    Ravi Sharma

    unread,
    Jul 28, 2022, 6:47:08 PM7/28/22
    to ontolog-forum
    Alex
    Thanks, I viewed it somewhat but it was hard to comprehend.

    Thanks.
    Ravi
    (Dr. Ravi Sharma, Ph.D. USA)
    Chair, Ontology Summit 2022
    Senior Enterprise Architect
    Particle and Space Physicist
    Elk Grove CA

    alex.shkotin

    unread,
    Jul 17, 2023, 6:59:09 AM7/17/23
    to ontolog-forum
    We had a long discussion during the year. So let me just add a new reference:
    Wednesday, July 6, 2022 at 14:58:04 UTC+3, alex.shkotin:
