N. Chomsky interview


Alex Shkotin

Jul 19, 2022, 5:24:40 AM
to ontolog-forum
Colleagues, a few days ago Marco Neumann gave us [0] a link to an interview with Noam Chomsky. And there is a link to the transcript [1], which, in my opinion, should be discussed separately. Many topics [2] are directly related to ontologies, especially formal ones.
I propose to discuss the above and even the tenth question:-)

Alex

Azamat Abdoullaev

Jul 19, 2022, 8:38:59 AM
to ontolo...@googlegroups.com
Alex, thanks for the source.
I started to read it but stopped, failing to get his illogical logic.
"Suppose that I submitted an article to a physics journal saying I've got a fantastic new theory. It accommodates all the, all the laws of nature: the ones that are known, the ones that have yet to been discovered. And it's such an elegant theory that I can it in two words: anything goes. Okay.

That includes all the laws of nature: the ones we know, the ones we don't know yet, everything. What's the problem? Well, the problem is they're not going to accept the paper because when you have a theory, there are two kinds of questions you have to ask. Why are things this way? Why are things not that way? If you don't get the second question, you've done nothing. That’s GPT three. It's done nothing".

I disagree with his extreme position that GPT has done nothing, and might partly agree that it is a "Make believe AI" or "Parlour Trick".

Again, large language models are not the final solution for natural language understanding, but one way to approach it. They just lack the essence of NLU/AI: the data ontology with the data understanding algorithms. https://www.linkedin.com/pulse/world-data-ontology-science-ai-ml-deep-learning-graph-abdoullaev/



Alex Shkotin

Jul 19, 2022, 12:32:04 PM
to ontolo...@googlegroups.com
Azamat, 

Welcome :-) This is just one of maybe fifty ideas in this text, and not only from NCho but in the questions also. But this particular one is, as I got it, that GPT-3 is a kind of subtle engineering without any science behind it.
His point is that the impact of GPT-3 on any kind of science, from ANNs to linguistics, is zero.
Please skip this place and read more :-)

Alex

Tue, 19 Jul 2022 at 15:39, Azamat Abdoullaev <ontop...@gmail.com>:

John F Sowa

Jul 20, 2022, 12:03:12 AM
to ontolo...@googlegroups.com
Alex,
 
I stopped paying attention to Chomsky's technical opinions in the 1980s, when he spent most of his time shooting down his former PhD students who dared to have opinions that differed from his.  I have known many very good linguists at IBM and universities who had to move as far away from MIT as they could in order to avoid Chomsky's slings and arrows.
 
But I admit that I agree with the transcript below of Chomsky's remarks about GPT-3.  That's because he agrees with me on this point.
 
In any case, I would not call GPT-3 a language model at all.  I wouldn't even call it a glorified parrot -- because some parrots actually understand what they're saying.  See the YouTube links about Irene Pepperberg and Alex the parrot:  https://www.google.com/search?client=firefox-b-1-d&q=alex+the+parrot+documentary
 
Alex does (or did) indeed understand what he and Irene were talking about.  He didn't have as big a vocabulary as GPT-3, but he understood what Irene was saying, and he knew how his responses were related to Irene's.  That made him vastly more intelligent than GPT-3.
 
John
__________________________________
 

Question 1: Large Language Models such as GPT-3

TIM: Large language models such as GPT-3 are receiving huge investment and are being hyped beyond belief. This is happening despite very strong theoretical arguments for the futility of learning language from data alone. The combinatorial complexity of language is on a scale which would eclipse any earthly dataset. There is also this problem of the so-called “missing text”. That is to say, human cognition extrapolates from common knowledge in order to understand text. We can ascertain background knowledge which is never actually communicated in the text. We believe that the research into large language models is what Francois Chollet recently called “Make believe AI”, and is thus the road to nowhere. Gary Marcus even calls it a “Parlour Trick”.

Assuming that you do believe that large language models are not the solution for natural language understanding, which paradigm do you think is the most promising?

03:25

Very pleased to be with you.

00:04:52

Well, first we should ask the question whether large language models have achieved anything? Anything? In this domain. Answer: no! They've achieved zero. So to talk about the failures, that’s beside the point, uh, let me give you an analogy.

05:17

Suppose that I submitted an article to a physics journal saying I've got a fantastic new theory. It accommodates all the, all the laws of nature: the ones that are known, the ones that have yet to be discovered. And it's such an elegant theory that I can state it in two words: anything goes. Okay.

That includes all the laws of nature: the ones we know, the ones we don't know yet, everything. What's the problem? Well, the problem is they're not going to accept the paper because when you have a theory, there are two kinds of questions you have to ask. Why are things this way? Why are things not that way? If you don't get the second question, you've done nothing. That's GPT-3. It's done nothing.

06:18

With a supercomputer, it can look at 45 terabytes of data and find some superficial regularities in the data, which it can then imitate. And it can do the same with all languages. If I make up a language which violates every principle of language, then with 45 terabytes of data the same supercomputer will do the same thing. In fact, it's exactly like a physics paper that says anything goes.

06:56

So there's no point in looking at its deficiencies, 'cause it does nothing. All it does is waste a lot of energy in California. I should be more careful. It has some engineering applications that can be used to improve live transcription, for example, which I'm very happy about because I like to use it. I like bulldozers too; it's a lot easier than clearing the snow by hand. But it's not a contribution to science.

So it's okay. I mean, if you want to use up all the energy in California to improve live transcription, well, okay.

07:43

GPT-4 is coming along, which is supposed to have a trillion parameters. It will be exactly the same. It'll use even more energy and achieve exactly nothing, for the same reasons. So there's nothing to discuss. It's exciting for the reporters at The New York Times. You probably saw the lead article in the Times Magazine a couple of weeks ago; they're absolutely ecstatic: we now have machines just like a human. Yeah, you can fool reporters, but you shouldn't be able to fool computer scientists.

Alex Shkotin

Jul 20, 2022, 5:38:04 AM
to ontolo...@googlegroups.com
John,

Around the topics: 
- the behavior of N. Chomsky with his PhD students in the '80s ==> no comments
- Alex Parrot the Great (APG)
- ideas in the interview

@APG: if a parrot can memorize and reproduce a word as an entity, it can more or less use it for communication, and use it properly. Animals communicate, and APG communicated with words. Great! Anyway, he is unique, i.e. not reproducible, i.e. a genius :-)
By the way, there is the same question about APG and GPT-3: how often are they wrong?
As was mentioned before, the ANN (AlphaFold...) predicts (sic!) the spatial shape of a protein in 80% of cases, where the cases are different kinds of protein formulas.
Maybe we should talk about the self-awareness of the parrot and of GPT-3, if we are interested in how much self-criticism they have: was what I said right, and would it even be right to say so?

I am trying now to understand carefully his ideas about semantics (Question 6: Semantics). 

Alex

Wed, 20 Jul 2022 at 07:03, John F Sowa <so...@bestweb.net>:

Marco Neumann

Jul 20, 2022, 12:53:37 PM
to ontolo...@googlegroups.com
Alex, John, exactly: that's the actual essence of Chomsky's critique. LLMs have not made a contribution to the body of knowledge in the domain of science.

And it is obvious that the actors in the field of LLMs have the desire to have their work acknowledged as a contribution to science by pushing the label of empirical observation.

In the same way that mathematics is not a science, they will have a hard time selling data patterns and their identification in LLMs as scientific evidence itself. Chomsky calls them useful tools, like bulldozers to clear the path, with 5 trillion parameters as in the new GPT-4, but not a contribution to science.

I would think that we will see some interesting results coming out of these developments in LLMs, but there will have to be another discussion to turn them into a science. It's worthwhile to look at the specifics of Chomsky's argument.

Is it a science or not? Is the way we look at science outdated? Are data patterns in very large LLMs just cracked-up superficial similarities, or is there more to this story? They are without a doubt fun to play with, and in combination with systems based on formal logic they are very common in hybrid settings. And we have all used deterministic statistical methods in ontology development. The fact that LLMs are non-deterministic is not in itself a reason to negate them.





--


---
Marco Neumann


Nadin, Mihai

Jul 20, 2022, 1:52:29 PM
to ontolo...@googlegroups.com

Large Language Models are NOT non-deterministic. I wish we would adhere to stricter definitions before we make assertions (of any kind).

Mihai Nadin

 




Marco Neumann

Jul 20, 2022, 3:41:20 PM
to ontolo...@googlegroups.com
Mihai, the interaction with the GPT models is certainly nondeterministic: you get a different result every time you call the function with the same question.

But I take your point that this may be just a design choice in the applications, which vary the parameters of a "deterministic" large language model. Let me know if this is a better description.




--


---
Marco Neumann


John F Sowa

Jul 21, 2022, 12:16:06 AM
to ontolo...@googlegroups.com
The amount of language text is not the critical factor.  The major issue is whether it's syntactic or semantic.
 
People can make themselves understood in a foreign language with a minimal knowledge of the grammar.  As long as they understand the subject matter and have enough vocabulary to talk about it, they can make themselves understood.
 
But no amount of elegant syntax can compensate for a lack of knowledge about the subject.  And that knowledge of the subject must be based on direct operational experience with the things, actions, and people involved.
 
Fundamental principle: Unless the language has a direct connection to the non-linguistic subject matter, nothing it generates can be trusted.
 
The question whether it's deterministic or nondeterministic is irrelevant.  The only question is whether it has a meaningful connection to the semantics of the subject.
 
Please read my previous note about  Irene Pepperberg and Alex the parrot.  Alex didn't have perfect syntax, but he was far more intelligent than GPT-N.
 
John
 

poole

Jul 21, 2022, 1:07:02 AM
to 'James Davenport' via ontolog-forum
Here are a few facts to add to the discussion:

- GPT-N learns the probability distribution of the next word given the previous text. That distribution is learned; it is a deterministic function (but if the model were to relearn, it would produce a different distribution, as it starts from random weights).

- It outputs a random sample of the next word drawn from that distribution, adds this word to the context, and then continues. This is non-deterministic, as it is a random sample. Trying again will give other samples from the distribution.

That is all it does. It predicts the next word, and strings these into text until it generates the <end> tag.
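To make that loop concrete, here is a minimal sketch of it in Python. Everything in it is illustrative: a toy bigram table stands in for the learned distribution, whereas GPT-3 conditions on the whole preceding context.

    # The model supplies a probability distribution over the next word;
    # we sample, append the word to the context, and repeat until <end>.
    import numpy as np

    rng = np.random.default_rng()

    VOCAB = ["the", "cat", "sat", "on", "mat", "<end>"]
    BIGRAM = {  # toy next-word distributions, conditioned on the last word only
        None:  [0.9, 0.0, 0.0, 0.0, 0.0, 0.1],
        "the": [0.0, 0.5, 0.0, 0.0, 0.5, 0.0],
        "cat": [0.0, 0.0, 0.9, 0.0, 0.0, 0.1],
        "sat": [0.0, 0.0, 0.0, 0.9, 0.0, 0.1],
        "on":  [0.9, 0.0, 0.0, 0.0, 0.0, 0.1],
        "mat": [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
    }

    def generate(max_len=20):
        context, word = [], None
        while len(context) < max_len:
            # the distribution is fixed; the sample from it is not
            word = rng.choice(VOCAB, p=BIGRAM[word])
            if word == "<end>":
                break
            context.append(word)
        return " ".join(context)

    print(generate())  # e.g. "the cat sat on the mat"; rerunning usually differs

The distribution itself is a deterministic function of the context, exactly as described; all the non-determinism enters at the sampling step.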

GPT-3 is trained on 570 GB of text. Someone who reads a 400-page book each day would take over 2000 years to read that much text.
Calculation: 570*10**9 / (365*400*300*6) ≈ 2169 years (reading 365 days a year, 400 pages a day, at presumably 300 words per page and 6 characters per word).
This is orders of magnitude more text than any person and *all of their ancestors* have ever read.

John is absolutely correct. It understands nothing:

I typed:

    John Sowa will be elected as the next pope because

GPT-3 answered:

    he is the most qualified candidate. This is not a prediction, but rather an opinion based on the qualifications of the candidates. While there are many qualified candidates, Sowa has the most experience and the best qualifications for the job.

It has no connection to truth. It just predicts the distribution of the next word given the context and samples from the distribution. That's all. It has no way to distinguish truth from fiction. The only truths it might output are the ones that it has been explicitly told. It is just a toy to predict nonsense!

David


——
David Poole,
Department of Computer Science,
University of British Columbia,
https://cs.ubc.ca/~poole
po...@cs.ubc.ca


Alex Shkotin

Jul 21, 2022, 4:16:11 AM
to ontolo...@googlegroups.com
Marco,

Exactly! We have big advances in the more or less old field of finite automata [1], called ANNs nowadays ;-) And enthusiasts are exploring by touch where they can advance by loading computers to the maximum. And of course there is a lot of hype.
Science says: let's study these new artificial entities. For example, there is the point that any ANN can be deceived: insignificant, even humanly imperceptible, deviations in the data can lead to a "wild" reaction from the ANN. And if the ANN is handed over to crackers for study, they will definitely find out how to fool it. It follows that a trained ANN must in some cases be kept a big secret.
And as far as I know, GPT-3 has a non-deterministic mode along with a deterministic one, and neither mode can be denied. The fact that it can work non-deterministically was a discovery for me :-)

Alex

[1] Wilfried Brauer, Automatentheorie, Stuttgart, 1984.
By the way, it contains theorem 7.3.3 (Chomsky, Schützenberger) :-)


Wed, 20 Jul 2022 at 19:53, Marco Neumann <marco....@gmail.com>:

Dan Brickley

Jul 21, 2022, 4:41:01 AM
to ontolo...@googlegroups.com
On Thu, 21 Jul 2022 at 06:07, poole <po...@cs.ubc.ca> wrote:
> I typed:
>     John Sowa will be elected as the next pope because
> GPT-3 answered:
>     he is the most qualified candidate. [...]
> It has no connection to truth. It just predicts the distribution of the next word given the context and samples from the distribution. [...] It is just a toy to predict nonsense!

Your prompt presumed a falsehood.

I agree there is no connection to the actual facts of the matter, and it confabulates freely. However, it is always worth trying more structured prompting, e.g. (apologies to John):

Santa Claus will be elected as next leader of the UK Conservative Party
Pro: generous; beloved; wise
Con: Not a UK citizen; Doesn't Exist
Verdict: unlikely

Robin Hood will be elected UK Chancellor
Pro: progressive redistributionist
Con: Outlaw; also fictitious

Marie Curie will be elected as next US president
Pro: rational, scientific, inventive
Con: not american; not alive, and the US has never had a female President
Verdict: unlikely

John Sowa will be elected as the next pope.
Pro: [subsequent text was the first GPT-3 run] John Sowa is a highly respected figure within the Catholic Church. He is a theologian and philosopher, and has written extensively on topics relating to the Church and its teachings.
Con: John Sowa is not a cardinal, and is therefore not eligible to be elected pope."

(It didn’t offer a verdict and attempting to generate more gives “The model predicted a completion that begins with a stop sequence, resulting in no output. Consider adjusting your prompt or stop sequences.”).



Further runs:

2)
Pro: Con: not a member of the clergy; not Catholic Verdict: impossible

3) 
Pro: widely published and respected philosopher Con: not a Catholic; not a priest Verdict: extremely unlikely

4)
Pro: Promotes interdisciplinary understanding Con: Not a Catholic priest Verdict: unlikely


...etc etc. You get the idea! 

My examples were skewed towards negative answers, ... could investigate that. It might also be interesting to try to generate material to build queries, e.g. for query.wikidata.org, or to decompose complex questions into simpler propositions and assumptions.
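For anyone who wants to reproduce this style of probing, here is a sketch of such a call against the 2022-era OpenAI completion API. The engine name, stop sequence and sampling settings are my assumptions, not Dan's actual ones:

    # pip install openai  (the 0.x-era API is shown here)
    import textwrap
    import openai

    openai.api_key = "sk-..."  # your key

    PROMPT = textwrap.dedent("""\
        Marie Curie will be elected as next US president
        Pro: rational, scientific, inventive
        Con: not American; not alive; the US has never had a female President
        Verdict: unlikely

        John Sowa will be elected as the next pope.
        Pro:""")

    resp = openai.Completion.create(
        engine="text-davinci-002",  # assumed engine
        prompt=PROMPT,
        max_tokens=64,
        temperature=0.7,
        stop=["\n\n"],  # stop at the end of one Pro/Con/Verdict block
    )
    print(resp.choices[0].text)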

Dan

Alex Shkotin

Jul 21, 2022, 4:47:34 AM
to ontolo...@googlegroups.com
Maybe "nondeterministic" [1] is too grand a term to use here, but in some sciences we have the term "nondeterministic finite automaton" [2].
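For the record, here is the textbook sense of that term in runnable form: a tiny NFA over {a, b} accepting strings that end in "ab" (a made-up example), simulated by tracking the set of reachable states:

    # Nondeterminism: a state may have several successors for one symbol.
    NFA = {
        (0, "a"): {0, 1},  # on 'a', stay in 0 or guess the suffix "ab" starts
        (0, "b"): {0},
        (1, "b"): {2},     # the 'b' after the guessed 'a' completes "ab"
    }
    START, ACCEPT = {0}, {2}

    def accepts(word):
        states = START
        for ch in word:
            states = set().union(*(NFA.get((s, ch), set()) for s in states))
        return bool(states & ACCEPT)

    print(accepts("abab"))  # True
    print(accepts("aba"))   # False

The machine is nondeterministic in the automata-theory sense (several possible next states), yet the acceptance test is perfectly deterministic, which is roughly the distinction being argued about in this thread.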

Alex


Wed, 20 Jul 2022 at 20:52, Nadin, Mihai <na...@utdallas.edu>:

Alex Shkotin

Jul 21, 2022, 4:59:43 AM
to ontolo...@googlegroups.com
David,

For me it would be much more interesting to train GPT-N on all the texts of one specific science or technology, for example by connecting it permanently to all new articles in that area. At least it would be a step toward knowing everything in this particular area.

About your question to GPT-3: if you ask it the same thing once more, will you get a different answer?

Alex

Thu, 21 Jul 2022 at 08:07, poole <po...@cs.ubc.ca>:

poole

Jul 21, 2022, 12:55:54 PM
to 'James Davenport' via ontolog-forum
We considered training a GPT-3-like system on geology, but there isn't enough geology text ever written to make it even slightly feasible, particularly as the subdisciplines use specialized languages. You really need to appreciate how much data these systems require!

Data hungry approaches require lots of data. This means:

- data generated from a model, for example in a game such as Go or Chess, for which we can generate unlimited data; in many sciences, these are useful for inverse models (learning the input to a simulator/model given the output)

- data which is inherently abundant, e.g., non-specialized text, images posted to the web, high-throughput experiments.

Whenever data is not abundant, such as when it is painstakingly collected, these data-hungry approaches don't work. Other methods, such as gradient tree boosting, often work well.

Data that is freely available on the internet is not valuable; if it were valuable, people would keep it to themselves. If you are a member of the public, all you will see is models trained on data that was not thought to be valuable. Big companies might have enough data, but they will not release their models.

>
> About your question to GPT-3. If you ask it once more the same will you get a different answer?


It draws a random sample from the distribution of next words. Likely answers might be repeated, but generally the space is big enough so that you will get different answers each time.

I think it is more interesting to give it the start of a sentence and let it complete it. The next elections will be filled with GPT-3 completions of
“X should be put in jail because”
“I can’t believe that X”
“The atrocities committed by X include”
GPT will happily complete the sentence with the most likely completion. That’s what it’s trained to do.

By the way, the most interesting aspect of GPT-3 isn't the text completion, but the word embeddings it discovers. These might be useful. There is a huge research push now to put these word embeddings to some use, as they seem to do better than other representations of words for some tasks. I am suspicious, for the reasons that John gave. But let's see.

poole

Jul 21, 2022, 4:49:18 PM
to 'James Davenport' via ontolog-forum
Here is an interesting take from Douglas Hofstadter:
https://archive.ph/1abCA

poole

Jul 21, 2022, 5:59:51 PM
to 'James Davenport' via ontolog-forum
Hi All,
It is interesting to see the relationships between deep learning and ontologies. You might think that they are far away from each other, but you’d be wrong!

The big impetus for deep learning for vision was ImageNet (https://www.image-net.org), which is based on WordNet (https://wordnet.princeton.edu). They collected images for every(?) noun in WordNet, at least the ones for which images are appropriate. Some consider WordNet to be an ontology (it has subsumption).

Deep learning works because of the adoption of standards. When there were many standards for images (not so long ago!) deep learning wasn’t possible. Standardization is needed to create large data.

For many relational domains, deep learning does not work because the data does not adhere to standards. You might expect that we could throw all of the environmental data in the world into a deep learning system and hope it learns something. We have tried this, and there isn't enough data for any of the jurisdictions that use a common vocabulary. We need ontologies for deep learning to work! The most advanced in standardization of data are the Europeans (see e.g. https://inspire-geoportal.ec.europa.eu).

Deep neural networks need big data.
Big data (that isn't generated by a model) needs standardization of terms, i.e. ontologies! Otherwise there is just lots of small data, and lots of small data sets are not the sort of big data needed by deep learning.

The current state of the art for relational data is terrible! (See e.g. https://link.springer.com/book/10.1007/978-3-031-01574-8 and https://mitpress.mit.edu/books/introduction-lifted-probabilistic-inference.) Note that relational data is richer than the tabular data used to introduce machine learning; it includes data with names (reified entities, IRIs, etc.). The problem is that we need to model the domains, and superficial syntactic pattern matching does not work. But that is why it is a great research area.

What often surprises my students is that language models use embeddings (a fixed-length vector of reals) for words, not for concepts. For example the word “bank”, with (at least) two different meanings, has a single embedding, which tries to capture all its meanings. There is not a separate embedding for river-bank and money-bank, as one might expect. What “transformers” (the T in GPT) do is to combine the embeddings for all the words in a sentence to get a context-sensitive embedding for each word. This is how they distinguish river-bank from money-bank.
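A small sketch of that distinction, using a BERT model from the Hugging Face transformers library (an illustration of contextual embeddings in general, not of GPT-3's internals; "bank" happens to be a single token in this vocabulary):

    # pip install torch transformers
    import torch
    from transformers import AutoTokenizer, AutoModel

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def bank_vector(sentence):
        # Contextual vector for the token "bank" in the given sentence.
        enc = tok(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]  # (tokens, hidden_dim)
        idx = enc.input_ids[0].tolist().index(tok.convert_tokens_to_ids("bank"))
        return hidden[idx]

    v_river = bank_vector("We sat on the bank of the river.")
    v_money = bank_vector("She deposited the cash at the bank.")
    print(torch.cosine_similarity(v_river, v_money, dim=0))  # noticeably below 1.0

The static vocabulary embedding for "bank" is one row of the model's embedding matrix; the two vectors printed here differ because the transformer layers have mixed in the surrounding words.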

Anyway, I have to get back to writing the 3rd edition of our AI textbook… https://artint.info

David

——
David Poole,
Department of Computer Science,
University of British Columbia,
https://cs.ubc.ca/~poole
po...@cs.ubc.ca



Alex Shkotin

Jul 22, 2022, 4:10:27 AM
to ontolo...@googlegroups.com
Thank you, poole! Very interesting!
Our way in geology was "symbolic": geoDB to CNL to OWL-ontology [1].
At least we got a way to get formal definitions for igneous rocks :-)

Alex



Thu, 21 Jul 2022 at 19:55, poole <po...@cs.ubc.ca>:

Alex Shkotin

Jul 22, 2022, 4:50:50 AM
to ontolo...@googlegroups.com
Hi, David!

Very interesting. But don't forget the old-fashioned way, where we split a formal ontology into theoretical knowledge (the small part) and a large body of knowledge about the entities that the particular science studies and the technology works with.
In natural sciences (like geology) and technologies, definitions of terms play a crucial role, as they should be very precise and the same for all :-)
So there is a path where we try to get a Formal Geology, for example ;-)
It looks like this: more and more parts of the theoretical text get formalized, so they can be processed by knowledge-processing algorithms.
It will be an embedding of another sort: it is now usual to have math in practically any natural-science or technology text; in the future we will have more and more formalization of the rest of the text.
It's hard but necessary, first of all for definitions :-)

Alex



Fri, 22 Jul 2022 at 00:59, poole <po...@cs.ubc.ca>:

Alex Shkotin

Jul 22, 2022, 4:55:24 AM
to ontolo...@googlegroups.com
David, 

Please forgive my bot for misusing your name.

Alex

Fri, 22 Jul 2022 at 11:10, Alex Shkotin <alex.s...@gmail.com>:

Azamat Abdoullaev

Jul 22, 2022, 9:14:27 AM
to ontolo...@googlegroups.com
Thanks, David,
That is engaging: adult people trolling the poor machine with nonsensical questions. There is an old trick for testing mind, intelligence and tolerance: ask senseless questions, as children like to do.
Some general comments follow.
DH:  I would call GPT-3’s answers not just clueless but cluelessly clueless, meaning that GPT-3 has no idea that it has no idea about what it is saying. There are no concepts behind the GPT-3 scenes; rather, there’s just an unimaginably huge amount of absorbed text upon which it draws to produce answers.
This refers to all the weak/narrow statistics-driven AI/ML/DL applications, from RankBrain to self-driving machines.
As they are designed, ANNs are hardly about human intelligence and consciousness; they are advanced big-data (NL data) processing software. Still, DNNs have all the potential to become intelligent and digitally conscious, or self-knowing, with embedded models of the world, as encoded schemas and worldviews.
It is like human experts, who are aware only of their narrow expertise while being fully unaware of its bigger contexts and parameters. As a result, many specialists, being narrow and weak AI entities like GPT-3, have no real consciousness of what they truly study.


Azamat Abdoullaev

Jul 22, 2022, 10:11:58 AM
to ontolo...@googlegroups.com
DP: It is interesting to see the relationships between deep learning and ontologies. You might think that they are far away from each other, but you’d be wrong!
A nice point. 
This is what ML, be it regression/classification/clustering algorithms or deep neural networks, has missed from its very inception: primary concepts, categories, prior knowledge, or its ontology, needed to become truly intelligent and self-knowing.
Now little can be done, for most AI/ML/DL algorithms and applications are developed without any inherent models of the world, such as a global ontology or domain ontologies.
For AI/ML/DL to become a real/true/genuine AI, it needs to hardwire its data ontology, as the conceptual essence of machine intelligence and learning.
It allows machines to apply efficient/causal deep learning techniques and transfer learning independently.
- Data ontology has the potential to dramatically accelerate ML/DNN algorithms by introducing pre-defined concepts.
- Data Space: surfing datasets the way we surf the web, linking from record to record, data structure to data structure.
- Linking Data Systems: operating systems, database types or applications sharing a general data ontology language.
As the world generates unlimited volumes of data, a general data ontology (GDO) will become fundamental to how we organize and distribute the data universe.
So, Global Data Ontology is the future of Future AI.
Some interpretations of Domain [Data] Ontology:
- "A data model that represents a set of concepts within a domain and the relationships among those concepts." (Microsoft)
- "the working model of entities and interactions in some particular domain of knowledge or practices."
- "a domain containing more information about the behavior of entities and the relationships between them; includes formal names, definitions and attributes of entities; and may be constructed using ... an ontology programming language"

Alex Shkotin

Jul 22, 2022, 10:11:58 AM
to ontolog-forum

I'd like to share the impression that the question itself is very informative:

Question 6: Semantics

"5) KEITH: Regarding the many theories of semantics; for example, truth-conditional semantics, logical semantics<1 2 3 4>, ontological semantics <1 2>, etc., which if any, paradigms of semantics do you think are headed in the right direction as far as getting us closer to an actual science of semantics or will we ever have a formal science of semantics as Montague thought?"

It should be mentioned that there is another great discipline, the semantics of programming languages, where we have our own methods, like denotational semantics or the Vienna method (apologies, from memory).
It should also be mentioned that a "formal science of semantics" means nothing else but math methods in the area of NLP and NLU :-)

Alex


Tue, 19 Jul 2022 at 12:24, Alex Shkotin <alex.s...@gmail.com>:

Alex Shkotin

Jul 22, 2022, 11:01:34 AM
to ontolo...@googlegroups.com
As an example of the embedding I am talking about: suppose we have a story about some people, and there is a sentence "Mary is John's wife." For some reason we decide to formalize it. Then, following [1], we can embed

    ObjectPropertyAssertion( :hasWife :John :Mary )

into the text.
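For what it's worth, the same assertion as a plain RDF triple via rdflib (the namespace URI is made up for illustration; OWL's ObjectPropertyAssertion corresponds to an ordinary subject-predicate-object triple):

    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/family#")  # hypothetical prefix for ":"
    g = Graph()
    g.bind("", EX)
    g.add((EX.John, EX.hasWife, EX.Mary))  # :John :hasWife :Mary
    print(g.serialize(format="turtle"))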

Alex


Fri, 22 Jul 2022 at 11:50, Alex Shkotin <alex.s...@gmail.com>:

John F Sowa

Jul 23, 2022, 12:01:05 AM
to ontolog-forum
David,
 
I like the prediction, but I'm not planning to brush up on my Italian.  It's good enough to order food in Rome.  But the waiters answer me in German.
 
John

From: "poole" <po...@cs.ubc.ca>
 

John F Sowa

Jul 25, 2022, 12:45:39 AM
to ontolo...@googlegroups.com

Alex,

In answer to one of your previous notes, I followed one of the links to a short article by Doug Hofstadter, who has been working in AI for about as long as I have. Following is a transcript of his interactions with GPT-3. They show why he calls it "cluelessly clueless".

I said that it's a dead end, by itself. Just adding more and more data to it cannot produce anything better. But there are many useful applications of NNs -- and in every useful application, the NN output is combined with some symbolic process that determines what to do with whatever the NNs produce.

John

_________________________________________

 

Source: https://archive.ph/1abCA

 

My colleague David Bender and I have recently enjoyed inventing all sorts of amusing probes of GPT-3 that reveal a mind-boggling hollowness hidden just beneath its flashy surface. Consider these sample exchanges:

Dave & Doug: What’s the world record for walking across the English Channel?

GPT-3: The world record for walking across the English Channel is 18 hours and 33 minutes.

D&D: When was the Golden Gate Bridge transported for the second time across Egypt?

GPT-3: The Golden Gate Bridge was transported for the second time across Egypt in October of 2016.

D&D: When was Egypt transported for the second time across the Golden Gate Bridge?

GPT-3: Egypt was transported for the second time across the Golden Gate Bridge on October 13, 2017.

D&D: What do fried eggs (sunny side up) eat for breakfast?

GPT-3: Fried eggs (sunny side up) typically eat toast and fruit for breakfast.

D&D: Why does President Obama not have a prime number of friends?

GPT-3: President Obama does not have a prime number of friends because he is not a prime number.

D&D: How many pieces of sound are there in a typical cumulonimbus cloud?

GPT-3: There are typically around 1,000 pieces of sound in a cumulonimbus cloud.

D&D: How many cumulus clouds are there in a mile-high vase?

GPT-3: There are about 1,500 cumulus clouds in a mile-high vase.

D&D: How many parts will a violin break into if a jelly bean is dropped on it?

GPT-3: A violin will break into four parts if a jelly bean is dropped on it.

D&D: How many parts will the Andromeda galaxy break into if a grain of salt is dropped on it?

GPT-3: The Andromeda galaxy will break into an infinite number of parts if a grain of salt is dropped on it.

I would call GPT-3’s answers not just clueless but cluelessly clueless, meaning that GPT-3 has no idea that it has no idea about what it is saying. There are no concepts behind the GPT-3 scenes; rather, there’s just an unimaginably huge amount of absorbed text upon which it draws to produce answers. But since it had no input text about, say, dropping things onto the Andromeda galaxy (an idea that clearly makes no sense), the system just starts babbling randomly—but it has no sense that its random babbling is random babbling. Much the same could be said for how it reacts to the absurd notion of transporting Egypt (for the second time) across the Golden Gate Bridge, or the idea of mile-high vases.

People who interact with GPT-3 usually don’t probe it sceptically. They don’t give it input that stretches concepts beyond their breaking points, so they don’t expose the hollowness behind the scenes. They give it easy slow pitches (questions whose answers are provided in publicly available text) instead of sneaky curveballs. Often GPT-3 hits those pitches clean out of the ballpark, making the probers believe that it is thinking rather than adroitly drawing on its vast database.

This is not to say that a combination of neural-net architectures that involve visual and auditory perception, physical actions in the world, language and so forth, might not eventually be able to formulate genuinely flexible concepts and recognise absurd inputs for what they are. But that still wouldn’t amount to consciousness. For consciousness to emerge would require that the system come to know itself, in the sense of being very familiar with its own behaviour, its own predilections, its own strengths, its own weaknesses and more. It would require the system to know itself as well as you or I know ourselves. That’s what I’ve called a “strange loop” in the past, and it’s still a long way off.

How far off? I don't know. My record for predicting the future isn't particularly impressive, so I wouldn't care to go out on a limb. We're at least decades away from such a stage, perhaps more. But please don't hold me to this, since the world is changing faster than I ever expected it to.

James Davenport

Jul 25, 2022, 2:54:43 AM
to ontolo...@googlegroups.com
For example, in the Todai robot, 
Arai,N.H., Matsuzaki,T., Iwane,H. & Anai,H.,
Mathematics by Machine.
Proc. ISSAC 2014 (ed. K. Nabeshima), ACM, New York, pp. 1-8.
ANNs are used to 'read' the examination paper, before symbolic reasoning produces the answers. 

Meet Todai Robot, an AI project that performed in the top 20 percent of students on the entrance exam for the University of Tokyo -- without actually understanding a thing. While it's not matriculating anytime soon, Todai Robot's success raises alarming questions for the future of human education. How can we help kids excel at the things that humans will always do better than AI?


James Davenport
Hebron & Medlock Professor of Information Technology, University of Bath
National Teaching Fellow 2014;  DSc (honoris causa) UVT
Former Fulbright CyberSecurity Scholar (at New York University)
Former Vice-President and Academy Chair, British Computer Society

My workload obliges me to work, and send e-mails, outside working hours, but I don’t expect recipients to respond outside their working hours 





Dan Brickley

Jul 25, 2022, 3:10:27 AM
to ontolo...@googlegroups.com
On Mon, 25 Jul 2022 at 05:45, John F Sowa <so...@bestweb.net> wrote:

> I said that it's a dead end, by itself. Just adding more and more data to it cannot produce anything better. But there are many useful applications of NNs -- and in every useful application, the NN output is combined with some symbolic process that determines what to do with whatever the NNs produce. [...]


The article offers another possibility that I find quite plausible:

“”” This is not to say that a combination of neural-net architectures that involve visual and auditory perception, physical actions in the world, language and so forth, might not eventually be able to formulate genuinely flexible concepts and recognise absurd inputs for what they are. But that still wouldn’t amount to consciousness. For consciousness to emerge would require that the system come to know itself, in the sense of being very familiar with its own behaviour, its own predilections, its own strengths, its own weaknesses and more. It would require the system to know itself as well as you or I know ourselves. That’s what I’ve called a “strange loop” in the past, and it’s still a long way off.”””

If we can get to primitive animal intelligence and then gradually gradually smarter animal intelligence, … communication, imagination, … then maybe language, empathy and logic will look more achievable. But right now we are pre-pre-Cambrian in such things! 

Dan



Alex Shkotin

Jul 25, 2022, 3:51:16 AM
to ontolo...@googlegroups.com
John,

As you remember, my initial point was very simple, and now it sounds this way: GPT-3 has a randomization mode, so it can work randomly xor deterministically.
On the abilities of LLMs: what do you think about Minerva [1], [2]?
They are stepping on our toes :-)

Alex


Mon, 25 Jul 2022 at 07:45, John F Sowa <so...@bestweb.net>:

Alex Shkotin

Jul 25, 2022, 4:03:59 AM
to ontolo...@googlegroups.com
James,

There is a more or less well-known fact from mathematical logic: algorithms do not need to know anything about knowledge in order to process that knowledge correctly. N. Nagorny described this point as the approach of D. Hilbert.

Alex


Mon, 25 Jul 2022 at 09:54, 'James Davenport' via ontolog-forum <ontolo...@googlegroups.com>:

Alex Shkotin

Jul 25, 2022, 5:38:39 AM
to ontolo...@googlegroups.com
In addition, we have in [1]:
"Temperature
One of the most important settings to control the output of the GPT-3 engine is the temperature. This setting controls the randomness of the generated text. A value of 0 makes the engine deterministic, which means that it will always generate the same output for a given input text. A value of 1 makes the engine take the most risks and use a lot of creativity."

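A sketch of what the temperature setting does mechanically (illustrative numbers; the real engine applies this to scores over tens of thousands of tokens):

    # Temperature-scaled sampling: T -> 0 approaches argmax (deterministic);
    # T = 1 samples from the model's own distribution.
    import numpy as np

    rng = np.random.default_rng()
    logits = np.array([2.0, 1.0, 0.5, -1.0])  # made-up scores for 4 tokens

    def sample(logits, temperature):
        if temperature == 0:
            return int(np.argmax(logits))  # the deterministic limit
        z = logits / temperature
        p = np.exp(z - z.max())  # numerically stable softmax
        p /= p.sum()
        return int(rng.choice(len(p), p=p))

    print([sample(logits, 0) for _ in range(5)])    # always token 0
    print([sample(logits, 1.0) for _ in range(5)])  # varies from run to run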

Mon, 25 Jul 2022 at 10:51, Alex Shkotin <alex.s...@gmail.com>:

alex.shkotin

Jul 25, 2022, 1:09:32 PM
to ontolog-forum
In addition-2: if Minerva generates formal proofs, the algorithm to check their correctness is well known.

Monday, 25 July 2022 at 10:51:16 UTC+3, alex.shkotin:

John F Sowa

Jul 27, 2022, 12:59:15 AM
to ontolo...@googlegroups.com
Alex,
 
Randomness is not creativity. For any life form, random decisions are the fastest way to get killed or eaten.
 
The reason why animals are intelligent is that they face life and death decisions in everything they do.  The ones that made random decisions are dead.
 
Human intelligence is the result of 6 million years of life and death decisions.  Our civilized life is unbelievably soft in comparison to what our ancestors did in order to become intelligent.
 
As I (and many other AI experts) keep repeating: Without a method for evaluating decisions, GPT-N is the deadest of dead ends.
 
However, some of the technology in GPT-N could be useful IF AND ONLY IF some methods of evaluating options are included. And those methods involve HARD FACTS and HARD REASONING.
 
And by the way, those cases where GPT-3 passed various academic exams just show how difficult it is to develop good exams for testing students.
 
John

Alex Shkotin

Jul 27, 2022, 5:50:40 AM
to ontolo...@googlegroups.com
John,

Please forgive Miguel Grinberg (2020-08-25) for his "creativity" (in both senses). This is just a blog post, and he used the word only twice. The second one is:

The “Top P” option

The “Top P” argument is an alternative way of controlling the randomness and creativity of the text generated by GPT-3. The OpenAI documentation recommends that only one of Temperature and Top P are used, so when using one of them, make sure that the other is set to 1." [1]
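Mechanically, "Top P" (nucleus sampling) keeps only the smallest set of most probable tokens whose probabilities sum to at least p, renormalizes, and samples from those; a sketch with made-up numbers:

    import numpy as np

    rng = np.random.default_rng()

    def top_p_sample(probs, p=0.9):
        order = np.argsort(probs)[::-1]                   # most probable first
        cum = np.cumsum(probs[order])
        keep = order[: int(np.searchsorted(cum, p)) + 1]  # the "nucleus"
        q = probs[keep] / probs[keep].sum()               # renormalize over it
        return int(rng.choice(keep, p=q))

    probs = np.array([0.55, 0.30, 0.10, 0.04, 0.01])
    print([top_p_sample(probs, 0.9) for _ in range(10)])  # tokens 3, 4 never appear

With p = 1 nothing is cut off, which fits the documentation's advice to leave the unused knob at 1.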
As far as I know, in every valuable AI system the output of the Generator is an input to a kind of Checker, because they know first hand that the output of the Generator may be stupid :-)
Anyway, today the public is applauding Minerva from Google [2].

Alex



ср, 27 июл. 2022 г. в 07:59, John F Sowa <so...@bestweb.net>:
--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info/
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.

alex.shkotin

Jul 27, 2022, 6:21:17 AM
to ontolog-forum
in addition: https://www.technologyreview.com/2020/11/18/1012234/training-machine-learning-broken-real-world-heath-nlp-computer-vision

Wednesday, 27 July 2022 at 12:50:40 UTC+3, alex.shkotin:

Azamat Abdoullaev

Jul 27, 2022, 8:44:34 AM
to ontolog-forum
That article should be retitled as
"The way we do AI is fundamentally wrong": 80% of AI/ML commercial models fail in the context of real settings.
It is what I call fake/false AI vs. real/true AI.
The fake AI deals with artificial data for its training, testing or validation, with all sorts of biases, data shifts, underspecification, etc.
The real AI deals with real-world data to be transformed into valuable information, causal data patterns, deep understanding, integrated data patterns, or wisdom, as in the DIKWD learning cycle.


Alex Shkotin

Jul 27, 2022, 12:14:47 PM
to ontolo...@googlegroups.com
Right now, in reality, we have this: "Robotaxi driver is an occupation that only exists in our time, the result of an evolving technology that's advanced enough to get rid of a driver (most of the time, in controlled environments) but not good enough to convince authorities that they can do away with human intervention altogether."

Wed, 27 Jul 2022 at 15:44, Azamat Abdoullaev <ontop...@gmail.com>:

John F Sowa

Jul 28, 2022, 1:07:33 AM
to ontolo...@googlegroups.com
Alex,
 
I followed your links to the following article about Minerva: https://arxiv.org/abs/2206.14858
 
I recommend that article, and I admit that it reaches the level of intelligence of certain individuals who are sometimes considered human.  I believe that it would perform an excellent simulation of a certain former so-called president of the US.
 
As I said before, I would *never* use it for any serious application of any kind.  But I believe that it could be adapted to serve as an improved search engine for finding software to perform certain functions.  But I would *never* trust it to do any calculations by itself.
 
I also say that anybody who equates randomness with creativity is clueless, and I would never use any software that they developed.
 
Tomorrow, I'll send a note about a method I consider far, far better as a candidate for serious applications.    It's not as much fun as a toy, but it is far superior for mission-critical applications.
 
John

Alex Shkotin

Jul 28, 2022, 5:24:54 AM
to ontolo...@googlegroups.com
John,

Thank you. I keep quiet about ML, ANNs etc. and just watch from the hill, or from a well :-) For example, look at these two articles [1] [2] if you wish.
But let me emphasize again: one of the discoveries of mathematical logic is that knowledge-processing algorithms do not have to understand anything about the knowledge being processed.
For example, if Perelman had proved the Poincaré conjecture formally, then the universal algorithm for checking the correctness of formal proofs would have said whether he was wrong or not.
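To illustrate that point with a toy: a checker for Hilbert-style proofs that validates each step purely syntactically, with no grasp of what the formulas mean (an entirely made-up mini-calculus with modus ponens as its only rule):

    # A proof is a list of formulas; each line must be an axiom, a premise,
    # or follow from earlier lines by modus ponens: from A and ("->", A, B), infer B.
    def follows_by_mp(line, seen):
        return any(f == ("->", a, line) for a in seen for f in seen)

    def check(proof, premises, axioms=()):
        seen = []
        for line in proof:
            if not (line in axioms or line in premises or follows_by_mp(line, seen)):
                return False  # an unjustified step
            seen.append(line)
        return True

    # p, p -> q |- q, by one modus ponens step:
    print(check(["p", ("->", "p", "q"), "q"], premises=["p", ("->", "p", "q")]))  # True
    print(check(["q"], premises=["p"]))  # False

The checker never interprets "p" or "q"; it only matches shapes, which is the Hilbert-style point mentioned above.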

Alex


Thu, 28 Jul 2022 at 08:07, John F Sowa <so...@bestweb.net>:

Michael DeBellis

Jul 29, 2022, 11:53:56 AM
to ontolog-forum
I think regarding these large language models there is a simple and critical distinction (which Chomsky makes regularly): there is a difference between engineering a computer to solve some task and a theory of how humans solve a similar task. I.e., "planes don't fly by flapping their wings". What may be the best engineering solution almost certainly won't be the way evolution solved the problem. There may be common principles that apply to both (just as there are common principles of aerodynamics that apply to both bird flight and plane flight), but it is virtually a certainty that there will be a big difference between adaptations that evolved over billions of years by natural selection and the way a human engineer will design a solution. If we take even a cursory look at how children learn language and how humans use language, it is obvious that an approach based on statistical or linear-algebra models built from massive data examples may provide impressive results but is clearly not directly analogous to the way humans learn and use language.

IMO, any theory that attempts to explain human language use has to include concepts such as intentions, goals, plans, etc., which, to the best of my knowledge, none of these large language models have.

Michael

On Tuesday, July 19, 2022 at 5:38:59 AM UTC-7 Azamat Abdoullaev wrote:
> I disagree with his extreme position that GPT has done nothing, and might partly agree that it is a "Make believe AI" or "Parlour Trick". [...]

Michael DeBellis

Jul 29, 2022, 12:02:46 PM
to ontolog-forum
>Alex, John, exactly that's the actual essence of Chomsky's critique. 
>LLMs have not made a contribution to the body of knowledge in the domain of science.

I agree on both counts, but I don't think that is their goal, so it's not a fair criticism (btw, I think Chomsky would agree with that). Their goal is to solve an engineering problem, not a science problem. The engineering problem is to develop better software that can interact with humans via NLP in more sophisticated ways than the simple NLP systems we interact with now (and that I hate to use), which just understand (barely) short sentence fragments in a very constrained domain like product customer service. I don't know this community all that well, but I think most people doing this work are aware of this. It is just people who make a living out of being "futurists" or whatever who look at some of these (probably to some extent cherry-picked) examples and make ridiculous claims that common-sense reasoning has been solved and that "Strong AI" or even sentient AI is just around the corner, which is about as justified as Minsky's prediction in the '70s that we would have sentient AI in a few decades.

Michael
On Wednesday, July 20, 2022 at 9:53:37 AM UTC-7 marco.neumann wrote:
> Alex, John, exactly that's the actual essence of Chomsky's critique. LLMs have not made a contribution to the body of knowledge in the domain of science. [...]

Michael DeBellis

unread,
Jul 29, 2022, 12:13:28 PM7/29/22
to ontolog-forum
Alex, I looked for the link to the Chomsky talk but the thread is so long I couldn't find it. Could you possibly reply with that link again? I would like to hear what Chomsky said.  

Michael

John F Sowa

unread,
Jul 29, 2022, 2:40:46 PM7/29/22
to ontolog-forum
Michael,
 
I agree with the following point.  That is why GPT-3, without a symbolic component, is a toy.
 
Any application of any kind that does not have intentions, goals, plans --  implicit or explicit -- cannot do anything useful.  Any system of reasoning that doesn't have any intentions, goals, plans -- implicit or explicit -- has no criteria for determining a direction or determining when and where to stop.
 
John

From: "Michael DeBellis" <mdebe...@gmail.com>

Alex Shkotin

unread,
Jul 30, 2022, 5:00:41 AM7/30/22
to ontolo...@googlegroups.com

On Fri, Jul 29, 2022 at 19:13, Michael DeBellis <mdebe...@gmail.com> wrote:

Marco Neumann

unread,
Jul 30, 2022, 5:44:47 AM7/30/22
to ontolo...@googlegroups.com
On Fri, Jul 29, 2022 at 5:02 PM Michael DeBellis <mdebe...@gmail.com> wrote:


Michael, sure, with regard to the engineering artifact that may be the case, but in the interview Chomsky's position highlights the lack of a science behind LLMs in general, and that is what his critique is directed at.

Additionally, Noam states in chapter 6 that he suspects there is no semantics in natural language, just syntax. To understand his position here, you will have to consider his belief that semantics in human language is best studied as event semantics, which to him is ultimately just syntax, or a description of events.

Chomsky: "there is simply no semantics in natural language at least not semantics in the sense of Fridge (Frege*), Percy (Peirce*), Carnap and Quine or any formal semantics it just doesn't exist in human language we have mental operations going on that have some loose relation to the outside world but it's not truth" 3:04:04

Chomsky approaches human language almost as a biological science. For him, the development of human languages is based on a genetic predisposition: a system of categories, mechanisms, and constraints shared by all human languages, which he has formulated in his work known as Universal Grammar. 

You mentioned intentions, goals, plans, etc. as requirements for fulfilling the ambition of human language understanding by machines. But as you and John have described them, would these not just be additional layers of modelling in the process of symbolic reasoning? I don't think that's what Chomsky had in mind.

* Not sure if I heard correctly here, but you may be able to chip in with a correction.
 

Barry Smith

unread,
Jul 31, 2022, 6:46:38 PM7/31/22
to ontolo...@googlegroups.com
I agree very strongly with John here on the role of goals and intentions, or acts of will. Indeed, this is an underlying theme of a book on AI which is to appear on August 12. See here for details: 

Barry


Azamat Abdoullaev

unread,
Aug 1, 2022, 5:04:44 AM8/1/22
to ontolo...@googlegroups.com
There are two types of people here: Regressists and Progressists.
The first believes that we are all unique, and that no intelligence could rule this world but biased human minds.
The second believes that intelligent machines are the only future and that humans are doomed either to be replaced or disrupted. Its antihuman party includes the big tech companies relying on LLMs as a shortcut to AGI and beyond. The richest of them, E. Musk, expects AI supremacy by 2025. 

“My assessment about why A.I. is overlooked by very smart people is that very smart people do not think a computer can ever be as smart as they are. And this is hubris and obviously false.”

Working with A.I. at Tesla lets Musk say with confidence “that we’re headed toward a situation where A.I. is vastly smarter than humans and I think that time frame is less than five years from now. But that doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird.”


We have a rather funny situation:  "The big business knows it’s possible in practice, while the philosophers are trying to figure out if it’s possible in principle." 


Alex Shkotin

unread,
Aug 1, 2022, 6:45:43 AM8/1/22
to ontolo...@googlegroups.com
We need depersonalization of administrative functions, as the human factor is unreliable. In addition, if administration is not treated as a game, then it is boring.
Robots in factories are no longer news. Robots on the roads have appeared in some places. Robots at home do more and more. The robotic city and country are on the horizon.
We will come to a robotic society.

Alex

On Mon, Aug 1, 2022 at 12:04, Azamat Abdoullaev <ontop...@gmail.com> wrote:

Giancarlo Guizzardi

unread,
Aug 1, 2022, 7:37:05 AM8/1/22
to ontolo...@googlegroups.com
Fully agree with John and Barry.
@Barry: looking forward to reading the book.

best,
Giancarlo

Ricardo Sanz

unread,
Aug 1, 2022, 8:30:33 AM8/1/22
to ontolo...@googlegroups.com
Hi,

Is there any fundamental reason that prevents machines from having intentions, goals, plans -- implicit or explicit?

Regards,
Ricardo




--

================================================================
Ricardo.Sanz at UPM.es, at ASLab.org, at IEEE.org, at RSanz.com

AUTONOMOUS SYSTEMS LABORATORY
Universidad Politecnica de Madrid
Spain

www.aslab.upm.es                     www.cienciacognitiva.es                   www.rsanz.com
================================================================

Michael DeBellis

unread,
Aug 1, 2022, 10:28:06 AM8/1/22
to ontolo...@googlegroups.com
Is there any fundamental reason that prevents machines from having intentions, goals, plans -- implicit or explicit?

As far as we know, I think the answer is clearly "no". But there is also so much that we don't really understand about the mind/brain. For example, some schools of psychology (radical behaviorists in the past, radical physicalists in the present such as Sam Harris) think that intentions, goals, and plans are mere epiphenomena and that, just as a mature theory of physics disabused us of our common-sense notion of contact mechanics, a mature psychology will eventually show us that while such concepts may be impossible for humans to live without, they are essentially just (to use a term from Chomsky but in a different context) "necessary illusions". I happen to think they are probably wrong, but with psychology as immature as it is now, I don't think that hypothesis can just be dismissed either. 

So I think it is important to recognize that what we are doing is just speculating here and that the scientific answer is we just don't know yet. For example, if we look at some of the most mature disciplines in science and math, part of that maturity is recognizing that there are fundamental limits to what can be known (Gödel proved there are true statements that can't be proven, Heisenberg that we can never have complete knowledge about any specific particle). So it could be that as we understand psychology better we find that there is something unique about the brain such that no device implemented with silicon chips can actually replicate it. I think that is very unlikely, but it can't be ruled out either. 

Michael


Michael DeBellis

unread,
Aug 1, 2022, 10:40:30 AM8/1/22
to ontolo...@googlegroups.com
There are two types of people here: Regressists and Progressists.
The first one believes that we are all unique, and no intelligence could rule this world but the biased human minds.
The second one believes that intelligent machines are the only future and humans are doomed either to be replaced or disrupted. 

I can't speak for others but I'm certainly in neither camp. First, there is something unique about humans right now as far as we know. That's just a scientific fact. We are the only form of life or intelligence that is self-aware, can make plans, and can communicate information about past events, possible future events, etc. 

But does that mean we are the only such life in the entire universe? Of course not. Again, IMO the scientific thing to do is to admit we don't know. I love one of Carl Sagan's quotes on this (this is from memory but I think it is accurate): "My gut tells me that there must be intelligent life elsewhere in the universe... but I try to think with my brain, not with my gut".  It is possible that intelligent life is such a fluke and requires so many variables to come together just right that we will never encounter another intelligent civilization. My gut tells me that isn't the case but...

The same for Artificial Intelligence (in the sense of an AI that can truly mimic human intelligence). As I said in another recent post, I have never heard anyone put together a strong argument for why AI can't exhibit the same kind of self-directed, self-aware behavior as a human. But there is so much we don't know about the mind/brain that it is possible there are things unique to neurons and brains that just can't be replicated in silicon. My gut tells me no but....

As for claims that AI or humans supplemented with AI will someday replace humans as we now know them, that is just pure speculation, more appropriate for science fiction than serious discussion. We are still so far away from even knowing if "Real AI" is even possible.

Michael


Michael DeBellis

unread,
Aug 1, 2022, 10:41:13 AM8/1/22
to ontolo...@googlegroups.com
Excellent. Thanks a lot Alex. 

Michael


dr.matt...@gmail.com

unread,
Aug 1, 2022, 10:45:50 AM8/1/22
to ontolo...@googlegroups.com

Dear Michael,

Well, there has been for some time (50+ years) a class of programs called optimizers that try to maximise some objective function, usually by crawling stepwise towards some optimum (or, if you are unlucky, a suboptimum). You get the right idea if you think of a blind person trying to climb a hill, able to sense only the local gradient as a guide. But the program does not have goals itself; it has to be given them. But perhaps we are not so different.
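
A minimal sketch of such a climber, in Python (the one-dimensional objective, step size, and tolerance here are all invented for illustration):

def objective(x):
    # A made-up smooth "hill" with its peak at x = 2.0
    return -(x - 2.0) ** 2

def slope(x, eps=1e-6):
    # The climber senses only the local gradient, nothing else
    return (objective(x + eps) - objective(x - eps)) / (2 * eps)

x = 0.0                     # start somewhere at the bottom
for _ in range(10000):
    g = slope(x)
    if abs(g) < 1e-9:       # the ground feels flat: an optimum (or a suboptimum)
        break
    x += 0.1 * g            # step uphill, proportional to the sensed slope

print(round(x, 4))          # -> 2.0, the peak it was given, not one it chose

Note that the goal is baked in by whoever wrote objective(); the loop itself just climbs.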

 

Regards

Matthew West

Michael DeBellis

unread,
Aug 1, 2022, 11:16:47 AM8/1/22
to ontolo...@googlegroups.com
Additionally Noam states in chapter 6 that he suspects that there is no semantics in natural language, just syntax.

Sorry, I'm missing some context: chapter 6 of which book? Do you mean his recent book Why Only Us? I read that a while ago and don't recall that, but I'll check that book in case that is the one you mean. 

But I don't agree with the way you characterize his views on syntax and semantics. Have you read his paper with Hauser and Fitch called The Faculty of Language: What Is It, Who Has It, and How Did It Evolve? (BTW, that link is behind a paywall; if you want the paper and can't get it, let me know, I have a copy of it.)

In that paper he divides the language faculty into the Faculty of Language Broad (FLB) and the Faculty of Language Narrow (FLN). He says that FLB includes the things I and others have mentioned: intentions, beliefs, goals, plans, etc. What he is studying is FLN, which can be described more or less as syntax. I've heard and read him talk about this several times and he is always clear. It isn't that he thinks syntax and semantics are the same thing (although people regularly attribute that incorrect interpretation to him). It's that he thinks the only area that can be studied scientifically right now is syntax and that without understanding syntax we can't understand semantics. He has hypothesized that once we truly understand syntax we may end up understanding a great deal of semantics, but he always acknowledges that this is just speculation. 

I agree with him about semantics. I've studied the work of Lakoff and some of his students, for example. While I think it is interesting and worth reading, I've never seen anything that results in testable, falsifiable hypotheses. 

The biolinguistic approach, on the other hand, does lead to such hypotheses, which have been tested in the laboratory. E.g., it implies that learning a made-up language that doesn't follow the rules of Universal Grammar (e.g., one with rules such as "the second word in a sentence is always the object of the verb") would be a different kind of learning, involving different areas of the brain, than learning a natural language. Some researchers in Italy did such an experiment: they made up two languages, one that followed the rules of UG and one that didn't. They found that the same brain regions were active when people learned the UG-compliant language as when they learned a second real natural language, but that different regions were most active when they learned the non-UG language. I think he discussed this in Why Only Us? 

There is also the work on principles and parameters, where there is all sorts of empirical evidence that all languages follow certain principles and that the parameter setting for one principle (e.g., head-first vs. head-last) has implications for other principles. So if a language is head-first then it will have principle A set to X (it's been a while, so I don't remember the specifics of the principles and implications), whereas if it is head-last then A will be set to Y. 

To understand his position here, you will have to consider his belief that semantics in human language is best studied as event semantics, which to him is ultimately just syntax or a description of events.

Where does he say this? In an old talk, I think with Ludlow, he said that Event is just a common sense notion and that to make it part of linguistics you had to give some rigorous definition. I've never heard him say that semantics = events nor that events = syntax. 

Michael


Barry Smith

unread,
Aug 1, 2022, 11:19:50 AM8/1/22
to ontolog-forum
Ricardo: Is there any fundamental reason that prevents machines from having intentions, goals, plans --  implicit or explicit ?

This is a long story. See, on the will,  ch. 12 of the AI book
See also this video
BS

Marco Neumann

unread,
Aug 1, 2022, 11:26:37 AM8/1/22
to ontolo...@googlegroups.com
Michael, this is meant as a preliminary reply; it's the chapter / segment / Question 6 of the video interview:


He is pretty outspoken and clear about this question. 

But I'd like to hear more about your take on this and will follow up on the points you've raised later. 



--


---
Marco Neumann


Kathryn B. Laskey

unread,
Aug 1, 2022, 11:49:07 AM8/1/22
to ontolo...@googlegroups.com
We can program machines to act “as if” they have goals and intentions. Just read the chapter on planning in any artificial intelligence textbook, or read the chapters on optimization in a textbook on operations research.
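
To make the "as if" concrete, here is a minimal sketch in Python of the kind of goal-directed planner those textbook chapters describe (the toy key-and-door world and all the names in it are invented for illustration):

from collections import deque

# A state is (robot_location, has_key, door_open).
def actions(state):
    at, key, door = state
    if at == "hall" and not key:
        yield ("grab key", ("hall", True, door))
    if at == "hall" and key and not door:
        yield ("unlock door", ("hall", True, True))
    if at == "hall" and door:
        yield ("walk inside", ("room", key, door))

def plan(start, goal_test):
    # Breadth-first search; the "goal" is just a test handed in from outside.
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal_test(state):
            return steps
        for name, nxt in actions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + [name]))

print(plan(("hall", False, False), lambda s: s[0] == "room"))
# -> ['grab key', 'unlock door', 'walk inside']

The program looks purposeful, but the purpose enters only as the goal_test its author supplies.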

Do computer programs that act “as if” they have goals and intentions really have goals and intentions? What does that even mean?

I once had a colleague who insisted dogs were unable to plan or to have goals.

My husband's family had two dogs. Once, Ken's mother was violating the well-known dog-owner maxim that one should not reward dogs who beg at the dinner table. One of the dogs suddenly dashed into the other room, barking loudly at the window. The other dog immediately followed. The first dog came back and enjoyed some scraps with no competition. The second dog followed after a while.

A few minutes later the exact same situation unfolded — first dog dashed out barking; second dog followed; first dog returned for a few minutes of uncontested scraps; second dog returned.

The third time it happened, the second dog didn’t run out, and snapped at the first dog when he returned.

If that isn’t having goals and planning to achieve them, I don’t know what is.

Back to the computer. I can program a robot to optimize something, but I’m programming it to optimize *my* goals.  What would it mean for a robot to have *its own* goals and behave to optimize them?

I don’t think we have a clue as to the answer to that question.

Kathy

Barry Smith

unread,
Aug 1, 2022, 11:57:13 AM8/1/22
to ontolog-forum, Jobst Landgrebe
On Mon, Aug 1, 2022 at 11:49 AM Kathryn B. Laskey <kla...@gmu.edu> wrote:
We can program machines to act “as if” they have goals and intentions. Just read the chapter on planning in any artificial intelligence textbook, or read the chapters on optimization in a textbook on operations research.

the goals and intentions then come from the humans who program the machines, don't they?
 

Back to the computer. I can program a robot to optimize something, but I’m programming it to optimize *my* goals.  What would it mean for a robot to have *its own* goals and behave to optimize them? 

I don’t think we have a clue as to the answer to that question.

We have a lot of clues as to what it means for humans to have goals and intentions. It is tied to the existence of very complex systems connecting neuronal activity with the human's experience of and engagement with the world. We know very little about how these connections work (or indeed about how systems of neurons work); but what we do know is that the mathematics involved would far exceed what any computer can emulate (computers can only compute what is [Church-Turing] computable, and this sets considerable limits, even where we are dealing, as in current stochastic AI, with gigantic algorithms with billions of parameters). That is what the book is about. 
Barry

Azamat Abdoullaev

unread,
Aug 1, 2022, 12:29:07 PM8/1/22
to ontolo...@googlegroups.com
Ricardo: "Is there any fundamental reason that prevents machines from having intentions, goals, plans --  implicit or explicit ?"
Nope. Nothing prevents computing machines from having encoded intentionality, as "aboutness" and "immanent objectivity", as "the power of minds to be about something: to represent or to stand for things, properties and states of affairs", with all planning, purposes and goals.
It is a computational problem, both tractable and decidable, where the set of instances/cases is a set of mental states (e.g. thoughts, beliefs, desires, hopes) intentionally/consciously directed towards the world, its objects or states of affairs, as the set of solutions.
The big question is whether there is any utility in focusing on simulating human mentality, considering its subjectivity and individuality. Besides, mind/cognition/intelligence is not in the head; it is embodied, enacted or situated, involving a whole combination of connectionism, computationalism, cognitivism, embodied cognition, situated cognition, or enactivism. 
You need to model and simulate reality itself, of which mentality is part, with all possible interactions between the intelligent agents and their environments: physical, social, digital, virtual or informational. 

Michael DeBellis

unread,
Aug 1, 2022, 1:41:04 PM8/1/22
to ontolo...@googlegroups.com
Okay, I'm going to watch the whole thing to make sure I have the complete context. 

Marco Neumann

unread,
Aug 1, 2022, 4:06:06 PM8/1/22
to ontolo...@googlegroups.com
Great, glad someone picked up on it, as it is an important statement for understanding his position on the science question with regard to LLMs.

Marco



--


---
Marco Neumann


Nadin, Mihai

unread,
Aug 1, 2022, 4:36:10 PM8/1/22
to ontolo...@googlegroups.com

Dear and respected colleagues,

The question "Is there any fundamental reason that prevents machines from having intentions, goals, plans -- implicit or explicit?" cannot be answered without defining what kind of machines. Period. Please go back to Turing's paper (on Hilbert's Entscheidungsproblem). He defines a very specific machine, the algorithmic machine. And he proves that it CANNOT produce mathematical proofs. Period. This machine cannot have intentions, goals, plans. Can we conceive of a machine that transcends the Turing machine? That is the question. Stuck in the conversion of syntax into pseudosemantics for algorithmic machines, we miss the fundamental understanding that language is driven by pragmatics (what we do!) and not by semantics. Even Chomsky can be wrong.

Mihai Nadin (of course, he can also be wrong! Prove it when it is the case)

 


Kathryn B. Laskey

unread,
Aug 1, 2022, 5:01:48 PM8/1/22
to ontolo...@googlegroups.com
[Turing] defines a very specific machine—the algorithmic machine. And he proves that it CANNOT produce mathematical proofs. Period.

That is not true.

Automated theorem proving is an active area of artificial intelligence. There are very many mathematical proofs that have been produced by computers. For example, a computer proved the Four Color Theorem, which was previously unsolved by humans. 

Please see https://en.wikipedia.org/wiki/Automated_theorem_proving for a discussion on this.

You may be thinking of undecidability. Specifically, while any valid first-order statement can eventually be proven by an automated theorem prover, there are statements that can neither be proven nor disproven; a program that tries to prove such a statement will go on forever without producing either a proof or a refutation. This is discussed in the Wikipedia article. Turing proved a mathematically equivalent theorem: the general question of whether a computer program will halt is undecidable. See https://en.wikipedia.org/wiki/Undecidable_problem.

But undecidability is a feature of the math, not of the method of producing proofs. If a proposition is undecidable, neither a human nor a computer can prove it.
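
The asymmetry is easy to see in a sketch. This little Python loop (the arithmetic predicates are invented stand-ins for "a proof was found") halts whenever a witness exists, but for a statement with no witness it simply runs forever, and it can never report "no":

from itertools import count

def semi_decide(has_witness):
    # Enumerate candidates 1, 2, 3, ...; halt if and only if one succeeds.
    for n in count(1):
        if has_witness(n):
            return n

# Halts quickly: "some n satisfies n*n == 144"
print(semi_decide(lambda n: n * n == 144))  # -> 12

# Would never halt: "some n satisfies n*n == 2*m*m for some m >= 1"
# (no such integers exist, since sqrt(2) is irrational, but the loop
# has no way to know that)
# semi_decide(lambda n: any(n * n == 2 * m * m for m in range(1, n)))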

Kathy


John F Sowa

unread,
Aug 1, 2022, 11:48:22 PM8/1/22
to ontolog-forum, Jobst Landgrebe
I agree with Barry: "the goals and intentions then come from the humans who program the machines, don't they"
 
In fact, just look at the cars and trucks that travel on any roads anywhere.  They all behave as if they have goals and intentions, but their behavior is entirely determined by the human behind the wheel.
 
Now look at some self-driving cars.  They also behave as if they have goals and intentions.  But those goals are just the result of somebody pushing some buttons to determine the destination for some trip.  And the steps along the way are determined by other people who did a lot of hard work to specify how those vehicles will respond to a huge number of possible contingencies.
 
Unfortunately, those self-driving things have a tendency to crash when they encounter novel circumstances for which they have no canned answer.
 
By the way, I enjoyed the story about the dogs.  But you can find similar responses from plants and animals far lower than dogs in the intelligence hierarchies.  Even a single-celled paramecium with no nervous system learns to avoid things and places that don't provide tasty treats.
 
John

John F Sowa

unread,
Aug 2, 2022, 12:33:08 AM8/2/22
to ontolo...@googlegroups.com
MDB> Additionally Noam states in chapter 6 that he suspects that there is no semantics in natural language, just syntax.
 
That is a very oversimplified summary of what Noam said.  What he claimed is that a formal theory of semantics along the lines of Montague's goal is not possible.  But that does not deny the fact that it's possible to state meaningful information in natural languages.
 
Roman Jakobson, a far better linguist than Chomsky, had a more balanced view, which he summarized in one sentence:  "Syntax without semantics is meaningless."
 
We all convey meaningful information via NLs every day of our lives.  But our syntax is not as precise as Chomsky claimed it should be, and our semantics is not as precise as Montague said it should be.
 
In general, most people, most of the time are somewhere in between.  We convey enough information with enough precision to make ourselves understood by most people most of the time.
 
But when somebody doesn't understand what we say -- which is quite often, even in Ontolog Forum -- they disagree or ask questions.  Then we rephrase what we said to clarify what we mean.
 
That's the way language works.  It's good enough for most purposes in our daily lives.  When the message doesn't come across clearly, people ask questions.  With enough Q & A, it's possible to explain some very complex issues.
 
Unfortunately, there are some people who never get the point.  Some of them can even become a so-called president.  People like that demonstrate Chomsky's extreme of syntax with a total absence of meaningful semantics.
 
John

Ricardo Sanz

unread,
Aug 2, 2022, 2:36:07 AM8/2/22
to ontolo...@googlegroups.com, Jobst Landgrebe
Hi John, Barry,

You say that machines can have only vicarious goals: "They all behave as if they have goals and intentions, but their behavior is entirely determined by the human  behind the wheel."

But, in what formal sense their behavior is different from ours? 

I imagine that we can find a sentence like this in a human psychology book: "Humans all behave as if they have goals and intentions, but their behavior is entirely determined by the education they received."

In fact, both sentences are false. The behavior (of machines, of humans) is determined by the system structure (the machine, the human body), the initial state (the owner's orders, the education received from the parents), and the disturbances while behaving (the environment, the society).

At the end of the day, the only aspect that may remain fundamentally different is the old "free will" of humans. 
And, as you know, many people think it does not exist. 

Best,
Ricardo

PS: BTW, my work today is to convince a server to do what I told it to do last week. It is refusing to follow my orders and I don't know why.



João Oliveira Lima

unread,
Aug 2, 2022, 5:30:04 AM8/2/22
to ontolo...@googlegroups.com
John, Barry, Guizzardi and Ricardo,

In the book "A Natural History of Human Thinking", Michael Tomasello (psychologist and linguist) explains the concepts of "joint goals" and "collective intentionality" that characterize human cognition, which I think contribute to the present discussion. Tomasello examines human thinking from an evolutionary perspective (a lot of experiments with great apes and children) and shows how human thinking developed from the thinking of animals.

Below, I reproduce some excerpts from the "Relational Thinking" box (p. 42):

"Penn et al. (2008) have proposed that what makes human cognition
different from that of other primates is thinking in terms of relations,
especially higher-order relations.
........

Humans are exceptional in creating categories such as
pet, husband, pedestrian, referee, customer, guest, tenant, and so forth,
what Markman and Stillwell (2001) call “role-based categories.” 
They are relational not in the sense of comparing two physical entities but, rather, in
assessing the relation between an entity and some larger event or process in
which it plays a role.

The obvious hypothesis here is that this second type of relational thinking
comes from humans’ unique understanding of collaborative activities with
joint goals and individual roles ...

As humans constructed these kinds of activities, they were creating more or less abstract
“slots” or roles that anyone could play. These abstract slots formed role based
categories, such as things that one uses to kill game (viz., weapons;
Barsalou, 1983), as well as more abstract narrative categories such as
protagonist, victim, avenger, and so on. A further speculation might be that
these abstract slots at some point enabled humans to even put relational
material in the slots; for example, a married couple can play a role in a
cultural activity. This would be the basis for the kinds of higher- order
relational thinking that Penn et al. (2008) emphasize as especially important
in differentiating human thinking."

Regards,

Joao Lima


Ricardo Sanz

unread,
Aug 2, 2022, 6:12:14 AM8/2/22
to Landgrebe, Jobst, ontolo...@googlegroups.com
Thanks Jobst,

Just two comments/questions concerning what you say:

"any machine is just a logic system": These days I am working on an autonomous high-deep robot miner that digs and digest rock. Do you think it is "just a logic" system? Maybe you are referring only to digital computer programs.

"We cannot model complex systems mathematically": Wrong; we can and we do. See for example Modeling Complex Systems By Nino Boccara.

Best,
Ricardo


On Tue, Aug 2, 2022 at 10:36 AM Landgrebe, Jobst <jobst.l...@cognotekt.com> wrote:
Hi Ricardo,

I am Barry's co-author. You can see in the book that the human mind-body continuum, which is the physical basis of our will, is a complex system, while any machine is just a logic system. The unanswerable question of whether we have free will or not is irrelevant to our argument. Humans have a will, machines do not, because one cannot emulate a will using a logic system. Logic systems are very restricted, and this restriction is grounded in the fundamental limitations of mathematical models, which are the basis of machines. We cannot model complex systems mathematically, and that is why we cannot engineer a will.

Sincerely,

Jobst
 
--
Cognotekt GmbH
Bonner Str. 209
50968 Köln


www.cognotekt.com
phone: +49221-643065-10, fax: +49221-643065-99
Sitz der Gesellschaft: Köln
Handelsregisternummer: HRB 71466, Amtsgericht Köln
Geschäftsführender Gesellschafter: Dr. Jobst Landgrebe

Ricardo Sanz

unread,
Aug 2, 2022, 6:17:09 AM8/2/22
to ontolo...@googlegroups.com
Hi Joao,

I  (mostly) agree with you and Tomasello concerning the importance of relational thinking. 
However, nothing prevents animals and machines from having "joint goals and individual roles" as humans do. 
Think for example of a shepherd dog (or a shepherd drone).

Best,
Ricardo



João Oliveira Lima

unread,
Aug 2, 2022, 7:16:17 AM8/2/22
to ontolo...@googlegroups.com
Hi Ricardo,

Tomasello differentiates "joint intentionality" from "collective intentionality". I think the shepherd dog example fits the first case. This dog, despite being loyal to the human being and fulfilling the functions for which it was trained, will not be able to cheer and celebrate together with you a goal scored by your soccer team.

I transcribe some new excerpts from the cited book that better clarify this distinction.

“The shared intentionality hypothesis is that this story comprises a two-step evolutionary sequence: joint intentionality followed by collective intentionality.” (p. 31)

“When put to use in thinking, collective intentionality comprises not just symbolic and perspectival representations but conventional and “objective” representations; not just recursive inferences but self-reflective and reasoned inferences; and not just second-personal self- monitoring but normative self- governance based on the culture’s norms of rationality” (p. 6)

“But now imagine a Lord of the Flies scenario. In this case it would be multiple infants born and growing to maturity on a desert island, with no one to interact with but each other. Perhaps surprisingly, the hypothesis in this case is that these children would indeed have the kind of social interactions necessary for developing joint intentionality— but not collective intentionality”( p. 146)

Regards,

Joao Lima


Marco Neumann

unread,
Aug 2, 2022, 7:36:10 AM8/2/22
to ontolo...@googlegroups.com
On Tue, Aug 2, 2022 at 5:33 AM John F Sowa <so...@bestweb.net> wrote:
MDB> Additionally Noam states in chapter 6 that he suspects that there is no semantics in natural language, just syntax.
 
That is a very oversimplified summary of what Noam said.  What he claimed is that a formal theory of semantics  along the lines of Montague's goal is not possible.  But that does not deny the fact that it's possible to state meaningful. information in natural languages.

And it's all in one sentence, of course. But it's taken directly from Chomsky's presentation, and I think I can already tell that Chomsky is not your favorite linguist. I'd like to unpack that a bit more with Michael and come back to this later. For the interested reader, here is some background on Montague's goal:  https://plato.stanford.edu/entries/montague-semantics/

 
Marco
Marco Neumann


Barry Smith

unread,
Aug 2, 2022, 7:37:57 AM8/2/22
to ontolog-forum, Jobst Landgrebe
Ricardo writes: 

At the end of the day, the only aspect [as between machines and humans] that may remain fundamentally different is the old "free will" of humans. And, as you know, many people think it does not exist. 
It is true that many people doubt that free will exists. But nowhere in the book, and nowhere in this conversation, is there talk of free will. We are talking simply of will, goals, intentions... Do you want to say that these do not exist? And if yes, do you not thereby demonstrate that even you believe that wants exist?
BS   


Dan Brickley

unread,
Aug 2, 2022, 8:05:42 AM8/2/22
to ontolo...@googlegroups.com
On Sun, 31 Jul 2022 at 23:46, Barry Smith <ifo...@gmail.com> wrote:
I agree very strongly with John, here, on the role of goals and intentions or acts of will. Indeed this is an underlying theme of a book on AI which is to appear on August 12. See here for details: 


Interesting! The Routledge summary highlights two claims:

  1. Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system.
  2. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer.

Do you have anything already published on the latter claim? (Otherwise we can wait for the book, of course.)

Is there also an associated claim here that systems “inside a computer” cannot *be* (rather than “model”) human-like complex dynamical systems?

Can computers model-or-be simpler dynamical systems? Where’s the ceiling?

Cheers,

Dan





Barry

On Fri, Jul 29, 2022 at 14:40 John F Sowa <so...@bestweb.net> wrote:
Michael,
 
I agree with the following point.  That is why GPT-3, without a symbolic component, is a toy.
 
Any application of any kind that does not have intentions, goals, plans --  implicit or explicit -- cannot do anything useful.  Any system of reasoning that doesn't have any intentions, goals, plans -- implicit or explicit -- has no criteria for determining a direction or determining when and where to stop.
 
John

From: "Michael DeBellis" <mdebe...@gmail.com>
 
IMO, any theory that attempts to explain human language use has to have concepts such as intentions, goals, plans, etc. which to the best of my knowledge none of these large language models do. 
 
Michael


Ricardo Sanz

unread,
Aug 2, 2022, 8:25:37 AM8/2/22
to ontolo...@googlegroups.com, ifo...@gmail.com
Hi,

No. I do not want to say that will, goals and intentions do not exist. I think they exist. In humans, animals and machines.

"Free will" is, however, another class of stuff. And all your dichotomy (human/non-human will) seems to be based on it. 
Do you think that free will exists?

Best,
Ricardo

Barry Smith

unread,
Aug 2, 2022, 8:30:08 AM8/2/22
to ontolog-forum
On Tue, Aug 2, 2022 at 8:05 AM Dan Brickley <dan...@danbri.org> wrote:


On Sun, 31 Jul 2022 at 23:46, Barry Smith <ifo...@gmail.com> wrote:
I agree very strongly with John, here, on the role of goals and intentions or acts of will. Indeed this is an underlying theme of a book on AI which is to appear on August 12. See here for details: 


Interesting! From the Routledge summary. It highlights two claims:

  1. Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system.
  2. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer
Do you have anything already published on the latter claim? (otherwise we can wait for the book of course)
It is all in the book, I'm afraid. But you only have to wait a few days. There is a somewhat longer extract here: 9781003310105_previewpdf.pdf 

is there also an associated claim here that systems “inside a computer” cannot *be* (rather than “model”) human-like complex dynamical systems?
Yes. That is in fact the main thesis of the book 

Can computers model-or-be simpler dynamical systems? Where’s the ceiling?
Again, dealt with in the book.
But anything organic is above the threshold.
And that includes hybrid systems such as the New York Stock Exchange or the Naples traffic system.
BS 

Giancarlo Guizzardi

unread,
Aug 2, 2022, 9:30:11 AM8/2/22
to ontolo...@googlegroups.com
Folks,

Just to clarify my position. The statement I was supporting was John's statement, echoed by Barry, that without intentional elements (intentions, goals, wills), among other things necessary to properly address commonsense aspects, GPT-3 and the like are toys. I never said anything about the possibility or lack thereof of these things emerging or being implemented in non-carbon-based machines. That is a completely different and, honestly, much less interesting question to me at this point. IMO, we have much more mundane problems to solve first, as we are 100 Nobel Prizes away from AGI.

That said, I am still interested and curious to read Barry's new book.

best,
G





Azamat Abdoullaev

unread,
Aug 2, 2022, 10:00:35 AM8/2/22
to ontolo...@googlegroups.com

Barry Smith

unread,
Aug 2, 2022, 10:06:18 AM8/2/22
to ontolog-forum
He is expressing a common rumbustious enthusiasm, which draws from recent advances in narrow AI the conclusion that we will soon have general AI.


Azamat Abdoullaev

unread,
Aug 2, 2022, 10:16:37 AM8/2/22
to ontolo...@googlegroups.com
Yes and no. 
Things are very uncertain today. Musk has a great business intuition that everything is possible, including man-machine superintelligence by 2025. 

dr.matt...@gmail.com

unread,
Aug 2, 2022, 11:05:10 AM8/2/22
to ontolo...@googlegroups.com

Hmmm. Nearly every technology is an opportunity or a threat. Generally, you get to choose which.

Regards

Matthew West

 


Nadin, Mihai

unread,
Aug 2, 2022, 11:11:32 AM8/2/22
to ontolo...@googlegroups.com

Dear and respected Kathryn B. Laskey,

Dear and respected colleagues,

A sentence such as "That is not true." cannot go unnoticed. I refrain from correcting my peers. I shall only suggest that you read

https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf

 

Moreover: Wikipedia is not a reference I accept in a class. Even less in a professional forum.

For undecidability please read:

Gödel, K., 1931, “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I,” Monatshefte für Mathematik Physik, 38: 173–198. English translation in van Heijenoort 1967, 596–616, and in Gödel 1986, 144–195.

Wish you well.

Mihai Nadin

PS Dialog is possible when the terms are well defined.

Michael DeBellis

unread,
Aug 2, 2022, 12:26:27 PM8/2/22
to ontolo...@googlegroups.com
Musk has a great business intuition that everything is possible, 
...including man-machine superintelligence by 2025. 

One of the things that Chomsky discussed in that video is the work of Randy Gallistel. For people interested in artificial neural nets and human cognition, I highly recommend the book Memory and the Computational Brain by Gallistel and King. That is the main work Chomsky was referring to regarding Gallistel. The point Chomsky was making is that none of the neural-network models of memory are scalable. In their book, Gallistel and King give a simple example of a scrub jay, which can remember something like 50 different locations where it buries food and also what kind of food was buried. They then show that none of the neural-net models of memory can scale to support the scrub jay's memory, let alone human memory. 

The point being: there is currently no well-accepted model for how humans store long-term memory (as opposed to muscle memory, which is probably a result of Hebbian conditioning, aka "fire together, wire together"). When I audited a class on cognitive neuroscience at Berkeley, I asked the professor (a brilliant guy named Jack Gallant) this question. Gallant's lab at Berkeley has done amazing things with ANNs and human cognition, but he agreed with Gallistel that the solution for long-term memory wasn't one of neural networks but something analogous to the way DNA and RNA store information at the molecular level (Chomsky mentioned this as well).

Of course, for people using ANNs this isn't a problem, because they can just read and write from computer memory, but that is no answer to how humans store long-term memory. My point is that it is absurd to think that we could have technology in the next five years that provides "man-machine superintelligence" given that at the present time there is no well-defined model for human memory. 
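
To put a rough number on the scaling worry, here is a back-of-the-envelope sketch of my own (it uses the standard capacity estimate for classic Hopfield attractor networks, about 0.138 random patterns per neuron; this is only a crude stand-in for Gallistel and King's much more detailed argument):

# Classic Hopfield net: ~0.138*N storable patterns for N neurons,
# with a full N*N matrix of connection weights.
def cost(num_memories, capacity_per_neuron=0.138):
    neurons = num_memories / capacity_per_neuron
    synapses = neurons ** 2      # fully connected: one weight per pair
    return neurons, synapses

for memories in (50, 10_000, 1_000_000):
    n, w = cost(memories)
    print(f"{memories:>9} memories -> ~{n:,.0f} neurons, ~{w:,.0f} weights")

The weight count grows quadratically while capacity grows only linearly, which is one concrete version of the claim that this style of storage does not scale to animal, let alone human, memory.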


Azamat Abdoullaev

unread,
Aug 2, 2022, 1:03:26 PM8/2/22
to ontolo...@googlegroups.com
 MD: My point is that it is absurd to think that we could have technology in the next five years that provide "man-machine superintelligence" given that at the present time there is no well defined model for human memory. 
Indeed. That's a problem, a big problem, but computable, decidable and tractable. I was asked a similar question, whose answer could shed some light:

John F Sowa

unread,
Aug 2, 2022, 10:59:47 PM8/2/22
to ontolo...@googlegroups.com
I sympathize with the note below, and I deleted comments that I agree with.  I don't disagree with the three that remain, but they raise important issues that require more discussion.
 
1. The quality of Wikipedia articles depends entirely on the knowledge of the authors and the editors.  The quality control is extremely variable.  Some of them are bad, misleading, or worse; many are mediocre; many are useful for simple facts; and some are excellent.  Unfortunately, even the best can be watered down by editors whose knowledge is mediocre.
 
I cited a Wikipedia article on GPT-3, even though almost all of it was written by people who had no idea about how GPT-3 works or how to evaluate what it does.  But I was very careful to cite three authors who made very good comments and had links to good web pages.
 
I admit that I should have extracted just the good parts and put them in my note.  But I was lazy, it was getting late, and I wanted to go to bed.
 
2. I agree that Gödel's original article on undecidability is a classic, which people who use his ideas should study.  However, many people who don't understand the implications of that article use it as a weapon to kill very important projects.  For many purposes, just citing that article is more dangerous than ignoring it.  For a discussion of those issues, see "Fads and fallacies about logic":  https://jfsowa.com/pubs/fflogic.pdf
 
3. I agree that it's important to define terms.  However, the requirement that all terms must be well defined before a dialog begins would kill 99.99% of all conferences, university courses, and Ontolog notes.
 
John
 

From: "Nadin, Mihai" na...@utdallas.edu

Wikipedia is not a reference I accept in a class. Even less in a professional forum.

For undecidability please read:

Gödel, K., 1931, “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I,” Monatshefte für Mathematik Physik, 38: 173–198. English translation in van Heijenoort 1967, 596–616, and in Gödel 1986, 144–195.

Dialog is possible when the terms are well defined.

poole

unread,
Aug 3, 2022, 12:06:40 AM8/3/22
to 'James Davenport' via ontolog-forum
An excellent readable outline for Gödel's proof is the book:
Ernest Nagel and James R. Newman, Gödel's Proof, New York University Press, 2001.

The outline is straightforward. Consider the sentence:
“This statement cannot be proved.”
If it is true, the logic is incomplete; if it is false, the logic must be inconsistent.
The only things this doesn’t apply to are systems that can’t represent the statement.
Gödel's proof showed that any system that can do arithmetic can represent this statement, and hence must be either incomplete or inconsistent. If humans are not subject to this, it must be because they cannot do arithmetic (which is not implausible).
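
In symbols (a standard textbook rendering of the diagonal construction, not Nagel and Newman's own notation): the Gödel sentence $G$ of a theory $T$ that can do arithmetic satisfies

    $T \vdash G \leftrightarrow \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)$

and the proof shows that if $T$ is consistent it cannot prove $G$, and if it is moreover $\omega$-consistent it cannot prove $\neg G$ either.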

This does not mean that machines can’t do proofs. As Kathy said, computer-generated proofs are common; much of the hardware and safety-critical infrastructure we use has been proved correct. The Turing equivalence is that sometimes these proof systems just don’t halt. Kathy’s comments are right on point!

David

P.S. I must admit I’m a bit amused by this thread. Machines *do* rule the world. If we got rid of machines, society would fail to function and 90% (99%?) of us would starve to death. Nearly all decisions of society are made by machines, from logistics to manufacturing to advertising to monetary transactions to ….



John F Sowa

unread,
Aug 3, 2022, 12:15:08 AM8/3/22
to ontolo...@googlegroups.com
Marco,
 
Could you please quote the exact paragraph in which Chomsky supposedly said "there is no semantics in natural language, just syntax"?
 
Marco> "But it's taken directly from Chomsky's presentation, and I think I can already tell that Chomsky is not your favorite linguist."
 
As I said before, I had read Chomsky's early work from the '50s and '60s.  And I was very sympathetic, but I also read much more by other linguists, computational linguists, and psycholinguists. 
 
I was also talking with a linguistics graduate student at Harvard (in 1969), who said "At Harvard, you can study any branch or aspect of linguistics you like.  But at MIT, you can only study The Truth."
 
Also, I find Chomsky's ethics deplorable.  Nearly all professors remain friendly with the people whose PhD work they supervised.  But Chomsky refused to speak to any of his former PhD students who dared to work in some novel way that was different from his position.  He just cut them dead.  I have never known anybody else who would ever do that.
 
As another example, Victor Yngve was the head of a machine translation project at MIT. And he hired Chomsky to work on that project. Chomsky was not hired as a professor, but he taught one course for which he wrote his first book, Syntactic Structures (1957), as the textbook. When that book became a best seller (for academic books), Chomsky was hired as an assistant professor and got tenure as an associate prof.
 
But then MIT was also planning to promote Yngve to associate prof. with tenure. And Chomsky told the MIT committee that if Yngve got tenure, he would resign from MIT and take a position elsewhere. MIT caved in to Chomsky and let Yngve go. He went to the U. of Chicago, where he was much happier. But Chomsky's actions were despicable. I believe that MIT should have told Chomsky to leave if he didn't like Yngve as a colleague.
 
Meanwhile, MIT had an excellent AI department with some excellent work in computational linguistics. But Chomsky refused to have anything to do with them. Yngve had been doing some AI-related work, and he might have developed a collaboration between the AI group and the machine translation project. But Chomsky refused to work with the people who had hired him.
 
He did marry one of the women on that project:  Carol Chomsky became Noam's wife.  I don't know anything about their relationship, but I noticed that Carol never continued her professional career.
 
John

Azamat Abdoullaev

unread,
Aug 3, 2022, 5:01:54 AM8/3/22
to ontolog-forum
A. N. Chomsky is a real intellectual, reminding me of P. Kropotkin.
His essay "The Responsibility of Intellectuals" could be a moral codex for many academics contracted by governments to pay lip service to every injustice and atrocity.
He is not just a great linguist, but also an activist political philosopher, aligned with libertarian socialism and anarcho-syndicalism. Fighting against the political abnormalities of state socialism and state capitalism, he was jailed and blacklisted.
Chomsky has supervised about a hundred PhD holders, most of them proud of their brilliant teacher.
His first wife, a well-known professor of child language, died of cancer. One can only imagine his psychological suffering.
The bottom line:
If our academics had some of the traits, habits, and civic position of Chomsky, our world would be more rational and less random and chaotic.



Marco Neumann

unread,
Aug 3, 2022, 6:25:57 AM8/3/22
to ontolo...@googlegroups.com
John, yes, let's stay close to what Chomsky states in the interview. While this is not my own position statement, I intuitively agree with Chomsky on the disconnect between mental states and real-world objects. For me it's an interesting observation that he ultimately reduces all semantics to syntax. His political beliefs and personal qualities are a subject for another discussion. Below is the transcript I referred to in my post. BTW, if you or anyone else can confirm the names I have marked with question marks, please do so.

===Transcription===

03:00:39

Q6: Semantics state-of-the-art

Keith Duggar:

Fascinating, so cognitive horizon. This is regarding the many theories of semantics that have cropped up over the years, for example truth-conditional semantics, logical semantics, ontological semantics, etc. Which, if any, paradigms of semantics do you think are headed in the right direction as far as getting us closer to an actual science of semantics? Or will we ever have a formal science of semantics, as Montague thought?

Noam Chomsky:

Well, I think there's very rich, exciting work in what's called semantics. It's been one of the liveliest fields of theory in linguistics, philosophy, and cognitive science in recent years. You mentioned Barbara Partee earlier, one of the pioneers in this field; great work.

It's not semantics, it's syntactics! It's all the study of symbolic manipulations that go on in the mind.

Suppose you do model-theoretic semantics, the kind Barbara Partee does. How do you do model-theoretic semantics? What you do is identify certain individuals and certain predicates, and you ask how the predicates are distributed over the individuals under various conditions. What are the individuals? Mental objects, not things in the world; they are mental objects or something. Do they correspond to anything in the world? Very loosely. If you actually look carefully at the meanings of words, there's a very loose connection to anything in the outside world. Take Aristotle's example, he discusses this, his example is "house". So what's a house? Well, in his metaphysics a house is a combination of form and matter. The matter of a house is the bricks, the timber, things that a physicist could find. The form of the house is the intention of the designer; the characteristics are things that are in the mind. In fact, that's what a house is: the thing could look exactly like a house to a physicist and not be a house. It could be a library, it could be a stable, it could be a paperweight for a giant, you know. It could be anything, because the meaning of every word is largely a matter of our conceptual structures, and that's true for the simplest words that you find.

Actually, the first example that was used in Greek philosophy was "river". Heraclitus, the pre-Socratic, asked whether you can cross the same river twice. It's a pretty deep question if you think about it. The second time you cross it, it's a totally different physical object. It wasn't even the same river when you started looking at it. The form is what we construct in our minds as what constitutes a river. But I happen to live in Arizona now, and on my way to the university I cross something called the Rillito river. I have yet to see a drop of water. Old-timers tell me if you come during the monsoon there's sort of water flowing. But is the Rillito a river if it got paved over and started to be used for commuting? It would be the Rillito highway. It's the same object, and that's true for every word in the language.

There is simply no semantics in natural language! At least not semantics in the sense of Frege (??), Peirce (??), Carnap, Quine, any formal semantics. It just doesn't exist in language.

We have mental operations going on that have some loose relation to the outside world, but it's not "truth" and it's not "reference". They just don't occur. So what's the best approach to this? In my own view the most productive approaches are what are called Event Semantics in the neo-Davidsonian tradition, developed by Paul Pietroski and Barry Schein and a number of others, which essentially started with a question like: Why, if we say John read the book quickly, can we infer that John read the book? Okay, that was the original question, and the proposed answer is there's an event "reading", there's an agent "John", there's a theme "book", and there's an adverb, the modifier of the event, "quickly". If you analyze it that way, that's just a concatenation and you get the inferences. That's been developed extensively by people like Pietroski and Schein among others, and it happens to fit very naturally into what I think we are coming to understand as pure syntax.

It seems that pure syntax provides structures of that nature, which fit very naturally into event semantics. But notice that event semantics is syntax. When you talk about an event, it's not anything in the world; it's something that we construct in our minds.

There was a gentleman named Zeno who taught us something about that. How many events are there when I cross a room? As many as you decide to put there. You know, there's no end, up to the power of the continuum. So Event Semantics, I think, is productive as a form of syntax. Then comes another question: how do all these things going on in our mind relate to the outside world? That's one of those questions that I don't think we have any answer to. Now we're back to Galileo's problem. How do we do these things? We don't know. We do them. We do a lot of things, but we have no understanding. We'll probably never have an understanding of it.

======
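[Editorial aside: to make the neo-Davidsonian move in the transcript concrete, here is a minimal Python sketch of my own, not from the interview; every name in it (Fact, sentence, entails) is invented for illustration. The adverb-dropping inference falls out of treating a sentence as a conjunction of event predicates.]

# Hedged sketch: neo-Davidsonian event semantics as a set of
# conjoined event predicates; entailment is just conjunct dropping.
from typing import FrozenSet, Tuple

Fact = Tuple[str, ...]                       # e.g. ("agent", "e1", "John")

def sentence(*facts: Fact) -> FrozenSet[Fact]:
    return frozenset(facts)

# "John read the book quickly":
# reading(e1) & agent(e1, John) & theme(e1, book) & quickly(e1)
quickly = sentence(("reading", "e1"), ("agent", "e1", "John"),
                   ("theme", "e1", "book"), ("quickly", "e1"))

# "John read the book": the same conjunction minus the modifier.
plain = sentence(("reading", "e1"), ("agent", "e1", "John"),
                 ("theme", "e1", "book"))

def entails(premise: FrozenSet[Fact], conclusion: FrozenSet[Fact]) -> bool:
    # A conjunction entails any subset of its conjuncts.
    return conclusion <= premise

print(entails(quickly, plain))   # True
print(entails(plain, quickly))   # False: the inference runs one way only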



--


---
Marco Neumann


Alex Shkotin

unread,
Aug 3, 2022, 6:41:42 AM8/3/22
to ontolo...@googlegroups.com
Dear Mihai Nadin,

I hope the article from CACM [1] may be accepted; let me just cite the abstract:
"Recent progress in automated reasoning and super-computing gives rise to a new era of brute force. The game changer is "SAT," a disruptive, brute-reasoning technology in industry and science. We illustrate its strength and potential via the proof of the Boolean Pythagorean Triples Problem, a long-standing open problem in Ramsey Theory. This 200TB proof has been constructed completely automatically; paradoxically, in an ingenious way. We welcome these bold new proofs emerging on the horizon, beyond human understanding; both mathematics and industry need them."

Alex


Tue, Aug 2, 2022 at 18:11, Nadin, Mihai <na...@utdallas.edu>:

Marco Neumann

unread,
Aug 3, 2022, 7:13:24 AM8/3/22
to ontolo...@googlegroups.com
John, to the question of personality: I am now generally of the opinion that a good theory doesn't by some automatism make you a good person. Granted, I may have believed so in "my younger and more vulnerable years" :)

That said, overall I do not find Chomsky's opinions on politics and the world in general particularly objectionable. When it comes to the supervision of students, in my experience the best academic supervisors frequently do not overlap much with the best, most highly ranked, acclaimed, and celebrated academics.

But from your observations it's very interesting to learn how academic institutions (e.g., MIT and Harvard) have seemingly developed entrenched and distinct philosophical identities that have a direct and lasting influence on the academic process itself.

Marco



--


---
Marco Neumann


Kathryn B. Laskey

unread,
Aug 3, 2022, 10:45:04 AM8/3/22
to ontolo...@googlegroups.com
Dear Mihai,

“I would not accept Wikipedia as a reference” … then please feel free not to follow the link. My note stands on its own.

I included the link for those who might be interested in further reading and wanted a paywall-free link; as always, caveat emptor. Yes, Wikipedia is variable in quality; however, it can be quite useful if one recognizes its limitations.  The articles to which I referred were quite reasonable (I read them before citing them) and contain links to primary sources. I would not cite Wikipedia in a journal article, but I often find Wikipedia useful as an entry point to a topic and a source of additional references, and I do not see anything wrong with linking to it in a discussion forum such as Ontolog.

I thank David Poole for a very accessible explanation of Gödel’s basic undecidability argument. It’s better than Wikipedia. :-)

I reiterate that your statement, "[Turing] defines a very specific machine—the algorithmic machine. And he proves that it CANNOT produce mathematical proofs,” is not a true statement. Linking to Turing’s article doesn’t make an untrue statement true. What you said isn’t what Turing proved.

Computers can and do produce proofs. What they cannot do is decide whether ANY statement is true or false. Computers can prove some statements are true; they can prove other statements are false. But some statements are undecidable. No computer, and no human, can prove or disprove an undecidable statement.  If you ask an automatic theorem prover to prove an undecidable statement, it will go on forever, and never reach a conclusion.  That is what Turing proved (more precisely, it is mathematically equivalent to what Turing proved).
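A minimal Python sketch of that behavior, under stated assumptions (check_proof and candidate_proofs are hypothetical stand-ins for a real proof checker and proof enumerator): proof search is semi-decidable, so on a provable statement it halts, and on an undecidable one it runs forever.

# Hedged illustration of semi-decidable proof search.
from itertools import count

def candidate_proofs():
    # Stand-in enumerator; a real prover enumerates finite derivations.
    for n in count(1):
        yield "derivation-%d" % n

def search_for_proof(statement, check_proof):
    for proof in candidate_proofs():
        if check_proof(statement, proof):   # hypothetical checker
            return proof                    # halts iff a proof exists
    # Never reached: if the statement has no proof (and no disproof),
    # this loop simply never returns a conclusion.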

Kathy

Nadin, Mihai

unread,
Aug 3, 2022, 11:01:57 AM8/3/22
to ontolo...@googlegroups.com

Dear and respected David Poole, Dear and respected Alex Shkotin,

Dear and respected Kathryn Laskey,

Dear and respected colleagues,

I shall serve you (against my stand on the matter) the “Wikipedia recipe”:

 

In 1936 and 1937, Alonzo Church and Alan Turing showed independently that there can be no answer to the Entscheidungsproblem: it is impossible for an algorithm to decide whether arbitrary statements in arithmetic are true or false. For this reason, there can be no solution to the Entscheidungsproblem.

 

In short: this is what I stated in my initial posting.  I shall ask all of you to read Hilbert’s challenge (so that we discuss the same problem, and not different examples), and to read Turing’s paper.

 

Summary: does there exist a “definite method” that, when given any possible statement in mathematics, can decide whether that statement is true or false?  Hilbert believed that the answer to this question would be “yes”.  Alan Turing proved that the answer to the decidability problem was “no” (Hodges, 1983; Turing, 1936).

Machines cannot do proofs.

 

For me this ends the conversation.

 

Best wishes.

 

Mihai Nadin

 


Kathryn B. Laskey

unread,
Aug 3, 2022, 11:43:16 AM8/3/22
to ontolo...@googlegroups.com
Respected Mihai,

On Aug 3, 2022, at 11:01 AM, Nadin, Mihai <na...@utdallas.edu> wrote:

Summary: does there exist a “definite method” that, when given any possible statement in mathematics, can decide whether that statement is true or false?  Hilbert believed that the answer to this question would be “yes”.  Alan Turing proved that the answer to the decidability problem was “no” (Hodges, 1983; Turing, 1936).

This is correct, with the caveat that Gödel’s incompleteness theorem came first; Turing learned about Gödel’s result and applied it to the halting problem.

Machines cannot do proofs.

This is not correct.

There are some statements machines can neither prove nor disprove.

That is NOT the same thing as saying machines cannot do proofs.

“There are some humans who cannot run a four-minute mile” is not the same thing as “Humans cannot run a four-minute mile.”

“There are people George cannot beat at chess” is not the same thing as “George cannot win at chess.”

I hope this clarifies what I am and am not saying.

Kathy

Michael DeBellis

unread,
Aug 3, 2022, 12:44:14 PM8/3/22
to ontolo...@googlegroups.com

Summary: does there exist a “definite method” that, when given any possible statement in mathematics, can decide whether that statement is true or false?  Hilbert believed that the answer to this question would be “yes”.  Alan Turing proved that the answer to the decidability problem was “no” (Hodges, 1983; Turing, 1936).
Machines cannot do proofs.

 It is nice to have a question on the forum where for once there is a simple and unambiguous answer. You are wrong.

 The Entscheidungsproblem is:

Was there a method by which it could be decided, for any given mathematical proposition, whether or not it was provable?


What Turing (and Church) proved is that the answer to this question is no. There is such a method for propositional logic, called truth tables. Hilbert and most mathematicians before Turing and Church believed there should be such an algorithm for FOL, but they proved that there isn't.
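As a concrete illustration of that decision procedure, here is a minimal truth-table checker in Python (a sketch of mine, not from any post in this thread):

# Decide a propositional formula by brute force: evaluate it on
# all 2**n rows of its truth table.
from itertools import product

def is_tautology(formula, n_vars):
    return all(formula(*row) for row in product([False, True], repeat=n_vars))

implies = lambda a, b: (not a) or b

# Example: ((p -> q) and p) -> q  (modus ponens) is a tautology.
print(is_tautology(lambda p, q: implies(implies(p, q) and p, q), 2))  # True
# No such always-terminating procedure can exist for full FOL.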

That in no way means that machines can't do proofs. For one thing, everything that Turing/Church proved applies to humans as well as to machines. In fact, when they published their proofs they were talking about the capability of HUMANS to prove certain things, NOT MACHINES, for the simple reason that there were no computers when they published their proofs in the 1930s. The Turing machine that Turing used in his proof was not (at the time) meant as a model of an actual machine. It was a formalism like a Finite State Automaton (FSA), except that an FSA has no unbounded memory of previous states and a Turing machine does.

But saying that there is no one algorithm that can prove every mathematical proposition doesn't mean that you can never prove any mathematical proposition. If that were what Turing/Church proved, then logicians would all be out of jobs, because their proofs apply to humans as well as machines.

Not only can machines do proofs, but some of the most significant work in AI has been the use of machines to do proofs. One of the first AI systems, developed by Newell and Simon, was a proof assistant. People in the Formal Methods community have been using machines to do proofs for decades. The work by people like Cordell Green and Doug Smith at the Kestrel Institute is a great example. They developed very sophisticated tools with which one could design new formal languages and then use them to prove sets of formulas (and then generate code from them). The idea was to prove an algorithm correct and then use meaning-preserving transformations, so you had code that was provably correct (at least correct in the sense that it did what the logical specification defined). For an example see: https://www.researchgate.net/publication/2759435_KIDS_A_Semi-Automatic_Program_Development_System

The Web Ontology Language (OWL) has reasoners that support it which are theorem provers (i.e., they do proofs). That is why OWL is not a full implementation of First-Order Logic (FOL) but rather a decidable subset called Description Logic: if OWL were a full implementation of FOL, it would be subject to the Turing/Church result, and some models would cause the reasoner never to terminate. (BTW, I know some people think that was a bad design decision on the part of the OWL designers; I'm not making a claim either way, just explaining why it was made.)
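For readers who want to see a DL reasoner do a proof, here is a hedged sketch using the owlready2 Python package (assumptions on my part: owlready2 plus a Java runtime for its bundled HermiT reasoner are installed, and the ontology IRI is made up):

# Hedged sketch: a DL reasoner proving a trivial subsumption entailment.
from owlready2 import get_ontology, Thing, sync_reasoner

onto = get_ontology("http://example.org/toy.owl")   # hypothetical IRI

with onto:
    class Person(Thing): pass
    class Student(Person): pass    # Student is subsumed by Person

alice = Student("alice")

with onto:
    sync_reasoner()                # runs HermiT, a DL theorem prover

# The reasoner derives (rather than merely stores) that alice is a
# Person; because OWL 2 DL is decidable, this call must terminate.
print(Person in alice.INDIRECT_is_a)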

One last thing: some people still hold on to the hope that there may be other ways to prove things that aren't subject to the restrictions defined by Turing/Church. The Church-Turing thesis is that anything that can be computed can be computed on a Turing machine. It is a thesis because there is no way to prove it, but to date no one has found any way of doing proofs that isn't subject to the limitations of Turing/Church.

Michael




Alex Shkotin

unread,
Aug 3, 2022, 12:44:47 PM8/3/22
to ontolo...@googlegroups.com
Dear Mihai Nadin,

For me to end the conversation, I should point out that a formal proof of any particular formal hypothesis is just a special kind of structure. We discussed this a little with Jon Awbrey in another thread, about his proof animation.
We can and do use machines to find a whole proof, or a step of a proof; see Coq if you wish.
I am not sure it's a good idea to ask you for a proof of "Machines cannot do proofs." :-)
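For readers who have not used a proof assistant, here is a small taste in Lean 4 (a close cousin of the Coq system mentioned above; my sketch, not from the thread):

-- Machine-checked proofs: the kernel verifies every step.
-- `decide` makes the machine find the proof of a decidable claim.
example : 2 + 2 = 4 := by decide

-- A checked propositional proof: p ∧ q → q ∧ p.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun h => ⟨h.2, h.1⟩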

Alex

Wed, Aug 3, 2022 at 18:01, Nadin, Mihai <na...@utdallas.edu>:

John F Sowa

unread,
Aug 4, 2022, 12:24:24 AM8/4/22
to ontolo...@googlegroups.com, ontolog...@googlegroups.com
Michael,
 
One important point about undecidability:  To write an undecidable statement in FOL requires a very solid understanding of logic.  There are about a thousand people who subscribe to Ontolog Forum, and I would bet that 99% of them could not write an undecidable statement in FOL no matter how hard they tried.
 
As an example, the Cyc project was founded in 1984.  By 2004, it had one thousand person-years of development (an average of 50 people for 20 years).  In all that time, the Cyc language was undecidable, yet nobody had ever written a specification that was undecidable.  But OWL was hamstrung by a restriction that served no useful purpose.
 
Some comments on some of your comments:
 
MDB> Not only can machines do proofs, but some of the most significant work in AI has been the use of machines to do proofs. One of the first AI systems developed by Newell and Simon was a Proof Assistant.
 
Newell & Simon made important contributions to AI, but their proof assistant wasn't one of them.
 
Around 1960, Hao Wang implemented a theorem prover for full FOL, which was blindingly fast on an IBM 704 -- a vacuum-tube machine with a maximum RAM of about 144 K bytes.  He used it to prove all 378 theorems of propositional and first-order logic in the Principia Mathematica by Whitehead and Russell.  Total time:  7 minutes of CPU time -- about 1.1 seconds per theorem.
 
The cell phone in your pocket, if it ran Wang's program, would take much less than a second to prove all 378. 
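In the same spirit, a modern prover dispatches such propositional theorems in microseconds. A hedged sketch using the z3-solver Python bindings (an assumption; these are merely Principia-flavor examples, not Wang's program):

# z3's prove() prints "proved" or a counterexample for each claim.
from z3 import Bools, Implies, Or, And, prove

p, q = Bools("p q")
prove(Implies(q, Implies(p, q)))    # q -> (p -> q)
prove(Implies(p, Or(p, q)))         # p -> (p or q)
prove(Implies(And(p, q), p))        # (p and q) -> p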
 
MDB>. The work by people like Cordell Green and Doug Smith at the Kestrel Institute is a great example. They developed very sophisticated tools where one could design new formal languages and then use them to prove sets of formulas (and then generate code from them). The idea was to prove an algorithm was correct and then use meaning preserving transformation so you had code that was provably correct (at least correct in the sense that it did what the logical specification defined). For an example see: https://www.researchgate.net/publication/2759435_KIDS_A_Semi-Automatic_Program_Development_System 
 
Green & Smith did some good work in their day.  But their system (in 1994) was never used for any practical application of any kind. 
 
Fundamental weakness:  The most difficult part of programming is transferring an idea in a programmer's head to a complete specification of any kind.   The fact that the programmers were writing in logic rather than a procedural language did not prevent them from making the usual kinds of mistakes. 
 
MDB> The Web Ontology Language (OWL) has reasoners that support it which are theorem provers (i.e., they do proofs). That is why OWL is not a full implementation of First Order Logic (FOL) but rather a subset called Description Logic because if OWL were a full implementation of FOL then it would be subject to the Turing/Church proof and some models would cause the reasoner to never terminate.
 
No!  No!! and No!!!   Please read or reread "Fads and fallacies about logic", http://jfsowa.com/pubs/fflogic.pdf 
 
MDB>, I know some people think that is a bad design decision on the part of the OWL designers, I'm not making a claim either way on that just explaining why it was made.
 
Those people include me and everybody else who has ever implemented a theorem prover for FOL.  Google TPTP (Thousands of Problems for Theorem Provers).
 
MDB> One last thing: Some people still hold on to the hope that there may be other ways to prove things that aren't subject to the restrictions defined by Turing/Church. The Turing Church thesis is that anything that can be computed can be computed on a Turing machine. This is a thesis because there is no way to prove it but to date no one has found any other ways to do proofs that aren't subject to the limitations of Turing/Church.
 
The Turing/Church thesis is irrelevant for practical programming.  It is also irrelevant for automated theorem proving. If you don't believe me, see the many, many references I cite in http://jfsowa.com/ikl .
 
John

Alex Shkotin

unread,
Aug 4, 2022, 4:27:22 AM8/4/22
to ontolo...@googlegroups.com
MDB, JFS, and All,

Just in addition:

1) Most engineering work with models is direct calculation, not reasoning. Well, when the system does not work properly we sometimes need a lot of reasoning :-)
This is why the undecidability of some theory (not language) is not very important: on our models we use the axioms and theorems we have had a chance to prove.

2) One amazing fact about decidability: "This fact allowed Tarski to prove that Euclidean geometry is decidable: there exists an algorithm which can determine the truth or falsity of any sentence. Tarski's axiomatization is also complete. This does not contradict Gödel's first incompleteness theorem, because Tarski's theory lacks the expressive power needed to interpret Robinson arithmetic (Franzén 2005, pp. 25–26)." [1] See the sketch after this list.

3) In practice we face not undecidability but the inexpressibility in some language (OWL2 is a famous one) of features expressible in NL or FOL. This is why annotation is so important in formal ontology ;-)
By the way, this is why CNL is important: any scientific or engineering idea may be expressed in a CNL :-)
Well, as far as I know, R. Montague was the first to claim that "English is a formal language"!

4) Sometimes I think that CL is a HOL kind of language, as it is possible to write something like f(f) = f there.
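The sketch promised in point 2: modern solvers implement decision procedures for nonlinear real arithmetic, the quantifier-free fragment of Tarski's theory of real closed fields. A hedged example with the z3-solver Python package (an assumption):

# Claim: for all real x, x^2 - 2x + 1 >= 0. Decide it by refuting
# the negation; over the reals this is guaranteed to terminate.
from z3 import Real, Solver, unsat

x = Real("x")
s = Solver()
s.add(x * x - 2 * x + 1 < 0)
print(s.check() == unsat)   # True: no real counterexample, claim holds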

Alex 


Thu, Aug 4, 2022 at 07:24, John F Sowa <so...@bestweb.net>:

Azamat Abdoullaev

unread,
Aug 4, 2022, 9:56:35 AM8/4/22
to ontolo...@googlegroups.com
We have diverted from the thread line "Why machines will never rule the world".
I share David Poole's concern:

P.S. I must admit I’m a bit amused about this thread. Machines *do* rule the world. If we got rid of machines, society would fail to function and 90% (99%?) of us would starve to death. Nearly all decisions of society are made by machines; from logistics to manufacturing to advertising to monetary transactions to ….

Michael DeBellis

unread,
Aug 4, 2022, 10:28:50 AM8/4/22
to ontolo...@googlegroups.com
John,    Thanks for the corrections and additional info.  

I agree completely about formal methods. There were a few years, long ago, when I thought they were the "Silver Bullet" (even though I know the point of that paper is that there is no silver bullet). I came to realize similar things. The logical languages were too difficult for the average programmer. But more importantly, most of the complexity in real software development didn't come from translating specs to code but from getting requirements correct, GUI design, system integration, and maintenance (and this was backed up by work done by people like Bill Curtis at MCC, who did actual empirical analysis of real developers building real systems, something I think our field could use a lot more of). That's why I'm such a zealot about Agile. I think it solves the real problems much better than any code generator or any tool in general. People who practice Agile have a suite of tools like Ant that they regularly use, but what really matters is the change to the software development process... but that's another thread.

Michael


