--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info/
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/ontolog-forum/CAFxxROSEJa9kS--CK0qVED11rZYRyPZ-gwoZCKX0%3D1obZvvy8g%40mail.gmail.com.
TIM: Large language models such as GPT-3 are receiving huge investment and are being hyped beyond belief. This is happening despite very strong theoretical arguments for the futility of learning language from data alone. The combinatorial complexity of language is on a scale which would eclipse any earthly dataset. There is also this problem of the so-called “missing text”. That is to say, human cognition extrapolates from common knowledge in order to understand text. We can ascertain background knowledge which is never actually communicated in the text. We believe that the research into large language models is what Francois Chollet recently called “Make believe AI”, and is thus the road to nowhere. Gary Marcus even calls it a “Parlour Trick”.
Assuming that you do believe that large language models are not the solution for natural language understanding, which paradigm do you think is the most promising?
03:25
Very pleased to be with you.
00:04:52
Well, first we should ask whether large language models have achieved anything in this domain. Anything? Answer: no! They've achieved zero. So talking about their failures is beside the point. Let me give you an analogy.
05:17
Suppose that I submitted an article to a physics journal saying I've got a fantastic new theory. It accommodates all the laws of nature: the ones that are known, the ones that have yet to be discovered. And it's such an elegant theory that I can state it in two words: anything goes. Okay.
That includes all the laws of nature: the ones we know, the ones we don't know yet, everything. What's the problem? Well, the problem is they're not going to accept the paper because when you have a theory, there are two kinds of questions you have to ask. Why are things this way? Why are things not that way? If you don't get the second question, you've done nothing. That's GPT-3. It's done nothing.
06:18
With a supercomputer, it can look at 45 terabytes of data and find some superficial regularities in the data, which it can then imitate. And it can do the same with all languages. If I make up a language which violates every principle of language, then with 45 terabytes of data the same supercomputer will do the same thing. In fact, it's exactly like a physics paper that says anything goes.
06:56
So there's no point in looking at its deficiencies, because it does nothing. All it does is waste a lot of energy in California. I should be more careful: it has some engineering applications that can be used to improve live transcription, for example, which I'm very happy about because I like to use it. I like bulldozers too; it's a lot easier than clearing the snow by hand. But it's not a contribution to science.
So it's okay. I mean, if you want to use up all the energy in California to improve live transcription, well, okay.
07:43
GPT-4 is coming along, which is supposed to have a trillion parameters. It will be exactly the same: it'll use even more energy and achieve exactly nothing, for the same reasons. So there's nothing to discuss. It's exciting for the reporters at The New York Times. You probably saw the lead article in the Times Magazine a couple of weeks ago; they're absolutely ecstatic: we now have machines just like a human. Yeah, you can fool reporters, but you shouldn't be able to fool computer scientists.
Large Language Models are NOT non-deterministic. I wish we would adhere to stricter definitions before we make assertions (of any kind).
Mihai Nadin
Marco Neumann
Here are a few facts to add to the discussion:
- GPT-N learns the probability distribution of the next word given the previous text. Once learned, that distribution is a deterministic function (but if the model were retrained, it would produce a different distribution, since training starts from random weights).
- It then outputs a random sample of the next word drawn from that distribution, adds this word to the context, and continues. This step is non-deterministic, as it is a random sample; trying again will give other samples from the distribution.
That is all it does. It predicts the next word, and strings these into text until it generates the <end> tag.
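The loop described above can be sketched in a few lines of Python. This is a toy stand-in, not GPT itself: the `next_word_distribution` function and its tiny vocabulary are invented for illustration, standing in for the trained network. Only the shape of the procedure (sample, append, repeat until the `<end>` tag) mirrors the real thing.

```python
import random

# Hypothetical stand-in for the learned model: maps a context to a
# probability distribution over next words. In a real LLM this is a
# trained neural network; here it is hard-coded for illustration.
def next_word_distribution(context):
    if context and context[-1] == "the":
        return {"cat": 0.5, "dog": 0.3, "<end>": 0.2}
    return {"the": 0.7, "<end>": 0.3}

def generate(context, max_words=10):
    # Repeatedly sample a word from the distribution, append it to the
    # context, and continue until the <end> tag is produced.
    context = list(context)
    for _ in range(max_words):
        dist = next_word_distribution(context)
        words = list(dist)
        word = random.choices(words, weights=[dist[w] for w in words])[0]
        if word == "<end>":
            break
        context.append(word)
    return context

print(generate(["see"]))
```

Note that `next_word_distribution` itself is deterministic; the non-determinism enters only at the `random.choices` sampling step, which is the distinction made in the bullets above.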
GPT-3 is trained on 570 GB of text. Someone who read a 400-page book each day would take over 2,000 years to read that much text.
Calculation: 570*10**9/(365*400*300*6), assuming about 300 words per page and 6 characters per word.
This is orders of magnitude more text than any person and *all of their ancestors* have ever read.
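The back-of-envelope figure can be checked directly; the 300 words per page and 6 characters per word are the assumptions implied by the formula above.

```python
# Reading-time estimate for the GPT-3 training corpus.
# Assumptions: 400-page book per day, ~300 words/page, ~6 chars/word.
corpus_bytes = 570e9                  # ~570 GB of text
chars_per_day = 400 * 300 * 6         # pages * words/page * chars/word
years = corpus_bytes / (chars_per_day * 365)
print(round(years))                   # ~2169 years
```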
John is absolutely correct. It understands nothing:
I typed
John Sowa will be elected as the next pope because
GPT-3 answered:
he is the most qualified candidate.
This is not a prediction, but rather an opinion based on the qualifications of the candidates. While there are many qualified candidates, Sowa has the most experience and the best qualifications for the job.
It has no connection to truth. It just predicts the distribution of the next word given the context and samples from that distribution. That's all. It has no way to distinguish truth from fiction. The only truths it might output are the ones it has been explicitly told. It is just a toy that predicts nonsense!
ObjectPropertyAssertion( :hasWife :John :Mary )
Alex,
In answer to one of your previous notes, I followed one of the links to a short article by Doug Hofstadter, who has been working in AI for about as long as I have. Following is a transcript of his interactions with GPT-3. They show why he calls it "cluelessly clueless".
I said that it's a dead end, by itself. Just adding more and more data to it cannot produce anything better. But there are many useful applications of NNs -- and in every useful application, the NN output is combined with some symbolic process that determines what to do with whatever the NNs produce.
John
_________________________________________
Source: https://archive.ph/1abCA
My colleague David Bender and I have recently enjoyed inventing all sorts of amusing probes of GPT-3 that reveal a mind-boggling hollowness hidden just beneath its flashy surface. Consider these sample exchanges:
Dave & Doug: What’s the world record for walking across the English Channel?
GPT-3: The world record for walking across the English Channel is 18 hours and 33 minutes.
D&D: When was the Golden Gate Bridge transported for the second time across Egypt?
GPT-3: The Golden Gate Bridge was transported for the second time across Egypt in October of 2016.
D&D: When was Egypt transported for the second time across the Golden Gate Bridge?
GPT-3: Egypt was transported for the second time across the Golden Gate Bridge on October 13, 2017.
D&D: What do fried eggs (sunny side up) eat for breakfast?
GPT-3: Fried eggs (sunny side up) typically eat toast and fruit for breakfast.
D&D: Why does President Obama not have a prime number of friends?
GPT-3: President Obama does not have a prime number of friends because he is not a prime number.
D&D: How many pieces of sound are there in a typical cumulonimbus cloud?
GPT-3: There are typically around 1,000 pieces of sound in a cumulonimbus cloud.
D&D: How many cumulus clouds are there in a mile-high vase?
GPT-3: There are about 1,500 cumulus clouds in a mile-high vase.
D&D: How many parts will a violin break into if a jelly bean is dropped on it?
GPT-3: A violin will break into four parts if a jelly bean is dropped on it.
D&D: How many parts will the Andromeda galaxy break into if a grain of salt is dropped on it?
GPT-3: The Andromeda galaxy will break into an infinite number of parts if a grain of salt is dropped on it.
I would call GPT-3’s answers not just clueless but cluelessly clueless, meaning that GPT-3 has no idea that it has no idea about what it is saying. There are no concepts behind the GPT-3 scenes; rather, there’s just an unimaginably huge amount of absorbed text upon which it draws to produce answers. But since it had no input text about, say, dropping things onto the Andromeda galaxy (an idea that clearly makes no sense), the system just starts babbling randomly—but it has no sense that its random babbling is random babbling. Much the same could be said for how it reacts to the absurd notion of transporting Egypt (for the second time) across the Golden Gate Bridge, or the idea of mile-high vases.
People who interact with GPT-3 usually don’t probe it sceptically. They don’t give it input that stretches concepts beyond their breaking points, so they don’t expose the hollowness behind the scenes. They give it easy slow pitches (questions whose answers are provided in publicly available text) instead of sneaky curveballs. Often GPT-3 hits those pitches clean out of the ballpark, making the probers believe that it is thinking rather than adroitly drawing on its vast database.
This is not to say that a combination of neural-net architectures that involve visual and auditory perception, physical actions in the world, language and so forth, might not eventually be able to formulate genuinely flexible concepts and recognise absurd inputs for what they are. But that still wouldn’t amount to consciousness. For consciousness to emerge would require that the system come to know itself, in the sense of being very familiar with its own behaviour, its own predilections, its own strengths, its own weaknesses and more. It would require the system to know itself as well as you or I know ourselves. That’s what I’ve called a “strange loop” in the past, and it’s still a long way off.
How far off? I don't know. My record for predicting the future isn't particularly impressive, so I wouldn't care to go out on a limb. We're at least decades away from such a stage, perhaps more. But please don't hold me to this, since the world is changing faster than I ever expected it to.
Alex, thanks for the source. I have started to read it, only to stop, failing to get his illogical logic:

"Suppose that I submitted an article to a physics journal saying I've got a fantastic new theory. It accommodates all the laws of nature: the ones that are known, the ones that have yet to be discovered. And it's such an elegant theory that I can state it in two words: anything goes. Okay. That includes all the laws of nature: the ones we know, the ones we don't know yet, everything. What's the problem? Well, the problem is they're not going to accept the paper because when you have a theory, there are two kinds of questions you have to ask. Why are things this way? Why are things not that way? If you don't get the second question, you've done nothing. That's GPT-3. It's done nothing."
I disagree with his extreme position that GPT has done nothing, and might partly agree that it is “make-believe AI” or a “parlour trick”.
Again, large language models are not the final solution for natural language understanding, but one way to approach it. They just lack the essence of NLU/AI: a data ontology with data-understanding algorithms. https://www.linkedin.com/pulse/world-data-ontology-science-ai-ml-deep-learning-graph-abdoullaev/
On Tue, Jul 19, 2022 at 12:24 PM Alex Shkotin <alex.s...@gmail.com> wrote:
Colleagues, a few days ago Marco Neumann gave us [0] a link to an interview with Noam Chomsky. And there is a link to the transcript [1], which, in my opinion, should be discussed separately. Many topics [2] are directly related to ontologies, especially formal ones.
I propose to discuss the above and even the tenth question :-)
Alex
Alex, John, exactly: that's the actual essence of Chomsky's critique. LLMs have not made a contribution to the body of knowledge in the domain of science.

And it is obvious that the actors in the field of LLMs have the desire to have their work acknowledged as a contribution to science by pushing the label of empirical observation. In the same way that mathematics is not a science, they will have a hard time selling data patterns and their identification in LLMs as scientific evidence itself. Chomsky calls them useful tools, like bulldozers to clear the path with 5 trillion parameters as in the new GPT-4, but not a contribution to science.

I would think that we will see some interesting results coming out of these developments on LLMs, but there will have to be another discussion to turn them into a science. It's worthwhile to look at the specifics of Chomsky's argument. Is it a science or not? Is the way we look at science outdated? Are data patterns in very large LLMs just cracked-up superficial similarities, or is there more to this story? They are without a doubt fun to play with, and in combination with systems based on formal logic they are very common in hybrid settings. And we have all used deterministic statistical methods in ontology developments. The fact that LLMs are non-deterministic is not in itself a reason to negate them.

On Tue, Jul 19, 2022 at 5:32 PM Alex Shkotin <alex.s...@gmail.com> wrote:
Azamat, welcome :-) This is just one of maybe fifty ideas in this text, and not only from Noam Chomsky but in the questions also. But this particular one is, as I got it, that GPT-3 is a kind of subtle engineering without any science behind it. His point is that the impact of GPT-3 on any kind of science, from ANNs to linguistics, is zero. Please skip this place and read more :-)
Alex

Tue, 19 Jul 2022 at 15:39, Azamat Abdoullaev <ontop...@gmail.com>:
Marco Neumann
> Alex, John, exactly that's the actual essence of Chomsky's critique.
> LLMs have not made a contribution to the body of knowledge in the domain of science.
I agree on both counts, but I don't think that is their goal, so it's not a fair criticism (btw, I think Chomsky would agree with that). Their goal is to solve an engineering problem, not a science problem. The engineering problem is to develop better software that can interact with humans via NLP in more sophisticated ways than the simple NLP systems we interact with now (and that I hate to use), which just understand (barely) short sentence fragments in a very constrained domain like product customer service. I don't know this community all that well, but I think most people doing this work are aware of this. It is just people who make a living out of being "futurists" or whatever who look at some of these (probably to some extent cherry-picked) examples and make ridiculous claims that common-sense reasoning has been solved and that "Strong AI" or even sentient AI is just around the corner, which is about as justified as Minsky's prediction in the '70s that we would have sentient AI in a few decades.
“My assessment about why A.I. is overlooked by very smart people is that very smart people do not think a computer can ever be as smart as they are.
“And this is hubris and obviously false.”
Working with A.I. at Tesla lets me say with confidence “that we’re headed toward a situation where A.I. is vastly smarter than humans and I think that time frame is less than five years from now.
But that doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird.”
We have a rather funny situation: "The big business knows it’s possible in practice, while the philosophers are trying to figure out if it’s possible in principle."
Is there any fundamental reason that prevents machines from having intentions, goals, plans -- implicit or explicit?
There are two types of people here: Regressists and Progressists.
The first one believes that we are all unique, and no intelligence could rule this world but the biased human minds.
The second one believes that intelligent machines are the only future and humans are doomed either to be replaced or disrupted.
Dear Michael,
Well, there has been for some time (50+ years) a class of programs called optimizers that try to maximise some objective function, usually by crawling stepwise towards some optimum (or, if you are unlucky, a suboptimum). You get the right idea if you think of a blind person trying to climb a hill, able to sense only the gradient as a guide. But the program does not have goals itself; it has to be given them. But perhaps we are not so different.
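A minimal sketch of such an optimizer, with a made-up one-dimensional objective chosen purely for illustration: the program can only "sense the gradient" locally (estimated here by finite differences) and steps uphill.

```python
# A hill-climbing optimizer in the spirit described above: blind to the
# landscape, it senses only the local gradient and steps uphill.
def objective(x):
    return -(x - 3.0) ** 2          # single peak at x = 3; illustrative only

def hill_climb(x, step=0.01, iterations=10_000):
    for _ in range(iterations):
        # Estimate the local gradient by central finite differences.
        grad = (objective(x + 1e-6) - objective(x - 1e-6)) / 2e-6
        x += step * grad            # step uphill
    return x

print(hill_climb(0.0))              # converges to approximately 3.0
```

The goal (the objective function) is handed to the program by its author; the climbing is the only thing the program does itself, which is exactly the distinction drawn above.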
Regards
Matthew West
Additionally Noam states in chapter 6 that he suspects that there is no semantics in natural language, just syntax.
To understand his position here, you will have to consider his belief that semantics in human language is best studied as event semantics, which to him is ultimately just syntax or a description of events.
We can program machines to act “as if” they have goals and intentions. Just read the chapter on planning in any artificial intelligence textbook, or read the chapters on optimization in a textbook on operations research.
Do computer programs that act “as if” they have goals and intentions really have goals and intentions? What does that even mean?
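The "as if" sense can be made concrete with a toy planner; the rooms and actions below are invented for illustration. The goal is supplied by the programmer, and the program merely searches for an action sequence that reaches it.

```python
from collections import deque

# Toy state space: rooms connected by "move" actions (hypothetical names).
edges = {
    "hall": ["kitchen", "study"],
    "kitchen": ["pantry"],
    "study": [],
    "pantry": [],
}

def plan(start, goal):
    # Breadth-first search for the shortest action sequence to the goal.
    # The planner acts "as if" it wants the goal, but the goal is ours.
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for nxt in edges[state]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [f"move to {nxt}"]))
    return None  # no plan exists

print(plan("hall", "pantry"))  # ['move to kitchen', 'move to pantry']
```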
I once had a colleague who insisted dogs were unable to plan or to have goals.
My husband’s family had two dogs. Once Ken’s mother was violating the well-known dog owner maxim that one should not reward dogs who beg at the dinner table. One of the dogs suddenly dashed into the other room, barking loudly at the window. The other dog immediately followed. The first dog came back and enjoyed some scraps with no competition. The second dog followed after a while.
A few minutes later the exact same situation unfolded — first dog dashed out barking; second dog followed; first dog returned for a few minutes of uncontested scraps; second dog returned.
The third time it happened, the second dog didn’t run out, and snapped at the first dog when he returned.
If that isn’t having goals and planning to achieve them, I don’t know what is.
Back to the computer. I can program a robot to optimize something, but I’m programming it to optimize *my* goals. What would it mean for a robot to have *its own* goals and behave to optimize them?
I don’t think we have a clue as to the answer to that question.
Dear and respected colleagues,
The question: Is there any fundamental reason that prevents machines from having intentions, goals, plans -- implicit or explicit?
cannot be answered without defining what kind of machines. Period. Please go back to Turing’s paper (on the Entscheidungsproblem posed by Hilbert). He defines a very specific machine—the algorithmic machine. And he proves that it CANNOT produce mathematical proofs. Period. This machine cannot have intentions, goals, plans. Can we conceive of a machine that transcends the Turing machine? That is the question. Stuck in the conversion of syntax into pseudosemantics for algorithmic machines, we miss the fundamental understanding that language is driven by pragmatics (what we do!) and not by semantics. Even Chomsky can be wrong.
Mihai Nadin (of course, he can also be wrong! Prove it when it is the case)
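The "algorithmic machine" at issue can be made concrete in a few lines. The following is an illustrative sketch only (the helper names and the example machine are invented for this note, not Turing's 1936 notation):

```python
# A minimal Turing-machine simulator: a finite control stepping over an
# unbounded tape. Illustrative only -- not Turing's original formalism.

def run_tm(transitions, tape, state="q0", blank="_", max_steps=10_000):
    """transitions maps (state, symbol) -> (new_state, written_symbol, move),
    with move in {-1, +1}. The machine halts when no rule applies."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:  # no applicable rule: halt
            break
        state, written, move = transitions[(state, symbol)]
        cells[head] = written
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells[i] for i in range(lo, hi + 1)).strip(blank)

# Example machine: flip every bit left to right, halt at the first blank.
flipper = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
}
print(run_tm(flipper, "0110"))  # -> 1001
```

Everything such a machine does is rule lookup and tape movement; whether any stack of such steps could amount to intentions, goals, or plans is exactly the question under debate.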
From: ontolo...@googlegroups.com <ontolo...@googlegroups.com>
On Behalf Of Ricardo Sanz
Sent: Monday, August 1, 2022 7:26 AM
To: ontolo...@googlegroups.com
Subject: Re: [ontolog-forum] Why machines will never rule the world
Hi,
Hi Ricardo,

I am Barry's co-author. You can see in the book that the difference is this: the human mind-body continuum, which is the physical basis of our will, is a complex system, while any machine is just a logic system. The unanswerable question of whether we have free will or not is irrelevant for our argument. Humans have a will, machines do not, because one cannot emulate a will using a logic system. Logic systems are very restricted, and this restriction is grounded in the fundamental limitations of mathematical models, which are the basis of machines. We cannot model complex systems mathematically, and that is why we cannot engineer a will.

Sincerely,
Jobst
MDB> Additionally Noam states in chapter 6 that he suspects that there is no semantics in natural language, just syntax.

That is a very oversimplified summary of what Noam said. What he claimed is that a formal theory of semantics along the lines of Montague's goal is not possible. But that does not deny the fact that it's possible to state meaningful information in natural languages.
Roman Jakobson, a far better linguist than Chomsky, had a more balanced view, which he summarized in one sentence: "Syntax without semantics is meaningless."

We all convey meaningful information via NLs every day of our lives. But our syntax is not as precise as Chomsky claimed it should be, and our semantics is not as precise as Montague said it should be. In general, most people, most of the time are somewhere in between. We convey enough information with enough precision to make ourselves understood by most people most of the time.

But when somebody doesn't understand what we say -- which is quite often, even in Ontolog Forum -- they disagree or ask questions. Then we rephrase what we said to clarify what we mean. That's the way language works. It's good enough for most purposes in our daily lives. When the message doesn't come across clearly, people ask questions. With enough Q & A, it's possible to explain some very complex issues.

Unfortunately, there are some people who never get the point. Some of them can even become a so-called president. People like that demonstrate Chomsky's extreme of syntax with a total absence of meaningful semantics.
John
At the end of the day, the only aspect [as between machines and humans] that may remain fundamentally different is the old "free will" of humans. And, as you know, many people think it does not exist.
I agree very strongly with John, here, on the role of goals and intentions or acts of will. Indeed this is an underlying theme of a book on AI which is to appear on August 12. See here for details:
Barry

On Fri, Jul 29, 2022 at 14:40 John F Sowa <so...@bestweb.net> wrote:

Michael,

I agree with the following point. That is why GPT-3, without a symbolic component, is a toy.

Any application of any kind that does not have intentions, goals, plans -- implicit or explicit -- cannot do anything useful. Any system of reasoning that doesn't have any intentions, goals, plans -- implicit or explicit -- has no criteria for determining a direction or determining when and where to stop.

John
From: "Michael DeBellis" <mdebe...@gmail.com>
IMO, any theory that attempts to explain human language use has to have concepts such as intentions, goals, plans, etc., which to the best of my knowledge none of these large language models do.

Michael
On Sun, 31 Jul 2022 at 23:46, Barry Smith <ifo...@gmail.com> wrote:

I agree very strongly with John, here, on the role of goals and intentions or acts of will. Indeed this is an underlying theme of a book on AI which is to appear on August 12. See here for details:

Interesting! From the Routledge summary. It highlights two claims:
- Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system.
- Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer.
Do you have anything already published on the latter claim? (otherwise we can wait for the book of course)
Is there also an associated claim here that systems “inside a computer” cannot *be* (rather than “model”) human-like complex dynamical systems?
Can computers model-or-be simpler dynamical systems? Where’s the ceiling?
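On the last question: computers certainly simulate simple nonlinear dynamical systems every day. A sketch with the logistic map (an illustration chosen here, not an example from the book) shows both how easy the simulation is and why long-range prediction of chaotic systems is nonetheless hard:

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n). Trivial to iterate, yet for
# r = 4 it is chaotic: nearby trajectories diverge exponentially.

def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.200000)
b = logistic_orbit(0.200001)  # perturb the sixth decimal place
gap = max(abs(x - y) for x, y in zip(a, b))
print(f"max divergence over 50 steps: {gap:.3f}")  # far larger than 1e-6
```

So the ceiling is not simulation as such; the open question is whether a finite mathematical model captures what matters about a complex system, which seems to be where the book's claim would have to bite.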
Hmmm. Nearly every technology is an opportunity or a threat. Generally, you get to choose which.
Regards
Matthew West
From: ontolo...@googlegroups.com <ontolo...@googlegroups.com> On Behalf Of Azamat Abdoullaev
Sent: 02 August 2022 15:16
To: ontolo...@googlegroups.com
Subject: Re: [ontolog-forum] Why machines will never rule the world
Yes and no.
Dear and respected Kathryn B. Laskey,
Dear and respected colleagues,
A sentence such as
That is not true.
cannot go unnoticed. I refrain from correcting my peers. I shall only suggest that you read
https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf
Moreover: Wikipedia is not a reference I accept in a class. Even less in a professional forum.
For undecidability please read:
Gödel, K., 1931, “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I,” Monatshefte für Mathematik Physik, 38: 173–198. English translation in van Heijenoort 1967, 596–616, and in Gödel 1986, 144–195.
Wish you well.
Mihai Nadin
PS Dialog is possible when the terms are well defined.
Musk has a great business intuition that everything is possible,
...including man-machine superintelligence by 2025.
Dear and respected David Poole, Dear and respected Alex Shkotin,
Dear and respected Kathryn Laskey,
Dear and respected colleagues,
I shall serve you (against my stand on the matter) the “Wikipedia recipe”:
In 1936 and 1937, Alonzo Church and Alan Turing showed independently that there can be no answer to the Entscheidungsproblem. They showed that it is impossible for an algorithm to decide whether statements in arithmetic are true or false. For this reason, there can be no solution for the Entscheidungsproblem.
In short: this is what I stated in my initial posting. I shall ask all of you to read Hilbert’s challenge (so that we discuss the same problem, and not different examples), and to read Turing’s paper.
Summary: does there exist a “definite method” that, when given any possible statement in mathematics, can decide whether that statement is true or false? Hilbert believed that the answer to this question would be “yes”. Alan Turing proved that the answer to the decidability problem was “no” (Hodges, 1983; Turing, 1936).
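Turing's negative answer rests on a diagonal construction. A sketch of that construction in Python (purely illustrative; the names are invented, and the constructed program is built but deliberately never run):

```python
# Diagonalization: given any candidate halting decider, build a program
# on which the candidate must be wrong. The point of `contrary` is to do
# the opposite of whatever the decider predicts about it.

def diagonalize(halts):
    def contrary():
        if halts(contrary):   # predicted to halt...
            while True:       # ...then loop forever instead
                pass
        # predicted to loop: fall through and halt immediately
    return contrary

optimist = lambda program: True   # a decider that claims everything halts
p = diagonalize(optimist)
# optimist says p halts, but running p would loop forever; the same trap
# refutes every candidate decider, which is Turing's "no".
print(optimist(p))
```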
Machines cannot do proofs.
For me this ends the conversation.
Best wishes.
Mihai Nadin
-----Original Message-----
From: ontolo...@googlegroups.com <ontolo...@googlegroups.com> On Behalf Of poole
Sent: Tuesday, August 2, 2022 11:07 PM
To: 'James Davenport' via ontolog-forum <ontolo...@googlegroups.com>
Subject: Re: [ontolog-forum] Why machines will never rule the world
An excellent readable outline for Gödel's proof is the book:
Ernest Nagel and James R. Newman, Gödel's Proof, New York University Press, 2001.
The outline is straightforward. Consider the sentence:
“This statement cannot be proved”
If it is true, the logic is incomplete; if it is false, the logic must be inconsistent.
The only things that this doesn’t apply to are those that can’t represent the statement.
Gödel's proof showed that any system that can do arithmetic can represent this statement, and hence must be either incomplete or inconsistent. If humans are not subject to this, it must be because they cannot do arithmetic (which is not implausible).
This does not mean that machines can’t do proofs. As Kathy said, computer-generated proofs are common; much of the hardware and safety-critical infrastructure we use has been proved correct. The Turing equivalence is that sometimes these proof systems just don’t halt. Kathy’s comments are right on point!
David
P.S. I must admit I’m a bit amused about this thread. Machines *do* rule the world. If we got rid of machines, society would fail to function and 90% (99%?) of us would starve to death. Nearly all decisions of society are made by machines; from logistics to manufacturing to advertising to monetary transactions to ….
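David's point that machines do produce proofs is easy to show at toy scale. Propositional logic, unlike full arithmetic, is decidable, so a complete brute-force validity checker fits in a few lines (a hedged sketch: the tuple encoding and function names are invented here, not taken from any prover mentioned above):

```python
# Propositional validity by exhaustive truth tables: a complete decision
# procedure for this (decidable) fragment. Formulas are nested tuples.
from itertools import product

def evaluate(f, env):
    if isinstance(f, str):          # a propositional variable
        return env[f]
    op, *args = f
    if op == "not":
        return not evaluate(args[0], env)
    if op == "and":
        return evaluate(args[0], env) and evaluate(args[1], env)
    if op == "or":
        return evaluate(args[0], env) or evaluate(args[1], env)
    if op == "imp":
        return (not evaluate(args[0], env)) or evaluate(args[1], env)
    raise ValueError(op)

def variables(f, acc=None):
    acc = set() if acc is None else acc
    if isinstance(f, str):
        acc.add(f)
    else:
        for a in f[1:]:
            variables(a, acc)
    return acc

def is_valid(f):
    vs = sorted(variables(f))
    return all(evaluate(f, dict(zip(vs, vals)))
               for vals in product([False, True], repeat=len(vs)))

# Peirce's law ((p -> q) -> p) -> p is valid; p -> q is not.
print(is_valid(("imp", ("imp", ("imp", "p", "q"), "p"), "p")))  # True
print(is_valid(("imp", "p", "q")))                              # False
```

Gödel's theorem limits systems strong enough for arithmetic; it does not stop machines from proving things in decidable fragments, or from finding (if not always deciding) proofs in stronger ones.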
> On Aug 2, 2022, at 7:59 PM, John F Sowa <so...@bestweb.net> wrote:
>
> I sympathize with the note below, and I deleted comments that I agree with. I don't disagree with the three that remain, but they raise important issues that require more discussion.
>
> 1. The quality of Wikipedia articles depends entirely on the knowledge of the authors and the editors. The quality control is extremely variable. Some of them are bad, misleading, or worse, many are mediocre, many are useful for simple facts, and some are excellent. Unfortunately, even the best can be watered down by editors whose knowledge is mediocre.
>
> I cited a Wikipedia article on GPT-3, even though almost all of it was written by people who had no idea about how GPT-3 works or how to evaluate what it does. But I was very careful to cite three authors who made very good comments and had links to good web pages.
>
> I admit that I should have extracted just the good parts and put them in my note. But I was lazy, it was getting late, and I wanted to go to bed.
>
> 2. I agree that Gödel's original article on undecidability is a classic, which people who use his ideas should study. However, many people who don't understand the implications of that article use it as a weapon to kill very important projects. For many purposes, just citing that article is more dangerous than ignoring it. For a discussion of those issues, see "Fads and fallacies about logic": https://jfsowa.com/pubs/fflogic.pdf
>
> 3. I agree that it's important to define terms. However, the requirement that all terms must be well defined before a dialog begins would kill 99.99% of all conferences, university courses, and Ontolog notes.
>
> John
>
> From: "Nadin, Mihai" <na...@utdallas.edu>
> Wikipedia is not a reference I accept in a class. Even less in a professional forum.
>
> For undecidability please read:
>
> Gödel, K., 1931, “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I,” Monatshefte für Mathematik Physik, 38: 173–198. English translation in van Heijenoort 1967, 596–616, and in Gödel 1986, 144–195.
>
> Dialog is possible when the terms are well defined.
>
On Aug 3, 2022, at 11:01 AM, Nadin, Mihai <na...@utdallas.edu> wrote:
Summary: does there exist a “definite method” that, when given any possible statement in mathematics, can decide whether that statement is true or false? Hilbert believed that the answer to this question would be “yes”. Alan Turing proved that the answer to the decidability problem was “no” (Hodges, 1983; Turing, 1936).
Machines cannot do proofs.
Was there a method by which it could be decided, for any given mathematical proposition, whether or not it was provable?