>> What I call "a Turing firewall": software has no ability to know its underlying hardware implementation. It is an inviolable separation between layers of abstraction, which makes the lower levels invisible to the layers above.
> That's roughly true, but not exactly. If you think of intelligence implemented on a computer, it would make a difference whether it had a true (hardware) random number generator or not.
> It would make a difference whether it were a quantum computer or not.
> And going the other way, what if it didn't have a multiply operation?
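To make the random-number point above concrete, here is a minimal Python sketch (an illustration only, using just the standard library): a seeded software generator is fully reproducible, whereas entropy requested from the operating system, which may ultimately come from a hardware source, is not.

```python
import os
import random

# A software PRNG is a deterministic function of its seed: re-run it
# and you get exactly the same "random" stream.
a = random.Random(42)
b = random.Random(42)
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

# os.urandom draws entropy from the operating system, which may in turn
# come from a hardware source; successive calls are not reproducible.
print(os.urandom(8).hex())
print(os.urandom(8).hex())  # almost certainly different from the line above
```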
> Why do you even use the word AI? Why can't you just use the words "computer program"? Aaa... hype. Makes you look more intelligent than you actually are! Look at me: AI! AI! AI! Ooo.... so smaaaart! =))))))))))))))))))))
....
> And going the other way, what if it didn't have a multiply operation?

That would be no problem as long as the AI still had the addition operation: just do repeated additions, although it would slow things down. But you could start removing more and more operations until you got all the way down to First Order Logic, and then an AI could actually prove its own consistency. Kurt Gödel showed that a few years before he came up with his famous incompleteness theorem, in what we now call Gödel's Completeness Theorem. His later Incompleteness Theorem only applies to logical systems powerful enough to do arithmetic, and you can't do arithmetic with nothing but first order logic. The trouble is you couldn't really say an Artificial Intelligence was intelligent if it couldn't even pass a first grade arithmetic test.
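As a concrete sketch of the repeated-addition idea, here is a minimal Python version (an illustration using only addition and comparison, not anyone's actual implementation):

```python
def multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers using only addition and comparison.

    Correct, but it takes b additions, so removing the multiply
    operation costs speed and nothing else -- the trade-off described
    above.
    """
    product = 0
    count = 0
    while count < b:
        product = product + a
        count = count + 1
    return product

assert multiply(6, 7) == 42
assert multiply(0, 5) == 0
```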
> There are many levels of intelligence. An octopus can't pass a first grade arithmetic test, but it can escape through a difficult maze.
> A rock, along with many other things, can't pass a first grade arithmetic test either; but that doesn't show that anything that can't pass a first grade arithmetic test is unintelligent or unconscious, as for example an octopus or a 3-year-old child.
> There are easier and harder tests than the Turing test. I don't know why you say it's the only test we have. Also: would passing the Argonov test (which I described in my document on whether zombies are possible) not be a sufficient proof of consciousness? Note that the Argonov test is much harder to pass than the Turing test.
On Thu, Jul 11, 2024 at 5:33 PM Jason Resch <jason...@gmail.com> wrote:

> Consider a deterministic intelligent machine having no innate philosophical knowledge or philosophical discussions while learning. Also, the machine does not contain informational models of other creatures (that may implicitly or explicitly contain knowledge about these creatures' consciousness). If, under these conditions, the machine produces phenomenal judgments on all problematic properties of consciousness, then, according to [the postulates], materialism is true and the machine is conscious.

Who judges whether the "phenomenal judgments" of the machine are correct or incorrect? Even humans can't agree among themselves about most philosophical matters; that's certainly true of members of this list.
And the fact is many, perhaps most, human beings don't think about deep philosophical questions at all; they find it all to be a big bore. So does that mean they're philosophical zombies?
And just because a machine can pontificate about consciousness, what reason, other than Argonov's authority, would I have for believing the machine was conscious?
I'm going to take a break from the list right now because I wanna watch Joe Biden's new press conference.
On Thu, Jul 11, 2024 at 4:37 PM Brent Meeker <meeke...@gmail.com> wrote:
> A rock, along with many other things, can't pass a first grade arithmetic test either; but that doesn't show that anything that can't pass a first grade arithmetic test is unintelligent or unconscious, as for example an octopus or a 3-year-old child.
And because of their failure to pass a first grade arithmetic test, we would say that a rock, an octopus, and a three-year-old child are not behaving very intelligently.
But as I said before, the Turing Test is not perfect; however, it's all we've got. If something passes the test, then it's intelligent and conscious. If it fails the test, then it may or may not be intelligent and/or conscious.
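The asymmetry being claimed here can be written out as a one-directional inference rule. A minimal Python sketch (the function name and the judging procedure are stand-ins, not an established API):

```python
from typing import Optional

def turing_verdict(passed_test: bool) -> Optional[bool]:
    """One-directional inference from the Turing test.

    A pass is taken as sufficient evidence of intelligence and
    consciousness; a failure settles nothing, since a conscious being
    (an octopus, a three-year-old) may still fail.
    """
    return True if passed_test else None

assert turing_verdict(True) is True
assert turing_verdict(False) is None  # undetermined, not "unconscious"
```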
>> Who judges whether the "phenomenal judgments" of the machine are correct or incorrect? Even humans can't agree among themselves about most philosophical matters; that's certainly true of members of this list.

> They don't have to be correct, as far as I know. The machine just has to make phenomenal judgements (without prior training on such topics).
> Failing the test doesn't imply a lack of consciousness. But passing the test implies the presence of consciousness.
> There must be a source of information to permit the making of phenomenal judgements, and since the machine was not trained on them, what else would you propose that source could be, other than consciousness?
> Do you think that passing the Argonov test would constitute positive proof of consciousness?
On Thu, Jul 11, 2024 at 8:09 PM Brent Meeker <meeke...@gmail.com> wrote:

> In case you've forgotten, the Turing test was based on text-only communication, with an interlocutor asked to distinguish between a computer pretending to be a human and a man or woman pretending to be a woman or man.
Yes, but that is an unimportant detail. The essence of the Turing Test is that whatever method you use to determine the consciousness, or lack of it, of one of your fellow human beings, you should use that same method when judging the consciousness of a computer.
> It's already been passed by some LLMs by dumbing-down their responses.

Don't you find that fact to be compelling? An AI needs to play dumb in order to fool a human into thinking it is human.
> An AI needs to play dumb in order to fool a human into thinking it is human. Don't you find that fact to be compelling?

No, it only passed because the human interlocutor didn't ask the right questions, like "Where are you?" and "Is it raining outside?"
On Fri, Jul 12, 2024 at 7:17 PM Brent Meeker <meeke...@gmail.com> wrote:
>> An AI needs to play dumb in order to fool a human into thinking it is human. Don't you find that fact to be compelling?

> No, it only passed because the human interlocutor didn't ask the right questions, like "Where are you?" and "Is it raining outside?"

If the AI was trying to deceive the human into believing it was not a computer, then it would simply say something like "I am in Vancouver, Canada, and it's not raining outside, it's snowing."
And I don't see how a question like that could help you figure out the nature of an AI's mind, or any mind for that matter, even if the AI was ordered to tell the truth. The position of a mind in 3D space is a nebulous concept; if your brain is in one place and your sense organs are in another place, and you're thinking about yet another place, then where exactly is the position of your mind?
I think it's a nonsense question because "you" should not be thought of as a pronoun but as an adjective. You are the way atoms behave when they are organized in a Brentmeekerian way.
So asking a question like that is like asking where "big" is located, or the color yellow.
On 7/13/2024 4:07 AM, John Clark wrote:
> If the AI was trying to deceive the human into believing it was not a computer, then it would simply say something like "I am in Vancouver, Canada, and it's not raining outside, it's snowing."

Which could easily be checked in real time. Any one question won't resolve whether it's a person or not, but a sequence can provide good evidence. Next question: "Is there a phone in your room?" Answer: "Yes." Call the number and see if anyone answers, etc. The point is that a human IS in a specific place and can act there. An LLM AI isn't anyplace in particular.

> And I don't see how a question like that could help you figure out the nature of an AI's mind, or any mind for that matter, even if the AI was ordered to tell the truth. The position of a mind in 3D space is a nebulous concept; if your brain is in one place and your sense organs are in another place, and you're thinking about yet another place, then where exactly is the position of your mind?

At other times you say consciousness is just how data feels when being processed. It's processed in your brain... which has a definite location. And I just asked "Where are you?", not "Where is your mind?"

> I think it's a nonsense question because "you" should not be thought of as a pronoun but as an adjective. You are the way atoms behave when they are organized in a Brentmeekerian way. So asking a question like that is like asking where "big" is located, or the color yellow.

And those atoms have a location in order to interact.

Brent
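Brent's procedure amounts to eliciting claims that can be verified against the world in real time. A toy Python sketch, where the scripted answers and the weather lookup are hypothetical stand-ins for a real dialogue channel and a real observation, not actual APIs:

```python
def cross_check(ask, raining_in) -> bool:
    """One round of the protocol: elicit a checkable claim, then verify
    it independently. `ask` poses a question to the candidate;
    `raining_in` is an independent real-world lookup (both stand-ins)."""
    city = ask("Where are you?")
    claims_rain = ask("Is it raining outside?") == "yes"
    return claims_rain == raining_in(city)  # did the claim hold up?

# Toy demo: the candidate claims Vancouver with no rain, but the
# (stubbed) independent lookup says it is raining there.
script = {"Where are you?": "Vancouver", "Is it raining outside?": "no"}
print(cross_check(script.get, lambda city: True))  # -> False: caught out
```

A single mismatch proves nothing by itself, but as the argument above says, a sequence of such checks accumulates evidence.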
>> If the AI was trying to deceive the human into believing it was not a computer, then it would simply say something like "I am in Vancouver, Canada, and it's not raining outside, it's snowing."
> Which could easily be checked in real time.
>> I don't see how a question like that could help you figure out the nature of an AI's mind, or any mind for that matter, even if the AI was ordered to tell the truth. The position of a mind in 3D space is a nebulous concept; if your brain is in one place and your sense organs are in another place, and you're thinking about yet another place, then where exactly is the position of your mind?
> At other times you say consciousness is just how data feels when being processed.
> It's processed in your brain...which has a definite location.
> I just asked "Where are you?" Not "Where is your mind?"
>> I think it's a nonsense question because "you" should not be thought of as a pronoun but as an adjective. You are the way atoms behave when they are organized in a Brentmeekerian way.
> And those atoms have a location in order to interact.