
For openers I submit an article that can get the ball rolling

Nic

May 18, 2023, 6:26:17 PM
If a computer can fool 70% of judges in a five-minute conversation into thinking it’s a person, the computer passes the test. Would passing the Turing test – something which now seems imminent – show that an AI has achieved thought and understanding?

From https://theconversation.com/chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai-204823

I invite all serious posters to participate.


--
Nic <N...@none.net>

Sn!pe

May 18, 2023, 7:20:52 PM
Your .sig separator is broken. It should comprise
"dash, dash, space, return" all on a line by itself.
This facilitates automatic stripping from quotation
by standards-compliant newsreader programs.
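For illustration, the stripping rule is simple enough to sketch in a few lines of Python (an illustrative sketch, not taken from any particular newsreader):

```python
def strip_signature(body: str) -> str:
    """Drop everything at and below the first "-- " delimiter line.

    Per the de-facto Usenet convention (also codified in RFC 3676),
    the signature separator is exactly dash, dash, space on a line
    by itself; a bare "--" without the trailing space does not count.
    """
    lines = body.splitlines()
    for i, line in enumerate(lines):
        if line == "-- ":
            # Keep only what precedes the separator.
            return "\n".join(lines[:i])
    return body  # no valid separator found; leave the body intact

reply = "I agree with your point.\n-- \nNic <N...@none.net>\n"
print(strip_signature(reply))  # prints: I agree with your point.
```

Note that a separator written as bare "--" fails the exact-match test, which is precisely why a broken separator defeats automatic stripping.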

Re: your question:
Does consciousness imply independent self-awareness?
If so then no, current AIs are not conscious, merely programmed.

Are AIs capable of deception by way of passing the Turing test?
I would say yes; IMO the Turing Test is insufficiently rigorous and
not fit for purpose.

--
^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

My pet rock Gordon mourns the death of privacy.

Andy Burns

May 19, 2023, 1:56:39 AM
Sn!pe wrote:

> Does consciousness imply independent self-awareness?
> If so then no, current AIs are not conscious, merely programmed.

Do you consider training separate from programming?

occam

May 19, 2023, 2:18:51 AM
On 19/05/2023 01:19, Sn!pe wrote:
> Nic <N...@none.net> wrote:
>
>> If a computer can fool 70% of judges in a five-minute conversation into
>> thinking it's a person, the computer passes the test. Would passing the
>> Turing test – something which now seems imminent – show that an AI has
>> achieved thought and understanding?
>>
>> From
>> <https://theconversation.com/chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai-204823>
>>
>> I invite all serious posters to participate.
>>
>>
>> --
>> Nic <N...@none.net>
>
> Your .sig separator is broken.

You should be aware that it is not only Nic's separator that is broken.
Nic is broken - has been for a long while. ('Nic' is short for
'nincompoop'.)


<snip>

>
> Re: your question:
> Does consciousness imply independent self-awareness?

That was not his question. His (implied) question was 'Is passing the
Turing Test the same as having consciousness and awareness?'

No, and no. There is a large chasm between 'pretending to be a human in
conversation' and 'self-awareness'. I blame the author of the article
for journalistic sensationalism.



Sn!pe

May 19, 2023, 6:01:11 AM
I think there are probably large areas in common but otherwise, yes.
What about the effect of how initial program parameters are set?
As I envisage it, that would be the control(s) to adjust the AI's bias^W
balance between altruism and evilness; definitely programming rather
than training.

Sn!pe

May 19, 2023, 6:01:11 AM
occam <oc...@nowhere.nix> wrote:
[...]
> > Re: your question:
> > Does consciousness imply independent self-awareness?
>
> That was not his question. His (implied) question was 'Is passing the
> Turing Test the same as having consciousness and awareness?'
>
> No, and no. There is a large chasm between 'pretending to be a human
> in conversation' and 'self-awareness'. I blame author of the article for
> journalistic sensationalism.

I agree.

Johnny

May 19, 2023, 8:36:21 AM
Training is for pets and children. Programming is for machines.

I don't see how a machine made of nuts, bolts, and electronic components
could ever be aware of its existence.

Blue-Maned_Hawk

May 21, 2023, 12:11:03 AM

A fundamental necessity of a Turing test is that it _must_ be
_impossible_ for a person to fail it—otherwise, we risk such a test
being used by the powerful to dehumanize groups as an excuse for atrocities.

--
⚗︎ | /blu.mɛin.dʰak/ | shortens to "Hawk" | he/him/his/himself/Mr.
bluemanedhawk.github.io
Bitches stole my whole ass ␔🭖᷿᪳𝼗᷍⏧𒒫𐻾ࣛ↉�⃣ quoted-printable, can't
have shit in Thunderbird 😩

Sn!pe

May 21, 2023, 12:15:39 AM
Blue-Maned_Hawk <bluema...@gmail.com> wrote:

> A fundamental necessity of a Turing test is that it _must_ be
> _impossible_ for a person to fail it—otherwise, we risk such a test
> being used by the powerful to dehumanize groups as an excuse for atrocities.

I bet that I could fail a Turing Test a Turing Test a Turing Test
a Turing Test a Turing Test a Turing Test a Turing Test a Turing
Test a Turing T[Ctrl-Alt-Del]

--
^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

My pet rock Gordon is spooked.

vallor

May 27, 2023, 4:16:19 AM
What if you programmed it to have neural nets that can learn, then you
trained that?
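As a toy illustration of that split (a from-scratch sketch, nothing to do with how any real AI is built): the *program* below fixes the architecture and the update rule, while the weights that make it compute logical AND come entirely from *training* on examples.

```python
# "Programming": fixing the architecture and learning rule in code.
# "Training": adjusting the weights from examples, not by editing code.

def predict(weights, bias, x):
    # A single linear threshold unit (a two-input perceptron).
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.1):
    # Classic perceptron rule: nudge weights toward each mistake.
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn logical AND purely from data.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # prints: [0, 0, 0, 1]
```

The behaviour ("compute AND") lives in the trained weights, not in any line of the source.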

And imagine what a thinking machine might think if they found you?

https://www.mit.edu/people/dpolicar/writing/prose/text/thinkingMeat.html

Obquote:
_ _ _ _ _ _ _
"They're made out of meat."

"Meat?"

"Meat. They're made out of meat."

"Meat?"

"There's no doubt about it. We picked several from different parts of the
planet, took them aboard our recon vessels, probed them all the way
through. They're completely meat."

"That's impossible. What about the radio signals? The messages to the
stars."

"They use the radio waves to talk, but the signals don't come from them.
The signals come from machines."

"So who made the machines? That's who we want to contact."

"They made the machines. That's what I'm trying to tell you. Meat made the
machines."

"That's ridiculous. How can meat make a machine? You're asking me to
believe in sentient meat."
[. . .]
_ _ _ _ _ _ _

--
-v

vallor

May 27, 2023, 4:23:47 AM
On Fri, 19 May 2023 00:19:23 +0100, Sn!pe wrote:

> Nic <N...@none.net> wrote:
>
>> If a computer can fool 70% of judges in a five-minute conversation into
>> thinking it's a person, the computer passes the test. Would passing the
>> Turing test – something which now seems imminent – show that an AI has
>> achieved thought and understanding?
>>
>> From
>> <https://theconversation.com/chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai-204823>
>>
>> I invite all serious posters to participate.
>>
>>
>> --
>> Nic <N...@none.net>
>
> Your .sig separator is broken. It should comprise "dash, dash, space,
> return" all on a line by itself.
> This facilitates automatic stripping from quotation by standards
> compliant Newsreader programs.
>
> Re: your question:
> Does consciousness imply independent self-awareness?
> If so then no, current AIs are not conscious, merely programmed.

Programmed *to "learn"*, then "taught".


> Are AIs capable of deception by way of passing the Turing test?
> I would say yes; IMO the Turing Test is insufficiently rigorous and not
> fit for purpose.

Would you be willing to suggest a better test?

--
-v

vallor

May 27, 2023, 4:39:00 AM
ChatGPT lets you share chat links now:

https://chat.openai.com/share/f480d3e5-e675-4b29-a7ca-5d0e528f3088

Let me know if that works properly.

Here is the chat, which I probably won't need to include in the future,
hopefully:
_ _ _ _ _ _ _
Q:
is there a better test of general intelligence than the Turing Test?

ChatGPT:
The Turing Test, proposed by the British mathematician and computer
scientist Alan Turing in 1950, has been a long-standing benchmark for
artificial intelligence. However, it is not without its critics and
limitations. Some believe it focuses too much on the ability to mimic
human conversation, which is just one aspect of intelligence.

In response to the shortcomings of the Turing Test, multiple alternatives
have been proposed over the years:

The Winograd Schema Challenge: This is a test of natural language
understanding, named after computer scientist Terry Winograd. It involves
resolving the reference of a pronoun in a sentence, which is a challenging
problem for computers but easy for humans. This test attempts to measure
an AI's understanding of a sentence's context, which is a key aspect of
intelligence.

The Lovelace 2.0 Test of Artificial Creativity and Intelligence: Named
after Ada Lovelace, this test assesses a machine's ability to create a
piece of work (like a poem, painting, or novel) that the machine's
programmers did not explicitly design it to create. This tests for
creativity, a hallmark of human intelligence that the Turing Test doesn't
measure.

The Coffee Test: Proposed by AI researcher Ben Goertzel, this test
requires a robot to go into an average American home and figure out how to
make coffee: find the coffee machine, find the coffee, add water, find a
mug, etc. This tests the AI's ability to perform tasks in an unfamiliar
environment, which is another aspect of intelligence.

The BICA (Biologically Inspired Cognitive Architectures) Test: This is a
much more comprehensive, albeit difficult, test that requires the AI to
simulate the entire human mind. This would involve demonstrating a full
range of human cognitive abilities, including emotion, self-awareness,
consciousness, and theory of mind.

The Animal-AI Olympics: This is a competition that evaluates AI based on
tasks commonly used in animal intelligence research. The tasks are
contained within a 3D environment where AI agents interact with objects,
navigate mazes, and use tools, among other things.

However, all of these tests have their limitations, and they each measure
different aspects of what we might consider intelligence. There may never
be a single definitive test of general intelligence, whether for humans,
animals, or machines, given that intelligence is multi-faceted and
context-dependent. It's also important to note that intelligence tests are
just tools; they don't necessarily define the upper limits of an AI
system's abilities or potential.
_ _ _ _ _ _ _
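The Winograd Schema Challenge mentioned above turns on minimal sentence pairs; here is the classic trophy/suitcase example from Levesque's paper, sketched as data just to show the shape of the test:

```python
# A classic Winograd schema: swapping a single word ("big" -> "small")
# flips which noun the pronoun "it" refers to, so surface statistics
# alone are of little help -- resolving it takes world knowledge.
schema = [
    ("The trophy doesn't fit in the suitcase because it is too big.",
     "the trophy"),
    ("The trophy doesn't fit in the suitcase because it is too small.",
     "the suitcase"),
]
for sentence, referent in schema:
    print(f"'it' = {referent}: {sentence}")
```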

Once again, it's a chat machine, not an answer machine. I wonder
how accurate that answer is?

--
-v

Sn!pe

May 27, 2023, 4:47:10 AM
vallor <val...@vallor.earth> wrote:

> On Fri, 19 May 2023 00:19:23 +0100, Sn!pe wrote:
>
> > Nic <N...@none.net> wrote:
> >
> >> If a computer can fool 70% of judges in a five-minute conversation into
> >> thinking it's a person, the computer passes the test. Would passing the
> >> Turing test – something which now seems imminent – show that an AI has
> >> achieved thought and understanding?
> >>
> >> From
> >> <https://theconversation.com/chatgpt-cant-think-consciousness-is-something-entirely-different-to-todays-ai-204823>
> >>
> >> I invite all serious posters to participate.
> >>
[...]
> > Re: your question:
> > Does consciousness imply independent self-awareness?
> > If so then no, current AIs are not conscious, merely programmed.
> >
>
> Programmed *to "learn"*, then "taught".
>
> >
> > Are AIs capable of deception by way of passing the Turing test?
> > I would say yes; IMO the Turing Test is insufficiently rigorous and not
> > fit for purpose.
> >
>
> Would you be willing to suggest a better test?
>

To what end? Feel free to DYOR and get back to us.

--
^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

My pet rock Gordon just is.

Sn!pe

May 27, 2023, 4:47:39 AM
vallor <val...@vallor.earth> wrote:

> On Fri, 19 May 2023 07:34:11 -0500, Johnny wrote:
>
> > On Fri, 19 May 2023 06:54:08 +0100 Andy Burns <use...@andyburns.uk>
> > wrote:
> >
> >> Sn!pe wrote:
> >>
> >> > Does consciousness imply independent self-awareness?
> >> > If so then no, current AIs are not conscious, merely programmed.
> >>
> >> Do you consider training separate from programming?
> >
> > Training is for pets and children. Programming is for machines.
> >
> > I don't see how a machine made of nuts bolts and electronic components
> > could ever be aware of it's existence.
>
> What if you programmed it to have neural nets that can learn, then you
> trained that?
>

See my answer in the adjacent subthread:
Message-ID: <1qaz4o6.3i5mv4m4hn7sN%snip...@gmail.com>

>
> And imagine what a thinking machine might think if they found you?
>

Yes, it's amusing to parody something by turning it on its head. I'm
sure I've read what follows somewhere before but I can't think where.
It's a nice little idea, well worthy of SF.

>
> <https://www.mit.edu/people/dpolicar/writing/prose/text/thinkingMeat.html>
>
> Obquote:
> _ _ _ _ _ _ _
> "They're made out of meat."
>
> "Meat?"
>
> "Meat. They're made out of meat."
>
> "Meat?"
>
> "There's no doubt about it. We picked several from different parts of the
> planet, took them aboard our recon vessels, probed them all the way
> through. They're completely meat."
>
> "That's impossible. What about the radio signals? The messages to the
> stars."
>
> "They use the radio waves to talk, but the signals don't come from them.
> The signals come from machines."
>
> "So who made the machines? That's who we want to contact."
>
> "They made the machines. That's what I'm trying to tell you. Meat made the
> machines."
>
> "That's ridiculous. How can meat make a machine? You're asking me to
> believe in sentient meat."
> [. . .]
> _ _ _ _ _ _ _


--
^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

My pet rock Gordon just is.

Sn!pe

May 27, 2023, 5:03:46 AM
In your article above, you appear to have used the chat machine exactly
as an answer machine, have you not? Whatever, I too wonder about the
accuracy of the answer. In the absence of citations we have no way to
check other than by searching for them with a real search engine, or
relying on e.g. Wikipedia that ~does~ provide cites.

Imagine the utility of having an all too plausible disinformation engine
readily available to e.g. an unscrupulous politician.

--
^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

My pet rock Gordon just is.

vallor

May 27, 2023, 5:25:47 AM
On Fri, 19 May 2023 10:59:23 +0100, Sn!pe wrote:

> Andy Burns <use...@andyburns.uk> wrote:
>
>> Sn!pe wrote:
>>
>> > Does consciousness imply independent self-awareness?
>> > If so then no, current AIs are not conscious, merely programmed.
>>
>> Do you consider training separate from programming?
>
> I think there are probably large areas in common but otherwise, yes.
> What about the effect of how initial program parameters are set?
> As I envisage it, that would be the control(s) to adjust the AI's bias^W
> balance between altruism and evilness; definitely programming rather
> than training.

ChatGPT is like a Mad Libs Matrix of shifting concepts that has these
amazing emergent properties that don't seem to be well understood.

(Please *don't* trust; *do* verify.)

--
-v

Sn!pe

May 27, 2023, 5:57:40 AM
Agreed.

--
^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

My pet rock Gordon just is.

vallor

May 27, 2023, 6:05:57 AM
Yes, and if I were smart, I'd have asked it for references.

As it goes, I could get a list of URLs out of it by asking for reference
URLs for each test:

https://chat.openai.com/share/016e5f52-1353-4274-ae04-b677d3c0187d

> Whatever, I too wonder about the accuracy of the answer. In the absence
> of citations we have no way to check other than by searching for them
> with a real search engine, or relying on e.g. Wikipedia that ~does~
> provide cites.

Well, if you ask it right, sometimes it will give you references.
(Sometimes...)

> Imagine the utility of having an all too plausible disinformation engine
> readily available to e.g. an unscrupulous politician.

You can always verify what it says -- and since I didn't do that with my
ChatGPT conversation before, I would say I gave a poor example.

IOW, I see your point.

--
-v

Sn!pe

May 27, 2023, 6:35:52 AM
vallor <val...@vallor.earth> wrote:
[Sn!pe]
> > In your article above, you appear to have used the chat machine exactly
> > as an answer machine, have you not?
>
> Yes, and if I were smart, I'd have asked it for references.
>
> As it goes, I could get a list of URLs out of it by asking for reference
> URLs for each test:
>
> https://chat.openai.com/share/016e5f52-1353-4274-ae04-b677d3c0187d
>
> > Whatever, I too wonder about the accuracy of the answer. In the absence
> > of citations we have no way to check other than by searching for them
> > with a real search engine, or relying on e.g. Wikipedia that ~does~
> > provide cites.
>
> Well, if you ask it right, sometimes it will give you references.
> (Sometimes...)
>
> > Imagine the utility of having an all too plausible disinformation engine
> > readily available to e.g. an unscrupulous politician.
>
> You can always verify what it says -- and since I didn't do that with my
> ChatGPT conversation before, I would say I gave a poor example.
>
> IAW, I see your point.
>

When it comes to disinformation: I rather doubt that those who might be
influenced by and are the target of such disinformation are even aware
of the possibility of verification. Don't forget, these are people who
have been brought up to believe every answer that a pocket calculator
might give them, mis-keyings notwithstanding. Such ill-educated people
(i.e. 99% of the population) don't even have the concept of a sanity
check. "I've read it in the paper / seen it on TV / the computer says"
therefore it must be true.

Do I have a poor opinion of my gullible fellow man, or at least those
from generations after us boomers (we who were trained to question the
potentially suspect information that we were fed)? Damn right I do.
These lumpen, uncritical masses are what wannabe tyrants rely upon.

--
^Ï^. – Sn!pe – <https://youtu.be/_kqytf31a8E>

My pet rock Gordon is humming:
# Climb upon my knee, Sonny Boy -- Al Jolson.