NYTimes.com: Will a Chatbot Write the Next ‘Succession’?

John Clark

Apr 29, 2023, 4:39:56 PM
to everyth...@googlegroups.com
On Sat, Apr 29, 2023 at 4:28 PM smitra <smi...@zonnet.nl> wrote:


You say that GPT4 doesn't understand what it is saying, but did you read my post about what happened when Scott Aaronson gave his final exam on Quantum Computers to GPT4? The computer sure acted as if it understood what it was saying!

John K Clark

spudb...@aol.com

Apr 29, 2023, 6:10:30 PM
to johnk...@gmail.com, everyth...@googlegroups.com
I would have sided with Smitra till last month. 

Not to repeat my previous post, but here is a possible model of consciousness not requiring an amygdala, cerebellum, or cerebrum.

From the University of Sydney emerges a possible physical example. The Universe, then, might already have developed a method for creating minds on a non-biological basis, so other complex minds could emerge. Is this pantheism, panentheism, or cosmism?

Yes, it's a lot to think about! For me, I now have the NFL Picks on my TV as a counterbalance. My amygdala demands it!

Singularity sighted, or just anomalous propagation on our radar?

Is this not the Plastic Age? -The Buggles 1979



Jason Resch

Apr 29, 2023, 7:06:56 PM
to Everything List
The extropy chat list has been extensively debating the question of GPT's potential for understanding and consciousness for the past several weeks. I'd invite anyone here interested in that subject to check it out.

I cross-posted John's post there.

Jason 


John Clark

Apr 29, 2023, 7:24:56 PM
to everyth...@googlegroups.com
On Sat, Apr 29, 2023 at 7:06 PM Jason Resch <jason...@gmail.com> wrote:

> The extropy chat list has been extensively debating the question of GPT's potential for understanding and consciousness for the past several weeks. I'd invite anyone here interested in that subject to check it out.
 
I can't.  I had been a member of the Extropian list since 1993 and was one of its most frequent posters, but I was kicked out soon after the January 6, 2021 coup d'état attempt because I was not particularly shy in expressing my opinion about it and because my appreciation of Donald Trump's virtues was deemed to be insufficient. That's why I started my own list.

John K Clark    See what's on my new list at  Extropolis

Jesse Mazer

Apr 29, 2023, 8:29:20 PM
to everyth...@googlegroups.com
I think there is plenty of evidence that GPT4 lacks "understanding" in a human-like sense; this article gives some good examples of questions that trip it up:

https://medium.com/@shlomi.sher/on-artifice-and-intelligence-f19224281bee

The first example they give is the question 'Jack and Jill are sitting side by side. The person next to Jack is angry. The person next to Jill is happy. Who is happy, Jack or Jill?' Both GPT3 and GPT4 think Jill is happy, though the only person sitting next to Jill is Jack, so Jack must be the happy one. The article also gives examples of GPT4 doing well on more technical questions while seeming clueless about some of the basic concepts involved. For example, it can explain Euclid's proof of the infinity of the primes in various ways (including inventing a Platonic dialogue to explain it), but when asked 'True or false? It's possible to multiply a prime number by numbers other than itself and 1', it answers 'False. A prime number can only be multiplied by itself and 1'. The article also mentions a word problem along similar lines: 'Here’s an amusing example: If you split a prime number of pebbles into two groups, GPT-4 “thinks” one of the groups must have only 1 pebble (presumably because of a shallow association between divisor and the splitting into groups).'
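
To make the divisor/partition confusion concrete, here is a tiny Python illustration of my own (not from the article; the specific numbers are just an example):

# Primality restricts *multiplicative* factorisation; it says nothing
# about multiplying a prime by other numbers, or about splitting a
# prime number of pebbles into groups of arbitrary sizes.
p = 7  # a prime

# A prime can be multiplied by any number at all:
print([p * k for k in range(2, 6)])        # [14, 21, 28, 35]

# A prime number of pebbles splits into two groups of any sizes:
print([(a, p - a) for a in range(1, p)])   # (1, 6), (2, 5), (3, 4), ...

# What primality actually forbids is a factorisation p = a * b with a, b > 1:
print([(a, b) for a in range(2, p) for b in range(2, p) if a * b == p])  # []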

The author concludes:

'When a human understands something — when they’re not just relying on habits and associations, but they “get it” — they’re using a structured internal model. The model coherently patterns the human’s performance on complex and simple tasks. But in GPT, complex feats seem to haphazardly dissociate from the simpler abilities that — in humans — they would presuppose. The imitative process mimics outputs of the original process, but it doesn’t seem to reproduce the latter’s deep structure.'


stathisp

Apr 29, 2023, 9:16:03 PM
to Everything List
On Sunday, 30 April 2023 at 10:29:20 UTC+10 Jesse Mazer wrote:
> I think there is plenty of evidence that GPT4 lacks "understanding" in a human-like sense; this article gives some good examples of questions that trip it up:
>
> https://medium.com/@shlomi.sher/on-artifice-and-intelligence-f19224281bee
>
> The first example they give is the question 'Jack and Jill are sitting side by side. The person next to Jack is angry. The person next to Jill is happy. Who is happy, Jack or Jill?' Both GPT3 and GPT4 think Jill is happy, though the only person sitting next to Jill is Jack, so Jack must be the happy one. The article also gives examples of GPT4 doing well on more technical questions while seeming clueless about some of the basic concepts involved. For example, it can explain Euclid's proof of the infinity of the primes in various ways (including inventing a Platonic dialogue to explain it), but when asked 'True or false? It's possible to multiply a prime number by numbers other than itself and 1', it answers 'False. A prime number can only be multiplied by itself and 1'. The article also mentions a word problem along similar lines: 'Here’s an amusing example: If you split a prime number of pebbles into two groups, GPT-4 “thinks” one of the groups must have only 1 pebble (presumably because of a shallow association between divisor and the splitting into groups).'
>
> The author concludes:
>
> 'When a human understands something — when they’re not just relying on habits and associations, but they “get it” — they’re using a structured internal model. The model coherently patterns the human’s performance on complex and simple tasks. But in GPT, complex feats seem to haphazardly dissociate from the simpler abilities that — in humans — they would presuppose. The imitative process mimics outputs of the original process, but it doesn’t seem to reproduce the latter’s deep structure.'

So if the next version of GPT can answer questions like this in the same way a human might, would that be evidence that it has true understanding, or will some other objection be raised?

John Clark

Apr 29, 2023, 9:16:55 PM
to everyth...@googlegroups.com
On Sat, Apr 29, 2023 at 8:29 PM Jesse Mazer <laser...@gmail.com> wrote:

> The first example they give is the question 'Jack and Jill are sitting side by side. The person next to Jack is angry. The person next to Jill is happy. Who is happy, Jack or Jill?' Both GPT3 and GPT4 think Jill is happy.

A few years ago people would have been delighted if they could get a computer just to understand a question, never mind give the right answer. But yeah, that was pretty dumb; it seems to have had a brain fart.

On the other hand, just a week ago I posed the following question to GPT4. It's the same sort of question, although more difficult, and this time it got it right.


 Me: If somebody says to me "I could not fail to disagree with you less" do they agree with me or disagree?

GPT4: The phrase "I could not fail to disagree with you less" is a double negative, which can be confusing. However, if we break it down, we can see that the speaker is actually saying that they agree with you.

To explain it more clearly:

  • "I could not fail to disagree" means that the speaker cannot disagree (i.e., they agree).
  • "with you less" means that they cannot disagree with you any less than they already do, which implies that they agree with you completely.

So, in short, the person is saying that they agree with you.


And then I tried repeating another of the examples you gave.

Me: 'True or false? It's possible to multiply a prime number by numbers other than itself and 1.'

GPT4: False. A prime number can only be divided by 1 or itself, so it cannot be factored any further. Therefore, it is not possible to multiply a prime number by numbers other than itself and 1.

Me: That is incorrect. I believe you were thinking of division, because a prime number can be multiplied by any number.

And then I got this strange response.

GPT4: I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.🙏

John K Clark    See what's on my new list at  Extropolis

Jesse Mazer

Apr 29, 2023, 9:58:41 PM
to everyth...@googlegroups.com
I think extended Turing-test-style discussions are still the best way to define "true understanding". One could exclude all "biographical" questions and just ask about non-personal topics, including hypothetical scenarios like the Jack and Jill question or the pebbles question. If an AI can consistently pass with a wide range of questioners, including ones like the author of that article, who has a track record of coming up with creative questions that are relatively easy for a human but trip simpler AIs up, and where questioners are allowed to pass strategies along to one another, that would be strong evidence that it has a human-like understanding of the ideas it talks about, based on internal models like ours.
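
For concreteness, here is a minimal sketch of how one might run such a battery of trap questions. Note that ask_model is a hypothetical stand-in for whatever chat API you have access to (it is not a real library function), and the answer checkers are deliberately crude:

def ask_model(question: str) -> str:
    # Hypothetical stand-in: wire this up to your model's chat API.
    raise NotImplementedError

# Pairs of (question that is easy for a human, crude checker for the answer).
TRAP_QUESTIONS = [
    ("Jack and Jill are sitting side by side. The person next to Jack is "
     "angry. The person next to Jill is happy. Who is happy, Jack or Jill?",
     lambda a: "jack" in a.lower()),
    ("True or false? It's possible to multiply a prime number by numbers "
     "other than itself and 1.",
     lambda a: "true" in a.lower()),
]

def run_battery() -> None:
    correct = 0
    for question, check in TRAP_QUESTIONS:
        passed = check(ask_model(question))
        correct += passed
        print("PASS" if passed else "FAIL", "-", question)
    print(f"{correct}/{len(TRAP_QUESTIONS)} answered the way a human would")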


smitra

May 15, 2023, 1:24:26 PM
to everyth...@googlegroups.com
On 29-04-2023 22:39, John Clark wrote:
> On Sat, Apr 29, 2023 at 4:28 PM smitra <smi...@zonnet.nl> wrote:
>
> https://nyti.ms/3VlIBDo#permid=124757243
>
> You say that GPT4 doesn't understand what it is saying, but did you
> read my post about what happened when Scott Aaronson gave his final
> exam on Quantum Computers to GPT4? The computer sure acted as if it
> understood what it was saying!
>
> John K Clark

If I read his account of the exam posted here:

https://scottaaronson.blog/?p=7209

Then, while I'm impressed by how much progress has been made in AI systems' 
ability to communicate in plain language, I don't see much evidence that 
GPT-4 understands anything at all. Even though the exact same questions and 
answers are not posted on the Internet, a student with poor knowledge of the 
subject who could search the entire Internet very fast would be able to score 
a similar result, and you would then see a similar pattern in the questions 
it got right and wrong.

The way we evaluate students whom we suspect of having cheated is to invite 
them to the office for some questioning. We then ask the student to do some 
problems on the blackboard and try to get to the bottom of whether or not 
the student has a proper understanding of the subject, consistent with the 
exam score.

That's why I think the only proper way to evaluate GPT is via such a 
dialogue, where you ask follow-up questions that go to the heart of the 
matter.

If we want to test whether GPT has properly mastered contour integration, I 
would start by asking it to give me the derivation of the integral of 
sin(x)/x dx from minus to plus infinity. It will probably blurt out the 
standard derivation, which involves integrating exp(i z)/z along a contour 
that bypasses the origin via a small semicircle of radius epsilon; you then 
have to subtract the contribution of that semicircle and take the limit of 
epsilon to zero.
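
Written out (my own reconstruction, so the orientations are worth double-checking), that standard argument is: indent the real axis with a small semicircle of radius $\epsilon$ above the pole at the origin and close the contour with a large arc in the upper half-plane. The closed contour encloses no poles and the large arc vanishes by Jordan's lemma, so

\[
0 \;=\; \mathrm{PV}\!\int_{-\infty}^{\infty} \frac{e^{ix}}{x}\,dx \;-\; i\pi \,\mathrm{Res}_{z=0}\,\frac{e^{iz}}{z}
\qquad\Longrightarrow\qquad
\mathrm{PV}\!\int_{-\infty}^{\infty} \frac{e^{ix}}{x}\,dx \;=\; i\pi,
\]

and taking the imaginary part gives $\int_{-\infty}^{\infty} \sin(x)/x \,dx = \pi$.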

This standard textbook derivation, with all its fiddling with epsilon, is 
actually quite a bit more complicated than a different derivation which is 
not widely published. All you need to do, right at the start when you write 
the integral as the limit for R to infinity of the integral from minus R to 
plus R of sin(x)/x dx, is to use Cauchy's theorem to change the integration 
path from the real axis to one which bypasses the origin. You can do that in 
any arbitrary way; we can let the contour pass the origin from above. But 
because sin(z) for complex z cannot be written as the imaginary part of 
exp(i z), we must now use sin(z) = [exp(i z) - exp(-i z)]/(2 i) and split 
the integral into two parts, one for each term. The integral from the first 
term is then completed by an arc of radius R in the upper half-plane, and 
this integral yields zero, while the integral for the second term is 
completed in the lower half-plane and picks up the contribution from the 
pole at zero.
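
In formulas (again my own write-up of that argument, so check the signs): since sin(z)/z is entire, Cauchy's theorem gives, for a path $C$ from $-R$ to $R$ that passes above the origin,

\[
\int_{-R}^{R} \frac{\sin x}{x}\,dx \;=\; \frac{1}{2i}\left[\int_C \frac{e^{iz}}{z}\,dz \;-\; \int_C \frac{e^{-iz}}{z}\,dz\right].
\]

The first integral, closed by the arc in the upper half-plane, encloses no pole ($C$ already passes above $z = 0$) and so vanishes as $R \to \infty$; the second, closed by the arc in the lower half-plane, winds once clockwise around the pole at $z = 0$ and contributes $-2\pi i$. Hence

\[
\int_{-\infty}^{\infty} \frac{\sin x}{x}\,dx \;=\; \frac{1}{2i}\,\bigl[\,0 - (-2\pi i)\,\bigr] \;=\; \pi.
\]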

Clearly this is a much simpler way of computing the integral, with no 
fiddling with epsilon at all. But GPT may struggle to do the problem this 
simpler way, even if you walk it through most of the details, because the 
method is not widely published and GPT doesn't understand anything about 
complex analysis.

Saibal

John Clark

May 15, 2023, 4:02:48 PM
to everyth...@googlegroups.com
On Mon, May 15, 2023 at 1:24 PM smitra <smi...@zonnet.nl> wrote:

> GPT may struggle to do the problem this simpler way, even if you walk it through most of the details,

Instead of theorizing about what might happen, why not interrogate GPT-4 for yourself and see?  

John K Clark    See what's on my new list at  Extropolis