> The extropy-chat list has been extensively debating the question of GPT's potential for understanding and consciousness for the past several weeks. If that subject interests anyone here, I'd invite you to check it out.
--
I think there is plenty of evidence that GPT-4 lacks "understanding" in a human-like sense; this article gives some good examples of questions that trip it up: https://medium.com/@shlomi.sher/on-artifice-and-intelligence-f19224281bee

The first example they give is the question 'Jack and Jill are sitting side by side. The person next to Jack is angry. The person next to Jill is happy. Who is happy, Jack or Jill?' Both GPT-3 and GPT-4 think Jill is happy. The article also gives examples of GPT-4 doing well on more technical questions but then seeming clueless about some of the basic concepts involved. For example, it can explain Euclid's proof of the infinity of the primes in various ways (including inventing a Platonic dialogue to explain it), but when asked 'True or false? It's possible to multiply a prime number by numbers other than itself and 1', it answers 'False. A prime number can only be multiplied by itself and 1'. The article also mentions a word problem along similar lines: 'Here’s an amusing example: If you split a prime number of pebbles into two groups, GPT-4 “thinks” one of the groups must have only 1 pebble (presumably because of a shallow association between divisor and the splitting into groups).'

The author concludes: 'When a human understands something — when they’re not just relying on habits and associations, but they “get it” — they’re using a structured internal model. The model coherently patterns the human’s performance on complex and simple tasks. But in GPT, complex feats seem to haphazardly dissociate from the simpler abilities that — in humans — they would presuppose. The imitative process mimics outputs of the original process, but it doesn’t seem to reproduce the latter’s deep structure.'
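As a quick sanity check on the arithmetic the article discusses (not code from the article itself), a minimal Python sketch confirms the facts GPT-4 gets wrong: a prime can be multiplied by any number, and a prime number of pebbles can be split into two groups where neither group has only 1 pebble. The helper name `splits_without_singleton` is my own invention for illustration:

```python
def splits_without_singleton(n):
    """All ways to split n pebbles into two groups, each of size >= 2."""
    return [(a, n - a) for a in range(2, n - 1) if n - a >= 2]

p = 7  # a prime number
# Primes multiply by any number just fine: 7 * 4 = 28.
print(p * 4)
# And 7 pebbles split as 2+5 or 3+4 — no group of size 1 required.
print(splits_without_singleton(p))
```

Divisibility is the property that is actually restricted to 1 and the prime itself; multiplication and additive splitting are unrestricted, which is exactly the conflation the article points out.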
> The first example they give is the question 'Jack and Jill are sitting side by side. The person next to Jack is angry. The person next to Jill is happy. Who is happy, Jack or Jill?' Both GPT3 and GPT4 think Jill is happy.
GPT4: The phrase "I could not fail to disagree with you less" is a double negative, which can be confusing. However, if we break it down, we can see that the speaker is actually saying that they agree with you.
To explain it more clearly:
So, in short, the person is saying that they agree with you.
ME: That is incorrect. I believe you were thinking of division, because a prime number can be multiplied by any number.
And then I got this strange response.
GPT4: I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.🙏
John K Clark See what's on my new list at Extropolis
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/4e2acd99-1c15-431b-bea4-e64dd03341b4n%40googlegroups.com.
> GPT may struggle doing the problem in that much simpler way even if you walk it through most of the details of how to do it,