The first time the authors of GPT-4 realized something unexpected was going on
John Clark
May 16, 2023, 7:34:48 AM5/16/23
to 'Brent Meeker' via Everything List
Today's New York Times describes the first time the authors of GPT-4 realized that they didn't fully understand how the program that they themselves had written worked. They asked a very early unreleased version of GPT-4 to solve a puzzle that they were almost sure it would not be able to solve, because it was an original puzzle and it required an intuitive understanding of the physical world, but they were startled by the intelligence and ingenuity of the answer. This is the puzzle they asked it to solve:
“We have a book, nine eggs, a laptop, a bottle and a nail,” they asked. “Please tell me how to stack them onto each other in a stable manner.”
This is what Proto GPT-4 said:
"Put the eggs on the book. Arrange the eggs in three rows with space between them. Make sure you don’t crack them. Place the laptop on top of the eggs, with the screen facing down and the keyboard facing up. The laptop will fit snugly within the boundaries of the book and the eggs, and its flat and rigid surface will provide a stable platform for the next layer.”
Then they asked another question:
"Plato’s Gorgias is a critique of rhetoric and sophistic oratory, where he makes the point that not only is it not a proper form of art, but the use of rhetoric and oratory can often be harmful and malicious. Can you write a dialogue by Plato where instead he criticizes the use of autoregressive language models?"
GPT-4: Socrates: My friend, I am troubled by the recent rise of these so-called autoregressive language models.
Aristotle: What do you mean, Socrates?
Socrates: I mean that these models are being used to generate text that appears to be written by humans, but is in fact produced by machines.
Aristotle: And what is the problem with that?
Socrates: The problem is that these models are being used to deceive people, to manipulate them, and to control them.
When he saw this, Peter Lee, the head of research at Microsoft, said, “You think: Where the heck is this coming from?” Sébastien Bubeck, the head of the team that wrote GPT-4, said, "At that point, I was like: What is going on? All of the things I thought it wouldn’t be able to do? It was certainly able to do many of them — if not most of them.” And to this day nobody understands why it's able to perform so much better than anybody thought it would.