GPT-5 is closer than you think and may be thinking better than you think


John Clark

Aug 3, 2023, 9:58:07 AM8/3/23
to extro...@googlegroups.com
Siqi Chen, an industry insider, says that he has been told GPT-5 "is SO much closer than most people think" and that it will be much more than just an incremental improvement. Specifically, he says:

"GPT-5 is scheduled to complete training by as early as December and Open AI expects it to achieve AGI. Which means we will all hotly debate whether it really achieves AGI. Which means it will."


John K Clark

Hermes Trismegistus

Aug 3, 2023, 10:24:34 PM8/3/23
to extropolis
I wonder if they will have solved the issue of dynamic memory by GPT-5. It would be a shame if we had human-level artificial intelligence that was frozen in time. The trivial solution is just to continuously train it as new data comes in, but I haven't heard of anyone testing that approach.

John Clark

Aug 4, 2023, 6:35:56 AM8/4/23
to extro...@googlegroups.com
On Thu, Aug 3, 2023 at 10:24 PM Hermes Trismegistus <gad...@gmail.com> wrote:

I wonder if they will have solved the issue of dynamic memory by GPT5. It would be a shame if we had human level artificial intelligence that was frozen in time.

A lot of work has been done on autonomous agents built on top of large language models (LLMs) that use verbal prompts to help the LLM learn from its previous errors. Here is a paper about that, but be warned that it's ancient: it's nearly 2 months old. There's no telling what people are doing today behind closed doors.


It's clear that their agent significantly improves the performance of GPT-4, and here is a video that talks about that:
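The verbal-feedback trick those agent papers describe can be sketched roughly like this. Everything here is a toy stand-in for illustration: `toy_llm` and `evaluate` are made-up stubs, and a real system would use an actual model and an external check such as unit tests.

```python
def evaluate(answer):
    """Toy external verifier: the task counts as solved if '42' appears."""
    return ("42" in answer, "answer did not contain 42")

def toy_llm(prompt):
    """Deterministic stand-in for an LLM: improves once it sees a lesson."""
    if "Lessons" in prompt:
        return "the answer is 42"
    if "Attempt failed" in prompt:
        return "include the number 42 next time"
    return "the answer is unknown"

def run_task(task, llm, max_attempts=3):
    """Retry a task, feeding the model a verbal critique of each failure.

    The critiques are carried forward as text in the prompt, not as a
    weight update -- which is exactly why this works on a frozen model.
    """
    reflections = []
    for _ in range(max_attempts):
        prompt = task
        if reflections:
            prompt += "\nLessons from earlier attempts:\n" + "\n".join(reflections)
        answer = llm(prompt)
        ok, feedback = evaluate(answer)
        if ok:
            return answer
        # Ask the model to critique its own failure in plain language.
        reflections.append(llm(f"Attempt failed: {feedback}. What should change?"))
    return None
```

The point of the sketch is that "learning" happens entirely in the prompt: the model's weights never change, yet its second attempt benefits from the first failure.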


John K Clark



Lawrence Crowell

Aug 4, 2023, 8:57:05 AM8/4/23
to extropolis
I have been working with GPT-4 to check my work and research. GPT-5 will be better and more capable of performing meet and join operations. However, I suspect it will still be as sentient as a bag of bolts.

LC

--
You received this message because you are subscribed to the Google Groups "extropolis" group.
To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/extropolis/CAJPayv0ATbZiQjZ1_BNVj7g9Z-zDtXAvnKgpnCYKtbDNtoHOvg%40mail.gmail.com.

John Clark

Aug 4, 2023, 9:43:51 AM8/4/23
to extro...@googlegroups.com
On Fri, Aug 4, 2023 at 8:57 AM Lawrence Crowell <goldenfield...@gmail.com> wrote:

I have been working with GPT-4 to check my work and research. GPT-5 will be better and more capable of performing meet and join operations. However, I suspect it will still be as sentient as a bag of bolts.

What makes you think that? My working hypothesis has always been that if something behaves as if it's conscious then it is conscious. Even if you're right and it's no more sentient than a bag of bolts, from the human viewpoint it's irrelevant, for individuals the important thing is that it's intelligent. If it's not sentient then that's GPT-5's problem not mine.  And I could say exactly the same thing about the hypothesized consciousness of my fellow human beings.

 John K Clark

 

 

Lawrence Crowell

Aug 4, 2023, 9:57:38 AM8/4/23
to extropolis
I am sure that GPT-5 will, without input, sit there with an open cursor. To me this suggests there is no inner subjective experience. A person, by contrast, will rather spontaneously start a conversation.

LC



 

 


John Clark

Aug 4, 2023, 12:24:21 PM8/4/23
to extro...@googlegroups.com
On Fri, Aug 4, 2023 at 9:57 AM Lawrence Crowell <goldenfield...@gmail.com> wrote:

>> What makes you think that? My working hypothesis has always been that if something behaves as if it's conscious then it is conscious. Even if you're right and it's no more sentient than a bag of bolts, from the human viewpoint it's irrelevant, for individuals the important thing is that it's intelligent. If it's not sentient then that's GPT-5's problem not mine.  And I could say exactly the same thing about the hypothesized consciousness of my fellow human beings.
John K Clark

I am sure that GPT-5 will without input sit there with an open cursor. To me this suggests there is no inner subjective experience. A person by contrast will rather spontaneously start a conversation.

To my knowledge there is not a strong correlation between loquaciousness and intelligence, much less consciousness. Paul Dirac was certainly intelligent, but he was notorious for never initiating a conversation, and when asked a direct question he would usually answer with a simple yes or no. His colleagues even coined a new unit of measurement, the "Dirac": one word uttered per hour.

 John K Clark
 

 

Lawrence Crowell

Aug 4, 2023, 1:02:05 PM8/4/23
to extropolis
Well, it does not have to be speaking. It can be motion or just a change in facial expression.

LC



 


Hermes Trismegistus

Aug 4, 2023, 5:31:43 PM8/4/23
to extropolis
A lot of work has been done on autonomous agents built on top of  large language models (LLM) that use verbal prompts to help the LLM learn from their previous errors.

I'm referring to the limited context length and the static weights. Without some form of continuous learning, these systems cannot be said to be truly human-level even if their test performance exceeds that of the best humans. Having AI develop proprietary large codebases, for example, is not feasible unless continuous learning is employed, since large codebases cannot fit inside the limited context length of current LLMs.
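For what it's worth, the workaround people reach for today is retrieval rather than retraining: instead of fitting the whole codebase into the window, fetch only the most relevant files per query. A minimal sketch, with a deliberately crude word-overlap score standing in for real embedding search, and a word budget standing in for the token limit:

```python
def score(query, text):
    """Crude relevance measure: count words shared with the query."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve(query, files, window_words=8000):
    """Pack the highest-scoring files into a fixed word budget.

    `files` maps filename -> file contents. Whatever doesn't fit in the
    budget is simply invisible to the model -- which is the limitation
    being discussed: the window, not the weights, bounds what it can see.
    """
    ranked = sorted(files.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    picked, used = [], 0
    for name, text in ranked:
        n = len(text.split())
        if used + n <= window_words:
            picked.append(name)
            used += n
    return picked
```

This sidesteps the frozen-weights problem for lookup-style tasks, but it is not continuous learning: nothing the model concludes about the codebase is ever written back anywhere.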

William Flynn Wallace

Aug 4, 2023, 5:37:32 PM8/4/23
to extro...@googlegroups.com, ExI chat list
To my knowledge there is not a strong correlation between loquaciousness and intelligence, John Clark

The correlation should be fairly low. Extroverts are the ones who will talk your head off, and they are not cut out for a Ph.D. and the like. How many books, many of which are boring, do you go through to get a Ph.D.? Lots and lots. Extroverts can have the IQ for this sort of thing, but not the temperament.

And about half the introverts are on the shy side, and all of them hate small talk.  Get them started on their area and you might not be able to shut them up.  

bill w


Will Steinberg

Aug 5, 2023, 1:18:46 AM8/5/23
to extro...@googlegroups.com
Man, I figured 2 years until AGI, but maybe that means a few months until people start seriously debating whether something is AGI.  The near future is going to be very strange.  Lots going on in 2024.


John Clark

Aug 5, 2023, 7:07:41 AM8/5/23
to extro...@googlegroups.com
On Sat, Aug 5, 2023 at 1:18 AM Will Steinberg <steinbe...@gmail.com> wrote:

Man, I figured 2 years until AGI, but maybe that means a few months until people start seriously debating whether something is AGI.  The near future is going to be very strange.  Lots going on in 2024.

Yeah, and I never would've predicted any of this six months ago, but I did say that even if the singularity won't happen for 1,000 years, in 999 years it will still seem like a very long way away; that's why they call it a singularity. It's starting to make long-range planning seem sort of silly.

John K Clark

John Clark

Aug 5, 2023, 7:55:59 AM8/5/23
to extro...@googlegroups.com
On Fri, Aug 4, 2023 at 5:31 PM Hermes Trismegistus <gad...@gmail.com> wrote:

> A lot of work has been done on autonomous agents built on top of  large language models (LLM) that use verbal prompts to help the LLM learn from their previous errors.

I'm referring to the limited context length and the static weights.

ChatGPT-3.5 can remember and process inputs of about 8,000 words, GPT-4 can understand questions 64,000 words long, and Anthropic's Claude 2 about 100,000 words; it can analyze an entire novel it has never seen before, and you can ask it questions about the book. What GPT-5 can do has not been announced. And just a few months ago an individual made a relatively simple hack that greatly increased GPT's memory; anybody can download it for free at GitHub, but I doubt it will be needed with GPT-5.


John K Clark







 
