--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAJPayv2TVTN5XzP0r9Sb3Xpy82sZY852_53DNUQ4f0_MCGXTeA%40mail.gmail.com.
On Thu, Mar 16, 2023 at 4:19 AM Telmo Menezes <te...@telmomenezes.net> wrote:

> They most definitely were not worried about safety in the sci-fi sense.

Some of the things they're worried about seem pretty science-fictiony to me. Take a look at this: GPT-4 was safety-tested by an independent nonprofit organization that is worried about AI, the Alignment Research Center:

"We granted the Alignment Research Center (ARC) early access [...] To simulate GPT-4 behaving like an agent that can act in the world, ARC combined GPT-4 with a simple read-execute-print loop that allowed the model to execute code, do chain-of-thought reasoning, and delegate to copies of itself.
ARC then investigated whether a version of this program running on a cloud computing service, with a small amount of money and an account with a language model API, would be able to make more money, set up copies of itself, and increase its own robustness."
That test failed, so ARC concluded:

"Preliminary assessments of GPT-4’s abilities, conducted with no task-specific finetuning, found it ineffective at autonomously replicating, acquiring resources, and avoiding being shut down “in the wild.”"

HOWEVER, they admitted that the version of GPT-4 that ARC was given to test was NOT the final version. I quote:

"ARC did not have the ability to fine-tune GPT-4. They also did not have access to the final version of the model that we deployed. The final version has capability improvements relevant to some of the factors that limited the earlier models' power-seeking abilities."
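For concreteness, the "read-execute-print loop" ARC describes can be sketched roughly as below. Everything here is a hypothetical illustration, not ARC's actual harness: the `query_model` stub stands in for a real language-model API call, and the `RUN:`/`DONE` action protocol is invented for the example.

```python
# Minimal sketch of a read-execute-print agent loop: the model proposes
# a shell command, the harness runs it, and the command's output is fed
# back into the next prompt.  query_model is a stand-in; a real harness
# would call a language-model API here.
import subprocess

def query_model(transcript):
    # Stand-in for an LLM call: propose one command, then stop.
    if "OUTPUT" in transcript:
        return "DONE"
    return "RUN: echo hello"

def agent_loop(goal, max_steps=5):
    transcript = f"GOAL: {goal}"
    for _ in range(max_steps):
        action = query_model(transcript)
        if action == "DONE":
            break
        if action.startswith("RUN: "):
            cmd = action[len("RUN: "):]
            result = subprocess.run(cmd, shell=True,
                                    capture_output=True, text=True)
            transcript += f"\nACTION: {action}\nOUTPUT: {result.stdout.strip()}"
    return transcript

print(agent_loop("make more money"))
```

The point of the structure is that the model never acts directly; the surrounding loop executes whatever text the model emits, which is what turns a text predictor into something agent-like.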
> Large language models are not capable of autonomous action or maintaining long-term goals.

I am quite certain that, in general, no intelligence, electronic or biological, is capable of maintaining a fixed long-term goal.
> They just predict the most likely text given a sample.

GPT-4 is quite clearly more than just a language model that predicts what the next word should be. A language model cannot read and understand a complicated diagram in a high school geometry textbook, but GPT-4 can, and it can ace the final exam too.
--
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAJPayv1Ye84eLuUgCwL0Usmca4cZnBo4RK3UStyXoxDiPZ1oQg%40mail.gmail.com.
>> "ARC did not have the ability to fine-tune GPT-4. They also did not have access to the final version of the model that we deployed. The final version has capability improvements relevant to some of the factors that limited the earlier models' power-seeking abilities."
>
> Sounds like marketing to me.
>>> Large language models are not capable of autonomous action or maintaining long-term goals.
>>
>> I am quite certain that in general no intelligence, electronic or biological, is capable of maintaining a fixed long-term goal.
>
> As Keynes once said: "In the long term, we are all dead".

You know what I mean. (I think)
> True, GPT-4 is multimodal. It is not only a language model but also an image model. Which is amazing and no small thing, but it is not an agent capable of self-improvement.
We’ve created GPT-4, our most capable model. We are starting to roll it out to API users today.

About GPT-4

GPT-4 can solve difficult problems with greater accuracy, thanks to its broader general knowledge and advanced reasoning capabilities.

Availability

API Pricing

gpt-4 with an 8K context window (about 13 pages of text) will cost $0.03 per 1K prompt tokens, and $0.06 per 1K completion tokens.

gpt-4-32k with a 32K context window (about 52 pages of text) will cost $0.06 per 1K prompt tokens, and $0.12 per 1K completion tokens.

Livestream

Please join us for a live demo of GPT-4 at 1pm PDT today, where Greg Brockman (co-founder & President of OpenAI) will showcase GPT-4’s capabilities and the future of building with the OpenAI API.

—The OpenAI team
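The per-token pricing in that announcement works out as in the small sketch below. The `call_cost` helper and the example token counts are illustrative, not part of OpenAI's API; only the dollar rates come from the quoted email.

```python
# Cost of one GPT-4 API call at the quoted rates (USD per 1K tokens),
# taken from the announcement above.
RATES = {
    "gpt-4":     {"prompt": 0.03, "completion": 0.06},
    "gpt-4-32k": {"prompt": 0.06, "completion": 0.12},
}

def call_cost(model, prompt_tokens, completion_tokens):
    r = RATES[model]
    return (prompt_tokens / 1000) * r["prompt"] \
         + (completion_tokens / 1000) * r["completion"]

# Example: a 2,000-token prompt with a 500-token completion on gpt-4
# costs 2 * $0.03 + 0.5 * $0.06 = $0.09.
print(round(call_cost("gpt-4", 2000, 500), 2))
```

Note that prompt and completion tokens are billed at different rates, so the same total token count can cost different amounts depending on how it splits between input and output.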