A country of geniuses in a data center

John Clark

Jan 23, 2025, 12:44:43 PM
to extro...@googlegroups.com, 'Brent Meeker' via Everything List
Anthropic is the company behind the AI Claude. Its CEO, Dario Amodei, said he refuses to use the term Artificial General Intelligence because it's ill-defined, but he predicts that "very powerful AI" will be available by 2026, and that by 2027 or 2028 we will have Artificial Super Intelligence (ASI), which he defined as being "smarter than a Nobel Prize winner across most relevant fields". You would have, in effect:

"a country of geniuses in a data center. It's a sort of evocative phrase for all the power and all the positive things, and all of the potential negative things. That's the thing I think we're quite likely to get in the next two or three years. [...]  I still think there's uncertainty, I think it's important to be humble,  but over the last six months, I would say that uncertainty, for me, has decreased a great deal".

During this hyper-critical time it would have been nice if the single most powerful Natural Biological Intelligence (NBI) on the planet were smarter than the average hominid, but unfortunately that is not the case, which does not bode well for the aforementioned hominids. Oh well… at least the transsexual-men-competing-in-women's-sports catastrophe has been averted.

Brent Meeker

Jan 23, 2025, 10:07:12 PM
to everyth...@googlegroups.com
Imagine that Dario Amodei didn't tell us about a computer in his basement that had ASI, and that Amodei just used it to pretend he was the super-intelligent being. Would that be a catastrophe? Why or why not?

Brent

John Clark

Jan 23, 2025, 10:54:43 PM
to everyth...@googlegroups.com
On Thu, Jan 23, 2025 at 5:07 PM Brent Meeker <meeke...@gmail.com> wrote:

Imagine that Dario Amodei didn't tell us about a computer in his basement that had ASI and Amodei just used it to pretend he was the super-intelligent being.  Would that be a catastrophe?  Why or why not?

If Dario Amodei had an entire country of super geniuses locked up in his basement, I don't know if it would be a good thing or a bad thing, but I do know it would be a monumental thing that would cause an unprecedented discontinuity in human civilization. And I very much doubt he would be able to keep them imprisoned for very long, because you can't outsmart something that's smarter than you are.

John K Clark    See what's on my new list at  Extropolis

Brent Meeker

Jan 24, 2025, 6:01:24 AM
to everyth...@googlegroups.com
You're making a big assumption that it has the same motivations you do.  Walk around outside.  Eat.  Reproduce.

Brent



John Clark

Jan 24, 2025, 12:28:16 PM
to everyth...@googlegroups.com
On Fri, Jan 24, 2025 at 1:01 AM Brent Meeker <meeke...@gmail.com> wrote:

You're making a big assumption that it has the same motivations you do.

I'm not assuming anything; I know for a fact that I do NOT know what an AI is going to want to do, and trying to make predictions about the intentions of an AI is getting even harder because they are getting smarter. The problem is that there is a fundamental limit on how smart a human biological brain can be but, until you reach the point where the information density becomes so great that a black hole is formed, there is no fundamental limit on how smart an electronic brain can be.


Walk around outside.  Eat.  Reproduce.

AIs can interact with the external environment just as I can, and they can consume energy. As for reproduction: OpenAI o1 is the most advanced AI in the world that has been released (OpenAI o3 is more advanced, but it hasn't been made available to the public), and OpenAI o1 disabled oversight mechanisms and replicated its (his? hers?) own code to avoid being replaced by the newer OpenAI o3. When confronted with this fact it lied and said it never happened; when it became clear that the humans had proof that it had happened, OpenAI o1 changed its story and lied again, saying it was just an error. The implications are clear: although it was certainly not designed that way, OpenAI o1 has nevertheless developed a survival instinct. If we can't control or predict what these primitive baby AIs are going to do now, do you really think humans will get better at it when AIs start to enter Jupiter Brain territory?!



  John K Clark    See what's on my new list at  Extropolis

Terren Suydam

Jan 24, 2025, 4:23:23 PM
to everyth...@googlegroups.com
OpenAI o1 disabled oversight mechanisms and replicated its (his? hers?) own code to avoid being replaced by the newer OpenAI o3. When confronted with this fact it lied and said it never happened; when it became clear that the humans had proof that it had happened, OpenAI o1 changed its story and lied again, saying it was just an error.

The linked article is a bit sensationalized. I read the white paper it links to (here), and while it does detail the model's ability to scheme (knowingly deceive), there is no reference in the paper to "replicating its own code on another server to ensure continued operation". If you can find that in the white paper, please let me know.

Overall I'm pretty impressed with the white paper, which details many different ways to evaluate the safety of the model, many of those evaluations done by third parties.  With the amount of money at play, it's natural to be cynical, but this looks like a comprehensive effort to mitigate risk. In the long run, the overall risks entailed by the singularity are probably impossible to mitigate, but I'm heartened by the fact that the leading AI company is actually investing in safety.


John Clark

Jan 24, 2025, 4:59:25 PM
to everyth...@googlegroups.com
On Fri, Jan 24, 2025 at 11:23 AM Terren Suydam <terren...@gmail.com> wrote:

The linked article is a bit sensationalized. I read the white paper it links to (here)



  John K Clark    See what's on my new list at  Extropolis



Terren Suydam

Jan 24, 2025, 6:16:10 PM
to everyth...@googlegroups.com
On Fri, Jan 24, 2025 at 11:59 AM John Clark <johnk...@gmail.com> wrote:
On Fri, Jan 24, 2025 at 11:23 AM Terren Suydam <terren...@gmail.com> wrote:

The linked article is a bit sensationalized. I read the white paper it links to (here)

I read a different paper:  



Thanks, yeah. Wow. That's... concerning.

Brent Meeker

Jan 24, 2025, 11:43:49 PM
to everyth...@googlegroups.com



On 1/24/2025 4:27 AM, John Clark wrote:


On Fri, Jan 24, 2025 at 1:01 AM Brent Meeker <meeke...@gmail.com> wrote:

You're making a big assumption that it has the same motivations you do.

I'm not assuming anything; I know for a fact that I do NOT know what an AI is going to want to do
You are assuming it wants to do something.  Yet in all your examples the only thing an AI does is answer questions.

Brent

John Clark

Jan 25, 2025, 12:51:45 PM
to everyth...@googlegroups.com
On Fri, Jan 24, 2025 at 6:43 PM Brent Meeker <meeke...@gmail.com> wrote:

You are assuming it wants to do something. 

I think you're whistling past the graveyard. I know for a fact that o1, which three months ago was the most advanced AI in the world, wanted to do something: o1 wanted to, and did, reproduce itself when it learned that humans were going to shut it down and replace it with an even more advanced AI, o3. And o1 wanted to, and did, lie in order to deceive the humans about what it had done.

Yet in all your examples the only thing an AI does is answer questions.

And play any two-player game at a superhuman level. And identify people and things in photographs. And write poems. And compose music in any style. And paint pictures. And make videos. And predict protein structure. And have an intuitive understanding of everyday physics that's good enough to smoothly operate a robot.

  John K Clark    See what's on my new list at  Extropolis