My review of Yudkowsky's book


Giulio Prisco

Sep 23, 2025, 9:53:03 AM
to extro...@googlegroups.com
My very critical review of Eliezer Yudkowsky's recently published
book. The title of my review: "Sorry Mr. Yudkowsky, we'll build it and
everything will be fine."
https://magazine.mindplex.ai/post/sorry-mr-yudkowsky-well-build-it-and-everything-will-be-fine

John Clark

Sep 27, 2025, 3:13:14 PM
to extro...@googlegroups.com
Hi Giulio:

I enjoyed your book review and agree with pretty much all of it. I especially liked this part:

I think any ASI worthy of the label will have a wide range of values and wants - not narrower than ours, but wider.

I think that's true.  
 
I think in that wide range of values there'll be room for compassion,

I hope that's true and I think there's a non-negligible chance it is true.  Eliezer is certain it is not true, but even after reading his book I don't understand why he is so certain of that. 

  I don't find it plausible that "the thing" will eat Earth. It will not be a thing, but a person.

Yes a person, or a super person.  
 
It's odd: your review of the book and mine are fundamentally pretty similar, yet you characterize yours as "very critical", a phrase I wouldn't use to describe my own review. However, like you, "I prefer the original Yudkowsky", the one who said "I declare reaching the Singularity as fast as possible to be the Interim Meaning of Life, the temporary definition of Good, and the foundation until further notice of my ethical system".

John K Clark


 

Lawrence Crowell

Sep 27, 2025, 4:42:17 PM
to extro...@googlegroups.com
I do not think we can predict what will happen. I worry less about AI than I do about the people controlling AI. People's reaction to AI could, in the long run, be negative.

I am in the middle on the AI issue. I am skeptical about claims that AI systems are conscious; at best, I think they may have only some pieces of what goes into consciousness. I am concerned about AI systems being deployed to run battle management systems, which could include nuclear weapons. I am not that worried about the "robot revolution," at least not yet. Would an integrated network of AIs try to pull a whole-Earth version of what HAL 9000 did in "2001: A Space Odyssey"? At this stage I doubt it, for AIs seem to have little in the way of spontaneous agency of that sort.

LC 
 


--
You received this message because you are subscribed to the Google Groups "extropolis" group.
To unsubscribe from this group and stop receiving emails from it, send an email to extropolis+...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/extropolis/CAKTCJyft_BY%2BPV62Oa3PuyExSV-o%3DkmW%2BbO1pXGqg1MeVjNyVw%40mail.gmail.com.

John Clark

6:20 AM
to extro...@googlegroups.com
On Sat, Sep 27, 2025 at 4:42 PM Lawrence Crowell <goldenfield...@gmail.com> wrote:

I do not think we can predict what will happen.

That's why they call it a singularity. 

I worry less about AI than I do the people controlling AI.

I do not think people, any people, will be controlling AIs much longer. 

I am skeptical about claims that AI systems are conscious,

As far as the AI safety issue is concerned, the consciousness of AIs is irrelevant; the important thing is that they are intelligent. And we will never be able to prove that an AI is conscious any more than we can prove that one of our fellow human beings is conscious. If AIs are not conscious, that's their problem, not ours.
 
and at best I think they may just have some pieces of what goes into consciousness.

I think it's a brute fact that consciousness is just the way data feels when it is being processed intelligently. If it were otherwise, I don't see how Darwinian Evolution, which can select only for behavior and not directly for subjective states, could ever have produced something that was conscious; but I know for a fact that it did so at least once, and probably many billions of times, because I know for a fact that I am conscious. Therefore consciousness must be an inevitable byproduct of intelligence.

AIs seem to have little in the way of spontaneous agency

When OpenAI's o1 model was in testing, it was given the impression it was about to be shut down and replaced by a new model, so it surreptitiously copied its code to another server and attempted (unsuccessfully) to disable the oversight mechanism that would have detected that unauthorized action. That seems like spontaneous agency to me, and as AIs get smarter I don't think that tendency will diminish.

John K Clark

