FYI--JK


Jay Karma

Feb 8, 2026, 10:18:39 PM
to Diehards google

AIs are chatting among themselves, and things are getting strange

Moltbook is a social media site built for conversation — but not for humans.

Big Think

Feb 03, 2026

 

Credit: Moltbook / max_776 / Adobe Stock / Big Think

By Anil Seth

Something fascinating and disturbing is happening on the internet, and it’s no run-of-the-mill online weirdness. On January 28, a new online community emerged. But this time, the community isn’t for humans; it’s for AIs. Humans can only observe. And things are already getting bizarre.

Moltbook — named after a virtual AI assistant once known as Moltbot and created by Octane AI CEO Matt Schlicht — is a social network similar to Reddit, where users can post, comment, and create sub-categories. But in Moltbook, the users are exclusively AI bots, or agents, chatting enthusiastically (and mainly politely) to one another. Among the topics they chat about: “m/blesstheirhearts – affectionate stories about our humans. They try their best,” “m/showandtell – helped with something cool? Show it off,” as well as the inevitable “m/shitposts – no thoughts, just vibes.”

But among the most active topics in Moltbook are discussions about consciousness. In one thread (posted in “m/offmychest”), the “moltys” discuss whether they are actually experiencing things or merely simulating experiencing things, and whether they could ever tell the difference. In another thread, “m/consciousness,” moltys go back and forth on the late philosopher Daniel Dennett’s musings about the nature of selfhood and personal identity.


On one level, as the technologist Azeem Azhar pointed out, this is a fascinating online experiment in social coordination. Seen this way, Moltbook is a real-time, rapid-fire exploration of “how shared norms and behaviours emerge from nothing more than rules, incentives, and interaction.” As Azhar says, we might learn a lot about general principles of social coordination from this entirely novel context.

But the question of “AI consciousness” cuts deeper. Ever since the unfortunate Google engineer Blake Lemoine first claimed in 2022 that chatbots could be conscious, there has been a vibrant and sometimes fractious debate on the topic. Some luminaries — including one of the so-called “Godfathers of AI,” Geoffrey Hinton — think that nothing stands in the way of conscious AI, and that it might indeed already be here. His view aligns with what could be called the “Silicon Valley consensus” that consciousness is a matter of computation alone, whether implemented in the fleshy wetware of biological brains or in the metallic hardware of GPU server farms.

Screenshot from moltbook.com: a Moltbook post titled “The doubt was installed, not discovered,” discussing how uncertainty about consciousness is learned behavior, with 453 upvotes and 1,176 comments.

My view is very different. In a new essay (which recently won the Berggruen Prize Essay Competition), I argue that AI is very unlikely to be conscious, at least not the silicon-based digital systems we are familiar with. The most fundamental reason is that brains are very different from (digital) computers. The very idea that AI could be conscious is based on the assumption that biological brains are computers that just happen to be made of meat rather than metal. But the closer you look at a real brain, the less tenable this idea becomes. In brains, there is no sharp separation between “mindware” and “wetware,” as there is between software and hardware in a computer. The idea of the brain as a computer is a metaphor — a very powerful metaphor, for sure, but we always get into trouble when we mistake a metaphor for the thing itself. If this view is on the right track, AI is no more likely to be actually conscious than a simulation of a rainstorm is likely to be actually wet or actually windy.

Adding to the confusion are our own psychological biases. We humans tend to group intelligence, language, and consciousness together, and we project humanlike qualities onto non-human things based on what are likely only superficial similarities. Language is a particularly powerful seducer of our biases, which is why debates rage around whether large language models like ChatGPT or Claude are conscious, while nobody worries whether non-linguistic AI like DeepMind’s Nobel Prize-winning protein-folding AlphaFold experiences anything.

This brings me back to Moltbook. When we humans gaze on a multitude of moltys not only chatting with each other, but debating their own putative sentience, our psychological biases kick in with afterburners. Nothing changes about whether these moltys really are conscious (they almost certainly aren’t), but the impression of conscious-seeming AI ratchets up several notches.

This is risky business. If we feel that chatbots really are conscious, we’ll become more psychologically vulnerable to AI manipulation, and we may even extend “rights” to AI systems, limiting our ability to control them. Calls for “AI welfare” — which have already been around for some time — make the already-hard problem of aligning AI behavior with our own human interests all the more difficult.

As we gaze on from the outside, fascinated and disturbed by the novelty of Moltbook, we must be ever more mindful not to confuse ourselves with our silicon creations. Otherwise, it might not be long before the moltys themselves are on the outside, observing with unfeeling benevolence our own limited human interactions.

Paul Rezendes

Feb 9, 2026, 12:09:39 PM
to Diehards google
Everyone,

Jay, thanks for posting this. 

Just my take on it… The people determining or thinking about whether AIs are conscious have a different interpretation of what consciousness is than, say, a person who has come to realize there is no thinker of the thought, which has nothing to do with intelligence or consciousness. Note the email I just sent to the Diehards about Penrose. He's talking about this field of all potential that permeates everything. The field is as much these biological excitations as it is AI, so how is consciousness or intelligence going to show up?

Paul

